2303.17341
On the impact of $f(Q)$ gravity on the Large Scale Structure
We investigate the exponential $f(Q)$ symmetric teleparallel gravitation, namely $f(Q)=Q+\alpha Q_0(1-e^{-\beta\sqrt{Q/Q_0}})$, using the \texttt{ME-GADGET} code to probe structure formation with box sizes $L_{\mathrm{box}}=10/100$ Mpc$/h$ and medium resolution $N_p^{1/3}=512$. To reproduce a viable cosmology within the aforementioned modified gravity theory, we first perform Markov Chain Monte Carlo (MCMC) sampling on the OHD/BAO/Pantheon datasets and constrain the parameter space. Furthermore, we derive theoretical values for the deceleration parameter $q(z)$, the statefinder pair $\{r,s\}$ and the effective gravitational constant $G_{\mathrm{eff}}$, and perform $Om(z)$ diagnostics. From the N-body+SPH simulations, we derive CDM+baryon overdensity/temperature/mean molecular weight fields, the matter power spectrum (both 2D and 3D, with and without redshift space distortions), the bispectrum, the two-point correlation function and the halo mass function. Results for the small and big simulation box sizes are then compared; the halo mass function is compared with the Sheth-Tormen theoretical prediction and the matter power spectrum with the standard \texttt{CAMB} output.
Oleksii Sokoliuk, Simran Arora, Subhrat Praharaj, Alexander Baransky, P. K. Sahoo
2023-03-30T12:51:10Z
http://arxiv.org/abs/2303.17341v1
# On the impact of \(f(Q)\) gravity on the Large Scale Structure

###### Abstract

We investigate the exponential \(f(Q)\) symmetric teleparallel gravitation, namely \(f(Q)=Q+\alpha Q_{0}(1-e^{-\beta\sqrt{Q/Q_{0}}})\), using the ME-GADGET code to probe structure formation with box sizes \(L_{\rm box}=10/100\ {\rm Mpc}/h\) and medium resolution \(N_{p}^{1/3}=512\). To reproduce a viable cosmology within the aforementioned modified gravity theory, we first perform Markov Chain Monte Carlo (MCMC) sampling on the OHD/BAO/Pantheon datasets and constrain the parameter space. Furthermore, we derive theoretical values for the deceleration parameter \(q(z)\), the statefinder pair \(\{r,s\}\) and the effective gravitational constant \(G_{\rm eff}\), and perform \(Om(z)\) diagnostics. From the N-body+SPH simulations, we derive CDM+baryon overdensity/temperature/mean molecular weight fields, the matter power spectrum (both 2D and 3D, with and without redshift space distortions), the bispectrum, the two-point correlation function and the halo mass function. Results for the small and big simulation box sizes are then compared; the halo mass function is compared with the Sheth-Tormen theoretical prediction and the matter power spectrum with the standard CAMB output.

keywords: Dark energy - Observations - Large-scale structure of Universe

## 1 Introduction

Numerous independent cosmological observables predict that the universe is undergoing an accelerated expansion phase at the present time (Riess et al., 1998; Perlmutter et al., 1999; Suzuki et al., 2012; Hinshaw et al., 2013). It is well known that the General Theory of Relativity (GR) is a quite successful theory on various cosmological scales, and it is able to describe the recent accelerated expansion of the universe by introducing the so-called cosmological constant (or \(\Lambda\) term) in the Einstein-Hilbert action integral. However, such a term gives rise to various issues, discussed in the papers of (Sahni & Starobinsky, 2000; Padmanabhan, 2003). Since the gravitational Lagrangian is not, in practice, restricted to the linear Ricci scalar term, one can introduce additional terms to emulate effective dark energy and reproduce different evolutionary phases of the universe, such as cosmological inflation or late-time accelerated expansion, in order to overcome the aforementioned problems. There are many different ways to modify general relativity - for example, by introducing matter fields (canonical scalar field, vector and gauge boson fields, Dirac spinors, etc.). Another way is to present an entirely different notion of Lorentzian 4-manifold curvature by adjusting the metric-affine connection (De Felice & Tsujikawa, 2010; Capozziello & De Laurentis, 2011). For example, one could use the so-called torsion or non-metricity, which are constructed from the Weitzenböck and metric-incompatible affine connections, respectively. Consequently, there are two analogues of GR, namely the Teleparallel Equivalent of GR (TEGR), introduced in (Hayashi & Shirafuji, 1979; Abedi et al., 2017), and the Symmetric Teleparallel Equivalent of GR (STEGR) (Nester & Yo, 1999; Hohmann, 2021). In the current work, we focus on the arbitrary parameterization of STEGR (\(f(Q)\) gravitation) in particular. A key aspect of \(f(Q)\) theory is the use of a flat connection pertaining to the existence of affine coordinates in which all of its components vanish, converting covariant derivatives into partial derivatives (Hohmann, 2021; Dimakis et al., 2022; Zhao, 2022).
Thus, it is possible to distinguish gravity from inertial effects in \(f(Q)\) theory. For many modified gravity theories, the development of the \(f(Q)\) theory provides a fresh starting point. Additionally, it offers a straightforward formulation in which self-accelerating solutions spontaneously appear in both the early and late universe. When compared to other geometric extensions of GR, both \(f(T)\) and \(f(Q)\) theories have a substantial benefit in that the background field equations are always of second order, which means that the instability issues related to Ostrogradsky's theorem (Motohashi & Suyama, 2015) are avoided. Up to this moment, \(f(Q)\) gravity has been incorporated in dozens of studies and is a very promising theory that can reproduce the behaviour of both the early (De et al., 2022) and late universe, and satisfy constraints from the Cosmic Microwave Background (CMB), SuperNovae (SN) distance moduli, Baryon Acoustic Oscillations (BAO), the Observational Hubble Dataset (OHD), the primordial scalar index \(n_{s}\), and standard sirens from LIGO/VIRGO/ET (D'Agostino & Nunes, 2022; Ferreira et al., 2022). For instance, exponential gravitation was constrained in the studies of (Anagnostopoulos et al., 2021; Atayde & Frusciante, 2021), and the authors found that such a theory can challenge the concordance \(\Lambda\)CDM theory. Studies have also been carried out on \(f(Q)\) cosmography (Mandal et al., 2020) and energy conditions (Mandal et al., 2020; Koussour et al., 2022). Additionally, observational constraints on \(f(Q)\) gravity have been established for a number of parameterizations of the \(f(Q)\) function using various observational probes (Lazkoz et al., 2019; Solanki et al., 2023; Jimenez et al., 2020). In the context of \(f(Q)\) cosmology, a Hamiltonian formulation has been designed to carry out a canonical quantization procedure (Dimakis et al., 2021). Aside from these findings, \(f(Q)\) gravity has been the focus of several investigations in varied applications (Hu et al., 2022; Albuquerque & Frusciante, 2022; Fang et al., 2022; Esposito et al., 2022; Bajardi et al., 2020; Arora & Sahoo, 2022; Harko et al., 2018). In the current study, we investigate the exponential \(f(Q)\) gravity in terms of observational constraints, using Markov Chain Monte Carlo (MCMC) methodologies and high-resolution N-body simulations, which are discussed in the following subsections.

### N-body simulation as a probe of modified gravity

To probe the validity of a particular modified theory of gravitation, one needs to incorporate various cosmological observables, ranging from the cosmic expansion rate to the clustering and structure formation history. The latter can be most effectively studied with the so-called N-body simulations, which are well known to be the best theoretical probes of the large scale structure of the universe, providing information on the matter power spectrum/bispectrum, \(N\)-point correlation functions, the halo mass function, the void size function, etc. Over the last few years, such an approach has attracted considerable interest in the field of modified gravity (see the work of (Hassani & Lombriser, 2020)). The authors of (Wilson & Bean, 2022) developed a pipeline to differentiate modified gravity theories from the fiducial \(\Lambda\)CDM model and constrain those theories using voids in N-body simulations, which are known to be less affected by non-linear and baryonic physics than dark matter halos.
Besides voids, intrinsic shape alignments of massive halos and galaxy/halo angular clustering can be used to discriminate MOG theories from \(\Lambda\)CDM in the presence of massive neutrinos (see (Lee et al., 2022) and (Droda et al., 2022), respectively). The aforementioned Halo Mass Function (hereafter HMF) was examined for \(f(Q)\) and Dvali-Gabadadze-Porrati (DGP) gravities in (Gupta et al., 2022). The widely used code MG-Gadget, introduced and developed in (Puchwein et al., 2013), was employed to study the \(f(R)\) Hu-Sawicki theory (Arnold et al., 2016, 2015; Giocoli et al., 2018) and conformally coupled gravity (Ruan et al., 2022). In turn, we are going to use the ME-GADGET code (for documentation, check (Zhang et al., 2018)) to study the behaviour of \(f(Q)\) gravity. This code has been applied to the case of \(f(T)\) teleparallel theory (Huang et al., 2022), interacting dark energy (Zhang et al., 2018; Zhao et al., 2022; Liu et al., 2022) and the cubic vector galileon (Su et al., 2022). Our paper is organised as follows: in Section (1) we provide a brief introduction to the topic of modified theories of gravity and N-body simulations. In Section (2) we present the foundations of symmetric teleparallel gravity and its arbitrary parameterization; in the third section we adopt the FLRW isotropic line element and derive the field equations for our exponential choice of the \(f(Q)\) function. In Section (4) we introduce each observational dataset under consideration and perform the MCMC analysis; in Section (5) we analyze the resulting constraints, deriving theoretical predictions for the deceleration parameter, the statefinder pair and \(Om(z)\). In the following section we set up the ME-GADGET suite and study the N-body output for the small simulation box size; in (7) we compare the aforementioned results with the ones obtained for large \(L_{\mathrm{box}}\). Finally, in the last section we present concluding remarks on the key topics of our study.

## 2 Modified symmetric teleparallel gravitation

Firstly, we introduce the fundamentals of the symmetric teleparallel theories of gravitation. In such theories, it is generally assumed that the scalar curvature of the manifold vanishes (\(R=0\)), as does the torsion, while the non-metricity is non-zero (and describes gravitational interactions). Within the symmetric teleparallel and related theories, the affine connection is metric-incompatible, such that \(\nabla_{\mu}g_{\alpha\beta}\neq 0\). In order to present the formalism of symmetric teleparallel theory, one must firstly define the generalized metric-affine connection (Lin & Zhai, 2021): \[\Gamma^{\alpha}_{\mu\nu}=\widetilde{\Gamma}^{\alpha}_{\mu\nu}+K^{\alpha}_{\mu\nu}+L^{\alpha}_{\mu\nu}. \tag{1}\] In the equation above \(\widetilde{\Gamma}^{\alpha}_{\mu\nu}\) is the usual Levi-Civita metric-affine connection, widely used within the General Theory of Relativity: \[\widetilde{\Gamma}^{\alpha}_{\mu\nu}=\frac{1}{2}g^{\alpha\beta}(\partial_{\mu}g_{\beta\nu}+\partial_{\nu}g_{\beta\mu}-\partial_{\beta}g_{\mu\nu}).
\tag{2}\] The other two terms in (1) are the contortion and deformation tensors, respectively, which can be written as follows: \[K^{\alpha}_{\mu\nu} = \frac{1}{2}g^{\alpha\beta}\left(T_{\mu\beta\nu}+T_{\nu\beta\mu}+T_{\beta\mu\nu}\right), \tag{3}\] \[L^{\alpha}_{\mu\nu} = -\frac{1}{2}g^{\alpha\beta}\left(Q_{\mu\beta\nu}+Q_{\nu\beta\mu}-Q_{\beta\mu\nu}\right). \tag{4}\] Here \(T^{\alpha}_{\mu\nu}=\Gamma^{\alpha}_{\mu\nu}-\Gamma^{\alpha}_{\nu\mu}\) is the torsion tensor and (Capozziello & D'Agostino, 2022) \[Q_{\alpha\mu\nu}=\nabla_{\alpha}g_{\mu\nu}=\partial_{\alpha}g_{\mu\nu}-\Gamma^{\beta}_{\alpha\mu}g_{\beta\nu}-\Gamma^{\beta}_{\alpha\nu}g_{\mu\beta} \tag{5}\] is the non-metricity tensor. As already mentioned, here and in what follows we assume that both the Ricci scalar curvature and the torsion vanish, and therefore we are left with only the non-metricity. To proceed with the STEGR case, one can derive the non-metricity scalar (the fundamental quantity) from the non-metricity tensor and its independent traces \(Q_{\alpha}=Q^{\mu}_{\alpha\mu}\) and \(\tilde{Q}^{\alpha}=Q^{\alpha\mu}_{\mu}\) (Lin & Zhai, 2021): \[Q=-g^{\mu\nu}(L^{\alpha}_{\beta\nu}L^{\beta}_{\mu\alpha}-L^{\beta}_{\alpha\beta}L^{\alpha}_{\mu\nu})=-P^{\alpha\beta\gamma}Q_{\alpha\beta\gamma}, \tag{6}\] where the deformation tensor \(\mathbf{L}\) was already defined above and the superpotential can be expressed in the following way: \[P^{\alpha}_{\mu\nu}=\frac{1}{4}\bigg{[}2Q^{\alpha}_{(\mu\nu)}-Q^{\alpha}_{\mu\nu}+Q^{\alpha}g_{\mu\nu}-\delta^{\alpha}_{(\mu}Q_{\nu)}-\tilde{Q}^{\alpha}g_{\mu\nu}\bigg{]}. \tag{7}\] Here the symmetric and antisymmetric parts of a tensor are: \[F_{(\mu\nu)}=\frac{1}{2}\bigg{(}F_{\mu\nu}+F_{\nu\mu}\bigg{)}, \tag{8}\] \[F_{[\mu\nu]}=\frac{1}{2}\bigg{(}F_{\mu\nu}-F_{\nu\mu}\bigg{)}. \tag{9}\] The condition of symmetric teleparallelism renders the generic affine connection inertial. The most general such connection is \[\Gamma^{\alpha}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial\xi^{\sigma}}\frac{\partial^{2}\xi^{\sigma}}{\partial x^{\mu}\partial x^{\nu}}, \tag{10}\] where \(\xi^{\sigma}\) is an arbitrary function of the spacetime position. We can always choose coordinates \(x^{\alpha}=\xi^{\alpha}\) by utilizing a general coordinate transformation, in which the general affine connection vanishes, \(\Gamma^{\alpha}_{\mu\nu}=0\). We call this coordinate choice the coincident gauge (Jimenez et al., 2018). Thus, in the coincident gauge, we have \(Q_{\alpha\mu\nu}=\partial_{\alpha}g_{\mu\nu}\), i.e. all covariant derivatives reduce to ordinary derivatives. These are the fundamentals of the symmetric teleparallel analogue of the General Theory of Relativity (GR). Now we present the formalism of modified symmetric teleparallel cosmology. The action integral of the aforementioned theory of gravity can therefore be written as follows (Jimenez et al., 2018): \[\mathcal{S}[g,\Gamma,\Psi_{i}]=-\frac{1}{16\pi G}\int_{\mathcal{M}}d^{4}x\,e\,f(Q)+\mathcal{S}_{\text{M}}[g,\Gamma,\Psi_{i}]. \tag{11}\] In the equation above, \(\mathcal{M}\) is the four-dimensional Lorentzian manifold on which we work, \(g=\det g_{\mu\nu}\) is the metric tensor determinant, \(e=\sqrt{-g}\), and \(f(Q)\) is the arbitrary function of the non-metricity scalar that defines the modified theory of gravitation.
Moreover, \(\Gamma\) is the curvature-free affine connection and \(\mathcal{S}_{\text{M}}[g,\Gamma,\Psi_{i}]\) defines the contribution of additional matter fields \(\Psi_{i}\) to the total action integral. The reason for the above action and the specific selection of the non-metricity scalar is that GR is recreated, up to a boundary term, for the choice \(f=Q\); i.e., for this choice, we recover the so-called "symmetric teleparallel equivalent of GR". By varying the action (11) with respect to the inverse metric tensor \(g^{\mu\nu}\) (using the least action principle \(\delta\mathcal{S}=0\)) we obtain the corresponding field equations \[\frac{2}{\sqrt{-g}}\nabla_{\alpha}\left(\sqrt{-g}f_{Q}P^{\alpha\mu}_{\ \ \nu}\right)+\frac{1}{2}\delta^{\mu}_{\ \nu}f+f_{Q}P^{\mu\alpha\beta}Q_{\nu\alpha\beta}=T^{\mu}_{\ \nu}, \tag{12}\] where \(f_{Q}=\frac{\partial f}{\partial Q}\) and the energy-momentum tensor is derived from the variation of the matter Lagrangian density: \[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{\text{M}})}{\delta g^{\mu\nu}}. \tag{13}\] The connection equation of motion can be computed by noticing that the variation of the connection with respect to \(\xi^{\alpha}\) is equivalent to performing a diffeomorphism, so that \(\delta_{\xi}\Gamma^{\alpha}_{\mu\nu}=-\mathcal{L}_{\xi}\Gamma^{\alpha}_{\mu\nu}=-\nabla_{\mu}\nabla_{\nu}\xi^{\alpha}\) (Jimenez et al., 2020). Besides this, in the absence of hypermomentum, one can take the variation of equation (11) with respect to the connection: \[\nabla_{\mu}\nabla_{\nu}\left(\sqrt{-g}f_{Q}P^{\mu\nu}_{\ \alpha}\right)=0. \tag{14}\] From the metric and connection equations, one can notice that \(\mathcal{D}_{\mu}T^{\mu}_{\ \nu}=0\), where \(\mathcal{D}_{\mu}\) is the metric-covariant derivative. Since we have now defined all of the necessary quantities, we can proceed further and set up the background spacetime.

## 3 FLRW Cosmology

In order to study the evolution of our universe, it is useful to assume that the background spacetime is isotropic and homogeneous, namely the Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime (we assume that the lapse function is unitary): \[ds^{2}=-dt^{2}+a^{2}(t)\delta_{ij}dx^{i}dx^{j}. \tag{15}\] Here \(a(t)\) is the scale factor of the universe, a fundamental quantity that defines the evolution of the universe from its beginning. Consequently, with the assumption (15), the non-metricity scalar reads (Caruana et al., 2020) \[Q=6H^{2}, \tag{16}\] where \(H=\dot{a}/a\) is the well-known Hubble parameter and a dot over a quantity denotes a first-order temporal derivative. Finally, one can evaluate the FLRW field equations of the \(f(Q)\) theory: \[3H^{2}=\kappa^{2}\left(\rho_{\text{m}}+\rho_{\text{eff}}\right), \tag{17}\] \[3H^{2}+2\dot{H}=-\kappa^{2}\left(p_{\text{m}}+p_{\text{eff}}\right). \tag{18}\] Here \(\kappa^{2}=8\pi G\) is the Einstein gravitational constant squared, \(\rho_{\text{m}}\) and \(p_{\text{m}}\) are, respectively, the matter energy density and isotropic pressure, while \(\rho_{\text{eff}}\) and \(p_{\text{eff}}\) are the effective energy density and pressure that define the contribution of \(f(Q)\) gravity to the field equations. For modified STEGR, the field equations with the exact forms of the effective quantities plugged in read: \[3H^{2}=\frac{\kappa^{2}}{2f_{Q}}\left(\rho_{m}+\frac{f}{2}\right), \tag{19}\] \[\left(12H^{2}f_{QQ}+f_{Q}\right)\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho_{m}+p_{m}\right).
\tag{20}\] where \[f_{Q}=\frac{\partial f(Q)}{\partial Q},\quad f_{QQ}=\frac{\partial^{2}f(Q)}{\partial Q^{2}}. \tag{21}\] The energy-momentum tensor of the cosmological fluid is given by \[T_{\mu\nu}=\left(\rho+p\right)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{22}\] which leads to the conservation equation \(\dot{\rho}+3H\left(\rho+p\right)=0\). In symmetric teleparallel gravity and its extensions, the conservation law \(\mathcal{D}_{\mu}T^{\mu}_{\ \nu}=0\) holds for the matter energy-momentum tensor; it holds through (14) for the connection (Jimenez et al., 2018; Dimakis et al., 2022; Harko et al., 2018).

### Exponential \(f(Q)\) gravity

This paper is particularly aimed at the investigation of one \(f(Q)\) gravity model, namely modified exponential \(f(Q)\) gravity (built from linear and exponential terms, respectively). In \(f(Q)\) theory, numerous cosmic possibilities have been examined using various exponential models, notably inflationary cosmology, BBN constraints, and dynamical system analysis (Harko et al., 2018; Anagnostopoulos et al., 2023; Khyllep et al., 2023). For that kind of gravity, the \(f(Q)\) function reads (we adapt the work of (Linder, 2009, 2010) to modified STEGR): \[f(Q)=Q+\alpha Q_{0}(1-e^{-\beta\sqrt{Q/Q_{0}}}), \tag{23}\] where \(\alpha\) and \(\beta\) are free MOG parameters, namely additional degrees of freedom. We can reduce the number of d.o.f. by matching the first Friedmann equation (19) at the present time (i.e. at \(z=0\)): \[\alpha=-\frac{e^{\beta}(-1+\Omega_{m0}+\Omega_{\gamma 0})}{-1+e^{\beta}-\beta}. \tag{24}\] Thus, the complexity of this form is just one step beyond the standard \(\Lambda\)CDM. The exponential modified gravity can satisfy all stability and validity conditions and does not cross the phantom divide line (Arora & Sahoo, 2022). In order to solve the field equations and obtain the numerical form of the Hubble parameter, we use the definition \[\dot{H}=aH\frac{dH}{da}. \tag{25}\] We solve the aforementioned field equation numerically, as already stated, with the Mathematica numerical ODE solver NDSolve. The initial condition for \(\dot{H}\) at vanishing redshift can therefore be set up (as a cosmographic quantity) (Mandal et al., 2020): \[\dot{H}_{0}=-H_{0}^{2}(1+q_{0}), \tag{26}\] where \(q_{0}\) is the current deceleration parameter, which we fix to \(q_{0}=-0.55\) (Reid et al., 2019). Additionally, for the MCMC analysis, as truth values we assume that the present value of the Hubble parameter is \(H_{0}=69\,\mathrm{km/s/Mpc}\) and that the matter mass fraction at the present time is \(\Omega_{m0}=0.315\pm 0.007\), following the observational constraints of _Planck2018_ (Planck Collaboration et al., 2020).

## 4 MCMC Constraints

In this section, we want to constrain our \(f(Q)\) gravity model via observational datasets. To explore the parameter space, we use the Markov Chain Monte Carlo (MCMC) methodology and the Python package emcee (Foreman-Mackey et al., 2013).

### Observational Hubble Data

Observational Hubble Data provides one of the most popular and reliable tests of the universe's expansion history beyond GR and \(\Lambda\)CDM. The OHD sample is mainly obtained from the differential age of galaxies method (or just DAG) (Yu et al., 2018; Moresco, 2015).
In this method, the Hubble rate is obtained from the formula below: \[H(z)=\frac{-1}{1+z}\frac{dz}{dt}. \tag{27}\] In the current article, we primarily use the OHD points derived from the so-called Cosmic Chronometers (CC), i.e., massive and passively evolving galaxies. Using Cosmic Chronometers, the ratio \(dz/dt\) can be derived from \(\Delta z/\Delta t\), where \(\Delta z\) is the redshift separation in the galaxy sample and can be determined through precise and accurate spectroscopy. On the other hand, the derivation of \(\Delta t\) is much more challenging and requires standard clocks. For that purpose, we can use massive, passively evolving, and old stellar populations that are present across a wide range of redshifts and therefore can be considered cosmic chronometers. To determine the necessary priors and likelihood functions, we used the \(H(z)\) dataset. To constrain our modified gravity model, we introduce the chi-squared function below: \[\chi^{2}_{CC}=\sum_{i=1}^{N_{H}}\left[\frac{H_{i}^{\mathrm{th}}(p_{1},p_{2},...,p_{n},z_{i})-H_{i}^{\mathrm{obs}}(z_{i})}{\sigma_{H}(z_{i})}\right]^{2}. \tag{28}\] The likelihood function that we use for MCMC sampling has its usual exponential form: \[\mathcal{L}=\exp(-\chi^{2}/2). \tag{29}\]

### Pantheon SN Ia Sample

We also used the Pantheon dataset to constrain our modified gravity with dark energy, which consists of data obtained from 1048 Type Ia supernovae (discovered by the PANSTARRS DR1 (PS1) Medium Deep Survey, Low \(z\), SNLS, SDSS and HST (Scolnic et al., 2018; Chang et al., 2019)). For this dataset, the redshift varies from \(z=0.01\) to \(z=2.26\). The corresponding chi-squared function reads: \[\chi^{2}_{SN}(p_{1},p_{2},...,p_{n})=\sum_{i=1}^{N_{SN}}\left[\frac{\Delta\mu_{i}}{\sigma_{\mu}(z_{i})}\right]^{2}, \tag{30}\] where \[\Delta\mu_{i}=\mu^{th}(p_{1},p_{2},....,p_{n})-\mu_{i}^{obs} \tag{31}\] and the distance modulus is (Arora & Sahoo, 2020) \[\mu^{th}=5\log_{10}D_{L}(z)+\mu_{0},\quad\mu_{0}=5\log_{10}\frac{H_{0}^{-1}}{\mathrm{Mpc}}+25, \tag{32}\] \[D_{L}(z)=\frac{c(1+z)}{H_{0}}S_{K}\left(H_{0}\int_{0}^{z}\frac{d\bar{z}}{H(\bar{z})}\right). \tag{33}\] Here the function \(S_{K}(x)\) is simply \[S_{K}(x)=\begin{cases}\sinh(x\sqrt{\Omega_{K}})/\sqrt{\Omega_{K}},&\Omega_{K}>0\\ x,&\Omega_{K}=0\\ \sin(x\sqrt{|\Omega_{K}|})/\sqrt{|\Omega_{K}|},&\Omega_{K}<0\end{cases}. \tag{34}\] It is known that our universe is spatially flat, and therefore \(\Omega_{K}=0\). The nuisance parameters in the Tripp formula (Tripp, 1998) \(\mu=m_{B}-M_{B}+\alpha x_{1}-\beta c+\Delta_{M}+\Delta_{B}\) were retrieved using the novel method known as BEAMS with Bias Correction (BBC) (Kessler & Scolnic, 2017), and the observed distance modulus is then equal to the difference between the corrected apparent magnitude \(m_{B}\) and the absolute magnitude \(M_{B}\) (\(\mu=m_{B}-M_{B}\)). Additionally, one can define the chi-squared function in terms of the covariance matrix as follows (Deng & Wei, 2018): \[\chi^{2}_{\mathrm{SN}}=\Delta\mu^{T}\mathbf{C}^{-1}\Delta\mu, \tag{35}\] where the covariance matrix consists of systematic and statistical uncertainties, respectively (Conley et al., 2011): \[\mathbf{C}=\mathbf{D}_{\mathrm{stat}}+\mathbf{C}_{\mathrm{sys}}. \tag{36}\] In the current work we assume that the diagonal matrix of statistical uncertainties takes the form \(\mathbf{D}_{\mathrm{stat},ii}=\sigma^{2}_{\mu(z_{i})}\).
The systematic uncertainties are derived using the BEAMS with Bias Corrections (BBC) method, introduced and developed in (Scolnic et al., 2018): \[\mathbf{C}_{ij,\mathrm{sys}}=\sum_{k=1}^{K}\left(\frac{\partial\mu^{obs}_{i}}{\partial S_{k}}\right)\left(\frac{\partial\mu^{obs}_{j}}{\partial S_{k}}\right)\sigma^{2}_{S_{k}}. \tag{37}\] The indices \(\{i,j\}\) denote the redshift bins for the distance modulus, \(S_{k}\) denotes the magnitude of the \(k\)-th systematic error, and \(\sigma_{S_{k}}\) is its standard deviation uncertainty.

### Baryon Acoustic Oscillations

Finally, we use Baryon Acoustic Oscillations (BAOs) to constrain our modified gravity model. BAOs arise in the early stages of the universe's evolution. At early times, baryons and photons are strongly coupled to each other due to Thomson scattering. This mixture of baryons and photons behaves like a single fluid and cannot gravitationally collapse. Moreover, this fluid oscillates because of the huge photon pressure. These oscillations are called BAOs. The characteristic scale of the BAO is defined by the so-called sound horizon \(r_{s}\), which is imprinted at the photon decoupling epoch with redshift \(z_{*}\): \[r_{s}=\frac{c}{\sqrt{3}}\int_{0}^{\frac{1}{1+z_{*}}}\frac{da}{a^{2}H\sqrt{1+(3\Omega_{b0}/4\Omega_{\gamma 0})a}}. \tag{38}\] Here \(\Omega_{b0}\) is the baryon mass density at present (\(z=0\)) and \(\Omega_{\gamma 0}\) is, respectively, the photon mass density at present. As noted above, the angular diameter distance is derived directly from the BAO sound horizon. In this work, to constrain our MOG model with BAO, we use observational datasets with \(d_{A}(z_{*})/D_{V}(z_{BAO})\) data. Here \(d_{A}(z_{*})\) is the angular diameter distance in comoving coordinates: \[d_{A}(z)=\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}, \tag{39}\] and \(D_{V}(z_{BAO})\) is the dilation scale: \[D_{V}(z)=(d_{A}(z)^{2}z/H(z))^{1/3}. \tag{40}\] Finally, we also consider that the photon decoupling epoch arises at the redshift (Planck Collaboration et al., 2016): \[z_{*}=1048[1+0.00124(\Omega_{b}h^{2})^{-0.738}][1+g_{1}(\Omega_{m}h^{2})^{g_{2}}], \tag{41}\] where \[g_{1}=\frac{0.0783(\Omega_{b}h^{2})^{-0.238}}{1+39.5(\Omega_{b}h^{2})^{-0.763}}, \tag{42}\] \[g_{2}=\frac{0.560}{1+21.1(\Omega_{b}h^{2})^{-1.81}}. \tag{43}\]

Figure 1: MCMC best fits from OHD, Pantheon and BAO datasets and joint distribution for the exponential \(f\) (\(Q\)) model

This dataset was gathered from the works of (Blake et al., 2011; Percival et al., 2010; Jarosik et al., 2011; Eisenstein et al., 2005; Giostri et al., 2012). Consequently, to perform the MCMC sampling, we need to define the chi-squared function for our BAO dataset: \[\chi^{2}_{BAO}=X^{T}C^{-1}X, \tag{44}\] where \(X\) is the matrix of the form (Giostri et al., 2012): \[X=\left(\begin{array}{c}\frac{d_{A}(z_{*})}{D_{V}(0.106)}-30.95\\ \frac{d_{A}(z_{*})}{D_{V}(0.20)}-17.55\\ \frac{d_{A}(z_{*})}{D_{V}(0.35)}-10.11\\ \frac{d_{A}(z_{*})}{D_{V}(0.44)}-8.44\\ \frac{d_{A}(z_{*})}{D_{V}(0.60)}-6.69\\ \frac{d_{A}(z_{*})}{D_{V}(0.73)}-5.45\\ \end{array}\right). \tag{45}\] We also performed a joint analysis of the combined OHD + SN + BAO data by minimizing \(\chi^{2}_{OHD}+\chi^{2}_{SN}+\chi^{2}_{BAO}\). The results are numerically derived from MCMC runs on the OHD, Pantheon, BAO and joint datasets, and are listed in Table 1 for the model free parameters \(H_{0}\), \(\beta\) and \(\Omega_{m0}\).
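For concreteness, the following is a minimal sketch, not the authors' actual pipeline, of how such a joint \(\chi^{2}\) analysis can be set up with emcee. The Hubble parameter is obtained by integrating the modified Friedmann equation (20) with the reduction (24); the data file name, the flat prior ranges and the finite-difference approximations of \(f_{Q}\) and \(f_{QQ}\) are illustrative assumptions (only the CC term of Eq. (28) is spelled out; the SN and BAO terms are added analogously).

```python
import numpy as np
import emcee
from scipy.integrate import solve_ivp

# Hypothetical file with the cosmic chronometer sample: z, H(z), sigma_H,
# assumed sorted in increasing redshift.
z_cc, H_cc, sig_cc = np.loadtxt("ohd_cc.txt", unpack=True)

def alpha_of(beta, Om0, Og0=0.0):
    # Eq. (24): removes one degree of freedom by matching Eq. (19) at z = 0
    return -np.exp(beta) * (-1.0 + Om0 + Og0) / (-1.0 + np.exp(beta) - beta)

def f_of_Q(Q, alpha, beta, Q0):
    # Eq. (23): exponential f(Q) model
    return Q + alpha * Q0 * (1.0 - np.exp(-beta * np.sqrt(Q / Q0)))

def fQ(Q, *p):   # first derivative by central differences
    h = 1e-6 * Q
    return (f_of_Q(Q + h, *p) - f_of_Q(Q - h, *p)) / (2.0 * h)

def fQQ(Q, *p):  # second derivative by central differences
    h = 1e-4 * Q
    return (fQ(Q + h, *p) - fQ(Q - h, *p)) / (2.0 * h)

def H_of_z(z_eval, H0, Om0, beta):
    # Integrate Eq. (20) for dust (p_m = 0), using dH/dz = -Hdot / [(1+z) H]
    Q0, a = 6.0 * H0**2, alpha_of(beta, Om0)
    def rhs(z, y):
        H = y[0]
        Q = 6.0 * H * H
        rho = 3.0 * H0**2 * Om0 * (1.0 + z) ** 3      # kappa^2 * rho_m
        Hdot = -0.5 * rho / (12.0 * H * H * fQQ(Q, a, beta, Q0) + fQ(Q, a, beta, Q0))
        return [-Hdot / ((1.0 + z) * H)]
    sol = solve_ivp(rhs, (0.0, z_eval.max()), [H0], t_eval=np.sort(z_eval), rtol=1e-8)
    return sol.y[0]

def log_prob(theta):
    H0, Om0, beta = theta
    if not (50.0 < H0 < 90.0 and 0.05 < Om0 < 0.7 and 0.5 < beta < 15.0):
        return -np.inf                                 # flat priors (assumed)
    chi2_cc = np.sum(((H_of_z(z_cc, H0, Om0, beta) - H_cc) / sig_cc) ** 2)  # Eq. (28)
    return -0.5 * chi2_cc   # add the SN and BAO chi^2 terms here for the joint fit

ndim, nwalkers = 3, 32
p0 = np.array([69.0, 0.3, 5.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=True)
```

The chains can then be post-processed in the usual way (burn-in removal, thinning) to produce the marginalized constraints and contours discussed next.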
Furthermore, the \(1\sigma\) and \(2\sigma\) likelihood contours for the possible subsets of the parameter space are presented in Fig. 1.

### Statistical evaluation

To evaluate the success of our MCMC analysis, one should perform a statistical evaluation using the so-called Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). The first quantity, namely the AIC, can be expressed as follows (Akaike, 1974): \[\mathrm{AIC}=\chi^{2}_{\mathrm{min}}+2d, \tag{46}\] with \(d\) being the number of free parameters in the chosen model. To compare our results with the well-known fiducial \(\Lambda\)CDM model, we use the AIC difference between our modified gravity model and the fiducial cosmology, \(\Delta\mathrm{AIC}=|\mathrm{AIC}_{\Lambda\mathrm{CDM}}-\mathrm{AIC}_{\mathrm{MOG}}|\). If \(\Delta\mathrm{AIC}<2\), there is strong evidence in favor of the MOG model, while for \(4<\Delta\mathrm{AIC}\leq 7\) there is little evidence in favor of the MOG model under consideration. Finally, for \(\Delta\mathrm{AIC}>10\) there is practically no evidence in favor of MOG (Liddle, 2007). In addition, the BIC is defined through the relation \[\mathrm{BIC}=\chi^{2}_{\mathrm{min}}+d\ln N, \tag{47}\] where \(N\) is the number of data points used for the MCMC. For the BIC, if \(\Delta\mathrm{BIC}<2\), there is no strong evidence against the chosen model that deviates from \(\Lambda\mathrm{CDM}\); if \(2\leq\Delta\mathrm{BIC}<6\) there is evidence against the MOG model; and finally, for \(\Delta\mathrm{BIC}>6\) there is strong evidence against the MOG model. We store the \(\chi^{2}_{\mathrm{min}}\)/AIC/BIC data for our modified gravity model in Table 1. As we see, \(\Delta\mathrm{AIC}=1.25\) and \(\Delta\mathrm{BIC}=3.01\), so our model can mimic the \(\Lambda\mathrm{CDM}\) one very closely.

## 5 Validity of cosmological constraints

In order to check the validity of the cosmological constraints applied above (from Pantheon, BAO, and OHD), we probe the behavior of several quantities, such as the deceleration parameter \(q(z)\) and the statefinder pair. First, one can define the so-called deceleration parameter: \[q=-\frac{\dot{H}}{H^{2}}-1. \tag{48}\] From the above equation, one can notice that the deceleration and Hubble parameters are related to each other through higher-order temporal derivatives of the scale factor \(a\). Consequently, to differentiate our model from the numerous other MOG and DE/DM models, one can introduce the so-called statefinder pair (Sahni et al., 2003; Alam et al., 2003; Pasqua et al., 2015; Xu et al., 2018): \[r=\frac{\dddot{a}}{aH^{3}}, \tag{49}\] \[s=\frac{r-1}{3(q-1/2)}. \tag{50}\] For the sake of simplicity, we can redefine the statefinder pair \(\{r,s\}\) fully in terms of the deceleration parameter: \[r(z)=q(z)(1+2q(z))+q^{\prime}(z)(1+z), \tag{51}\] \[s(z)=\frac{r(z)-1}{3(q(z)-1/2)}.
\tag{52}\] We construct the phase plane \(r(z)-s(z)\), in which different points correspond to various universe states, such that:

* \(\Lambda\)CDM corresponds to \((s=0,r=1)\),
* Chaplygin Gas (CG) corresponds to \((s<0,r>1)\),
* SCDM corresponds to \((r=1,q=0.5)\),
* Quintessence corresponds to \((s>0,r<1)\).

\begin{table} \begin{tabular}{l c c c} \hline \hline Datasets & \(H_{0}\) & \(\Omega_{m0}\) & \(\beta\) \\ \hline Hubble (OHD) & \(66.9\pm 3.3\) & \(0.320^{+0.055}_{-0.070}\) & \(4.3\pm 1.9\) \\ OHD+SNe & \(68.9\pm 1.7\) & \(0.290^{+0.028}_{-0.020}\) & \(5.3^{+1.8}_{-1.0}\) \\ OHD+SNe+BAO & \(68.9\pm 1.6\) & \(0.292\pm 0.016\) & \(5.6\pm 1.25\) \\ \hline Models & \(\chi^{2}_{\mathrm{min}}\) & AIC & BIC \\ \hline \(\Lambda\)CDM & 58.700 & 67.248 & 76.127 \\ \(f\) (\(Q\)) & 57.616 & 68.499 & 79.137 \\ \hline \end{tabular} \end{table} Table 1: Best-fit values of model parameters and statistical analysis

Figure 2: Dimensionless mass density for matter and effective dark energy within exponential \(f\) (\(Q\)) gravitation

Consequently, we plot the statefinder phase portraits, the deceleration parameter, and additionally \(H(z)\), as probes of model validity in the cosmological sense, in Figs. (3) and (4). The statefinder diagnostics and \(q(z)\) were computed only for the joint dataset, since the other datasets show similar behavior to the joint solution. Remarkably, a transition from the deceleration to the acceleration phase is seen in the third panel of the aforementioned figure. The valid interval for \(q_{0}\) is marked as a gray area. As stated already, one may check the universe's evolutionary scenario using the statefinder pairs \(\{r,s\}\) and \(\{r,q\}\). From the \(r-s\) plane of our model, one observes that the initial universe was filled with quintessence, then passed through the \(\Lambda\)CDM phase and is currently reverting towards the quintessence scenario. On the other hand, in the \(r-q\) plane, it is evident that the universe also once passed through the \(\Lambda\)CDM phase. However, since our spacetime is now generally filled with quintessential fluid, it is expected that the future universe will eventually approach the de Sitter state (when the \(\Lambda\) term fully dominates). The quintessential behavior also coincides with the MCMC observational constraints. The last probe of cosmological validity is the well-known \(Om(z)\) diagnostic, first presented in the paper (Sahni et al., 2008), where \(Om(z)\) is defined through the equation (Pan et al., 2018; Harko et al., 2022) \[Om(z)=\frac{E^{2}(z)-1}{(1+z)^{3}-1}. \tag{53}\] This parameter was derived to distinguish \(\Lambda\)CDM from other, more complicated cosmological models. In equation (53), it is useful to define \(E^{2}(z)=H^{2}(z)/H_{0}^{2}\), which is exactly the Hubble flow, a dimensionless quantity normalized by the current value of the Hubble parameter. The \(Om(z)\) parameter has a constant value for the \(\Lambda\)CDM model, equal to the current matter mass density \(\Omega_{\rm m0}\). Consequently, we place the numerical solution of the \(Om(z)\) function for the \(f(Q)\) model in Fig. (4). For the sake of comparison, we also plot the \(Om(z)\) solutions within the classical \(\Lambda\)CDM model and within \(\omega\)-varying \(\Lambda\)CDM cosmologies. As one can notice, for our \(f(Q)\) model, \(Om(z)\) shows only \(Om(z)<\Omega_{m0}\) behavior in the distant past, which could indicate the presence of phantom fluid (for more information on the subject, see (Mostaghel et al., 2017)); a short numerical sketch of these diagnostics is given below.
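The sketch below, assuming a tabulated numerical solution \(H(z)\) such as the one produced by the ODE integration above, shows how \(q(z)\), the statefinder pair (51)-(52) and \(Om(z)\) (53) can be evaluated; derivatives are taken with simple numerical gradients.

```python
import numpy as np

def background_diagnostics(z, H, H0):
    """q(z), r(z), s(z) and Om(z) from a tabulated Hubble parameter H(z)."""
    dHdz = np.gradient(H, z)
    q = -1.0 + (1.0 + z) * dHdz / H            # Eq. (48), with Hdot = -(1+z) H dH/dz
    r = q * (1.0 + 2.0 * q) + np.gradient(q, z) * (1.0 + z)   # Eq. (51)
    s = (r - 1.0) / (3.0 * (q - 0.5))          # Eq. (52)
    E2 = (H / H0) ** 2
    Om = (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)   # Eq. (53); singular at z = 0
    return q, r, s, Om
```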
However, at \(z\approx 2\), our model crosses the \(\Lambda\)CDM value and shows a steadily growing trend; therefore, in the near past and at present, quintessential fluid appears, which agrees well with the statefinder diagnostics and the MCMC results. Finally, we also analyse both the matter and effective dark energy mass densities for our model of modified gravitation to conclude on its validity. The corresponding results are plotted in Figure (2). One can notice that both \(\Omega_{\rm m}\) and \(\Omega_{\Lambda}\) lie in \([0,1]\) and their sum always converges to unity; the epoch of equality appears at redshift \(z\approx 0.35\), which is very near the \(\Lambda\)CDM estimate.

## 6 N-body simulations of LSS with small \(L_{\rm box}\)

As we remarked previously, the main purpose of this paper is to perform N-body simulations of a comoving box that contains DM+baryonic matter and dark energy in exponential \(f(Q)\) gravitation and to compare our results with the Large Scale Structure of the concordance \(\Lambda\)CDM cosmology. For that aim, we use the publicly available code ME-GADGET, a modification of the well-known hydrodynamical N-body code GADGET2. It was modified for generality, so that one can perform simulations for practically any cosmological model. The code was described in the pioneering works of (An et al., 2019; Zhang et al., 2019), whereas tests are provided in (Zhang et al., 2018). As input, this code needs tables of the Hubble flow \(H/H_{0}\) and of the deviation of the effective gravitational constant from the Newtonian one, \(G_{\rm eff}/G_{N}\) (in some models of modified gravity, namely screened ones, such a deviation exists only up to some scale \(k_{\rm screen}\) because of the so-called fifth force). The exact form of the effective gravitational constant was found in (Jimenez et al., 2015, 2020a): \[G_{\rm eff}=\frac{G_{N}}{f_{Q}}. \tag{54}\] The equation above is evaluated numerically assuming the appropriate best-fit values for the free parameters of our model (a short sketch of tabulating these inputs is given below). As one can notice, at very early times (high-\(z\) epochs), \(f(Q)\) gravity has a Newtonian-like gravitational constant and then, at approximately \(a\approx 0.1\), \(G_{\rm eff}\) departs from \(G_{\rm N}\) for our model. Since we have defined the needed inputs for the ME-GADGET code, we can proceed to fine-tuning our simulation setup.

Figure 3: Hubble parameter \(H\left(z\right)\), deceleration parameter \(q\left(z\right)\) and distance modulus for exponential \(f\left(Q\right)\) gravity with best fit values from MCMC used. For comparison, we as well show the fiducial \(\Lambda\)CDM results

### Simulation setup

One needs to define various parameters to produce the simulations and initial conditions, based on second-order Lagrangian Perturbation Theory (namely, 2LPT). We want to obtain mid-resolution simulations; therefore, the particle number is \(N=512^{3}\) and the mesh size is \(N_{\rm mesh}=2\times 512^{3}\). The simulation box has periodic boundary conditions and a side length of \(10\) Mpc/h. Initial conditions were produced with the Simp2LPTic code (see the GitHub repository [https://github.com/liambx/Simp2LPTic](https://github.com/liambx/Simp2LPTic)); glass files (pre-initial conditions) were generated with the use of ccvt-precic (check [https://github.com/liaoshong/ccvt-precic](https://github.com/liaoshong/ccvt-precic)). We assumed a unitary glass tile fraction.
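Alongside the initial conditions, ME-GADGET needs the two background tables mentioned above. A minimal sketch of how they might be prepared follows, reusing the hypothetical `H_of_z` and `alpha_of` helpers from the MCMC sketch in Section 4; here \(f_{Q}\) is written analytically for the exponential model (23), and the two-column file layout is an assumption to be checked against the ME-GADGET documentation.

```python
import numpy as np

H0, Om0, beta = 68.9, 0.292, 5.6               # joint best-fit values (Table 1)
a_grid = np.logspace(-2.0, 0.0, 200)           # scale factor a = 0.01 ... 1
z_grid = np.sort(1.0 / a_grid - 1.0)           # increasing redshift

H = H_of_z(z_grid, H0, Om0, beta)              # from the earlier MCMC sketch
Q, Q0 = 6.0 * H**2, 6.0 * H0**2
alpha = alpha_of(beta, Om0)

# Analytic f_Q for f(Q) = Q + alpha*Q0*(1 - exp(-beta*sqrt(Q/Q0)))
f_Q = 1.0 + 0.5 * alpha * beta * np.sqrt(Q0 / Q) * np.exp(-beta * np.sqrt(Q / Q0))

a_out = (1.0 / (1.0 + z_grid))[::-1]           # ascending scale factor
np.savetxt("hubble_table.txt", np.column_stack([a_out, (H / H0)[::-1]]))
np.savetxt("geff_table.txt", np.column_stack([a_out, (1.0 / f_Q)[::-1]]))  # Eq. (54)
```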
Moreover, the cosmological parameters were borrowed from our MCMC constraints, discussed earlier: \(h=H_{0}/(100\ {\rm km\,s^{-1}\,Mpc^{-1}})=0.689\pm 0.016\) (the so-called "little h"), \(\Omega_{m0}=0.292\pm 0.016\), leading to \(\Omega_{\Lambda 0}=0.708\) if one does not take into account radiation and massive neutrino species. The baryon mass density equals \(\Omega_{b}=0.0493\) (the ratio between the total matter density and the baryon density determines how many gas particles are present in the simulation). Moreover, the matter power spectrum amplitude at the scale of \(8\ h^{-1}\)Mpc is assumed to be \(\sigma_{8}=0.811\pm 0.006\), and the initial power spectrum is linear, constructed from the Eisenstein & Hu transfer function (Eisenstein & Hu, 1998) (the power spectrum was constructed using the code CAMB, see (Lewis & Challinor, 2011)). Initial conditions are generated at redshift \(z=10\) and the spectral index of scalar perturbations is \(n_{s}=0.9649\pm 0.0042\) (Planck Collaboration et al., 2020).

### Results

In the current subsection, we discuss the main results obtained from the N-body simulations of the Large Scale Structure of the Universe. Firstly, we demonstrate the spatial slices of the CDM overdensity \(\delta_{\rm CDM}=\rho_{\rm CDM}/\overline{\rho}_{\rm CDM}\) for our \(f(Q)\) gravity model at different redshifts \(z\) in Figure (5). In addition to the overdensity measurements, we also show the gas temperature \(T\) that arises from Smoothed Particle Hydrodynamics (SPH) and the mean molecular weight \(\mu=\overline{m}/m_{\rm HI}\), which defines the ratio between the mean particle mass and the neutral hydrogen particle mass, in Figure (6), respectively. As one can notice, the DM walls are characterized by smaller values of the mean molecular weight. Besides, the temperature maps show the well-known hot "bubbles" within the Inter-Galactic Medium (IGM) that are formed by impinging galactic winds. Now we investigate the matter power spectrum for our model. For comparison, we use the fiducial \(\Lambda\)CDM cosmology power spectrum, generated with the use of the CAMB code (Lewis & Challinor, 2011; Lewis & Bridle, 2002; Lewis et al., 2000; Howlett et al., 2012)1. In order to extract \(P(k)\) at a given redshift within our N-body framework, we used the Python-based code Pylians3 (Villaescusa-Navarro, 2018)2.

Footnote 1: Documentation for this code is stored at camb.readthedocs.io

Footnote 2: For the installation procedure and full documentation, refer to pylians3.readthedocs.io

Figure 4: Statefinder pairs and \(Om(z)\) function for exponential \(f\) (\(Q\)) gravity, \(\Lambda\)CDM and \(\omega\) varying \(\Lambda\)CDM cosmologies

We consequently compare the matter power spectrum in Figure (7) with/without Redshift-Space Distortions (RSDs) along the \(X\), \(Y\) and \(Z\) axes. As we noticed during the numerical analysis, up to some \(k\) near the \(k_{\rm Box}\) limit of our simulation, the \(P(k)\) spectrum in Fourier space does reconstruct the non-linear matter power spectrum given by CAMB, while the Redshift-Space Distorted (RSD) one behaves like the linear matter power spectrum, as expected. Also, it is worth noticing that the difference between the RSD and regular matter power spectra is bigger for the CDM+Gas case. Finally, the effect of RSDs in our simulations is almost isotropic, so that \(\Delta\)(RSD) differs only by a few percent with the change of the RSD direction axis.
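A minimal sketch of such a measurement, following the standard Pylians3 workflow (the snapshot path and parameter values are placeholders): the particle positions are painted onto a grid with a cloud-in-cell assignment, and the monopole of \(P(k)\) is read off.

```python
import numpy as np
import readgadget                    # snapshot reader shipped with Pylians3
import MAS_library as MASL
import Pk_library as PKL

snapshot = "snapdir_004/snap_004"    # placeholder snapshot path
grid, BoxSize, MAS = 512, 10.0, "CIC"   # BoxSize in Mpc/h

# CDM particles (type 1); positions are converted from kpc/h to Mpc/h
pos = readgadget.read_block(snapshot, "POS ", [1]) / 1e3

delta = np.zeros((grid, grid, grid), dtype=np.float32)
MASL.MA(pos, delta, BoxSize, MAS)             # cloud-in-cell mass assignment
delta /= np.mean(delta, dtype=np.float64)
delta -= 1.0                                  # overdensity field

Pk = PKL.Pk(delta, BoxSize, axis=0, MAS=MAS, threads=4)
k, Pk0 = Pk.k3D, Pk.Pk[:, 0]                  # wave numbers and P(k) monopole
```

Redshift-space versions are obtained by displacing the positions along a chosen axis with the peculiar velocities before the mass assignment.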
### Halo mass function

We now derive the well-known halo mass function (hereafter HMF), which defines the number of halos at a given mass. Firstly, we built the halo catalogue for all of our snapshots using the halo/subhalo structure finder ROCKSTAR (for more information on the subject, refer to the documentation paper (Behroozi et al., 2013)). Afterwards, we built the binned halo mass function, based on the values of \(M_{200c}\) (the mass enclosed in a halo volume whose mean density is 200 times the critical density of the universe \(\rho_{\rm cr}\)). The results are plotted in Figure (9), together with the Sheth-Tormen theoretical prediction for the halo mass function, based on the _Planck2018_ fiducial cosmology and the CAMB power spectrum at \(z=0\). The Sheth-Tormen HMF was computed using the Python package pynbody (Pontzen et al., 2013).

Figure 5: N-body simulations snapshot (CDM over density) for \(f\) (\(Q\)) gravity with best fit MCMC values on different redshifts

From Figure (9) one can notice that, generally, our prediction for the halo mass function from the MOG N-body simulation shows values of \(n\) that approximately coincide with the ones theoretically predicted by the Sheth-Tormen HMF.

## 7 LSS with large \(L_{\rm box}\): comparison

As we previously mentioned, we now perform an analysis of the N-body simulation for a bigger simulation box size, namely \(L_{\rm box}=100h^{-1}{\rm Mpc}\). In that case, we only change the force resolution (\(\epsilon=3.9\,{\rm kpc}\)); the other cosmological parameters are assumed to be the same. At first, we plot, as usual, the CDM overdensity field at vanishing redshift in Figure (10). In addition, we also plot the matter power spectrum for CDM and CDM+baryons in Figure (11). As an obvious consequence of the larger box size, one can notice that the maximum wave number \(k\) grew to \(k_{\rm max}\approx 20\;h/{\rm Mpc}\). Even at such large scales, the matter power spectrum derived from the corresponding N-body simulation converges to the theoretical prediction from the CAMB code with up to sub-percent accuracy.

Figure 6: SPH simulation snapshots of \(f\) (\(Q\)) gravity for gas temperature \(T\) and mean molecular weight \(\mu\)

As we noticed previously for the small simulation box, the axis of the redshift-space distortions had a very small impact on the matter power spectrum. This statement holds for the large \(L_{\rm box}\) as well. In the previous section we discussed the halo mass function for the case of the smaller simulation box. Now we can discuss the same matter for the larger \(L_{\rm Box}\). As it appears, the HMF extracted from the simulation replicates the Sheth-Tormen HMF almost perfectly up to \(M\approx 10^{14}M_{\odot}\) (see Figure (9)). However, at bigger halo masses, our simulated HMF slightly differs from the theoretical prediction, which is commonly observed in N-body simulations.

### Reduced bispectrum from 3PCF

Finally, we also introduce the so-called reduced bispectrum, which is derived from the regular bispectrum and the matter power spectrum via the following relation: \[Q=\frac{B}{P_{1}P_{2}+P_{2}P_{3}+P_{1}P_{3}}, \tag{55}\] where \(P_{i}=P_{m}(k_{i})\). We consequently plot the relation between the reduced bispectra of the large and small cosmological volumes in Figure (12).
Figure 8: Monopole redshift-space distorted two point correlation function with \(L_{\rm Box}=10/100h^{-1}\)Mpc for exponential \(f\) (\(Q\)) modified gravity. For comparison we plot the Quijote simulation correlation function for the Planck fiducial cosmology with a Gpc-wide box. Also, for each case we display the scale at which the BAO bump occurs

Figure 7: Matter power spectrum with/without RSDs for \(f\) (\(Q\)) gravity vs. CAMB linear/non-linear \(P(k)\) for \(\Lambda\)CDM. Dashed N-body \(P(k)\) represents the CDM-only power spectrum, while the solid line represents the CDM+Gas \(P(k)\). Error bars represent Ly \(\alpha\) forest observations at high \(z\)

Figure 9: Halo mass function for \(f\) (\(Q\)) gravitation with \(L_{\rm Box}=10\) Mpc\(/h\) and the theoretical prediction for the HMF by Sheth-Tormen

It is easy to notice that for the smaller wave number (\(k_{1}=5\,h/{\rm Mpc}\)), the relation between the \(Q(k_{1},k_{2},k_{3})\) values of the two cases has a mean value of \(\approx 1.9\) for all bins of the angle \(\theta\) (with its maximum at \(\theta=\pi\), where \(\theta\) is the angle between the two triangle sides \(k_{1}\) and \(k_{2}\)). On the other hand, for relatively big \(k_{1}\) (in our case, \(k_{1}=6h/\)Mpc), the deviation of the large-\(L_{\rm box}\) reduced bispectrum from the small one is smaller, because the wave number span is shifted towards bigger values, while in the first case \(k_{1}=5h/\)Mpc was at the box-size limit of the smaller simulation, which distorted the results and caused the deviation to grow.

### 2PCF for \(f(Q)\) gravitation

In addition to the matter power spectrum/reduced bispectrum and halo mass function, we also derive the two-point correlation function (hereafter 2PCF) in real space for CDM halos. Generally, the 2PCF is defined as follows: \[\xi(|\mathbf{x}_{1}-\mathbf{x}_{2}|)=\langle\delta_{m}(\mathbf{x}_{1})\delta_{m}(\mathbf{x}_{2})\rangle, \tag{56}\] where \(\mathbf{x}_{i}\) is the three-dimensional position of the \(i\)-th CDM halo and \(\delta_{m}\) is the CDM overdensity parameter. We show the monopole redshift-space distorted two-point correlation functions for both the large and small simulation boxes in Figure (8), where we added the 2PCF of the Quijote simulations (Villaescusa-Navarro et al., 2020), which adopt the _Planck_ fiducial cosmology.

Figure 10: Present day snapshot of CDM over density for \(L_{\rm box}=100h^{-1}\)Mpc run

For the sake of completeness, we additionally marked the BAO bumps for each case with color-coded dotted lines. It is obvious that in the case of the small simulation box, the permitted range of \(R\) is very small (up to \(R\approx 2\times 10^{0}h^{-1}\)Mpc) and, because of the small box size, the correlation function is undersampled and does not coincide with the Quijote one. On the other hand, for the \(L_{\rm Box}=100h^{-1}\)Mpc simulation, the correlation function corresponds to the Quijote one with sub-percent accuracy in the range \(R\in[2\times 10^{0},10^{1}]\,h^{-1}\)Mpc. Now we can proceed to the last topic of our consideration, namely the two-dimensional power spectra.

### 2D matter power spectrum

We plot the two-dimensional matter power spectrum for the small and large boxes in Figure (13), with and without the presence of redshift-space distortions. As one can notice, in the plots with RSDs, the so-called "Finger of God" effect is observed, which arises because of the large scatter of galaxy recessional velocities on small scales. Also, it is worth noting that the 2D matter power spectra for both box sizes are very alike. Now, since we have discussed all of the topics for both simulation volumes within the modified theory of gravitation, we can proceed to the concluding remarks on the key findings of our study.
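Before the concluding remarks, a small sketch of the reduced-bispectrum comparison of Eq. (55): given a measured bispectrum \(B(k_{1},k_{2},\theta)\) and an interpolated power spectrum (both obtainable from the density fields built above, e.g. with Pylians3), \(Q\) and the large-to-small-box relation follow directly. The argument `Pk_interp` is an assumed callable, e.g. a `scipy.interpolate.interp1d` built from the measured \(P(k)\).

```python
import numpy as np

def reduced_Q(B, Pk_interp, k1, k2, theta):
    """Reduced bispectrum, Eq. (55); theta is the angle between sides k1 and k2."""
    k3 = np.sqrt(k1**2 + k2**2 + 2.0 * k1 * k2 * np.cos(theta))  # law of cosines
    P1, P2, P3 = Pk_interp(k1), Pk_interp(k2), Pk_interp(k3)
    return B / (P1 * P2 + P2 * P3 + P1 * P3)

def box_relation(Q_large, Q_small):
    # Fractional deviation between the two volumes, as plotted in Figure (12)
    return (Q_large - Q_small) / Q_small
```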
## 8 Conclusions

One can describe gravity using several geometric bases. STEGR, which attributes gravity to the non-metricity tensor, has recently drawn much attention. A fascinating method for studying modified gravity is \(f(Q)\) gravity, an extension of STEGR. This study examined large scale structure formation observables using N-body simulations of \(f(Q)\) gravitation for the first time, to assess the theory's validity in a cosmological context. Simulations were run with the ME-GADGET code, a modification of the widely known GADGET-2 code, for two simulation boxes, namely \(L_{\rm Box}=10h^{-1}\)Mpc and \(L_{\rm Box}=100h^{-1}\)Mpc, to decide on an optimal box size and compare the results for both simulation volumes. We first performed a Markov Chain Monte Carlo (MCMC) analysis for our exponential \(f(Q)\) model to obtain best-fit values of the MOG free parameters in Section (4). To test the fits provided by the MCMC, we obtained theoretical predictions for the dimensionless mass densities \(\Omega_{m0}\) and \(\Omega_{\Lambda 0}\), the Hubble parameter \(H(z)\), the deceleration parameter \(q(z)\), the statefinder pair \(\{r,s\}\) and the \(Om(z)\) parameter, placing the graphical results in Figures (2), (3) and (4), respectively. As we noticed, the Hubble parameter respected low-redshift observations, and the deceleration parameter provided correct values of \(q_{0}\) and of the transition redshift within the constrained range.

Figure 11: Matter power spectrum with/without RSDs for \(f\) (\(Q\)) gravity vs. CAMB linear/non-linear \(P(k)\) for \(\Lambda\)CDM. Dashed N-body \(P(k)\) represents the CDM-only power spectrum, while the solid line represents the CDM+Gas \(P(k)\). Error bars represent Ly \(\alpha\) forest observations at high \(z\). For this case, we have assumed a large simulation box size of \(100h^{-1}\)Mpc

Figure 12: Relation of the reduced bispectrum \(Q(k_{1},k_{2},k_{3})\) for both small and large simulation volumes with \(1\sigma\) deviation, varying \(k_{1}=2k_{2}\)

Moreover, the statefinder diagnostics predict that initially the universe was in a quintessence phase, passed through the \(\Lambda\)CDM state, and returned to quintessence again. Finally, \(Om(z)\) demonstrated that in the high-\(z\) range our universe was filled with phantom fluid, passed the \(\Lambda\)CDM EoS at \(z\approx 2\) and now has a quintessence equation of state. After the theoretical predictions, we started working on the N-body simulations, whose primary findings, corresponding to the quantities of interest, are as follows:

* Three-dimensional matter power spectrum monopole \(P_{k}\): this was the first probe of large-scale structure we studied in the present work. We plotted the non-linear matter power spectra (with/without RSDs) for both small and large simulation volumes in Figures (7) and (11) respectively, where we also plotted the CAMB linear/non-linear fiducial power spectra and observational data from the Ly-\(\alpha\) forest for the sake of comparison. One can notice that within the permitted range of wave number \(k\) (limited by the mean inter-particle separation and the simulation box length), the non-linear matter power spectra from the small/large N-body simulations coincide with the CAMB one. However, for \(L_{\rm Box}=10h^{-1}\)Mpc, the non-linear \(P_{k}\) coincides with the linear CAMB prediction too early because of the small box size.
* Halo Mass Function: the second significant quantity, which can by itself assess the validity of a simulation. We place the Sheth-Tormen theoretical HMF and the ones extracted from our N-body simulations in Figure (9).
As we found, our small box size cannot provide sufficiently massive halos or reproduce a viable halo mass function over all mass ranges up to the resolution limit, while the large simulation follows the Sheth-Tormen prediction very precisely within a large span of halo masses, \(\log_{10}M/M_{\odot}\in[10,14]\), but becomes slightly smaller than the theoretical prediction at higher masses.
* Two-Point Correlation Function monopole \(\xi_{0}(r)\): we also investigate the redshift-space distorted correlation function monopoles in Figure (8), where the Quijote simulations' correlation function is plotted to compare our results to the fiducial cosmology. As remarked during the numerical analysis, the small box simulation fails to predict correct CDM halo correlations. On the other hand, in the range \(R\in[2\times 10^{0},10^{1}]h^{-1}\)Mpc, the large box simulation precisely reconstructs the Quijote data.
* Reduced bispectrum \(Q(k_{1},k_{2},k_{3})\): for the reduced bispectrum case, we plotted the relation \((Q_{\rm Large}-Q_{\rm Small})/Q_{\rm Small}\) for different values of \(k_{1}\) (which acts as a triangle side length) in Figure (12). We observed that this relation is generally close to \(\approx 1.5\) across all bins of the angle between \(k_{1}\) and \(k_{2}\) (namely \(\theta\)) if one assumes a value of \(k_{1}\) that is not at the resolution limit of either case (while it is worth noticing that we only adopted the case where \(k_{2}=2k_{1}\)).

Figure 13: Two-dimensional matter power spectrum for small and large simulations within \(f\) (\(\mathbf{Q}\)) gravitation with/without RSDs

* Two-dimensional matter power spectrum \(P_{m}(k_{\parallel},k_{\perp})\): this is the last quantity extracted from our N-body simulations. We plotted the 2D power spectra for both simulations in Figure (13) with/without redshift-space distortions. In the plots, we noticed the so-called "Finger of God" effect, which is present in the RSD case because of the elongated apparent positions of CDM halos.

In conclusion, considering all the above points, the small simulation volume experiment failed to recreate the matter power spectrum and correlation function correctly. However, the second, more extensive N-body simulation provided viable 3D/2D matter power spectra, correlation functions, and halo mass functions; therefore, we can consider the exponential \(f(Q)\) model to be a viable substitute for the fiducial \(\Lambda\)CDM cosmology, since it not only satisfies the many large scale structure constraints mentioned above, but also provides the correct distance modulus up to high redshift values, where \(\Lambda\)CDM fails. In following papers, it will be interesting to investigate this modified gravity model using hybrid N-body and SPH simulations that incorporate supernova/AGN feedback, star and galaxy formation, jets, etc., using the code GIZMO, which allows the use of a tabulated Hubble parameter and effective gravitational constant. It will, however, require far more computational resources (on the scale of millions of CPU hours).

## Data Availability Statement

There are no new data associated with this article.

## Acknowledgements

Sokoliuk O. performed the work in the frame of the "Mathematical modeling in interdisciplinary research of processes and systems based on intelligent supercomputer, grid and cloud technologies" program of the NAS of Ukraine.
The authors gratefully acknowledge the computing time provided on the high performance computing facility, Sharanga, at the Birla Institute of Technology and Science - Pilani, Hyderabad Campus. PKS acknowledges the Science and Engineering Research Board, Department of Science and Technology, Government of India for financial support to carry out Research project No. CRG/2022/001847. SA acknowledges BITS-Pilani, Hyderabad campus for the Institute fellowship. We are grateful to the referee and to the editor for illuminating suggestions that have significantly improved our work in terms of research quality and presentation.
2308.15624
Detection of Mild Cognitive Impairment Using Facial Features in Video Conversations
Early detection of Mild Cognitive Impairment (MCI) leads to early interventions to slow the progression from MCI into dementia. Deep Learning (DL) algorithms could help achieve early non-invasive, low-cost detection of MCI. This paper presents the detection of MCI in older adults using DL models based only on facial features extracted from video-recorded conversations at home. We used the data collected from the I-CONECT behavioral intervention study (NCT02871921), where several sessions of semi-structured interviews between socially isolated older individuals and interviewers were video recorded. We develop a framework that extracts spatial holistic facial features using a convolutional autoencoder and temporal information using transformers. Our proposed DL model was able to detect the I-CONECT study participants' cognitive conditions (MCI vs. those with normal cognition (NC)) using facial features. The segments and sequence information of the facial features improved the prediction performance compared with the non-temporal features. The detection accuracy using this combined method reached 88% whereas 84% is the accuracy without applying the segments and sequences information of the facial features within a video on a certain theme.
Muath Alsuhaibani, Hiroko H. Dodge, Mohammad H. Mahoor
2023-08-29T20:45:41Z
http://arxiv.org/abs/2308.15624v1
# Detection of Mild Cognitive Impairment Using Facial Features in Video Conversations ###### Abstract Early detection of Mild Cognitive Impairment (MCI) leads to early interventions to slow the progression from MCI into dementia. Deep Learning (DL) algorithms could help achieve early non-invasive, low-cost detection of MCI. This paper presents the detection of MCI in older adults using DL models based only on facial features extracted from video-recorded conversations at home. We used the data collected from the I-CONECT behavioral intervention study (NCT02871921), where several sessions of semi-structured interviews between socially isolated older individuals and interviewers were video recorded. We develop a framework that extracts spatial holistic facial features using a convolutional autoencoder and temporal information using transformers. Our proposed DL model was able to detect the I-CONECT study participants' cognitive conditions (MCI vs. those with normal cognition (NC)) using facial features. The segments and sequence information of the facial features improved the prediction performance compared with the non-temporal features. The detection accuracy using this combined method reached 88% whereas 84% is the accuracy without applying the segments and sequences information of the facial features within a video on a certain theme.
## 1 Introduction

We hypothesize that patterns in the facial features of those with MCI make them distinguishable from those with normal cognition. We propose a DL method that detects cognitive conditions in older adults using their facial features in the I-CONECT dataset. The purpose of this paper is to show that the facial features and interaction information of the participants during the semi-structured interviews are impacted by the cognitive conditions of the participants. The remainder of this paper is structured as follows. Section 2 reviews related work on detecting MCI using facial features. Section 3 describes the dataset and explains the preprocessing procedure and feature extraction. Section 4 provides the experimental implementation and results.
Finally, Section 5 concludes this paper with a discussion and future research directions.

## 2 Related Work

In this section, we review past studies in the field of computer vision on determining human cognition mainly from facial features, including eye movements and facial expressions. **Eye Features**: Several studies have investigated the differences in eye features between NC and MCI [19, 20, 21]. These eye features include eye movements, gaze, and blinking. Alzahrani _et al._ [22] computed an eye aspect ratio from six points of participants' facial landmarks extracted using the Dlib [23] and Openface [24] libraries, which have pre-trained models for facial feature extraction. They calculated the eye blinking rate for every participant and applied basic machine learning algorithms to predict participants' cognitive conditions. Because they used two different pre-trained models, they reported a 10% accuracy difference between the models. Chen _et al._ [20] analyzed the eye movement patterns of participants while performing face recognition tasks. They found that older adults tend to focus on the center of the target face, especially the elderly with low Montreal Cognitive Assessment (MoCA) scores. Furthermore, Nam _et al._ [25] recorded participants' faces while they watched emotion-evoking videos to analyze the correlation between eye and head movements. Using Openface [24], they extracted the eye and head movements and analyzed these data in order to observe the attention and concentration decline of dementia patients. They concluded that AD patients tend to have eye movements in the same vertical direction as their heads. Haque _et al._ [26] developed a deep learning model that estimates participants' eye gaze during memory-triggering tasks in a clinical setting. They were able to distinguish cognitively impaired participants by tracking their eye gaze and viewing time during the tasks. **Facial Expression**: Studies have shown that cognitive impairment does affect a person's facial expression [27, 28]. Using computer vision approaches, researchers have extracted facial action units [29] or estimated facial expressions using deep learning methods [30] to predict participants' cognitive conditions. Jiang _et al._ [31] recorded participants' faces while undergoing a memory test in a clinical setting. They extracted participants' facial expressions using a Convolutional Neural Network (CNN) pre-trained on a popular facial expression dataset that is not dedicated to elderly faces. The facial expressions estimated for the video frames were averaged into a subject-level facial expression, and traditional machine learning algorithms were implemented to classify cognitively impaired participants from cognitively unimpaired ones. They also considered the viewing time of the targeted region. Fei _et al._ [30] extracted participants' facial expressions while the participants watched emotional stimuli. The authors computed the occurrence of emotions among elderly participants with different cognitive conditions, selected the time periods of emotional occurrence with the largest differences between MCI and NC, and used a traditional machine learning algorithm to classify participants' conditions. Gerlowska _et al._ [32] showed no differences in the frequency of emotions expressed by patients and healthy controls; an emotional functioning pattern among older adults was noticed despite cognitive dysfunctions.
The analysis was based on mimicking six basic emotions that were shown on a screen to the subjects, and on manually selecting the 10-15 most intense seconds. Tanaka _et al._ [29] extracted facial landmarks using Openface [24] from participants while they responded to an agent system. They tracked facial landmarks that represent facial action units during the participants' responses to the agent, and developed an algorithm to extract talking segments using the sequence of facial landmarks that surround the mouth. Using this approach, they were able to classify participants with dementia using traditional machine learning methods. **Holistic Facial Features**: There are limited works that use holistic facial features to detect participants' cognitive conditions. Umeda-Kameyama _et al._ [33] implemented deep learning models to detect cognitive impairment from participants' facial images; spatial facial features were consequently extracted for the training and evaluation of the model. They found that the model had better prediction performance when using the lower half of participants' facial images. Lee _et al._ [34] applied a two-stream ConvNet using spatial and motion features to classify participants' conditions with frame sequence intervals of 10 frames per segment. We focused on studies that adopted computer vision algorithms for facial feature extraction, either implementing DL models trained on large datasets to capture these features or proposing a model built solely for the purpose of detecting the participants' cognitive status. Regardless of the adopted methods, the participants' facial features led to revealing their cognitive conditions. In most of the works, participants were in a clinical setting in which the study staff had control over the room lighting and the viewpoint of participants' faces [30, 31]. In contrast, video recordings of participants' faces in a home setting make automated facial feature extraction very challenging; we have faced some of these challenges, and additional ones will be discussed later in this paper. In addition to the works reviewed above, facial landmarks extracted using pre-trained models do not achieve highly accurate results on the elderly, especially older adults with cognitive impairment [35, 36]. Therefore, the detection of facial landmarks still needs to be improved for the elderly. This motivated our approach, in which we extracted holistic facial features using a Convolutional Autoencoder (CAE).

## 3 Materials and Methods

In this section, we introduce the I-CONECT dataset and our approach to preprocessing the participants' appearance during the interviews. We also explain our method of spatial and temporal feature extraction.

### Dataset

The Internet-Based Conversational Engagement Clinical Trial (I-CONECT) video-recorded semi-structured interviews between socially isolated older participants and trained conversational staff [18]. Participants had to be at least 75 years old and living in socially isolated environments to be included in this study. These participants were clinically diagnosed with either MCI or NC at the baseline. For each session, the participant and interviewer have a specific theme to discuss, such as pets, foods, history, etc. Each session is approximately 30 minutes in duration. A total of 158 themes of video chats were recorded among the 69 subjects who were assigned to the intervention (i.e. video-chat) group.
We selected four themes for this study based on the number of participants with recordings. Table 1 shows an overview of the selected themes (Summertime, Self-Care, Halloween, Cities and Towns) with the trial's week number, the total number of participants that attended, and the number of selected videos based on the quality evaluation, which we discuss in Section 3.2. Demographic characteristics of participants vary between themes, since some participants missed a theme's session or their videos were excluded from the current study due to quality issues. Since the selected participants of the themes are inconsistent, Table 2 shows the demographics by theme. In our study, we made sure that videos were analyzed separately by topic, because topics trigger specific emotions that do affect facial features. The participants were assessed for their cognitive status (i.e., MCI vs. NC) at three time points (baseline, Month 6, Month 12). We used the cognitive status assessed closest to the video chats: in our experiment, the clinical status assessed at the beginning of the trial, except for the Halloween theme, which occurred near the six-month (24 weeks) evaluation.

\begin{table} \begin{tabular}{l c c c} \hline \hline Themes & Trial's Week & Total Participants & Selected Videos \\ \hline Summertime & 1 & 66 & 30 \\ Self-Care & 2 & 60 & 30 \\ Halloween & 23 & 59 & 32 \\ Cities and Towns & 9 & 59 & 39 \\ \hline \hline \end{tabular} \end{table} Table 1: Themes general information.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Themes & Age, Years (mean) (SD) & Education (mean) (SD) & Condition (MCI/NC) & Gender (M/F) \\ \hline Summertime & 80.7 (4.75) & 15.8 (2.79) & 14/16 & 8/22 \\ Self-Care & 80.5 (4.24) & 15.4 (2.63) & 16/14 & 11/19 \\ Halloween & 80.3 (3.73) & 15.3 (2.5) & 14/18 & 12/20 \\ Cities and Towns & 80.7 (4.31) & 15.5 (2.59) & 19/20 & 15/24 \\ \hline \hline \end{tabular} \end{table} Table 2: Demographic information of participants for selected themes.

Figure 1: The Preprocessing Steps of Participants' Videos.

### Data preprocessing

The I-CONECT dataset requires preprocessing to make it suitable for training our model. The preprocessing steps include: selecting the participants' appearance during the videos, face extraction, quality evaluation, and facial feature extraction. The videos display the participants' IDs on screen during the facial appearance; thus, optical character recognition (OCR) helped to determine the desired frames from the video with less computation and higher confidence. We used CRAFT [37], a pre-trained model that performs scene text detection on natural images. Because the videos have different frame rates, we fixed the frame extraction rate. We performed a downsampling of the original frames per second (fps), which ranged from 30 fps, down to 10 fps. In our experiment, we chose a target frame rate of 10 fps, following the frame-shift value from Equation 1 to perform frame selection. \[FrameShift=Rounddown\left(\frac{fps_{original}}{fps_{target}}\right) \tag{1}\] In order to detect participants' faces, we used the RetinaFace pre-trained model [38] to detect and extract the subjects' faces. Figure 1 shows the preprocessing steps to extract the facial features from the participant video interviews. To ensure that all extracted faces belong to participants, the detected faces are referenced by face size and coordinates within the frames. Faces within the created area of interest are kept, which results in better face extraction of the participants.
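The frame-selection rule of Equation 1 amounts to keeping every FrameShift-th frame. A minimal sketch follows, assuming OpenCV (cv2) for video decoding; the function and variable names are illustrative, not part of the study's codebase.

```python
import cv2  # assumes opencv-python is installed

def select_frames(video_path, fps_target=10):
    """Downsample a video by keeping every FrameShift-th frame (Equation 1)."""
    cap = cv2.VideoCapture(video_path)
    fps_original = cap.get(cv2.CAP_PROP_FPS)
    # Equation 1: FrameShift = floor(fps_original / fps_target)
    frame_shift = max(1, int(fps_original // fps_target))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream (or unreadable file)
            break
        if index % frame_shift == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# e.g. a 30 fps interview video gives FrameShift = 3: frames 0, 3, 6, ... are kept.
```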
We randomly evaluated the majority of the extracted faces from every video. The quality evaluation criteria are as follows:

1. Very good: all the faces are clear and the lighting is good.
2. Good: all the faces are clear and the lighting is acceptable.
3. Ok: most of the faces are clear, or the extracted faces need further processing to be enhanced.
4. Poor: some of the extracted faces are unclear, e.g. a screen obstacle covers part of the face.
5. Very poor: most of the extracted faces are not acceptable due to lighting conditions, or the subjects' faces are barely detectable.

This step is performed manually because it is challenging to automate. Participants' videos that do not meet the quality standard are dropped; we set the passing threshold at ratings of very good and good.

### Unsupervised learning

**Facial Features**: Since there are no pre-trained CNN models suitable for the elderly, especially those with cognitive impairment [36], we trained an autoencoder neural network, which is well suited to extracting meaningful features from unlabeled data. The autoencoder consists of three components: the encoder, the decoder, and the latent code. We integrated a CNN into the encoder and decoder of the autoencoder to extract the subjects' facial features at the frame level. We applied a CAE with the ResNet-50 architecture [39] as the encoder and the reversed architecture of ResNet-50 as the decoder, with a latent feature size of 128. Table 3 shows the architecture of ResNet-50 with the output sizes of the network layers for our input face size. Using this approach, we generated a feature vector for every frame; the feature vectors should embed facial expression, head pose, eye gaze, and general facial features. The extracted faces are inconsistent in width and height; thus, they are resized to a width and height of 96. This face size is set to avoid upsampling of participants' faces. **Feature analysis**: It is challenging to interpret the latent features of an autoencoder in terms of the original image. For example, which vector elements represent facial expressions, poses, or eye gazes remains ambiguous; in fact, this is an ongoing research topic. Nevertheless, we show that the reconstructed image preserves the facial features of the original image. Figure 2 shows examples of a participant's face with different facial features encoded and decoded using our CAE model after training. Since we are dealing with videos, consecutive frames should have similar features, with only slight changes based on the participant's facial movement, which could represent changes in facial expression and eye features as well. The latent feature vectors embed participants' facial features based on their interaction during the interviews, and the CAE preserves the related facial attributes in the latent feature vectors. In order to verify the relatedness of facial features within these vectors, we calculated the cosine similarity among various facial representations of the embedding features of the same subject. Figure 3 shows the latent feature vector similarities among selected facial frames of the same participant during the same theme discussion. We set a reference facial embedding vector with a neutral facial expression, a clear face presentation, and a relatively ideal head pose during the interview. This empirical result shows the differences in the facial feature representation. The vector with the largest difference occurs when the participant raised a paper that covered part of the face, which threw off the embedding features. Although the cosine similarity ranges from -1 to 1, where 1 is the same vector and -1 the opposite vector, all these similarities have values above 0, which indicates that they are similar in the facial features of the same participant.
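To make the pipeline concrete, the sketch below shows the encoder to 128-d latent to decoder round trip and the cosine-similarity check of Figure 3 in PyTorch. It is a deliberately small stand-in: the model described above uses a ResNet-50 encoder and a reversed ResNet-50 decoder, and only the input size (\(96\times 96\)) and the latent size (128) below match the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCAE(nn.Module):
    """Toy convolutional autoencoder: 96x96x3 face -> 128-d latent -> 96x96x3.

    A small stand-in for the ResNet-50 encoder / reversed-ResNet-50 decoder
    described in the text; only the interface (latent size 128) matches."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 96 -> 48
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 48 -> 24
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 24 -> 12
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, latent_dim),
        )
        self.decoder_fc = nn.Linear(latent_dim, 128 * 12 * 12)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 12 -> 24
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 24 -> 48
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 48 -> 96
        )

    def forward(self, x):
        z = self.encoder(x)
        h = self.decoder_fc(z).view(-1, 128, 12, 12)
        return self.decoder(h), z

model = TinyCAE()
faces = torch.rand(4, 3, 96, 96)      # a dummy batch of face crops
recon, latents = model(faces)
loss = F.mse_loss(recon, faces)       # MSE objective, as in the paper

# Cosine similarity of each latent vector against a reference (cf. Figure 3)
reference = latents[0]
sims = F.cosine_similarity(latents, reference.unsqueeze(0), dim=1)
print(loss.item(), sims.shape)        # scalar loss, torch.Size([4])
```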
\begin{table} \begin{tabular}{c c c} \hline layer name & output size & 50-layer \\ \hline conv1 & \(48\times 48\) & \(7\times 7\), 64, stride 2 \\ conv2\_x & \(24\times 24\) & \(3\times 3\) max pool, stride 2; \([1\times 1,\,64;\ 3\times 3,\,64;\ 1\times 1,\,256]\times 3\) \\ conv3\_x & \(12\times 12\) & \([1\times 1,\,128;\ 3\times 3,\,128;\ 1\times 1,\,512]\times 4\) \\ conv4\_x & \(6\times 6\) & \([1\times 1,\,256;\ 3\times 3,\,256;\ 1\times 1,\,1024]\times 6\) \\ conv5\_x & \(3\times 3\) & \([1\times 1,\,512;\ 3\times 3,\,512;\ 1\times 1,\,2048]\times 3\) \\ & \(1\times 1\) & average pool, 128-d fc, softmax \\ \hline \end{tabular} \end{table} Table 3: The ResNet-50 architecture for a \(96\times 96\) input size.

### Temporal information

**Segments and sequences**: We extracted features, fed to the model, that represent segments and sequences of the participants' interaction. We define a segment as consecutive frames within a video that have extracted faces; a segment ends when three consecutive frames do not show the participant's face as the main face in the frame. We refer to a sequence as a set of consecutive frames. In the selection of the sequences and segments of the videos, a sequence must present facial features at all of its intervals; thus, a segment is dropped if its number of frames is less than the sequence size. During the feature arrangement, we made sure that consecutive frames are assigned to the same sequence. The video has three main parts: the participant's face, the interviewer's face, and the theme introduction slides. We are interested in the participants' faces; thus, we kept the participants' frames and labeled their appearance, as shown in Figure 4. We also show a snippet of the numbers of segments and sequences for participants with different cognitive conditions: Figure 5 plots the segment and sequence indices within a video. Although this is a small example of the sequence and segment information, the figure suggests that MCI participants have less continuous talking time compared to the NC participants, indicating that interviewers tend to follow up more. **Temporal features**: Transformers were introduced by [40] for machine translation. Although transformers are widely used in the Natural Language Processing field, the computer vision field has also benefited from the concept, applying it to images and videos [41, 42]. The idea of representing a frame as a token should help the prediction of the subjects' conditions. Transformers have proven their capability in various deep-learning applications; they can capture temporal information from sequential frame features thanks to the attention mechanism represented in Equation 2, where Q, K, and V are query, key, and value, respectively.
Transformers also use multi-head attention to perform a parallel computation of the attention mechanism, where the key, query, and value are split into a number of heads and concatenated after self-attention is applied. Equation 3 shows the computation of the attention scores, where self-attention is calculated for every head and \(W^{O}\) is a learnable parameter multiplied by the concatenation of all the heads; the heads themselves are given in Equation 4. \[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}\] \[MultiHead(Q,K,V)=Concat(head_{1},head_{2},\ldots,head_{h})W^{O} \tag{3}\] where \[head_{i}=Attention(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i}) \tag{4}\] Transformers are able to capture the dependencies of the input data, so the representation of the feature vectors and classification tokens affects the behavior of the model. Thus, we created a classification token for every sequence, which is a common practice for sequence classification using transformers [41, 43]. The classification token is fed to a classifier to perform sequence classification of the input; through the self-attention mechanism and the positional embedding layer, this token learns the label of the input sequence. One of the important components of transformers is the positional embedding layer, which preserves the positions of the sequential representation for the attention mechanism. This layer inspired us to add the sequence and segment representations of every video to the transformer; this positioning of the facial features could yield a hierarchical representation of a participant's whole video.

Figure 2: Examples of a participant's face and autoencoder-reconstructed ones.

Figure 3: The vector cosine similarities between several latent feature vectors.

## 4 Experiments and Results

In this section, we present our experiment implementation details, then explain the detection performance of our model. Finally, we study the model performance while changing some of its hyperparameters in the ablation study.

### Implementation

We implemented our models using Python 3.8.10 and PyTorch 1.12.0+cu102 and ran the experiments on an NVIDIA GeForce GTX 1080 GPU. The CAE model has a latent size of 128 with input and reconstructed images of size \(96\times 96\). The model was trained for 32 epochs using the Adam optimizer and the Mean Square Error (MSE) loss function. The embedded facial features / latent feature vectors are updated until the MSE between the original face and the reconstructed one reaches a small value. The transformer was implemented in the same environment and trained for 40 epochs with the Adam optimizer and the weighted binary cross-entropy (BCE) loss function shown in Equation 5. The model consists of four transformer encoder layers, three positional embedding layers (sequential, sequence, and segment), and a classifier. The encoder has a hidden dimension equal to the latent feature vector size, 128, and two heads. The classifier is a multilayer perceptron (MLP) with three fully connected layers of sizes [64, 32, 2]. The model is trained with a dropout value of 0.2. Figure 6 shows the transformer diagram. The first stage arranges the facial feature sequences and appends a classification token. The second stage adds the positional embeddings of the sequences and applies the transformer encoder. The third stage contains the MLP, which takes the classification token output and makes a binary classification decision after a sigmoid function maps the model output. \[Weight\,BCE=-(\beta\,y_{i}log(p_{i})+(1-y_{i})log(1-p_{i})) \tag{5}\] where \(\beta\) is the weight assigned to the positive class.
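Equations 2-4 can be transcribed almost verbatim; the sketch below is a minimal PyTorch version with \(d_{model}=128\) and two heads, matching the encoder dimensions above (the classification token and the three positional embedding layers of the full model are omitted for brevity).

```python
import math
import torch

def attention(Q, K, V):
    """Equation 2: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, n_heads=2):
    """Equations 3-4: project, split into heads, attend, concatenate, project."""
    B, L, d_model = Q.shape
    d_head = d_model // n_heads

    def split(x, W):  # (B, L, d_model) -> (B, n_heads, L, d_head)
        return (x @ W).view(B, L, n_heads, d_head).transpose(1, 2)

    heads = attention(split(Q, W_q), split(K, W_k), split(V, W_v))
    concat = heads.transpose(1, 2).reshape(B, L, d_model)  # Concat(head_1..head_h)
    return concat @ W_o

# Toy usage: one sequence of 15 latent feature vectors (d_model = 128)
x = torch.randn(1, 15, 128)
W = [torch.randn(128, 128) / 128**0.5 for _ in range(4)]
out = multi_head_attention(x, x, x, *W)
print(out.shape)  # torch.Size([1, 15, 128])
```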
In our sequence approach, we defined a sequence as \(M\in\mathbb{R}^{l\times 128}\), where \(l\) is the sequence size (15) and \(F\in\mathbb{R}^{128}\) is the latent feature vector. The segments are defined as \(S=\{M_{1},M_{2},\ldots,M_{s}\}\), and the video is represented as \(V=\{S_{1},S_{2},\ldots,S_{e}\}\) or \(\{M_{1},M_{2},\ldots,M_{n}\}\). The positional embeddings are calculated as \(P=P_{M}+P_{S}+P_{p}\). Eventually, the model input is \(Z=z+P\), where \(z\) refers to the latent feature vectors. \(P\in\mathbb{R}^{128}\) is the overall positional embedding that captures the sequential representation of the feature vectors, because self-attention in transformers does not consider the temporal order of the facial features. By the same principle, the positions of the sequences and segments within a video are captured in this layer. The purpose of this implementation is to obtain a hierarchical representation of every participant's video, as we evaluate the model at the participants' video level.

### Results

We separated the training and evaluation of the model based on the interview themes; thus, all results refer to the target theme. Accuracy, F1 score, and Area under the ROC Curve (AUC) are the evaluation metrics. The accuracy reflects the percentage of participants that were correctly classified during testing. The F1 score is a combined score of precision and recall. AUC measures the model's ability to distinguish between the binary cognitive conditions.

Figure 4: The frame extraction from a video with segment and sequence labeling.

Figure 5: An example of the number of segments and sequences of participants with different cognitive conditions.

We randomly separated the participants into 10 folds and performed cross-validation, ensuring that participants from both cognitive conditions were represented in the test fold during the random assignment. The procedure is to classify a video based on the percentage of correctly classified sequences within it: we label a video as correctly classified if the majority of the sequences within it are correctly classified.
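This video-level decision rule is a plain majority vote over the sequence-level predictions; a minimal sketch, with labels assumed to be 0 (NC) and 1 (MCI):

```python
from collections import Counter

def classify_video(sequence_predictions):
    """Label a video by majority vote over its sequence-level predictions.

    sequence_predictions: iterable of 0/1 labels, one per sequence in the video.
    """
    counts = Counter(sequence_predictions)
    return counts.most_common(1)[0][0]

# e.g. 7 of 10 sequences predicted MCI (label 1) -> the video is labeled MCI
print(classify_video([1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))  # 1
```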
Table 4 shows the baseline performance of our model, which considers the embedded facial features but not the positions of the sequences and segments of the videos. The segment and sequence positional embeddings improved the detection of the participants' cognitive conditions, and the combination of both positions performed better than either of them alone. In fact, applying only one of these positions could drop the detection accuracy, as seen for the Halloween theme, where adding only one type of positional information degrades performance. The Self-care theme has the lowest detection performance among the themes, which is impacted by the quality of the participants' facial appearance during the interview. We observe that, by adding the segment and sequence information, the transformer helped the model analyze the participants' interaction behavior, i.e. the length of their responses to the interviewers' questions, regardless of which theme is discussed. The results show that adding the segment and sequence information does improve the overall detection of the participants' condition. Thus, MCI is detectable using facial features alone, and the interaction information further boosts the model performance. Other studies have detected MCI participants in the I-CONECT dataset using different modalities, with the main focus on speech signals. Table 5 shows the best results of those studies and compares them with our top performance among the themes in this study. Asgari _et al._ [44] extracted Linguistic Inquiry and Word Count (LIWC) features from participants' transcribed data and classified the participants. Chen _et al._ [45] used a topic-based method to detect participants with MCI. Tang _et al._ [46] integrated acoustic and linguistic markers from the participants' speech to distinguish the participants.

### Ablation study

In this section, we evaluate the model's detection performance under variations of the model hyperparameters. We show the effects of different sequence sizes, overlapping of sequences, and the loss function. **Sequence size**: we studied the effect of changing the sequence size when feeding the transformer with participants' facial features. Table 6 shows the performance of the model for sequence sizes of 15, 20, and 25. Note that changing the sequence size changes the total number of sequences per participant and the number of segments; the facial temporal features extracted by the transformer are affected by the sequence size as well. A sequence size of 15 performed overall better than sizes of 20 and 25. The Cities and Towns theme performance was sustained across all sequence sizes. The prediction on the Self-care theme improved slightly at a sequence size of 20 but dropped at a sequence size of 25. **Sequence overlapping**: we studied the effect of sequence overlapping while keeping the sequence size fixed. Table 7 shows the accuracies for 0%, 20%, and 40% overlap when selecting sequences of facial features during training and evaluation. This study suggests that our sequences already covered the majority of participants' features during the interviews. Overlapping increases the number of sequences while keeping the number of segments fixed; thus, the accuracies decreased with overlapping, except for the Cities and Towns theme. Considering the Cities and Towns performance shown in Table 4, the accuracy of adding segment information was similar to that of adding both sequence and segment information; however, the increase in sequence information led the model to a better prediction of the participants' cognitive conditions. In other words, sequence augmentation improved the prediction for the Cities and Towns theme. **Loss function**: although participants' conditions are balanced for all selected themes, the number of sequences representing each participant varies, so class imbalance arises in the training of every fold; the weighted binary cross-entropy loss function compensates for it. Here, we compare the weighted BCE loss function against the plain (unweighted) BCE loss function of Equation 6. Table 8 shows the accuracies for the different themes when changing the loss function: across all themes, the accuracies improved by at least 2.6% with the weighted BCE loss function. This loss function acts as if the class with fewer samples had more samples, based on a ratio weight that is updated for every fold during training. \[BCE=-\frac{1}{N}\sum_{i=1}^{N}(y_{i}log(p_{i})+(1-y_{i})log(1-p_{i})) \tag{6}\]
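Equation 5 takes a few lines to implement directly; equivalently, PyTorch's built-in BCEWithLogitsLoss exposes a pos_weight argument that weights the positive class exactly as \(\beta\) does. The class ratio below is illustrative, not a value from our folds.

```python
import torch
import torch.nn as nn

def weighted_bce(p, y, beta):
    """Equation 5: -(beta * y * log(p) + (1 - y) * log(1 - p)), batch-averaged."""
    eps = 1e-7
    p = p.clamp(eps, 1 - eps)  # guard against log(0)
    return -(beta * y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()

# Equivalent built-in form (expects raw logits rather than probabilities);
# beta would be set to the per-fold negative/positive sample ratio.
beta = torch.tensor([3.0])  # illustrative ratio
criterion = nn.BCEWithLogitsLoss(pos_weight=beta)

logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
# The two values agree (up to the clamp epsilon):
print(criterion(logits, targets), weighted_bce(torch.sigmoid(logits), targets, beta))
```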
Our work has several limitations, especially regarding video quality. In our approach, the manual quality check is one of the challenges, as it takes a considerably long time; an automated computer-based approach would make the evaluation faster, so that more themes could be included in the study. The number of eliminated videos within a theme is concerningly large, mainly due to video quality. Video quality does not only refer to the typical technical challenges, such as room lighting, viewpoint, and facial appearance, but also to whether the full face is visible during the interview, because some participants did not show their faces for the majority of the video. Therefore, older adults are having challenges in handling computer interme

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Themes} & \multicolumn{9}{c}{sequence size} \\ \cline{2-10} & \multicolumn{3}{c}{15} & \multicolumn{3}{c}{20} & \multicolumn{3}{c}{25} \\ \cline{2-10} & Accuracy & F1 & AUC & Accuracy & F1 & AUC & Accuracy & F1 & AUC \\ \hline Summertime & **83.3\%** & 0.85 & 0.83 & 80\% & 0.82 & 0.79 & 77\% & 0.8 & 0.76 \\ Self-care & 66.7\% & 0.58 & 0.66 & **70\%** & 0.66 & 0.7 & 60\% & 0.54 & 0.59 \\ Halloween & **87.5\%** & 0.89 & 0.87 & 81.2\% & 0.83 & 0.81 & 81.2\% & 0.84 & 0.8 \\ Cities and Towns & **79.5\%** & 0.78 & 0.8 & 79.5\% & 0.78 & 0.8 & 79.5\% & 0.76 & 0.8 \\ \hline \hline \end{tabular} \end{table} Table 6: The evaluation of different sizes of sequences.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Themes} & \multicolumn{3}{c}{sequence overlapping percentage} \\ \cline{2-4} & 0\% & 20\% & 40\% \\ \hline Summertime & **83.3\%** & 80\% & 80\% \\ Self-care & **66.7\%** & 63.3\% & 60\% \\ Halloween & **87.5\%** & 84.4\% & 75\% \\ Cities and Towns & 79.5\% & 79.5\% & **82.1\%** \\ \hline \hline \end{tabular} \end{table} Table 7: Detection comparison between sequence overlapping.

Table 8: Comparison between applying weighted BCE and BCE as a model loss function.

## 5 Conclusion

In this paper, we demonstrated a method to detect older adults with MCI using their facial features in video-recorded semi-structured interviews from the I-CONECT study. The method is based on extracting spatial features and temporal information using a CAE and the self-attention mechanism of transformers. We showed that the interaction timing of the subjects' facial features is important, as it helped boost the performance of our model in distinguishing the cognitive conditions among participants. We conclude that using segment and sequence indices within a video does improve the prediction of the participants' conditions.
Participants' facial features vary during the topic discussion. Thus, future work will study the contribution of sequences to cognitive condition detection among the participants, including the speech data and automated video quality assessment using DL algorithms.

## CRediT authorship contribution statement

**Muath Alsuhaibani:** Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft. **Hiroko H. Dodge:** Investigation, Resources, Validation, Writing - review & editing. **Mohammad H. Mahoor:** Investigation, Resources, Supervision, Project administration, Validation, Writing - review & editing.

## Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgements

This project was supported by the following federal funds from the National Institute of Health (NIH) in the United States to Dr. Hiroko Dodge: R01AG051628, R01AG056102, RF1AG672448. Dr. Mohammad Mahoor received a grant from the Colorado Office of Economic Development and International Trade in support of this research project.
2306.15388
On reachability categories, persistence, and commuting algebras of quivers
For a finite quiver $Q$, we study the reachability category $\mathbf{Reach}_Q$. We investigate the properties of $\mathbf{Reach}_Q$ from both a categorical and a topological viewpoint. In particular, we compare $\mathbf{Reach}_Q$ with $\mathbf{Path}_Q$, the category freely generated by $Q$. As a first application, we study the category algebra of $\mathbf{Reach}_Q$, which is isomorphic to the commuting algebra of $Q$. As a consequence, we recover, in a categorical framework, previous results obtained by Green and Schroll; we show that the commuting algebra of $Q$ is Morita equivalent to the incidence algebra of a poset, the reachability poset. We further show that commuting algebras are Morita equivalent if and only if the reachability posets are isomorphic. As a second application, we define persistent Hochschild homology of quivers via reachability categories.
Luigi Caputi, Henri Riihimäki
2023-06-27T11:26:47Z
http://arxiv.org/abs/2306.15388v2
# On reachability categories and commuting algebras of quivers ###### Abstract. For a finite connected quiver \(Q\), we study the reachability, or incidence, category \(\mathbf{Reach}_{Q}\). We investigate the properties of \(\mathbf{Reach}_{Q}\) from both a categorical and a topological viewpoint. In particular, we compare \(\mathbf{Reach}_{Q}\) with \(\mathbf{Path}_{Q}\), i.e. the category freely generated by \(Q\). Then, we study the category algebra of \(\mathbf{Reach}_{Q}\), which is isomorphic to the _commuting algebra_ of \(Q\). As a consequence, we recover, in a categorical framework, previous results obtained by Green and Schroll: we prove that the commuting algebra of \(Q\) is Morita equivalent to the incidence algebra of a poset, the reachability poset, and that its global dimension is bounded. We further provide applications to graph theory and homology theories of quivers: passing to reachability categories induces a functorial condensation of graphs, which, in turn, yields invariance of certain algebraic invariants such as Hochschild homology.

Department of Mathematics, University of Torino - [email protected]

## 1. Introduction

Finite dimensional algebras (over a field \(\mathbb{K}\) of characteristic \(0\)), and their representations, have been classically studied through finite quivers \(Q\). The bridge is given by the path algebras \(\mathbb{K}Q\). The general result is that finite dimensional (basic) algebras can be obtained, up to isomorphism, as quotients of path algebras by _admissible_ ideals; e.g. ideals generated by differences of paths in a quiver \(Q\). Among the various options, the _parallel ideal_, generated by differences of directed paths with the same source and target, has been of utmost importance. Taking the quotient of a path algebra by its parallel ideal has a categorical counterpart. In fact, one can define the category with objects the vertices of a quiver \(Q\), and with set of morphisms \(\operatorname{Hom}(v,w)\) the set of all paths in \(Q\) between \(v\) and \(w\). The resulting category \(\mathbf{Path}_{Q}\), called the _path category_ of \(Q\), is the category freely generated by \(Q\). Other quotients of path algebras can also be obtained as quotients of path categories by relations imposed on the commuting morphisms. There is a broad spectrum of categories \(\mathbf{C}\) that can be associated to a quiver \(Q\) this way; the path category with the trivial quotient can be seen at one end of this spectrum. A further classical step, algebraically, is to consider the associated category algebra \(R\mathbf{C}\) over a ring \(R\). In this paper, we consider a category at the other end of this spectrum of categories generated by a finite quiver, namely the _reachability category_, or _incidence category_, \(\mathbf{Reach}_{Q}\). \(\mathbf{Reach}_{Q}\) is obtained by imposing the relation of _reachability_ on the Hom-sets of \(\mathbf{Path}_{Q}\). More precisely, we set \(\operatorname{Hom}(v,w)\) in \(\mathbf{Reach}_{Q}\) to contain a unique morphism if and only if there exists a morphism \(v\to w\) in \(\mathbf{Path}_{Q}\), i.e. if \(w\) can be reached from \(v\) via a path in \(Q\) (see Definition 3.1). The resulting reachability category is thin, in particular an \(EI\)-category (i.e. every endomorphism is an isomorphism), and the described process can be seen as a "categorification" of taking the quotient of \(\mathbb{K}Q\) by its parallel ideal.
Then, as every thin category is equivalent to a poset, we construct, for a given finite quiver \(Q\), the _reachability poset_ \(\mathcal{R}(Q)\) associated to \(Q\). Category algebras of equivalent categories are in particular Morita equivalent; hence, we obtain a categorical enhancement of the following result: **Theorem 1.1**.: _Let \(Q\) be a finite quiver, and \(\mathbb{K}\) be a field. Then, the quotient of the path algebra \(\mathbb{K}Q\) of \(Q\) by its parallel ideal \(C\) is Morita equivalent to an incidence algebra._ This result (see Theorem 5.3), to the authors' knowledge, is due to Green and Schroll [1], and was proved by a direct algebraic analysis. Our aim in this paper is to bring forth the categorical methods in obtaining such results, which facilitate the proofs rather straightforwardly. The quotient algebras lying in between \(R\mathbf{Path}_{Q}\) and \(R\mathbf{Reach}_{Q}\) might be difficult to understand in any particular case. However, if a relevant equivalence of categories is established, within the spectrum bounded by the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\), then Morita equivalence results follow immediately. We believe that analysing the combinatorics of a quiver might lead one to see the categorical equivalences quite transparently; for example, in the case studied in the present paper, \(\mathcal{R}(Q)\) collapses all the strongly connected components of \(Q\) into single vertices. After introducing the notion of reachability category, we proceed with studying both the categorical and topological properties of this category, and its relation to \(\mathbf{Path}_{Q}\). We point out here that a limited amount of the combinatorial information of the quiver is lost when passing to the reachability category; for this reason, we complement the work with a discussion on a (topological) comparison between \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) (see e.g. Proposition 3.6). If we denote by \(\operatorname{diam}(Q)\) the maximal length across all directed paths in \(\mathcal{R}(Q)\), we directly obtain the following corollary (also found in [13, Theorem 6.4]). **Corollary 1.2** (Corollary 5.7).: _Let \(Q\) be a finite quiver. Then, \(\operatorname{gl.dim}\mathbb{K}Q/C\leq\operatorname{diam}(Q)\)._ We proceed with a perspective on applications to graphs, to their representations, and to homology theories of directed graphs or, more generally, of quivers. From this perspective, the aforementioned Morita equivalence preserves homology theories, such as Hochschild or cyclic homology [11]. As going from quivers to reachability categories is functorial on the nose (cf. Theorem 3.13), the categorical equivalence also yields a functorial version of the operation of condensation of directed graphs. We obtain the following (see Proposition 5.9): **Proposition 1.3**.: _Reachability of quivers induces a functor \(\mathbf{Quiver}\to\mathbf{Quiver}\) which associates to a quiver \(Q\) the condensation of its transitive closure._ We conclude with a more historical note. Theorem 1.1, in fact, should be viewed in the context of Rota's work on incidence algebras [10]. Since Rota's seminal paper, incidence algebras have become objects of intensive interest, with a special appeal in combinatorics and representation theory. For example, it is a result of Stanley [13] that going from locally finite posets to incidence algebras is conservative.
This means that if the incidence algebras of two given posets are isomorphic, then the posets themselves are also isomorphic. Other characterisations and structural theorems about incidence algebras are well known, such as those provided in [14] or, more recently, in [15]. In our framework, incidence algebras are in particular category algebras of posets, seen as categories. Then, driven by the above considerations, this work contributes to, and complements, previously existing results on structural properties of incidence algebras, by employing a categorical framework. As path algebras can be defined for any quiver, it is natural to ask what the suitable notion of "incidence algebra" is for general quivers. The categorical viewpoint enables us to gain a deeper understanding of the process of going from posets to incidence algebras, and to extend it to the whole category of quivers. Our contribution to this part of the story, in the spirit of Stanley's work, is Corollary 5.5. Finally, the notion of incidence algebras has also been generalised in different ways to various, apparently unrelated, contexts. To the authors' taste, maybe the most fascinating application of incidence algebras is the proof that Hochschild homology is simplicial, due to Gerstenhaber and Schack [13] (see also [12, 11, 1]). As a consequence of Gerstenhaber and Schack's result, Hochschild homology groups of incidence algebras are isomorphic to classical homology groups of simplicial complexes. By virtue of this connection, establishing that a given algebra is isomorphic, or Morita equivalent, to the incidence algebra of a suitable poset is then interesting not just for structural characterisations, but also from a topological viewpoint. Furthermore, this is also interesting in relation to applications of Hochschild, cyclic and \(K\)-theoretic methods in generalised persistent homology, cf. [1, 2], and it is well known that cyclic homology theories, or algebraic \(K\)-theory, are Morita invariant. In applying such methods to data in the form of directed graphs, it is often important, in view of a reasonable computability, that the graphs at hand are acyclic. In most real cases, graphs are not acyclic, and constructing acyclic graphs is rarely functorial. Our Proposition 1.3 can be seen as a functorial alternative when working with similar problems.

### Acknowledgements

The authors wish to thank Ehud Meir for his useful comments and feedback on the first draft of the paper. LC wishes to warmly thank Francesco Vaccarino for valuable discussions on the topic.

## 2. Quivers and categories

In this section, we collect some basic notions about quivers, and recall the main definitions needed in what follows. Let \(\mathbf{2}\) denote the category with objects \(E\) and \(V\), and two non-identity morphisms \(s,t\colon E\to V\), called _source_ and _target_. Let \(\mathbf{Fin}\) be the full subcategory of \(\mathbf{Set}\) of finite sets. Then, a (finite) _quiver_ is a functor \(Q\colon\mathbf{2}\to\mathbf{Fin}\). By a quiver, unless otherwise specified, we will usually mean a _finite_ quiver. Equivalently, a finite quiver can be represented as a directed graph with a set of vertices \(V\) and a set of directed edges \(E\). For each edge \(e\in E\), the source and target maps describe the source \(s(e)\) and the target \(t(e)\) of \(e\); we will graphically represent \(e\) by an arrow \(s(e)\longrightarrow t(e)\). When using this representation, we will also denote a quiver by the quadruple \((V,E,s,t)\).
We denote edges \(e\) by ordered pairs \((v,w)\) of vertices corresponding to the source \(v\) and the target \(w\) of \(e\). Note that _self-loops_, i.e. edges \((v,v)\), and multiple edges between two vertices are allowed. Morphisms of quivers are natural transformations of functors. The category \(\mathbf{Quiver}\) of finite quivers and morphisms of quivers is the functor category \(\mathbf{Fun}(\mathbf{2},\mathbf{Fin})\). _Remark 2.1_.: Let \(Q=(V,E,s,t)\) and \(Q^{\prime}=(V^{\prime},E^{\prime},s^{\prime},t^{\prime})\) be two quivers. A morphism \(f\colon Q\to Q^{\prime}\) boils down to a pair of maps \(f_{V}\colon V\to V^{\prime}\) and \(f_{E}\colon E\to E^{\prime}\) such that the two squares \(s^{\prime}\circ f_{E}=f_{V}\circ s\) and \(t^{\prime}\circ f_{E}=f_{V}\circ t\) commute. Note that morphisms of quivers can collapse edges to self-loops: consider, for example, the quiver \(Q\) with vertices \(0,1\) and a single edge \(0\to 1\), and the quiver \(Q^{\prime}\) with one vertex and a self-loop; the morphism identifying the two vertices sends the edge of \(Q\) to the self-loop of \(Q^{\prime}\). We can associate to any quiver \(Q\) a small category \(\mathbf{Path}_{Q}\), called the _path category_ of \(Q\). The path category has the vertices of \(Q\) as its set of objects. The set of morphisms between the vertices \(v\) and \(w\) consists of all possible _paths_ in \(Q\) from \(v\) to \(w\). For each vertex \(v\), the trivial path with an empty sequence of edges is taken to be the identity morphism \(1_{v}\) at \(v\). The path category is then the category freely generated by \(Q\). There is a forgetful functor \(U\) from the category \(\mathbf{Cat}\) of small categories and functors to the category of (possibly infinite) quivers, obtained by forgetting which arrows are the identities and which the compositions. This forgetful functor has a left adjoint, which is the free functor sending a quiver \(Q\) to \(\mathbf{Path}_{Q}\) (see e.g. [10, Section II.7]). The category \(\mathbf{Path}_{Q}\) can in some cases be difficult to work with. For example, if the quiver \(Q\) contains cycles, i.e. directed paths from a vertex \(v\) to itself, then the free functor forces the category \(\mathbf{Path}_{Q}\) to have infinitely many morphisms. As quivers are rarely acyclic, in Section 3 we will introduce another family of categories naturally associated to quivers, which is the main object of study of this paper. Our main results in Section 5 deal with the notion of category algebras, a classical source of invariants of categories; we recall the definition. **Definition 2.4**.: Let \(\mathbf{C}\) be a category and \(R\) be a commutative ring with unity. The _category algebra_ \(R\mathbf{C}\) is the free \(R\)-module with basis the set of morphisms of \(\mathbf{C}\). The product on the basis elements is given by \[f\cdot g=\begin{cases}f\circ g&\text{when the composition exists in $\mathbf{C}$}\\ 0&\text{otherwise}\end{cases}\] and it is then linearly extended to the whole of \(R\mathbf{C}\). The category algebra \(R\mathbf{C}\) is an associative \(R\)-algebra. If \(\mathbf{C}\) has finitely many objects, then \(R\mathbf{C}\) is also unital; the unit is given by \(\sum_{c\in\mathbf{C}}1_{c}\), where \(1_{c}\) is the identity endomorphism of the object \(c\) in \(\mathbf{C}\). The definition of the category algebra is a generalisation of the classical definition of the group algebra: if \(G\) is a group, seen as a category with a single object and \(G\) as morphisms, then the associated category algebra is nothing but the classical group algebra. We also have the following classical examples. **Example 2.5**.: When \(\mathbf{C}\) is the path category \(\mathbf{Path}_{Q}\), the category algebra \(R\mathbf{Path}_{Q}\) is the classical _path algebra_ of \(Q\). Recall that every poset \((P,\leq)\) can be seen as a category \(\mathbf{P}\) in a standard way: there is a unique morphism \(p\to q\) if and only if \(p\leq q\). For a poset \(P\), Rota introduced the concept of _incidence algebra_, cf. [11, Section 3]; this is the algebra generated by the relations \(p\leq q\), with the convolution product. Equivalently, the incidence algebra of a poset is the quotient of its path algebra with respect to the parallel ideal [10]. Incidence algebras provide other examples of category algebras: **Example 2.6**.: Let \(P\) be a finite poset and \(\mathbf{P}\) its associated category. Then, the category algebra \(R\mathbf{P}\) of \(\mathbf{P}\) is isomorphic to the incidence algebra of \(P\).
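The product of Definition 2.4 is easy to experiment with on a machine. Below is a minimal Python sketch, with the morphisms of a thin category (here the poset \(a\leq b\leq c\) of Example 2.6) encoded as source-target pairs; the encoding and names are illustrative.

```python
from collections import defaultdict

# A finite thin category: the poset a <= b <= c, with morphisms encoded
# as (source, target) pairs; identities are the pairs (x, x).
morphisms = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "c"), ("a", "c")}

def compose(f, g):
    """f . g (first g, then f), defined when target(g) == source(f)."""
    (g_src, g_tgt), (f_src, f_tgt) = g, f
    if g_tgt == f_src and (g_src, f_tgt) in morphisms:
        return (g_src, f_tgt)
    return None  # the composition does not exist

def product(x, y):
    """The product of Definition 2.4, extended bilinearly: f * g = f . g or 0."""
    out = defaultdict(int)
    for f, cf in x.items():
        for g, cg in y.items():
            h = compose(f, g)
            if h is not None:
                out[h] += cf * cg
    return dict(out)

# (1_a + e_ab) * 1_a = 1_a + e_ab, since e_ab . 1_a = e_ab and 1_a . 1_a = 1_a.
x = {("a", "a"): 1, ("a", "b"): 1}
print(product(x, {("a", "a"): 1}))  # {('a', 'a'): 1, ('a', 'b'): 1}
```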
We also have the following classical examples.

**Example 2.5**.: When \(\mathbf{C}\) is the path category \(\mathbf{Path}_{Q}\), then the category algebra \(R\mathbf{Path}_{Q}\) is the classical _path algebra_ of \(Q\). Recall that every poset \((P,\leq)\) can be seen as a category \(\mathbf{P}\) in a standard way: there is a unique morphism \(p\to q\) if and only if \(p\leq q\). For a poset \(P\), Rota introduced the concept of _incidence algebra_, cf. [11, Section 3]; this is the algebra generated by the relations \(p\leq q\), with convolution product. Equivalently, the incidence algebra of a poset is the quotient of its path algebra with respect to the parallel ideal [10]. Incidence algebras provide other examples of category algebras:

**Example 2.6**.: Let \(P\) be a finite poset and \(\mathbf{P}\) its associated category. Then, the category algebra \(R\mathbf{P}\) of \(\mathbf{P}\) is isomorphic to the incidence algebra of \(P\). A category \(\mathbf{C}\), via the forgetful functor \(U\), can be regarded also as a quiver; hence, we can form the path category \(\mathbf{Path}_{\mathbf{C}}\).

_Remark 2.7_.: By [11, Proposition 2.2.6], the obvious functor \(\phi\colon\mathbf{Path}_{\mathbf{C}}\to\mathbf{C}\) induces a surjective homomorphism \(\phi\colon R\mathbf{Path}_{\mathbf{C}}\to R\mathbf{C}\); its kernel is generated by the elements \[(\alpha_{2}\star\alpha_{1})-(\alpha_{2}\circ\alpha_{1})\,\] where \(\alpha_{1}\) and \(\alpha_{2}\) are composable morphisms of \(\mathbf{C}\), \(\alpha_{2}\star\alpha_{1}\) denotes their concatenation, seen as a path of length two in \(\mathbf{Path}_{\mathbf{C}}\), and \(\alpha_{2}\circ\alpha_{1}\) their composite in \(\mathbf{C}\). Then, this map induces a natural isomorphism of \(R\)-algebras between \(R\mathbf{Path}_{\mathbf{C}}/\ker(\phi)\) and \(R\mathbf{C}\). When \(\mathbf{C}\) is a poset \(P\), seen as a category, then the map \(\phi\) in Remark 2.7 has the path algebra of \(P\) as domain, and the incidence algebra of \(P\) as target. Hence, the induced isomorphism gives back the equivalent definition of incidence algebra as quotient of the path algebra by the parallel ideal. Furthermore, if the base ring is a field, the parallel ideal is zero if and only if \(P\) is a tree (as a poset, i.e. if for each \(p\in P\), the set \(\{s\in P\mid s<p\}\) is well-ordered); see also [10]. This happens if and only if the incidence algebra of \(P\) is hereditary. We can summarise it as follows:

_Remark 2.8_.: Let \(P\) be a finite poset. Then, its associated path algebra and incidence algebra are isomorphic if and only if \(P\) is a tree. Recall that a quiver \(Q\) is _strongly connected_ if it contains a path from \(x\) to \(y\) and from \(y\) to \(x\), for every pair of vertices \(x\) and \(y\). A subquiver \(Q^{\prime}\subset Q\) is a _strongly connected component_ of \(Q\) if it is strongly connected and maximal with respect to this property, in the sense that no vertices or edges can be included in \(Q^{\prime}\) without it becoming non-strongly connected. For a directed graph \(\mathcal{G}\), the _condensation_ \(c(\mathcal{G})\) of \(\mathcal{G}\) is the directed acyclic graph with the strongly connected components of \(\mathcal{G}\) as its vertices. Two vertices \(X\) and \(Y\) have a directed edge \((X,Y)\) in \(c(\mathcal{G})\) if there is an edge \((x,y)\) in \(\mathcal{G}\) for some \(x\in X\) and \(y\in Y\). Observe that, by definition, the condensation of a directed cycle is the quiver with one vertex and a self-loop. When taken in the category of directed graphs, where the morphisms are edge preserving, condensation does not always yield a functor (see e.g. the text after [1, Remark 1.11]).
However, it is easy to see that condensation yields an endofunctor in \(\mathbf{Quiver}\).

**Lemma 2.9**.: _Condensation \(c\colon\mathbf{Quiver}\to\mathbf{Quiver}\) is a functor._

Proof.: Let \(f\colon Q\to Q^{\prime}\) be a morphism of finite quivers. Incidence relations are preserved, and by Lemma 2.3, \(f\) sends directed paths to directed paths, hence it maps each strongly connected component of \(Q\) into a strongly connected component of \(Q^{\prime}\). Therefore, when passing to the condensation, we get a morphism \(c(f)\colon c(Q)\to c(Q^{\prime})\). It is now easy to see that this respects compositions and identities.

## 3. The reachability category of a quiver

In this section, we introduce the reachability category of a finite quiver. We provide a comparison with the classical path category and study some algebraic and topological properties.

### The category \(\mathbf{Reach}_{Q}\)

Let \(Q\) be a finite quiver.

**Definition 3.1**.: The _incidence_, or _reachability_, category \(\mathbf{Reach}_{Q}\) is the category with objects the vertices of \(Q\), and for \(v,w\in Q\), the Hom-set \(\mathbf{Reach}_{Q}(v,w)\) is defined as follows: \[\mathbf{Reach}_{Q}(v,w)\coloneqq\begin{cases}*&\text{if there is a path from $v$ to $w$ in $Q$}\\ \emptyset&\text{otherwise}\end{cases}\] The Hom-set \(\mathbf{Reach}_{Q}(v,v)\) contains exactly one morphism, the identity on \(v\), given by the trivial path at \(v\). The reference to incidence algebras will be clearer in the next sections. We will mainly use the term _reachability_ because of the analogy with graph theory. In graph theory, in fact, the notion of reachability of a vertex \(w\) from a vertex \(v\) refers to the existence of a path from \(v\) to \(w\). Definition 3.1 is the direct categorical extension of this notion. Recall that a category \(\mathbf{C}\) is called _thin_ if for any pair of objects \(c,c^{\prime}\in\mathbf{C}\) there is at most one morphism \(c\to c^{\prime}\) between them.

**Lemma 3.2**.: _The reachability category of a quiver \(Q\) is a thin category._

Proof.: By definition, \(\mathbf{Reach}_{Q}(v,w)\) has a single morphism if and only if there is a path in \(Q\) from \(v\) to \(w\). Hence, there is at most one morphism between each pair of objects. Recall that an \(EI\)-category is a category in which every endomorphism is an isomorphism.

**Corollary 3.3**.: _The reachability category of a quiver \(Q\) is an \(EI\)-category._

Proof.: Let \(h\colon x\to x\) be an endomorphism in \(\mathbf{Reach}_{Q}\). By thinness, \(\mathbf{Reach}_{Q}(x,x)\) contains at most one morphism; since it always contains the identity \(1_{x}\), we get \(h=1_{x}\), which is an isomorphism. Hence every endomorphism is an isomorphism.

The reachability category is strictly related to the path category of \(Q\). It is therefore expected that, in some cases, these categories are isomorphic. Before providing the full characterisation, we can see that an isomorphism of categories holds for polytrees. Recall that a directed graph, or a quiver, is called a _polytree_ if its underlying undirected graph is a tree. Then, the following is straightforward from the definitions:

_Remark 3.4_.: If \(Q\) is a polytree, then the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic. To see this, take any two vertices \(v\) and \(w\) of \(Q\).
By the polytree condition, there is at most one directed path from \(v\) to \(w\), and both the Hom-sets \(\mathbf{Reach}_{Q}(v,w)\) and \(\mathbf{Path}_{Q}(v,w)\) contain at most one morphism. As the two categories share the same set of objects, it follows that we can construct an isomorphism between them. Remark 3.4 does not give a complete characterisation of quivers \(Q\) for which \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic categories. For example, consider the quiver on four vertices and four directed edges as illustrated in Figure 2. Then, it is easy to see that \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic also in this case, even though \(Q\) is not a polytree.

Figure 2. An alternating quiver on four vertices.

In order to completely characterise the family of finite quivers for which the path category and the reachability category are isomorphic, we introduce the definition of _quasi-bigons_. Let \(B_{m,n}\) be the quiver illustrated in Figure 3. We use the convention that, when \(m,n=0\), then \(B_{0,0}\) denotes the quiver on vertices \(x\) and \(y\), with two edges from \(x\) to \(y\) and no other intermediate vertex.

Figure 3. The quiver \(B_{m,n}\).

**Definition 3.5**.: We say that \(B\) is a quasi-bigon of a quiver \(Q\) if it is a subquiver of \(Q\) isomorphic to \(B_{m,n}\) for some \(m,n\geq 0\). If \(B\) is the whole quiver \(Q\), we will simply say that \(Q\) is a quasi-bigon.

**Proposition 3.6**.: _Let \(Q\) be a finite connected quiver. Then, the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic if and only if \(Q\) does not contain directed cycles nor quasi-bigons._

Proof.: \(\Leftarrow\): Take any two vertices \(v\) and \(w\) of \(Q\). As there are no directed cycles, nor quasi-bigons in \(Q\), it follows that there is at most one directed path from \(v\) to \(w\). Then, the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic. \(\Rightarrow\): Assume that the two categories are isomorphic. They both have the vertices of \(Q\) as set of objects, and there is a bijection between \(\mathbf{Path}_{Q}(v,w)\) and \(\mathbf{Reach}_{Q}(v,w)\) for each pair of vertices \(v,w\) of \(Q\). Assume first \(v=w\); then \(\mathbf{Reach}_{Q}(v,v)\) contains only the identity of \(v\). Likewise, in \(\mathbf{Path}_{Q}(v,v)\) there is only a single morphism, implying the non-existence of directed cycles at \(v\). Assume now \(v\neq w\). Then, either \(\mathbf{Reach}_{Q}(v,w)=\emptyset=\mathbf{Path}_{Q}(v,w)\), and there are no directed paths in \(Q\) from \(v\) to \(w\), or \(\mathbf{Reach}_{Q}(v,w)=\{\phi\}=\mathbf{Path}_{Q}(v,w)\) for a \(v-w\) path \(\phi\). In the latter case, there is precisely one directed path in \(Q\) from \(v\) to \(w\). Hence, there are no quasi-bigons in \(Q\).

Note that Proposition 3.6 provides a categorical enhancement of Remark 2.8. Furthermore, the proof suggests an algorithmic way to check whether the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic. This boils down to checking whether there are simple paths creating cycles or quasi-bigons; a direct implementation of this check is sketched below.
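Concretely, the criterion of Proposition 3.6 can be tested by verifying that the quiver is acyclic and that there is at most one directed path between any ordered pair of vertices: two distinct directed paths between the same pair of vertices in an acyclic quiver always contain a quasi-bigon. A possible sketch, assuming the quiver is given as a networkx `MultiDiGraph` (the function name is ours):

```python
import networkx as nx

def path_reach_isomorphic(Q: nx.MultiDiGraph) -> bool:
    """Test the criterion of Proposition 3.6: no directed cycles and no
    quasi-bigons, i.e. Q is acyclic with at most one directed path
    between any ordered pair of vertices."""
    if not nx.is_directed_acyclic_graph(Q):
        return False  # a directed cycle exists
    order = list(nx.topological_sort(Q))
    for v in Q.nodes:
        counts = {v: 1}  # number of directed paths starting at v
        for u in order:
            c = counts.get(u, 0)
            if c == 0:
                continue
            for _, w, _ in Q.out_edges(u, keys=True):
                counts[w] = counts.get(w, 0) + c
                if counts[w] > 1:
                    return False  # two v -> w paths: a quasi-bigon
    return True
```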
To make it more rigorous and algorithmic, we can use the notion of contractions. Recall that the _contraction_ of a quiver \(Q\) with respect to the edge \(e\) is the quiver \(Q/e\) obtained from \(Q\) by contracting \(e\) to a point. In other words, the edge \(e=(v,v^{\prime})\) is removed, and its source \(v\) and its target \(v^{\prime}\) are identified into a new vertex \(w\); edges incident to \(v\) or \(v^{\prime}\), in \(Q\), are set to be incident to \(w\) in \(Q/e\). In this work, we will only allow contractions of edges of type \((v,w)\) with \(v\neq w\); this means that we do not allow contractions of self-loops. More generally, one can consider contractions of simple paths, see also [1, Section 5.1.1]. We shall consider here contractions of maximal simple paths. Let \(\Gamma=(e_{0},e_{1},\ldots,e_{n})\) be a _simple_ path of a quiver \(Q\) that is maximal with respect to inclusion, i.e. there is no other directed simple path \(\Gamma^{\prime}\) in \(Q\) properly containing \(\Gamma\). We will use the following notion:

**Definition 3.7**.: For a finite quiver \(Q\), the _path contraction_ of the simple path \(\Gamma=(e_{0},e_{1},\ldots,e_{n})\) in \(Q\) is the quiver \(Q/\Gamma\) obtained from \(Q\) by contracting all the edges of \(\Gamma\), but \(e_{0}\). Note that, if \(\Gamma\) is a single edge not contained in any longer simple path, then the path contraction of \(\Gamma\) in \(Q\) yields back the quiver \(Q\). A computational sketch of path contraction is given at the end of this subsection.

**Example 3.8**.: If \(Q\) is the quiver \(B_{m,n}\) and \(\Gamma\) is the simple path \(((x,v_{1}),(v_{1},v_{2}),\ldots,(v_{m},y))\) represented in Figure 3, then the path contraction of \(\Gamma\) in \(B_{m,n}\) is isomorphic to the quiver \(B_{0,n}\); analogously, the path contraction of the path \(((x,w_{1}),\ldots,(w_{n},y))\) in \(B_{m,n}\) is isomorphic to the quiver \(B_{m,0}\). We now want to iterate the procedure of path contraction described in Definition 3.7, so as to run it across all possible simple paths. Consider an ordering on the set of maximal simple paths of \(Q\), and call this set \(\mathfrak{P}\).

**Definition 3.9**.: The _path reduction_ of a quiver \(Q\), with respect to a given ordering on its maximal simple paths, is the quiver \(P(Q)\) obtained from \(Q\) by path contraction of each element of \(\mathfrak{P}\). Note that contractions are not morphisms in the category \(\mathbf{Quiver}\). Nevertheless, composition of contractions is commutative up to isomorphism of quivers, and the procedure of iteratively contracting maximal simple paths yields again a quiver.

**Example 3.10**.: Let \(Q\) be a directed cycle with at least two edges. Then, its path reduction is isomorphic to the quiver with two vertices and two directed edges with opposite orientations. Analogously, the path reduction of a quasi-bigon \(B_{m,n}\) is \(B_{0,0}\). Observe that, in general, directed cycles in a quiver yield either multiple edges or self-loops; see also Figure 4. Observe that the path reduction of a finite quiver \(Q\) has no simple paths of length \(\geq 2\), and no directed cycles of length \(\geq 3\). An orientation \(o\) on an unoriented graph \(G\) is called _alternating_ if there exists a partition \(V\sqcup W\) of \(V(G)\) such that all elements of \(V\) have indegree \(0\) and all elements of \(W\) have outdegree \(0\) (cf. [1, Definition 2.7]), see also Figure 2. We call a quiver \(Q\) alternating if its orientation is alternating. The existence of an alternating orientation is equivalent to \(G\) being a bipartite graph (cf. [1]).

**Example 3.11**.: Let \(Q\) be an alternating quiver. Then, \(Q\) is isomorphic to its path reduction. In fact, in an alternating quiver there are no simple paths of length \(2\).

Figure 4. A quiver and its path reduction.
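The path contraction of Definition 3.7 admits a direct implementation: contract every edge of the given simple path except the first, by identifying all vertices of the path other than its source. A sketch, again assuming a networkx `MultiDiGraph` and a path given as a list of `(source, target, key)` triples (the function name is ours):

```python
import networkx as nx

def path_contraction(Q: nx.MultiDiGraph, path) -> nx.MultiDiGraph:
    """Path contraction (Definition 3.7): contract all edges of the simple
    path but the first, identifying the vertices v1, ..., v_{n+1}."""
    contracted = set(path[1:])                    # edges e1, ..., en
    verts = [e[0] for e in path] + [path[-1][1]]  # v0, v1, ..., v_{n+1}
    rep = {x: verts[1] for x in verts[1:]}        # send v1..v_{n+1} to v1
    node = lambda x: rep.get(x, x)
    Qc = nx.MultiDiGraph()
    Qc.add_nodes_from({node(x) for x in Q.nodes})
    for u, w, k in Q.edges(keys=True):
        if (u, w, k) not in contracted:
            Qc.add_edge(node(u), node(w))         # may create self-loops
    return Qc
```

Iterating this function over an ordered list of maximal simple paths gives the path reduction \(P(Q)\) of Definition 3.9, up to isomorphism of quivers.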
The effect of taking the path reduction of a quiver is that it reduces the length of simple paths in \(Q\). Contractions do not create new cycles, and preserve the homotopy type of the quivers. A quiver is called _simple_ if it has no self-loops nor multiple edges. Then, Proposition 3.6 directly implies the following alternative condition for path categories and reachability categories to be isomorphic:

**Corollary 3.12**.: _Let \(Q\) be a finite connected quiver, equipped with an ordering of its maximal simple paths. Then, the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic if and only if the path reduction \(P(Q)\) of \(Q\) is a simple alternating quiver._

Proof.: By Proposition 3.6, the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are isomorphic if and only if \(Q\) does not contain directed cycles nor quasi-bigons. Now observe that \(Q\) does not contain directed cycles nor quasi-bigons if and only if, for any chosen ordering on its maximal simple paths, the path reduction of \(Q\) is an alternating directed graph, with no self-loops nor multiple edges.

We conclude this section by showing that going from quivers to reachability categories is functorial. Let \(\mathbf{Thin}\) denote the category of small thin categories, which is a full subcategory of the category of small categories \(\mathbf{Cat}\).

**Theorem 3.13**.: _Taking the reachability category yields a functor_ \[\operatorname{Reach}\colon\mathbf{Quiver}\to\mathbf{Thin}\] _from the category of quivers to the category of thin categories._

Proof.: The functor Reach sends a quiver \(Q\) to its associated reachability category \(\mathbf{Reach}_{Q}\). By Lemma 2.3, a morphism \(\phi\colon Q\to Q^{\prime}\) of quivers sends a directed path in \(Q\) to a directed path in \(Q^{\prime}\). Observe also that, by definition of quivers, \(\phi\) is a natural transformation. Then, define \(\operatorname{Reach}(\phi)\colon\mathbf{Reach}_{Q}\to\mathbf{Reach}_{Q^{\prime}}\) to be the functor induced by the natural transformation \(\phi\) on the objects, and such that, to a non-trivial morphism \(f\colon v\to w\) in \(\mathbf{Reach}_{Q}\), it associates the unique morphism between \(\phi(v)\) and \(\phi(w)\). It is easy to see that Reach respects compositions of morphisms of quivers, and that it does yield a functor from the category of quivers to the category of small categories. Now, by Lemma 3.2, the reachability category is a thin category; hence, the functor just described has as target the category of thin categories.

_Remark 3.14_.: We point out here that the reachability category \(\mathbf{Reach}_{Q}\) can alternatively be constructed by taking quotients of the Hom-sets of the path category \(\mathbf{Path}_{Q}\). In fact, one can define a functor \(F\colon\mathbf{Path}_{Q}\to\mathbf{Path}_{Q}/\!\!\sim\) that is the identity on objects, and such that the congruence \(\sim\) collapses all paths between two objects to a single morphism. However, this construction would lead us to define functorial quotients on \(\mathbf{Cat}\), seen as a \(2\)-category; Theorem 3.13 instead provides a direct proof of the functoriality of Reach.

### Topological properties

In this subsection, we proceed with a comparison of path categories and reachability categories from the lens of topology. To do so, we study the nerve of these categories. In the follow-up, by geometric realisation \(|Q|\) of a quiver \(Q\), we shall mean the geometric realisation of \(Q\) as an undirected graph (i.e.
by forgetting the directions of the edges and then taking the realisation of the obtained CW-complex). First, we recall the following:

**Proposition 3.15** ([11, Ex. 4.3]).: _The classifying space \(|\mathrm{N}(\mathbf{Path}_{Q})|\) of the nerve of the path category of a quiver \(Q\) has the homotopy type of the geometric realisation \(|Q|\)._

Therefore, the classifying space of the nerve \(\mathrm{N}(\mathbf{Path}_{Q})\) has the homotopy type of a wedge of circles, and its homology yields the same homology as \(Q\). In order to get more refined invariants, one can twist the homology groups, for example by using coefficients in a functor [10]. Alternatively, one can deform the underlying category. Recall that a functor \(F\colon\mathbf{C}\to\mathbf{D}\) is an equivalence of categories if and only if \(F\) is full, faithful, and for all \(d\in\mathbf{D}\), there exists \(c\in\mathbf{C}\) such that \(F(c)\simeq d\). Furthermore, recall that an equivalence of categories induces a homotopy equivalence between the respective nerves. We start with the following examples:

**Example 3.16**.: Consider the quiver \(Q\) consisting of four vertices and four edges as illustrated in Figure 5. Observe that, in virtue of Proposition 3.6, the associated path category and reachability category are not isomorphic. By Proposition 3.15, the nerve of \(\mathbf{Path}_{Q}\) is homotopy equivalent to the circle \(S^{1}\). The nerve of \(\mathbf{Reach}_{Q}\), on the other hand, is contractible since \(\mathbf{Reach}_{Q}\) has an initial object \(0\). The graph representations of these two categories are illustrated in Figure 6, where we have omitted the identity morphisms on the vertices.

**Example 3.17**.: Consider the quiver \(L_{n}\) illustrated in Figure 7. Again, in virtue of Proposition 3.6, the associated path category and reachability category are not isomorphic. The nerve of its path category is homotopy equivalent to \(\bigvee_{i=1}^{n}S^{1}\). However, the associated reachability category is equivalent to the category with one object and one identity morphism. In fact, imposing the reachability condition implies that the morphisms \(v_{i}\to v_{i+1}\) and \(v_{i+1}\to v_{i}\) are inverses of one another. As an equivalence of categories preserves the homotopy type of the nerve, we get \(|\mathrm{N}(\mathbf{Reach}_{L_{n}})|\simeq*\). Motivated by the previous example, we can state the following:

**Proposition 3.18**.: _Let \(Q\) be a strongly connected quiver. Then, \(\mathbf{Reach}_{Q}\) is contractible._

Proof.: Choose an object \(q\) of \(Q\), and let \(\mathbf{1}\) be the category with one object and a single identity morphism. Then, the functor \(F\colon\mathbf{1}\to\mathbf{Reach}_{Q}\) sending the unique object to \(q\) is an equivalence of categories: since \(Q\) is strongly connected, every Hom-set of \(\mathbf{Reach}_{Q}\) is a singleton. Therefore, the nerve of \(\mathbf{Reach}_{Q}\) is homotopy equivalent to the nerve of \(\mathbf{1}\), which is contractible.

By Proposition 3.6 and Proposition 3.15, the nerve \(|\mathrm{N}(\mathbf{Reach}_{Q})|\) of a finite connected quiver \(Q\) with no directed cycles nor quasi-bigons is homotopy equivalent to the geometric realisation \(|Q|\). On the other hand, if \(Q\) contains directed cycles or quasi-bigons, and \(H\) is a strongly connected component of \(Q\), then by Proposition 3.18, \(\mathbf{Reach}_{H}\) is contractible. We have seen that taking the reachability category is functorial. Therefore, the inclusion of \(H\) in \(Q\) induces a functor \(\mathbf{Reach}_{H}\to\mathbf{Reach}_{Q}\), hence a continuous map \(*\simeq|\mathrm{N}(\mathbf{Reach}_{H})|\to|\mathrm{N}(\mathbf{Reach}_{Q})|\) which is a cofibration.
In other words, replacing a strongly connected component in \(Q\) with a single vertex does not change the homotopy type of \(|\mathrm{N}(\mathbf{Reach}_{Q})|\) (see also [1, Lemma 10.2]).

Figure 5. The quiver \(Q\).

Figure 6. The path category and the reachability category of the quiver \(Q\) in Figure 5.

Figure 7. The quiver \(L_{n}\).

Although, by Proposition 3.6, the categories \(\mathbf{Path}_{Q}\) and \(\mathbf{Reach}_{Q}\) are not isomorphic when \(Q\) contains directed cycles, we can generally get rid of the directed cycles:

**Proposition 3.19**.: _Let \(Q\) be a finite connected quiver with no quasi-bigons. Then, there is a homotopy equivalence_ \[|\mathrm{N}(\mathbf{Path}_{c(Q)})|\simeq|\mathrm{N}(\mathbf{Reach}_{Q})|\,\] _where \(c(Q)\) is the condensation of \(Q\)._

Proof.: The condensation of \(Q\) does not have directed cycles nor quasi-bigons, hence \(|\mathrm{N}(\mathbf{Path}_{c(Q)})|\) and \(|\mathrm{N}(\mathbf{Reach}_{c(Q)})|\) are homotopy equivalent. Proceeding as in the proof of Proposition 3.18, collapsing the directed cycles of \(Q\) does not change the homotopy type of \(\mathbf{Reach}_{Q}\), hence the nerves \(|\mathrm{N}(\mathbf{Reach}_{Q})|\) and \(|\mathrm{N}(\mathbf{Reach}_{c(Q)})|\) are homotopy equivalent. The statement follows.

Proposition 3.19 does not hold if the quiver \(Q\) is the Hasse diagram of a poset which is not a tree (as a graph); hence, we cannot exhibit a complete classification of the homotopy types of the nerves \(|\mathrm{N}(\mathbf{Reach}_{Q})|\). In fact, consider the following example:

**Example 3.20**.: Let \(P\) be the face poset of a simplicial complex \(X\). Its associated reachability category is the poset \(P\) itself, seen as a category. Its category algebra is the incidence algebra of \(P\). The nerve of \(\mathbf{Reach}_{P}\) is homotopy equivalent to the barycentric subdivision of \(X\). Therefore, if the simplicial complex \(X\) has non-trivial homotopy groups in degree \(\geq 2\), and \(Q\) is the Hasse diagram of \(P\), then the nerve of \(\mathbf{Reach}_{Q}\) has the same (non-trivial) homotopy type as \(X\). On the other hand, the homotopy groups of the nerve of a path category are always trivial in dimension \(\geq 2\) due to Proposition 3.15.

## 4. The reachability poset

In this section we associate to each quiver a poset: the reachability poset. The association turns a quiver into a poset via an explicit and functorial construction. We first need to recall the notion of skeletal categories, and related properties. Recall that a _preordered set_ is a pair \((X,\leq)\) consisting of a set \(X\) and a binary relation \(\leq\), called a _preorder_, that is reflexive and transitive. Therefore, any preordered set yields a thin category by declaring a unique morphism \(x\to y\) whenever \(x\leq y\). Vice versa, a thin category yields, up to isomorphism of categories, a preordered set by setting the relation \(x\leq y\) between objects \(x\) and \(y\) if and only if there is a morphism \(x\to y\). Therefore, the category of thin categories is equivalent to the category of preorders. As a consequence, the category \(\mathbf{Reach}_{Q}\) can be seen as a preordered set, and the functor Reach can be interpreted as a functor with values in the category \(\mathbf{Preord}\) of preordered sets. If we relax the isomorphism assumption, a thin category, up to equivalence of categories, is simply a poset. Recall the notion of skeletal categories.

**Definition 4.1**.: A category \(\mathbf{C}\) is _skeletal_ if each of its isomorphism classes has just one object.
The _skeleton_ \(\mathrm{sk}\,\mathbf{C}\) of \(\mathbf{C}\) is the unique (up to isomorphism) skeletal category equivalent to \(\mathbf{C}\). Assuming the axiom of choice, every category has a skeleton; in fact, the skeleton \(\mathrm{sk}\,\mathbf{C}\) can be constructed by choosing one object in each isomorphism class of \(\mathbf{C}\), and then by defining \(\mathrm{sk}\,\mathbf{C}\) to be the full subcategory on this collection of objects (cf. [10, Proposition 2.6.4]). Then, the skeleton construction yields an equivalence of categories \(\mathrm{sk}\,\mathbf{C}\hookrightarrow\mathbf{C}\) between the skeletal subcategory and the category \(\mathbf{C}\) itself. However, this construction cannot be promoted to an endofunctor \(\mathrm{sk}\colon\mathbf{Cat}\to\mathbf{Cat}\) of the category of small categories; it yields a pseudofunctor, but we will not use this fact. Roughly speaking, for categories \(\mathbf{A},\mathbf{B}\) and a functor \(F\colon\mathbf{A}\to\mathbf{B}\), it is possible to _choose_ a functor \(\mathrm{sk}\,F\colon\mathrm{sk}\,\mathbf{A}\to\mathrm{sk}\,\mathbf{B}\) whose inclusion into \(\mathbf{B}\) is naturally isomorphic to the restriction of \(F\) to \(\mathrm{sk}\,\mathbf{A}\), but these choices will not be strictly functorial. This intuition can be formalised. We first need to recall a definition, see e.g. [1, Definition 4.16]:

**Definition 4.2**.: Let \(\mathbf{C}\) be a subcategory of \(\mathbf{D}\), and \(d\) an object of \(\mathbf{D}\). A _reflection_ for \(d\) is a morphism \(\rho\colon d\to c\) in \(\mathbf{D}\) from \(d\) to \(c\in\mathbf{C}\) such that the following universal property is satisfied: for any \(f\colon d\to c^{\prime}\) in \(\mathbf{D}\) with \(c^{\prime}\in\mathbf{C}\), there exists a unique morphism \(f^{\prime}\colon c\to c^{\prime}\) of \(\mathbf{C}\) such that \(f=f^{\prime}\circ\rho\). A subcategory \(\mathbf{C}\) of \(\mathbf{D}\) with the property that each object \(d\in\mathbf{D}\) has a reflection is called a _reflective_ subcategory. Equivalently, a full subcategory \(\mathbf{C}\) of a category \(\mathbf{D}\) is said to be reflective in \(\mathbf{D}\) if the inclusion functor from \(\mathbf{C}\) to \(\mathbf{D}\) has a left adjoint. For reflective subcategories, the following result is standard:

**Proposition 4.3** ([1, Proposition 4.22]).: _Let \(\mathbf{C}\) be a reflective subcategory of \(\mathbf{D}\), and for each \(d\in\mathbf{D}\) let \(\rho_{d}\colon d\to c_{d}\) be a reflection. Then, there exists a unique functor \(R\colon\mathbf{D}\to\mathbf{C}\) such that:_

* \(R(d)=c_{d}\) _for all_ \(d\) _in_ \(\mathbf{D}\)_;_
* _for each morphism_ \(f\colon d\to d^{\prime}\) _in_ \(\mathbf{D}\)_, one has_ \(R(f)\circ\rho_{d}=\rho_{d^{\prime}}\circ f\)_._

The full subcategory of skeletal categories is reflective in the category of small categories (see e.g. [1, Corollary 4.2], along with [11]). As a consequence, we can construct a functor from the subcategory of preorders in \(\mathbf{Cat}\) to the category of posets as follows. Let \(\mathbf{Preord}\) be the category of preorders and order-preserving maps, and let \(\mathbf{Poset}\) be the full subcategory of posets. For each preordered set \((P,\leq)\), consider the equivalence relation \(\simeq\) for which \(p\simeq q\) if and only if \(p\leq q\) and \(q\leq p\). Then, the quotient map \(\rho\colon P\to P/\simeq\), where \(P/\simeq\) is equipped with the induced order structure, is a reflection for \(P\) (cf. [1, Section 4.17]).
Then, there is a unique functor \(L\colon\mathbf{Preord}\to\mathbf{Poset}\) induced by the described reflections. We point out that, sometimes, this functor is also called a _posetal reflection_, see e.g. [10]. Furthermore, any other choice of reflections would yield a different functor, but all such functors are naturally isomorphic. Considering the composition of the functors described above, we have constructed a functor \[\mathbf{Quiver}\xrightarrow{\operatorname{Reach}}\mathbf{Preord}\hookrightarrow \mathbf{Cat}\xrightarrow{L}\mathbf{Poset} \tag{1}\] from the category of quivers to the category of posets, which in turn can be seen again as categories. We can summarise it as the functor \[\mathcal{R}\colon\mathbf{Quiver}\to\mathbf{Poset} \tag{2}\] that associates to a quiver the poset resulting from the composition \(\mathcal{R}\coloneqq L\circ\operatorname{Reach}\).

**Definition 4.4**.: For a finite quiver \(Q\), the poset \(\mathcal{R}(Q)\) is called the _incidence_, or _reachability, poset_ of \(Q\). We can give a more direct and explicit description of the reachability poset; the above discussion shows that this construction promotes it to an actual functor:

_Remark 4.5_.: Given a quiver \(Q\), the objects of \(\mathcal{R}(Q)\) are the vertices of \(Q\) modulo the equivalence relation \(\simeq\) which identifies \(v\) and \(w\) if and only if there are directed paths from \(v\) to \(w\) and from \(w\) to \(v\). We denote by \([v]\) the equivalence class of \(v\). Then, we declare \([v]\leq[w]\) if and only if there are representatives \(v\) of \([v]\) and \(w\) of \([w]\), together with a directed path from \(v\) to \(w\) in \(Q\). Note that, graph theoretically, this is the transitive closure of the directed acyclic graph resulting from the condensation of \(Q\). We note here that we could have defined the reachability poset directly from the quiver \(Q\) without passing to categories. The reason to follow the whole composition in Equation (1) is that this way we have both the reachability category and the reachability poset which, as categories, are equivalent. Therefore, homological invariants of categories yield the same invariants for quivers; we return to this in Section 5.3.

## 5. Applications

In this section we show some consequences of the theory on reachability categories developed so far. In particular, we show that commuting algebras are Morita equivalent to incidence algebras of reachability posets, and we analyse condensation of directed graphs in this context. We conclude with a view on homology theories of quivers.

### Applications to commuting algebras

For a finite quiver \(Q\) and \(\mathbb{K}\) a field, let \(\mathbb{K}Q/C\) be the _commuting algebra_ introduced in [10], i.e. the path algebra \(\mathbb{K}Q\) of \(Q\) modulo its parallel ideal \(C\). Then, we have the following characterisation of category algebras of reachability categories:

**Lemma 5.1**.: _The category algebra of \(\mathbf{Reach}_{Q}\) is isomorphic to the commuting algebra \(\mathbb{K}Q/C\)._

Proof.: By Example 2.5, if \(Q\) is a finite quiver, and \(\mathbb{K}\) a field, then the category algebra of \(\mathbf{Path}_{Q}\) is the classical path algebra \(\mathbb{K}Q\) of \(Q\). The commuting algebra \(\mathbb{K}Q/C\) is obtained from \(\mathbb{K}Q\) by taking the quotient with respect to the parallel ideal generated by all differences of finite directed paths in \(Q\) with the same source and target.
On the other hand, consider the linear map \[\mathbb{K}\mathbf{Path}_{Q}\longrightarrow\mathbb{K}\mathbf{Reach}_{Q}\] of vector spaces, induced by the functor \(F\colon\mathbf{Path}_{Q}\rightarrow\mathbf{Reach}_{Q}=\mathbf{Path}_{Q}/\sim\) of Remark 3.14. The kernel of this linear map is precisely the vector subspace generated by differences of paths in \(Q\) with the same source and target, i.e. the parallel ideal of \(\mathbb{K}Q\). The composition of paths is preserved, hence we have an algebra isomorphism.

Recall that two unital rings are said to be Morita equivalent if and only if their categories of left (or right) modules are equivalent, cf. [1, Chapter 6]. The following fact is well known (see e.g. [21, Proposition 2.2]):

**Proposition 5.2**.: _If \(\mathbf{C}\) and \(\mathbf{D}\) are equivalent categories with finitely many objects, and \(R\) is a commutative ring, then the category algebras \(R\mathbf{C}\) and \(R\mathbf{D}\) are Morita equivalent._

As a consequence of Proposition 5.2, if \(Q\) is a finite quiver and \(R\) a base commutative ring with unit, the category algebras associated to \(\mathbf{Reach}_{Q}\) and \(\mathcal{R}(Q)\) are Morita equivalent. Therefore, we recover one of the main results of [10] in this setting:

**Theorem 5.3**.: _Let \(Q\) be a finite quiver and \(\mathbb{K}\) a field. Then, the commuting algebra \(\mathbb{K}Q/C\) is Morita equivalent to the incidence algebra of \(\mathcal{R}(Q)\)._

Proof.: The category \(\mathbf{Reach}_{Q}\) is equivalent to the poset \(\mathcal{R}(Q)\), seen as a category. As \(Q\) is finite, by Proposition 5.2, the associated category algebras are Morita equivalent. As by Lemma 5.1 the category algebra of \(\mathbf{Reach}_{Q}\) is isomorphic to the commuting algebra of \(Q\), these are also Morita equivalent. Hence, the commuting algebra of \(Q\) is Morita equivalent to the category algebra of the reachability poset \(\mathcal{R}(Q)\), which is an incidence algebra.

_Remark 5.4_.: To the authors' knowledge, the fact that the commuting algebras are Morita equivalent to incidence algebras was first proven in [10]. As taking category algebras is functorial, by composition we get a functor \[\mathbf{Quiver}\rightarrow\mathbf{Alg}_{R} \tag{3}\] from quivers to \(R\)-algebras. This functor associates to a quiver \(Q\) the incidence algebra of the reachability poset \(\mathcal{R}(Q)\). By [11, Theorem 1], if the incidence algebras of two locally finite posets \(P\) and \(Q\) are isomorphic, as \(\mathbb{K}\)-algebras, then also \(P\) and \(Q\) are isomorphic. Hence, when restricting to the category of posets, this functor is conservative. Note that the extension to the whole category of quivers of the functor in Eq. (3) does not yield a conservative functor. However, we can still infer the following:

**Corollary 5.5**.: _Let \(\mathbb{K}\) be a field. If the commuting algebras of the finite posets \(P\) and \(Q\) are isomorphic, as \(\mathbb{K}\)-algebras, then the reachability categories \(\mathbf{Reach}_{P}\) and \(\mathbf{Reach}_{Q}\) are isomorphic._

Proof.: Let \(P\) and \(Q\) be finite posets. Then, seen as quivers, they generate the reachability categories \(\mathbf{Reach}_{P}\) and \(\mathbf{Reach}_{Q}\), which are still posets. In fact, they agree with the reachability posets \(\mathcal{R}(P)\) and \(\mathcal{R}(Q)\). By assumption, the associated category algebras are isomorphic, and this isomorphism is reflected in an isomorphism between the incidence algebras of \(\mathbf{Reach}_{P}\) and \(\mathbf{Reach}_{Q}\).
By [15, Theorem 1], the posets, hence the reachability categories, are also isomorphic.

Consider now the forgetful functor \(U\colon\mathbf{Cat}\to\mathbf{Quiver}\). Then, composition with the functor \(\mathcal{R}\) of Equation (2) yields an endofunctor \[R\coloneqq U\circ\mathcal{R}\colon\mathbf{Quiver}\to\mathbf{Quiver} \tag{4}\] of the category of (finite) quivers.

_Remark 5.6_.: By construction, the quiver \(R(Q)\) does not contain non-trivial directed cycles or multiple edges. If one forgets also the self-loops, the quiver \(R(Q)\) becomes acyclic. Recall that the global dimension of a ring \(R\) is the supremum of the set of projective dimensions of all \(R\)-modules. For a finite quiver \(Q\), we denote by \(\operatorname{diam}(Q)\) the maximal length across all directed simple paths in \(R(Q)\).

**Corollary 5.7**.: _Let \(Q\) be a finite quiver. Then,_ \[\operatorname{gl.dim}\mathbb{K}Q/C\leq\operatorname{diam}(Q)\] _where \(\mathbb{K}Q/C\) is the commuting algebra of \(Q\) over the field \(\mathbb{K}\)._

Proof.: The reachability category of a finite quiver is an \(EI\)-category by Corollary 3.3. Moreover, for each object \(x\) of \(\mathbf{Reach}_{Q}\), the automorphism group of \(x\) is trivial. Then, by [14, Theorem 5.3.1], we have \[\operatorname{gl.dim}\mathbb{K}\mathbf{Reach}_{Q}\leq\ell(\mathbf{Reach}_{Q})\] where \(\ell(\mathbf{Reach}_{Q})\) is the maximal length of chains of non-isomorphisms in the poset \(\mathcal{R}(Q)\). Note that each such chain is in bijection with a directed path in \(R(Q)\), hence \(\ell(\mathbf{Reach}_{Q})=\operatorname{diam}(Q)\). The statement now follows from Lemma 5.1.

### Quiver condensation

Motivated by applications in topological data analysis, the aim of this section is to present a functorial operator on the category of quivers that resembles condensation of graphs. We start with the following observation:

_Remark 5.8_.: Let \(H\) be a strongly connected component of a finite quiver \(Q\). Then the image of \(H\) in \(R(Q)\) is a vertex \([h]\) corresponding to the equivalence class of any vertex \(h\) in \(H\).

**Proposition 5.9**.: _Let \(Q\) be a finite quiver. Then \(R(Q)\) is isomorphic to the condensation of the transitive closure of \(Q\)._

Proof.: The quiver \(R(Q)\) is, by construction, obtained by first taking the reachability category \(\mathbf{Reach}_{Q}\) of \(Q\); this can be identified with the transitive closure of \(Q\). Now, each strongly connected component of \(Q\) yields a strongly connected component in the transitive closure. By Remark 5.8, such a component \(H\) is represented by a single vertex \([h]\) in \(R(Q)\). All other edges connecting two strongly connected components \(H\) and \(H^{\prime}\), say directed from \(h\) in \(H\) to \(h^{\prime}\) in \(H^{\prime}\), are sent to edges \(([h],[h^{\prime}])\) in \(R(Q)\). Hence, the vertices of \(R(Q)\) are the strongly connected components of \(Q\), and there is an edge \((v,w)\) in \(R(Q)\) if and only if there is an edge \((h,h^{\prime})\) in the transitive closure of \(Q\), with \(h,h^{\prime}\) not strongly connected. This is enough to show that \(R(Q)\) is isomorphic to the condensation of the transitive closure of \(Q\). Note that, if the quiver \(Q\) is isomorphic to its transitive closure (e.g. when it is an alternating quiver with all self-loops added), then \(R(Q)\) is precisely the condensation of \(Q\).
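In computational practice, Proposition 5.9, together with Remark 4.5, gives a two-line recipe for the quiver underlying the reachability poset. A sketch, assuming a networkx `DiGraph` input (identity self-loops are left implicit, cf. Remark 5.6; the function name is ours):

```python
import networkx as nx

def reachability_quiver(Q: nx.DiGraph) -> nx.DiGraph:
    """The quiver R(Q) underlying the reachability poset: by
    Proposition 5.9 it is the condensation of the transitive closure of Q,
    equivalently (Remark 4.5) the transitive closure of the condensation."""
    C = nx.condensation(Q)        # vertices: strongly connected components
    return nx.transitive_closure(C)

# [v] <= [w] in the reachability poset iff v and w lie in the same strongly
# connected component, or the returned quiver has an edge between their
# components.
```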
The constructions above fit in a commutative diagram in which \(\mathcal{F}\) is either the functor \(\mathbf{Path}\) or \(\mathbf{Reach}\); in both cases, the composite can be seen as a functorial approximation of the condensation \(c\). This observation yields Proposition 1.3.

### Homological considerations

Via the categorical framework one can usually associate a homology theory to a quiver by using the nerve construction (see Section 3.2). One can also study the homology of the category algebra \(R\mathbf{C}\) for a category \(\mathbf{C}\) associated to a quiver \(Q\); examples include Hochschild homology \(\operatorname{HH}\) (or cyclic homology, cf. [1]). Our interest in cyclic homology theories comes in view of applications to topological data analysis and persistent homology of directed graphs [1, 2, 13]. In such general frameworks, the homological invariants for \(\mathbf{Path}_{Q}\) might vanish beyond degree \(1\) (recall Proposition 3.15), and they are readily computable when the quiver, and the associated category, are acyclic. For Hochschild (co)homology in degrees \(0\) and \(1\), we can resort to a well known result due to Happel (see also [1, Proposition 4.4]):

**Theorem 5.10** ([10]).: _If \(Q=(V,E,s,t)\) is a connected quiver without oriented cycles and \(\mathbb{K}\) is an algebraically closed field, then_ \[\dim_{\mathbb{K}}\operatorname{HH}^{i}(A)=\dim_{\mathbb{K}}\operatorname{HH}_{i}(A)=\begin{cases}1&\text{if }i=0\\ 0&\text{if }i>1\\ 1-n+\sum_{e\in E}\dim_{\mathbb{K}}e_{t(e)}Ae_{s(e)}&\text{if }i=1\end{cases}\] _where \(A=\mathbb{K}\mathbf{Path}_{Q}\) is the path algebra of \(Q\), \(n=|V|\) is the number of vertices of \(Q\) and \(e_{t(e)}Ae_{s(e)}\) is the subspace of \(A\) generated by all the possible paths from \(s(e)\) to \(t(e)\) in \(Q\)._

An equivalence of categories induces a homological invariance, and thus homological invariants of categories yield the same invariants for quivers; for example, we get \(\operatorname{HH}_{*}(\mathbf{Reach}(Q))\cong\operatorname{HH}_{*}(\mathcal{R}(Q))\). In [1] Hochschild homology was used as an invariant of path algebras arising from certain connectivity structures in persistent homology of directed networks. There, the non-functorial graph-theoretical condensation was employed for computational purposes to allow the use of Happel's theorem. The categorical framework as shown in Section 5.2 allows us now to have a fully functorial composition \[\mathbf{Quiver}\xrightarrow{U\circ\mathbf{Reach}}\mathbf{Quiver}\xrightarrow{c}\mathbf{Quiver}\xrightarrow{\mathbb{K}}\mathbb{K}\text{-Alg}\xrightarrow{\operatorname{HH}_{i}}\mathbf{Vect}\, \tag{5}\] landing in the category of finite dimensional vector spaces, and where the equivalence of categories induced by \(c\circ U\circ\mathbf{Reach}\) preserves \(\operatorname{HH}_{*}(\mathbf{Reach}(Q))\). Furthermore, by Example 3.20, \(\operatorname{HH}\) can be non-trivial in arbitrarily high degrees, yielding non-trivial invariants also in degrees \(>1\) [12]. The key features of persistent homology are the various stability theorems, see for example [1, Sections 3.3 and 6]. These assert that the process of going from a data object to the associated homological invariant is \(1\)-Lipschitz continuous; this continuity is stated with respect to suitable metrics, both between the data objects and the invariants [1, 1]. The persistent Hochschild homology pipeline for directed graphs in [1] was shown to be stable for acyclic graphs. Thanks to the functoriality of Eq. (5),
we have shown here that changing the point of view from \(\mathbf{Path}(Q)\) to \(\mathbf{Reach}(Q)\) and employing Eq. (5) yields a _functorial and stable_ persistent Hochschild homology for general quivers.
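As a closing illustration, Happel's formula of Theorem 5.10 is directly computable for a finite connected acyclic quiver: the summand \(\dim_{\mathbb{K}}e_{t(e)}Ae_{s(e)}\) is the number of directed paths from \(s(e)\) to \(t(e)\). A sketch, assuming a networkx `MultiDiGraph` (the function name is ours):

```python
import networkx as nx

def happel_hh1(Q: nx.MultiDiGraph) -> int:
    """dim HH^1 of the path algebra of a connected acyclic quiver Q, via
    Happel's formula: 1 - |V| + sum_e #{directed paths s(e) -> t(e)}."""
    assert nx.is_directed_acyclic_graph(Q)
    order = list(nx.topological_sort(Q))

    def n_paths(src, dst):
        counts = {src: 1}
        for u in order:
            c = counts.get(u, 0)
            if c:
                for _, w in Q.out_edges(u):
                    counts[w] = counts.get(w, 0) + c
        return counts.get(dst, 0)

    total = sum(n_paths(u, w) for u, w, _ in Q.edges(keys=True))
    return 1 - Q.number_of_nodes() + total

# Sanity check: the Kronecker quiver (two parallel arrows) gives 3.
K = nx.MultiDiGraph([(0, 1), (0, 1)])
assert happel_hh1(K) == 3
```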
2303.08261
Ca-dimers, solvent layering, and dominant electrochemically active species in Ca(BH$_4$)$_2$ in THF
Divalent ions, such as Mg, Ca, and Zn, are being considered as competitive, safe, and earth-abundant alternatives to Li-ion electrochemistry. However, the challenge remains to match electrode and electrolyte materials that stably cycle with these new formulations, based primarily on controlling interfacial phenomena. We explore the formation of electroactive species in the electrolyte Ca(BH$_4$)$_2$ in THF through molecular dynamics simulation. Free-energy analysis indicates that this electrolyte has a majority population of neutral Ca dimers and monomers, albeit with diverse molecular conformations as revealed by unsupervised learning techniques, but with an order of magnitude lower concentration of possibly electroactive charged species, such as the monocation, CaBH$_4^+$, which we show is produced via disproportionation of neutral Ca(BH$_4$)$_2$ complexes. Dense layering of THF molecules within 1 nm of the electrode surface (modeled here using graphite) hinders the approach of reducible species to within 0.6 nm and instead enhances the local concentration of species in a narrow intermediate-density layer from 0.7-0.9 nm. A dramatic increase in the monocation population in this intermediate layer is induced at negative bias, supplied by local dimer disproportionation. We see no evidence to support any functional role of fully-solvated Ca$^{2+}$ in the electrochemical activity of this electrolyte. The consequences for performance and alternative formulations are discussed in light of this molecular-scale insight.
Ana Sanz Matias, Fabrice Roncoroni, Siddharth Sundararaman, David Prendergast
2023-03-14T22:29:10Z
http://arxiv.org/abs/2303.08261v2
Ca-dimers and solvent layering determine electrochemically active species in Ca(BH\({}_{4}\))\({}_{2}\) in THF ###### Abstract Divalent ions, such as Mg, Ca, and Zn, are being considered as competitive, safe, and earth-abundant alternatives to Li-ion electrochemistry. However, the challenge remains to match electrode and electrolyte materials that stably cycle with these new formulations, based primarily on controlling interfacial phenomena. We explore the formation of electroactive species in the electrolyte Ca(BH\({}_{4}\))\({}_{2}\) in THF through molecular dynamics simulation. Free-energy analysis indicates that this electrolyte has a majority population of neutral Ca dimers and monomers, albeit with diverse molecular conformations as revealed by unsupervised learning techniques, but with an order of magnitude lower concentration of possibly electroactive charged species, such as the monocation, CaBH\({}_{4}^{+}\), which we show is produced via disproportionation of neutral Ca(BH\({}_{4}\))\({}_{2}\) complexes. Dense layering of THF molecules within 1 nm of the electrode surface (modeled here using graphite) hinders the approach of reducible species to within 0.6 nm and instead enhances the local concentration of species in a narrow intermediate-density layer from 0.7-0.9 nm. A dramatic increase in the monocation population in this intermediate layer is induced at negative bias, supplied by local dimer disproportionation. We see no evidence to support any functional role of fully-solvated Ca\({}^{2+}\) in the electrochemical activity of this electrolyte. The consequences for performance and alternative formulations are discussed in light of this molecular-scale insight. ## I Introduction To increase the rate of conversion to renewable energy sources, electrification of various energy-intensive aspects of society is underway. The concomitant demand for electrochemical energy storage solutions increasingly highlights the limits of Li-ion technologies with respect to performance, safety and sustainability. Multivalent ions such as Mg\({}^{2+}\), Ca\({}^{2+}\), Zn\({}^{2+}\), or even Al\({}^{3+}\), offer more earth-abundant alternatives, some with higher theoretical specific capacity and reduced safety concerns due to self-passivation of metal anodes.[1, 2, 3, 4, 5] However, realizing performant electrochemical cells using these ions is hindered due to various issues driven by interfacial phenomena: low power output and charging rates due to large overpotentials and associated electrolyte decomposition and interphase growth.[6, 7, 8, 9] At issue is our lack of understanding of the specific complex nature of solvation in suitable electrolytes for multivalent ions and the identification of which of these species are active at the electrode-electrolyte interface and why.[10] We may be biased by familiarity with aqueous solutions and the ability of water to generate perfect electrolytes, with fully dissociated and solvated ions, for many salts. However, organic solvents (required for a sufficiently wide window of electrochemical stability in batteries), with typically lower dielectric constants and larger molecular sizes, have increased residence times for coordinating highly charged cations and have relatively little interaction with anions, unlike ambipolar water molecules which more easily solvate both charges. Molecular dynamics provides a window to the inner workings of electrolytes, revealing details of coordination of cations by solvent molecules and anions. 
However, in the study of highly-charged species in poor dielectrics, we must take care to avoid sampling only a limited set of coordination states due to their long lifetimes and unavoidable limitations in computing time and complexity. Free-energy sampling allows us the opportunity to pose fundamental questions regarding the chemical composition of a poor electrolyte and the mechanisms by which its solvated species interconvert.[11; 12; 13; 14; 15; 16] Furthermore, mining the large quantities of compositional and conformational data produced by these simulations presents its own challenge. Here we rely on recently developed unsupervised learning approaches[17] to provide a faster path to extracting molecular-scale insight and guidance for future experiments to validate our predictions. We study Ca(BH\({}_{4}\))\({}_{2}\) in THF as a promising electrolyte candidate,[18] spurred on by recent studies of the bulk electrolyte[19] and the anode interface.[6; 20] We can also contrast its behavior with Mg(TFSI)\({}_{2}\) in THF, studied recently using free energy sampling.[21] Ref. [19] proposed that neutral Ca dimers (Ca\({}_{2}\)(BH\({}_{4}\))\({}_{4}\)) facilitate the disproportionation of neutral monomers into (active) monocations CaBH\({}_{4}^{+}\) and anions Ca(BH\({}_{4}\))\({}_{3}^{-}\). In the same work, molecular cluster calculations (embedded in a polarizable continuum model) indicated that the dimer is the second most stable conformation after the neutral monomer, but lacked specific solvent interactions beyond the first coordination shell and any Debye screening from finite ion concentrations. Interfacial characterization reveals the ready formation of solid-electrolyte interphases incorporating oxidized boron and even embedded calcium hydride.[6] And debate continues as to the presence or electrochemical relevance of the fully-solvated Ca\({}^{2+}\) di-cation.[22; 23; 24; 18; 25; 26] In this work, we reveal intricate details of the population of species in the bulk of this electrolyte and striking differences within a nanometer of its electrode interfaces, which are further exaggerated by potential differences. We discuss the consequences of these predictions for functioning cells.

## Results

### In the bulk electrolyte

We employ free-energy analysis for a model of the bulk electrolyte at room temperature (RT), comprising two Ca\({}^{2+}\) ions (and four borohydride anions) dissolved in THF (Fig. 1 a), using empirical force fields (see Methods and Supporting Information). The free energy surface (Fig. 1 c) is sampled using metadynamics[27] with respect to two collective variables: the Ca-B coordination number, which controls the charge of complexes, and the Ca-Ca distance, which, for a system with only two Ca ions, distinguishes between monomers and dimers. Integration over the free energy surface indicates that the most favorable species in solution are Ca-Ca dimers (Fig. 1 b). These are neutral complexes with the formula Ca\({}_{2}\)(BH\({}_{4}\))\({}_{4}\) and come in two varieties: long dimers (36%) bridged by a single BH\({}_{4}^{-}\) and short dimers (21%) bridged by two anions (see below for more detail). The next most common species is the neutral monomer complex, Ca(BH\({}_{4}\))\({}_{2}\) (40%). It is notable that only a small percentage (3%) of the solvated population is predicted to be charged, the monocation, CaBH\({}_{4}^{+}\), with its complementary anion, Ca(BH\({}_{4}\))\({}_{3}^{-}\).
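As a small illustration of how populations of this kind follow from integrated free energies, the snippet below converts relative free energies (in units of kT) into Boltzmann populations. The numerical inputs are hypothetical, chosen only to reproduce the ordering quoted above, not the actual computed surface.

```python
import numpy as np

def populations(dG_kT: dict) -> dict:
    """Boltzmann populations from relative free energies in units of kT."""
    names = list(dG_kT)
    w = np.exp(-np.array([dG_kT[n] for n in names]))
    return dict(zip(names, w / w.sum()))

# Hypothetical relative free energies (kT), tuned only to mirror the
# reported ordering (monomer 40%, long dimer 36%, short dimer 21%,
# monocation 3%):
print(populations({"monomer": 0.0, "long dimer": 0.10,
                   "short dimer": 0.65, "monocation": 2.60}))
```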
The free energy for the formation of fully solvated dications, Ca\({}^{2+}\), in this model, is too high (\(\sim\)19 kT) to support a significant population. These results provide some reordering with respect to static estimates which predict the neutral monomer as most favorable, followed by a single (short) dimer conformation, the monocation and the anion.[19] From the relative populations of each species, it is clear that neglecting the long dimer would lead to this conclusion. Furthermore, it is clear from RT sampling that there are multiple possible conformations for the dimer at finite temperature (as we explore below). However, both sets of calculations agree that the fully solvated dication Ca\({}^{2+}\) is the least favored in this set of possibilities (1-1.2 eV higher in energy by static estimates [19], 0.48 eV from free-energy sampling at RT). Our 2D free energy surface (Fig. 1 c) reveals that direct interconversion of these solvated objects, by removal or addition of borohydride anions, is prevented by quite steep free energy barriers (24.4 kT or \(\sim\)0.6 eV). The easier path to disproportionation (forming charged species from neutral monomers) is through the formation of dimers, with exchange of borohydride anions before dissociation into the complex monocation and anion (with activation energies of 2.4-10 kT, Fig. 1 c and Fig. S1): \[2\,\mathrm{Ca(BH_{4})_{2}}\longrightarrow\mathrm{Ca_{2}(BH_{4})_{4}}\longrightarrow\mathrm{CaBH_{4}}^{+}+\mathrm{Ca(BH_{4})_{3}}^{-}\] At a minimum, this explains the prevalence of dimers in spectroscopic analysis of the bulk electrolyte (using EXAFS and Raman) [19] and the observation of saturating ionic conductivity with increasing concentration as more neutral dimers form (prior to precipitation) [19; 28]. Strikingly, we find that bulk populations of each complex are conformationally quite diverse. Our metadynamics simulations project the full free energy landscape onto only a few collective variables, however, multiple molecular conformations may satisfy these constraints (Ca-B coordination number and Ca-Ca distance, in this case). Data-mining techniques (dimensionality reduction, hierarchical clustering and permutation-invariant alignment [17] - details in Methods and SI) applied to selective umbrella sampling of local minima in the free energy landscape reveal a rich variety of solvated isomeric structures, summarized in Fig. 1 d.

Figure 1: **Analysis of the bulk electrolyte.** (**a**) One time step sampled from our molecular dynamics model of the bulk electrolyte showing two Ca\({}^{2+}\) dications and 4 BH\({}_{4}^{-}\) anions in THF at room temperature (RT). (**b**) Populations of electrolyte species derived from their respective free energies \(\Delta G\) (kT), which were obtained by integration of (**c**) the free-energy surface sampled using metadynamics with respect to Ca-Ca distance and Ca-B(H\({}_{4}\)) coordination number. Minimum energy pathways for dimer disproportionation from the neutral species, Ca(BH\({}_{4}\))\({}_{2}\) are indicated by dashed lines. (**d**) Population analysis with respect to anion and THF coordination and conformation about Ca ions obtained via unsupervised learning, indicating the diversity of species umbrella-sampled from points on the free-energy surface corresponding to the neutral monomer (CN(Ca-B) = 1.9 and d(Ca-Ca) = 11 Å) and the long (L) and short (S) dimers (d(Ca-Ca) \(<\) 7 Å). Atomic structures of the dominant conformations (populations larger than 7.5%) are shown.
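The analysis pipeline behind Fig. 1 d can be caricatured as follows; this is a schematic stand-in for, not a reproduction of, the workflow of Ref. [17]. Per-frame structural descriptors (assumed here to be something like sorted Ca-B and Ca-O distances) are reduced in dimension and grouped by hierarchical clustering, and the cluster sizes give relative conformer populations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def cluster_conformers(descriptors: np.ndarray, n_clusters: int):
    """Group sampled frames into conformational clusters and report their
    relative populations. `descriptors` is (n_frames, n_features); the
    permutation-invariant alignment of Ref. [17] is replaced here by a
    simple PCA + Ward-linkage clustering."""
    X = PCA(n_components=min(10, *descriptors.shape)).fit_transform(descriptors)
    labels = AgglomerativeClustering(n_clusters=n_clusters,
                                     linkage="ward").fit_predict(X)
    return labels, np.bincount(labels) / len(labels)
```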
For example, the neutral monomer, \(\mathrm{Ca(BH_{4})_{2}}\), when additionally coordinated by three THF molecules (\(\sim 60\%\) of its population) adopts mostly bent borohydride arrangements with a large dipole moment (8 D, see SI). In addition, we found a significant portion (\(\sim 30\%\)) of monomers coordinated by 4 THF molecules with an axial borohydride arrangement and low dipole moment (2 D). These two structures had been proposed separately as the minimum energy structure from quantum-chemical cluster calculations [19] and molecular dynamics simulations [28], respectively. By contrast, the monocation occurs mostly with 5 coordinating THF molecules, in agreement with previous reports [19]. We find that dimers are always neutral (with four borohydride anions) and are split in two main spatial configurations characterized as short (SD) and long (LD) dimers with average Ca-Ca equilibrium distances of 4.48 and 5.25 Å, respectively (Fig. S2). Furthermore, each was found to have sub-populations with 4-7 solvent molecules, and, within those, several stereoisomers (Fig. 1 d). Key differences between the long and short dimers are the presence of a single-anion or double-anion bridge and predominant 6 THF coordination or mixed 5-6 THF coordination, respectively. Short dimers are in excellent agreement with a previously proposed double-bridged dimer structure with a 4.4 Å Ca-Ca distance (from EXAFS fitting data [19]). The set of structures shown here expands upon and underscores the configurational flexibility of calcium [19]. And, based on our understanding of the efficient disproportionation pathways to form charged species via dimerization (discussed above), it makes sense that the dominant dimer conformations form from combinations of bent and axial borohydride arrangements of the neutral monomers (e.g., long dimer isomers 1, 3 and 7 with 6 THF molecules in Fig. 1 d are bent-bent, axial-bent and axial-bent combinations, respectively). Although all dimers here are neutral, each Ca ion within a given dimer may be locally coordinated by 1 to 4 anions, with 1 (long) or 2 (short) shared between them. Most commonly we observe [3,2] or [3,3] anion arrangements for long or short dimers, respectively. Small populations of [1,4] dimers are found and are key to some interfacial disproportionation processes discussed below and in Fig. 5.

### At the electrode-electrolyte interface

What happens to species in the electrolyte as they approach an interface, such as the electrode surface? Firstly, as expected from simple statistical mechanics modeling of molecules at hard interfaces [29], the solvent, THF, adopts a layered molecular structure [21; 30] near the surface with a dense layer (DL) at 3-6 Å, followed by a low-density 'gap', and an intermediate density layer (IDL) at \(\sim\)7-10 Å from the surface, with 2.5, 0.3 and 1.5 times the bulk THF density, respectively (Fig. 2 a). A third collective coordinate, calcium distance from the interface, sampled with metadynamics, allows us to obtain the interfacial, 3D free-energy landscape (Fig. 2 b), of which the most distant slice is the 2D bulk free-energy landscape in Fig. 1 c. Dissolved species in the electrolyte near the electrode surface respect this underlying solvent structure. Now, free-energy minima are distributed between the bulk, IDL, and DL, and separated by barriers, as indicated by minimum energy pathways of neutral and charged species approaching the interface in the 3D landscape (Fig. 2 c and d).
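Layer boundaries of this kind are typically read off a number-density profile of the THF oxygen atoms versus distance from the electrode, as in Fig. 2 a. A minimal sketch (our own function; normalisation by surface area and frame count is omitted, and the bulk cutoff is an assumption):

```python
import numpy as np

def oxygen_density_profile(z: np.ndarray, z_bulk: float = 15.0, dz: float = 0.25):
    """Relative number density of THF oxygens vs. distance z (Angstrom)
    from the electrode; `z` pools oxygen positions over all frames, and
    the profile is normalised to the mean density beyond `z_bulk`."""
    edges = np.arange(0.0, z.max() + dz, dz)
    hist, _ = np.histogram(z, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho = hist / hist[centers > z_bulk].mean()
    return centers, rho  # DL ~2.5x, gap ~0.3x, IDL ~1.5x bulk (Fig. 2 a)
```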
A key observation is that the IDL defines an attractive basin for most species, especially dimers and monocations, with minima in free energy that are lower than in the bulk, implying that this is a narrow interfacial region for enhanced concentration of solutes. Conversely, the DL defines a region from which solutes may be excluded due to additional free-energy costs, without the assistance of some applied bias (see below). This has some generally important consequences for electrochemistry in this electrolyte at a non-interacting electrode (such as graphite). Negligible populations of species that might be specifically adsorbed or adjacent to the electrode would imply the absence of inner-sphere electron transfer events during reduction. Similarly, if the electroactive species are likely to be highly coordinated in the IDL, then the necessary outer-sphere electron transfer events may very well lead to reductive decomposition of the coordinating species (solvent or anions), leading to low Coulombic efficiency and the formation of a solid-electrolyte interphase (SEI). Based on this study, our claim is that this inefficiency is as much a consequence of the strong solvent layering at the interface as it is of the salt that defines the electrolyte. Specifically, we find that, next to an unbiased non-interacting electrode (Fig. 3 a), the IDL is dominated (\(\sim 75\%\)) by dimers (long dimers in particular, 55%), with reduced populations of neutral monomers (\(\sim 10\%\)) and a very slight increase to 8% in the monocation population (Fig. 3 b). Only 6% of the interfacial population is in the DL, according to thermodynamic integration of the free energy. In the DL, the monocation population decreases and dimers dominate (\(\sim 85\%\)) - especially short dimers, in contrast to bulk and IDL populations.
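The layer populations quoted here follow from Boltzmann-weighted (thermodynamic) integration of the free-energy profile over each layer. A minimal sketch, assuming \(F(z)\) has been extracted in kT on a grid of surface distances, using the layer boundaries adopted in this work:

```python
import numpy as np

def layer_populations(z, F, bounds):
    """Relative layer populations by Boltzmann-weighted integration of F(z) [kT].

    bounds: dict mapping layer name -> (z_min, z_max) in Angstrom.
    """
    w = np.exp(-F)                 # Boltzmann weight per grid point
    Z = np.trapz(w, z)             # total partition function
    return {name: np.trapz(w[(z >= lo) & (z < hi)], z[(z >= lo) & (z < hi)]) / Z
            for name, (lo, hi) in bounds.items()}

# Toy profile; in practice F(z) is a slice of the 3D metadynamics landscape.
z = np.linspace(3.0, 20.0, 341)
F = np.where(z < 6.0, 2.5, np.where(z < 10.0, -1.0, 0.0))  # DL/IDL/bulk levels

print(layer_populations(z, F, {"DL": (3, 6), "IDL": (7, 10), "bulk": (10, 20)}))
```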
Unsupervised clustering analysis of structures from umbrella-sampling trajectories at the IDL and DL (z = 8.25 and 5.75 Å, respectively) reveals a reduced number of favored dimer isomers compared to the bulk, with one given isomer making up \(>\)20% of the population in each case (Fig. 3 c-d). In the IDL, this is a bent-axial long dimer with 6 solvating THF molecules (indexed as Isomer 7 or LD 6 THF - 7), already a favored species in the bulk. In the DL, on the other hand, a short dimer also with 6 THF molecules (SD 6 THF - 2) dominates. Furthermore, we find that the orientation of dimers at the interface is discretized (Fig. 3 e,f). Favored IDL dimers have mostly flat orientations of the Ca-Ca vector relative to the graphite surface, with two dense layer THF molecules involved in solvation. Similarly, the dominant dimer orientation in the DL is flat (i.e., in a plane parallel to the surface), with both calcium ions embedded in that dense region, and with two THF molecules from the IDL contributing to solvation. Additionally, perpendicular dimers form at least 11% of the DL population. Isomer 4 of the short dimer with 5 THF molecules (SD 5 THF - 4) is an asymmetric dimer, with calcium ions solvated by 2 and 3 THF molecules that sit in the DL and on the edge of the IDL, respectively (Fig. S3). Note that the specific conformation of these coordination complexes defines their effective dipole moment, which may, in some cases, be orthogonal to the Ca-Ca vector of the dimer. An example in the DL is the favored isomer SD 6 THF - 2, with its Ca-Ca vector parallel to the surface but three BH\({}_{4}^{-}\) sitting closer to the interface, near the corresponding DL free-energy minimum for isolated borohydride in THF next to graphite (z = \(\sim\)4 Å, see Fig. S4). Dipole orientation will be discussed in more detail below in the context of biased interfaces.

Figure 2: **Effect of solvent layering at the interface.** (**a**) The free-energy profile (red) of a single THF molecule in THF at RT with respect to distance from a graphite interface and the corresponding oxygen density profile (black), with slight adjustment (dashed lines) at negative bias. The Dense Layer (DL), Intermediate Density Layer (IDL) and bulk regions are color-coded hereafter in red, blue and gray. The inset shows a snapshot of the interfacial region sampled from molecular dynamics indicating THF molecules lying flat against the graphite surface and somewhat constrained at the DL-IDL interface. (**b**) The 3D free-energy landscape derived from metadynamics with respect to Ca-Ca distance, Ca-B coordination number, and distance from the graphite surface for the neutral interface. Minimum free-energy paths for (**c**) neutral and (**d**) charged species in the electrolyte arriving at the interface from the bulk under zero (solid) and negative (dashed) bias conditions indicated by the graphite surface charge density, \(\sigma\). Sudden jumps in the free-energy profiles reflect variations with respect to other collective variables in the free-energy landscape. Under negative bias the long and short dimers are not distinguished in our sampling. In general, negative bias stabilizes charged species at the interface.
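Orientation statistics of the kind shown in Fig. 3 e,f (and used for the dipole analysis below) reduce to the angle between a molecular axis and the surface normal. A minimal sketch for the Ca-Ca axis:

```python
import numpy as np

def ca_ca_angle(pos_ca1, pos_ca2):
    """Angle (deg) between the Ca-Ca axis and the surface normal (z axis)."""
    v = pos_ca2 - pos_ca1
    cos_t = abs(v[2]) / np.linalg.norm(v)  # |cos| folds equivalent directions
    return np.degrees(np.arccos(cos_t))    # 90 deg = parallel to the surface
```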
As observed in the bulk electrolyte, the absence of fully solvated dications is more pronounced at the interface. This speaks to the rigidity of the THF coordination sphere around this multivalent ion (similar to that around Mg as seen in Ref. [21]), whose size (an effective radius \(\sim 5\) Å) prevents the close approach of this species to the electrode. The preferred, relatively flat orientation of THF molecules in the dense layer is incompatible with the preferred radial coordination of the cation and likely forces a reduction in coordination number that raises the free energy of fully-solvated Ca\({}^{2+}\) (reducing its relative population) relative to complexes coordinated by the more compact borohydride anions.

### Biased Interfaces

So far, it seems that Ca(BH\({}_{4}\))\({}_{2}\) in THF is a poor electrolyte, with only a small fraction of the salt concentration (3%) present as charged species in the bulk, albeit with a noticeable enhancement to 10% in the intermediate density layer (IDL). Otherwise this electrolyte is dominated by neutral species (monomers and dimers). This is consistent with previous discussion of undissociated neutral species as dominant in Ca-based electrolytes with boron-containing anions [10].

Figure 3: **Distribution of species at the neutral interface.** Molecular model of the simulation box, obtained from a snapshot of the equilibrated MD trajectory (**a**), showing a dimer at the IDL (in blue), near the DL (in red). Integrated free energy and corresponding population per layer (**b**), color-coded as red, blue and gray for the DL, IDL and bulk, respectively. Unsupervised clustering analysis of umbrella sampling trajectories at the DL and IDL (z = 5.75 and 8.25 Å) shows the relative populations of dimer isomers per layer (**c,d**). Representative structures have discrete orientations, which depend on the layer (insets in **e,f**). Colored lines show the orientation of the Ca-Ca axis with respect to the surface normal, where 90\({}^{\circ}\) is parallel to the surface, as shown for the IDL-dominating dimer LD 6 THF - 7 (also in **a**), while DL dimer SD 5 THF - 4 is mostly perpendicular. Dipoles also have discrete orientations with respect to the surface normal for given isomers (shown in black, dotted lines) and are roughly aligned with the Ca-Ca axis in long, flat dimers (IDL) or perpendicular to it, as in DL dimer SD 6 THF - 2.

However, the bulk free-energy landscape (Fig. 1 c) indicates that interconversion of species is possible (more details below) and some equilibrium exists between charged and neutral species. We explore the impact of biased/charged electrodes on the population of solutes by evenly distributing opposing charges on either face of the two-layer graphite electrode model, which, under periodic boundary conditions, polarizes the electrolyte. We considered two specific charge states with (1) 0.065 and (2) 0.13 e/nm\({}^{2}\). We estimated that these surface charge densities raise the Fermi level of a graphene model by 0.56 and 0.74 eV, respectively (see Supplementary Information, Sec. 2). Screening at the interface by the solvent and electric double layer will result in a much smaller potential difference across the interface. A lower bound for this bias is 0.038 and 0.065 V, respectively, provided by the Grahame equation, assuming a simulated bulk concentration of 0.12 M. [31] (Note that at higher bulk concentrations, with a shorter Debye screening length, this estimated potential difference would be even smaller - at 1.65 M [6] the calculation would indicate 0.01 V and 0.02 V, respectively.)
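These lower bounds can be checked by inverting the 1:1 Grahame relation for the surface potential. The sketch below reproduces them when a relative permittivity of \(\sim\)7.4 is assumed for THF (our assumption; the value used with Ref. [31] is not restated here):

```python
import numpy as np

kB_T = 1.380649e-23 * 298.0   # J, room temperature
e = 1.602176634e-19           # C
eps0 = 8.8541878128e-12       # F/m
eps_r = 7.4                   # relative permittivity of THF (assumed)

def grahame_potential(sigma_e_nm2, conc_M):
    """Surface potential (V) from charge density (e/nm^2), 1:1 Grahame equation."""
    sigma = sigma_e_nm2 * e / 1e-18               # C/m^2
    n0 = conc_M * 1e3 * 6.02214076e23             # ions/m^3
    prefac = np.sqrt(8.0 * n0 * eps_r * eps0 * kB_T)
    return 2.0 * kB_T / e * np.arcsinh(sigma / prefac)

for s in (0.065, 0.13):
    print(f"sigma = {s:5.3f} e/nm^2 -> psi0 = {grahame_potential(s, 0.12):.3f} V")
# -> ~0.039 V and ~0.066 V, close to the 0.038/0.065 V bounds quoted above
# (the exact values depend on the permittivity used).
```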
The larger of these electrode surface charge densities is not enough to draw monocations into the dense layer; they are held outside this region by a barrier of approximately 0.25 eV. From Fig. 2 (a) we see that the solvent layering remains practically unchanged upon charging the electrode. Dimers are still favored in the intermediate density layer (IDL) and further from the interface in charge state 1 (Fig. S6). However, in charge state 2 some charged species become strongly stabilized at the interface. Specifically, at this larger negative bias the most favored species within the intermediate (IDL) and dense (DL) layers is now the monocation, CaBH\({}_{4}^{+}\), which almost entirely displaces the neutral dimers and monomers (Fig. 4 a, b). Due to the higher free energy of species in the DL, the interfacial population is largely limited to the IDL (99.4%), according to thermodynamic integration. At this potential, the approach of the monocation to the electrode, through the DL, would still be an endergonic process, requiring 5.7 kT of free energy and overcoming a barrier of 7.5 kT. This is nonetheless some improvement over the case of the neutral electrode, which presents a barrier of 9.9 kT to enter the DL, with a required input of 9.1 kT of free energy. We can understand these free-energy costs by following the evolution in solvation and associated dipole orientation of the monocation. Unsupervised learning analysis of various umbrella sampling trajectories in the IDL, at the edge of the DL and in the DL (z = 8.25, 5.8 and 4.8 Å; Fig. 4 c-e) reveals that five-fold THF coordination dominates the IDL population, with the BH\({}_{4}^{-}\) anion pointing away from the surface - as one might expect given the direction of the electric field at the negative electrode. However, this dipole reorients at the edge of the DL, likely due to mixed DL/IDL solvent coordination, and a small four-fold coordinated population appears. Ultimately, in the center of the DL, four-fold coordination dominates, with the dipole of the predominant isomer pointing away from the surface again. This complicated and costly path for the monocation to reach the electrode further emphasizes the important role of the THF solvent layering and specific coordination in determining the electrochemical activity. By the same token, the fully-solvated dication, Ca\({}^{2+}\), with its somewhat rigid first solvation shell, is still too unfavorable to define a noticeable population at the negative electrode, despite its higher electrostatic charge. In fact, we find that the dication is only slightly more stable than at the neutral interface (Fig. 2 d).

### Generation of active species

Thus far, it seems that the monocation, CaBH\({}_{4}^{+}\), is the strongest candidate for the electroactive species in this electrolyte, given its high population in the vicinity of the electrode upon charging to a sufficiently high potential difference. We do not explore reduction potentials in this work. However, as we have seen, the appearance of the monocation in close proximity to the interface, in the IDL, indicates that it is a likely candidate for an outer-sphere electron transfer process. With increasing potential difference, to overcome the free-energy costs associated with penetration of the dense THF layer (DL) and reduced solvation, we would predict an inner-sphere reduction process. However, the fact remains that this poor electrolyte (only 3% of species in solution are charged, as mentioned above) must supply the IDL with monocations in the first place, via some disproportionation mechanism from neutral species (most likely dimers), and replenish the same species while electroreduction and electrode deposition consume them. Any barriers in this supply chain should be evident in the kinetics of the electrochemistry as an observed deposition overpotential or associated rate limitations in charging cells with this electrolyte. To shed light on the underlying processes, we approach the generation of charged species (monocations) as a two-step process, involving dimer reorganization and disproportionation. As we show below, the most stable or prevalent dimer species are not readily disposed to disproportionate. Some molecular rearrangement of borohydride anions and solvent molecules is first required. The subsequent disproportionation follows two major pathways, which can occur in the bulk electrolyte or at the interface, and which we have investigated at both neutral and negative biases. In the neutral cell, increased dimer concentrations in the intermediate density layer (IDL) are readily explained through analysis of the free-energy landscape (Fig. 2). The minimum energy path (MEP) for disproportionation in the presence of the interface indicates that neutral species in the bulk can easily flow, without a significant free-energy barrier, into the IDL. Migration from the bulk to the IDL is essentially barrierless for all species except Ca\({}^{2+}\).
As summarized in Fig. 5 a, to prepare for disproportionation of the most favorable dimers, some reorganization into an intermediate (less stable) dimer is required, and ultimately Ca-Ca separation into ionic species follows two main paths, A or B. For Path A, the dominant long dimer configuration (**1** in Fig. 5 a) can undergo a low-barrier reorganization, through a short dimer (**2**), to another long dimer with similar coordination of Ca ions with borohydrides (**3**). Then, the Ca-Ca distance increases (**4**) up to 6.7 Å at the transition state, so that the nascent monocation Ca ion can increase its number of coordinating THF molecules, from the original 3-4 to the preferred 5, upon full dissociation. Path B branches from the short (**2**) or intermediate long (**3**) conformations, through further borohydride and solvent reorganization, to a higher energy dimer (**5**), with a [1,4] anion coordination, which then disproportionates to produce the monocation (**6**). Figures 5 b and c outline the MEP for disproportionation at the electrode interface or in the bulk electrolyte under neutral or negative bias. Overall, at the electrode interface, in the intermediate density layer (IDL), disproportionation follows Path A, whereas Path B is preferred in the bulk, likely due to configuration **5** being more favorable in the bulk than at the interface (Fig. S7). We find that interfacial (IDL) disproportionation at zero bias via Path A is favored, since it has slightly lower barriers (E\({}_{a,3\to 4}\) = 8.4 kT and E\({}_{a,4\to 6}\) = 9.8 kT) and a lower free-energy cost than bulk disproportionation via Path B (Fig. 5 b). This is in agreement with the slight increase in monocation population observed at the IDL in Fig. 3 (slight variations in barriers between bulk and interface models can be due to differences in the collective variables and grid-spacing employed in our metadynamics simulations; SI Table 1).
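For orientation, and assuming comparable attempt prefactors for both processes (an assumption, not a simulation output), the Arrhenius factor between the rate-limiting dimer-route barrier above and the 24.4 kT direct-interconversion barrier quoted earlier gives the expected speed-up of the dimer-mediated route:

```python
import numpy as np

# Effective barriers in kT at room temperature, taken from the text:
Ea_direct = 24.4  # direct anion addition/removal between solvated monomers
Ea_dimer = 9.8    # rate-limiting step of interfacial disproportionation, Path A

# With comparable prefactors, the dimer-mediated route is faster by the
# Boltzmann factor of the barrier difference:
speedup = np.exp(Ea_direct - Ea_dimer)
print(f"dimer route faster by ~{speedup:.1e}x")  # ~2e6
```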
At the negatively charged electrode, we have seen (Fig. 4) that the monocation is the most favored species in the IDL, completely displacing the previously dominant dimers with increasing negative charge on the electrode (charge state 2).

Figure 4: **Populations at a biased interface.** Snapshot obtained from the equilibrated trajectory of charge state 2 (\(\sigma\) = \(\pm\) 0.13 e nm\({}^{-2}\)) with highlighted layering (red/blue for DL/IDL) at the negative interface showing the IDL-solvated monocation (**a**). The free-energy and population distribution (**b**) at the negatively charged interface is dominated by the monocation, most favored at the IDL. Unsupervised clustering analysis of Umbrella Sampling trajectories of the monocation at the IDL, close to the edge of the DL (z = 5.8 Å) and close to the center of the DL (z = 4.8 Å) (**c**) classifies the structures into two isomers of the five-fold and four-fold THF coordinated monocation, which differ by slight changes in the local geometry of the coordinating THF molecules. The corresponding Ca-B dipole orientations with respect to the surface normal (\(\vec{\mu}\)), with 180\({}^{\circ}\) being the B pointing away from the surface, indicate discrete orientations at the interface (**d**). Representative structures of the main isomers at each layer in their most likely orientation, with the surface on the left side (**e**).

Due to the size limits of our simulations, we may well expect that the bulk thermodynamics are somewhat different from those under neutral conditions; however, disproportionation still follows Path B in our simulations (Fig. 5 c), albeit with an additional step involving the formation of a long dimer conformation (**3**). Similarly, Path A is still preferred in the IDL. Although disproportionation occurs in the bulk, barrierless pathways to the IDL suggest that dimers can approach the charged IDL and undergo disproportionation quite favorably. Therefore, upon negative charging, the concentration of monocations should increase in the IDL, based on a favorable free energy and the necessary supply from the local (IDL) dimer population via interfacial disproportionation. This free-energy analysis shows little activity in the dense layer (DL) due to its low population of dissolved species. However, as noted above, the slight stabilization of the monocation in the DL at negative bias results in a slight increase within the overall interfacial (DL plus IDL) population of this species - from \(\sim\)0.2% in the neutral DL to \(\sim\)0.6% in the DL at charge state 2. Structural analysis indicates that these monocations have their anionic end pointing away from the negative surface and have lower solvent coordination. We have stated before that electrochemical activity is a strong (exponentially decaying) function of the distance of the reducible species from the electrode. It is also very likely enhanced by reduced cation coordination. The question remains whether these benefits outweigh the low population of such species in the net electron transfer rate at similar potential differences.

## Discussion

Based on our analysis of the Ca(BH\({}_{4}\))\({}_{2}\)--THF electrolyte and its interfacial speciation, we can propose the following phenomenology that may explain existing observations and provide guidance and interpretation for future characterization efforts. First and foremost, the solvent, THF, and its strong solvation and interfacial layering dictate much of what we have observed. The strong solvation of the dication essentially prevents it from taking part in electrochemistry. And the monocation requires a threshold negative bias to begin to dominate as an interfacial species, albeit confined to the intermediate density layer (IDL), 7-10 Å from the electrode surface. The dense solvent layer (DL) is only sparsely populated with solvated or partially solvated species. That negative bias can begin to drive the interfacial population of charged species significantly above that present in the bulk indicates that this poor electrolyte is "activated" upon charging. Therefore, meaningful characterization of the electrochemical activity of this electrolyte requires operando measurements that are sensitive to within a nanometer of the interface. The rich isomer subpopulations with distinct orientations in the IDL make it an excellent playground for interfacially-sensitive, polarization-dependent spectroscopies that can capture these differences. Furthermore, the bias-dependent switch in local population from oriented dimers to monocations should be observable with chemically-sensitive vibrational [32; 33; 34] and electronic probes. [35; 36; 37] The required potential differences to enrich the IDL with charged species and, presumably, the disruption and population of the DL with more charged species at even higher potentials, may be consistent with known overpotentials for Ca deposition.
Plating and stripping of Ca using this electrolyte carries an initial, short-duration overpotential of \(\sim\)250 mV, followed by a \(\sim\)100 mV overpotential in subsequent cycles.[18]

Figure 5: **Disproportionation pathways.** Scheme depicting the two main steps of dimer disproportionation (**a**): reorganization (**1-5**), followed by separation into ions (**6**) through distinctive paths A and B. Minimum energy pathways of disproportionation at the neutral IDL and bulk (**b**) and at the charge state 2 IDL and bulk (**c**), obtained from the 3D free-energy landscapes. Since each point can be labelled according to the sampled collective variables, we have color-coded the distance to the surface in the IDL and bulk tones (blue, gray).

The remaining free-energy cost (5.7 kT) for monocations to access the DL in our higher charge state simulations (estimated to be at 0.065 V) implies that the final potential difference needed to enrich the DL with monocations for reduction is \(\sim\) 0.2 V. It remains to be determined how a more strongly interacting electrode surface (which may be present after the first cycle) alters the THF solvent layering and, thereby, the thermodynamics of electrolyte species within the first nanometer of the evolving electrode surface. Strong solvent and anion coordination of electrochemically active species, which has dominated our analysis, is very likely the source of solid-electrolyte interphase (SEI) formation due to electroreduction and decomposition of these ligands at the interface. Although all small-molecule polar solvents should present dense layers at interfaces, the inaccessibility of the DL is also, in part, determined by the choice of anion, whose electrode surface affinity, solvophobicity, size and cation coordination strength may facilitate overcoming the molecular packing in the DL and reduce associated overpotentials, ideally without the production of a thick interphase due to its own electrochemical instability. An example of a solvophobic anion that can bridge the THF dense layer is TFSI\({}^{-}\), which, according to previous free-energy studies, is more favorable in the DL and IDL regions than in the bulk solvent (\(\sim\)2 kT)[21]. This contrasts with BH\({}_{4}^{-}\) (Fig. 2 b), which has reduced populations at the interface, especially in the DL. We have seen that Mg, another divalent cation, is similarly hindered from close approach to the neutral electrode as a fully solvated dication, but when combined with TFSI\({}^{-}\) it can arrive stably at the interface, effectively in the dense layer (barrier \(<\)4 kT, \(\Delta\)G \(<\) 3 kT, z = 6.3 Å). A dense layer that is not sparsely populated, with a significant monocation population, therefore seems likely in the MgTFSI\({}_{2}\)--THF electrolyte. Yet we know that TFSI is both inherently and electrochemically unstable, both for Mg[38] and for Ca.[39] The instability of TFSI may be driven by its tight coordination of cations through its sulfone groups. Increased electroreductive stability in large anions that simultaneously span the dense layer formed by the small solvent molecules may be afforded by considering non-cation-coordinating species. For example, closo-boranes have been studied with Mg in tetraglyme.[40] These bulkier anions may also lead to better electrolytes overall (for example, preventing the formation of dimers), readily producing charged electroactive species.
However, strong solvent coordination, as we have seen for the fully solvated Ca\({}^{2+}\), may still lead to significant overpotentials for electrodeposition and associated solvent decomposition and SEI growth. From the computational perspective, we have highlighted the value of free-energy exploration and unsupervised learning in revealing the complexity of this nominally simple electrolyte, and we have tried to connect our simulations to observed electrochemical behavior and characterization. However, as with all theoretical models, our study has some inherent limitations. The complexity of the system and the time-scales required to explore different coordination complexes necessitated the use of empirical force fields rather than ab initio methods. The finite number of dissolved species in our simulations mimics only low concentrations, with limited Debye screening, and our force fields employ static partial charges, which must be scaled appropriately to attempt to reproduce the dielectric screening due to solvent or anion polarizability. The notable outcome of this study is a reinforcement of the notion that performant nonaqueous multivalent electrolytes (with high ionic conductivity and low overpotential) face competing requirements for multivalent ion coordination: the solvent must be strongly coordinating to keep the salt dissolved and the ionic conductivity high, while not so strongly coordinating that the ion cannot break free from solvation during electrodeposition. The need for strong solvent coordination is driven by competition with favorable ionic bonds between counterions. If we want to maintain the advantages of earth-abundant, multivalent ions, then one option would be to switch to larger anions without specific coordinating moieties. However, this study highlights that there may be other options to consider that sideline the isolated multivalent ion altogether, which may even be irrelevant in terms of electrochemical activity. Firstly, coordination with counterions may actually help bring active species closer to the electrode - the ability of a species to respect or disrupt the inhomogeneous solvent structure at the interface can dictate which species approaches the electrode closest. Secondly, incomplete dissolution and the activity of oligomers (dimers in this case) could be key to improved electrochemical activity, albeit balanced by some power limitations due to reduced bulk ionic conductivity and the need to regenerate electroactive species through a disproportionation equilibrium. Clearly we have more work to do, in terms of understanding the reduction of these clusters and their dissociation in the reduced state leading to Ca deposition; the potential negative side-effects of reductive instability of the coordinating species, already connected to SEI formation;[6] and the ultimate origins of measured overpotentials and currents in experiments. However, we have highlighted the importance of free-energy sampling to attack such complex problems, even within relatively ideal conditions and contexts, and look forward to seeing more studies of this kind in the future.

## Methods

Metadynamics sampling (MTD) with a classical force-field was used to obtain free-energy landscapes as a function of \(n\) collective variables. Equilibrium populations at critical points of a given landscape were then collected with Umbrella Sampling (US). Structural analysis of the US trajectories was then performed with a Python-based unsupervised learning protocol.
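The structural-analysis step can be sketched as follows. This is a minimal illustration only: scikit-learn's PCA and agglomerative clustering stand in for the actual dimensionality-reduction and clustering algorithms cited below, and the input is assumed to be permutation-aligned local environments flattened into fixed-length feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def classify_isomers(environments, n_components=5, distance_threshold=2.0):
    """Group aligned local atomic environments into isomer families.

    environments: (n_samples, n_features) array of permutation-aligned,
    flattened coordinates from an umbrella-sampling trajectory.
    Returns cluster labels and relative populations per cluster.
    """
    X = PCA(n_components=n_components).fit_transform(environments)
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold, linkage="average"
    ).fit_predict(X)
    counts = np.bincount(labels)
    return labels, counts / counts.sum()

# Toy data standing in for the 5000-10000 environments sampled per trajectory.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, size=(500, 30)) for m in (0.0, 1.0, 2.0)])
labels, pops = classify_isomers(X)
print(pops)  # relative isomer populations, as reported in Fig. 1 d
```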
Free-energy sampling. Metadynamics free-energy sampling [27] was carried out using the COLVARS module [41] implemented in LAMMPS. [42] Systems (see Table S1 for a full list) were generated using Packmol. [43] Concentration values were chosen to avoid forcing aggregation. Dimerization free-energy surfaces show that at Ca-Ca distances larger than \(\sim 10\) Å, the free energy converges with respect to Ca-Ca separation. That is the cutoff we consider between "dissociated" and "aggregated", giving an effective radius of 5 Å for the first coordination shell of the dication. Additionally, g(r) shows that the second solvation shell (O-THF) settles at around 6 Å. Assuming an optimal close-packing of ions, we obtain that SSIPs would be unavoidable at concentrations above 0.5 M for a 6 Å radius and 0.7 M for a 5 Å radius. Hence, we selected concentrations \(\leq\) 0.03 M, well below these limits. System equilibration consisted of conjugate-gradient minimization to avoid steric clashes, followed by a short NVT warm-up to room temperature (298 K) using a 1 fs timestep. Box-size equilibration was achieved by continuing the trajectory under NPT conditions at 1 atm with a 2 fs timestep. A final NVT step with the equilibrated lattice parameters (\(\sim\)20 ns) was sufficient to bring the systems to equilibrium. Force-field parameters [44; 45] were validated in our previous work on the same system. [17] The graphene was frozen in place by setting the forces to zero in order to ensure that neutral and charged simulations were comparable. This MD setup was kept for the MTD and US simulations. Metadynamics calculation parameters - the width of the grid along a collective variable (W), the height of the Gaussian 'hills' used to bias the potential (H) and the frequency at which they are added (Freq) - can be found in Table S1 and were chosen to ensure convergence, namely, that the simulation reached a diffusive regime in the given collective variable space. [21] Faster completion times were achieved by taking advantage of multiple-walker metadynamics, which allows sampling to be parallelized among several trajectories (replicas) that update their biased potential at a given frequency (RepFreq) with the total biased potential. Despite this, the grid used in the three-dimensional MTD calculations was necessarily coarser. The minima explored here tend to be separated by more than 0.7 Å (e.g., between the long and short dimer, or between solvent layers), which is larger than the coarser grid resolution. In order to keep cell neutrality and consistency with the neutral simulation, charge states 1 and 2 were generated by adding equal and opposite charges (\(\pm\) 2 \(\mu\)C/cm\({}^{2}\)) distributed evenly among two graphene layers, emulating a positively charged and a negatively charged electrode. Free-energy sampling was performed with combinations of the following collective variables: the distance between the calcium and the center of mass of the top graphene layer (dZ); the coordination number between the calcium and the boron atom in a given BH\({}_{4}\) (CN(Ca-B), r\({}_{0}\) = 3.8 Å); and, to track dimerization, the distance between two calcium atoms (dCa) or a Ca-Ca coordination number (CN(Ca-Ca), r\({}_{0}\) = 6.5 Å). Note that the state free energies in Fig. 1 b were obtained by thermodynamic integration of our 2D free-energy surface (Fig. 1 c) over regions delimited by given collective variable values (e.g., for the Ca(BH\({}_{4}\))\({}_{4}^{2-}\) state, a Ca-Ca distance larger than 7 Å and a CN(Ca-B) larger than 3.5).
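As a back-of-the-envelope check on the concentration limits quoted above, a minimal sketch assuming optimal sphere packing (fraction 0.74) and three dissolved ions per Ca(BH\({}_{4}\))\({}_{2}\) formula unit (our assumption for the counting) reproduces the \(\sim\)0.5 and \(\sim\)0.7 M figures:

```python
import numpy as np

N_A = 6.02214076e23
PACKING = 0.74         # optimal close-packing fraction (assumption)
IONS_PER_FORMULA = 3   # Ca2+ plus 2 BH4- per dissolved Ca(BH4)2 (assumption)

def max_molarity(radius_angstrom):
    """Salt molarity above which solvation spheres of the given radius must touch."""
    v_sphere = 4.0 / 3.0 * np.pi * (radius_angstrom * 1e-10) ** 3  # m^3
    n_ions = PACKING / v_sphere                                    # ions/m^3
    return n_ions / IONS_PER_FORMULA / N_A / 1e3                   # mol/L

for r in (6.0, 5.0):
    print(f"r = {r} A -> ~{max_molarity(r):.2f} M")  # ~0.45 and ~0.78 M
```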
Collective variables apply to only one of the calcium atoms, and hence the free energy is averaged over all (remaining) possible coordinations/distances for the other atom. Hence, the room-temperature Boltzmann probability speaks of the likelihood of finding a state formed by the constrained species (e.g., Ca(BH\({}_{4}\))\({}_{4}^{2-}\)) in the environment of the remaining, unconstrained species. Since our system contains two calciums and four borohydrides, the unconstrained space is different depending on the value of the collective variables. In the Ca(BH\({}_{4}\))\({}_{4}^{2-}\) basin, the unconstrained Ca can only be fully solvated. On the other hand, in the Ca\({}^{2+}\) basin, the other calcium can exist in five different ion coordination states (Ca\({}^{2+}\), Ca(BH\({}_{4}\))\({}^{+}\), Ca(BH\({}_{4}\))\({}_{2}\), Ca(BH\({}_{4}\))\({}_{3}^{-}\), and Ca(BH\({}_{4}\))\({}_{4}^{2-}\)). This is the reason why no symmetry is expected on the free-energy surface along the coordination axis, e.g., between CN=0 and CN=4, or between CN=1 and CN=3.

Population sampling. The structures and equilibrium populations of points of interest in the free-energy surface were collected using Umbrella Sampling. Initial structures were obtained from metadynamics trajectory snapshots at the desired point in the CV space. The collective variable coordinates were restrained with a harmonic potential centered at the desired value, with force constant 1/w\({}^{2}\), where w is the width of the collective variable grid (see Table S1). Then, trajectories of 150-200 ns were calculated under similar MD parameters as the final equilibration step and MTD run.

Data analysis. Umbrella sampling trajectories were analyzed with an unsupervised learning methodology recently developed by us. [17] In this protocol, between 5000 and 10000 local atomic arrangements were extracted from each US trajectory, which were then aligned while taking into account possible permutations between similar elements (e.g. THF - O). [46; 47] Classification based on their structural similarity was carried out using dimensionality reduction [48; 49] and clustering [50] algorithms, in an ASE-compatible [51] environment. The \(n\)-dimensional free-energy landscapes obtained from the MTD sampling were explored with a Jupyter-adapted version of the MEP-SAnd module [52] in order to find critical points and minimum energy pathways.

## Data availability

Additional computational details and free-energy information referenced in the text can be found in the Supporting Information. The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

## Author contributions

ASM performed the calculations, analyzed the data and wrote the original draft with support from DP. ASM and FR developed and tested the clustering algorithm. SS contributed to force field development. DP supervised and managed the project. All authors contributed to editing the manuscript.

## Conflicts of interest

There are no conflicts to declare.

## Acknowledgements

This work was fully supported by the Joint Center for Energy Storage Research (JCESR), an Energy Innovation Hub funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences.
The theoretical analysis in this work was supported by a User Project at The Molecular Foundry and its computing resources, managed by the High Performance Computing Services Group at Lawrence Berkeley National Laboratory (LBNL), supported by the Director, Office of Science, Office of Basic Energy Sciences, of the United States Department of Energy under Contract DE-AC02-05CH11231.
2301.09706
Harmonic complex structures and special Hermitian metrics on products of Sasakian manifolds
It is well known that the product of two Sasakian manifolds carries a 2-parameter family of Hermitian structures $(J_{a,b},g_{a,b})$. We show in this article that the complex structure $J_{a,b}$ is harmonic with respect to $g_{a,b}$, i.e. it is a critical point of the Dirichlet energy functional. Furthermore, we also determine when these Hermitian structures are locally conformally K\"ahler, balanced, strong K\"ahler with torsion, Gauduchon or $k$-Gauduchon ($k\geq 2$). Finally, we study the Bismut connection associated to $(J_{a,b}, g_{a,b})$ and we provide formulas for the Bismut-Ricci tensor $\operatorname{Ric}^B$ and the Bismut-Ricci form $\rho^B$. We show that these tensors vanish if and only if each Sasakian factor is $\eta$-Einstein with appropriate constants and we also exhibit some examples fulfilling these conditions, thus providing new examples of Calabi-Yau with torsion manifolds.
Adrián Andrada, Alejandro Tolcachier
2023-01-23T20:24:47Z
http://arxiv.org/abs/2301.09706v3
# Harmonic complex structures and special Hermitian metrics on products of Sasakian manifolds

###### Abstract.

It is well known that the product of two Sasakian manifolds carries a 2-parameter family of Hermitian structures \((J_{a,b},g_{a,b})\). We show in this article that the complex structure \(J_{a,b}\) is harmonic with respect to \(g_{a,b}\), i.e. it is a critical point of the Dirichlet energy functional. Furthermore, we also determine when these Hermitian structures are locally conformally Kahler, balanced, strong Kahler with torsion, Gauduchon or \(k\)-Gauduchon (\(k\geq 2\)). Finally, we study the Bismut connection associated to \((J_{a,b},g_{a,b})\) and we provide formulas for the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) and the Bismut-Ricci form \(\rho^{B}\). We show that these tensors vanish if and only if each Sasakian factor is \(\eta\)-Einstein with appropriate constants and we also exhibit some examples fulfilling these conditions, thus providing new examples of Calabi-Yau with torsion manifolds.

2020 Mathematics Subject Classification: 53C15, 53C25, 53D15

This work was partially supported by CONICET, SECyT-UNC and FONCyT (Argentina) and the MATH-AMSUD Regional Program 21-MATH-06.

## 1. Introduction

In this paper we study \(2n\)-dimensional Hermitian manifolds \(M\) with \(n\geq 2\). It is known that if such a manifold is compact with flat Bismut connection, then its universal cover is a Lie group \(G\) equipped with a bi-invariant metric and a left invariant complex structure compatible with the metric. In particular, \(G\) is the product of a compact semisimple Lie group and a real vector space. Our first goal in this article is to generalize some of the properties of Calabi-Eckmann manifolds to the product of two arbitrary Sasakian manifolds, since it is well known that odd-dimensional spheres carry a canonical Sasakian structure. It was shown by Morimoto [48] that the product of two normal almost contact manifolds has a natural complex structure. This was later generalized independently by Tsukada [57] and Watson [63], who showed the existence of a family of complex structures \(J_{a,b}\) for \(a,b\in\mathbb{R}\), \(b\neq 0\), which correspond to the complex structures on \(\mathbb{S}^{2p+1}\times\mathbb{S}^{2q+1}\) given in [15]. They also showed the existence of a family of compatible Hermitian metrics \(g_{a,b}\). We will restrict to the case of a product of two Sasakian manifolds and the corresponding Hermitian structures \((J_{a,b},g_{a,b})\) will be the central object of study throughout the paper. A second goal of this article is to study the Bismut connection associated to the Hermitian structure \((J_{a,b},g_{a,b})\). Concretely, we study the vanishing of the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) and the Bismut-Ricci form \(\rho^{B}\). It will turn out that these conditions are closely related to a particular family of Sasakian manifolds, called \(\eta\)_-Einstein_. A Sasakian manifold is called \(\eta\)-Einstein if the Ricci tensor of the Sasakian metric satisfies \(\operatorname{Ric}=\lambda g+\nu\eta\otimes\eta\) for certain constants \(\lambda,\nu\in\mathbb{R}\), where \(\eta\) is the \(1\)-form dual to the Reeb vector field. The article is structured as follows. In §2 we recall basic notions on Sasakian manifolds and their transverse geometry and present some preliminary results.
In §3 we study the Levi-Civita connection of the metric \(g_{a,b}\) on the product \(S_{1}\times S_{2}\), where \(S_{1}\) and \(S_{2}\) are Sasakian manifolds, and we use this in §4 to show that \(J_{a,b}\) is harmonic on \((S_{1}\times S_{2},g_{a,b})\) (see Theorem 4.2). Next, in §5, we study the balanced, LCK, SKT and \(k\)-Gauduchon (\(k\geq 2\)) conditions on \(S_{1}\times S_{2}\). We show in Theorem 5.9 that the Hermitian structure \((J_{a,b},g_{a,b})\) on \(S_{1}\times S_{2}\) is always Gauduchon (i.e. \((n-1)\)-Gauduchon) and it is \(k\)-Gauduchon (\(2\leq k\leq n-2\)) if and only if it is astheno-Kahler. This complements the result in [24], where it was shown that \((J_{a,b},g_{a,b})\) is \(1\)-Gauduchon if and only if it is astheno-Kahler. The astheno-Kahler condition was previously characterized in [46]. Finally, in §6 we provide an explicit expression for the Bismut connection associated to \((J_{a,b},g_{a,b})\) in terms of the characteristic connections on \(S_{1}\) and \(S_{2}\). As an application of this explicit expression, we provide formulas for the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) and the Bismut-Ricci form \(\rho^{B}\), and determine when they vanish (Theorems 6.6 and 6.12, respectively). More precisely, we show that \(\operatorname{Ric}^{B}=0\) or \(\rho^{B}=0\) hold if and only if both Sasakian factors \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein with certain appropriate constants \((\lambda_{1},\nu_{1})\) and \((\lambda_{2},\nu_{2})\); and we exhibit examples of Hermitian structures \((J_{a,b},g_{a,b})\) with \(\operatorname{Ric}^{B}=0\) or \(\rho^{B}=0\), leading to new examples of CYT structures.

### Acknowledgments

The authors are grateful to Romina Arroyo, Jorge Lauret, Henrique Sa Earp and Jeffrey Streets for their useful comments and suggestions. The authors would also like to thank the hospitality of the Instituto de Matematica, Estatistica e Computacao Cientifica at UNICAMP (Brazil), where they were introduced to the theory of harmonic almost complex structures.

## 2. Preliminaries on Sasakian manifolds

An _almost contact structure_ on a differentiable manifold \(M^{2n+1}\) is a triple \((\varphi,\xi,\eta)\), where \(\varphi\) is a (1,1)-type tensor field, \(\xi\) a vector field, and \(\eta\) a \(1\)-form satisfying \[\varphi^{2}=-\operatorname{Id}+\eta\otimes\xi,\quad\eta(\xi)=1,\] which imply \(\varphi(\xi)=0\) and \(\eta\circ\varphi=0\). \((M,\varphi,\xi,\eta)\) is called an _almost contact manifold_ and \(\xi\) is called the Reeb vector field. The tangent bundle of \(M\) splits as \(TM=\mathcal{D}\oplus\mathcal{L}\), where \(\mathcal{D}=\operatorname{Ker}\eta=\operatorname{Im}\varphi\) and \(\mathcal{L}\) is the line bundle spanned by \(\xi\). On the product manifold \(M\times\mathbb{R}\) there is a natural almost complex structure \(J\) defined by \[J\left(X+f\frac{d}{dt}\right)=\varphi X-f\xi+\eta(X)\frac{d}{dt}, \tag{2.1}\] where \(X\in\mathfrak{X}(M)\), \(t\) is the coordinate on \(\mathbb{R}\) and \(f\) is a smooth function on \(M\times\mathbb{R}\). If \(J\) is integrable, the almost contact structure is said to be _normal_.
This is equivalent to the vanishing of the tensor field \[N_{\varphi}:=[\varphi,\varphi]+d\eta\otimes\xi,\] where \([\varphi,\varphi]\) is the Nijenhuis torsion of \(\varphi\) defined by \[[\varphi,\varphi](X,Y)=[\varphi X,\varphi Y]+\varphi^{2}[X,Y]-\varphi[\varphi X,Y]-\varphi[X,\varphi Y].\] Note that if the almost contact structure is normal then \[\varphi[\xi,X]=[\xi,\varphi X]\qquad\text{for all }X\in\mathfrak{X}(M). \tag{2.2}\] In particular, \[[\xi,X]\in\Gamma(\mathcal{D})\qquad\text{for all }X\in\Gamma(\mathcal{D}). \tag{2.3}\] An _almost contact metric structure_ on \(M\) is \((\varphi,\xi,\eta,g)\), where \((\varphi,\xi,\eta)\) is an almost contact structure and \(g\) is a Riemannian metric on \(M\) satisfying \[g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y),\quad X,Y\in\mathfrak{X}(M). \tag{2.4}\] This equation implies: \[g(\varphi X,Y)=-g(X,\varphi Y)\qquad\text{and}\qquad g(\xi,X)=\eta(X),\] for all \(X,Y\in\mathfrak{X}(M)\). That is, \(\varphi\) is skew-symmetric and the vector field \(\xi\) is \(g\)-dual to the \(1\)-form \(\eta\). In analogy with the almost Hermitian setting, the fundamental \(2\)-form \(\Phi\) can be defined by \[\Phi(X,Y)=g(X,\varphi Y).\] A normal almost contact metric manifold \(S\) is called _Sasakian_ if \(d\eta=2\Phi\). In particular, \(\Phi\) is exact and \(\eta\) is a contact form on \(S\), i.e. \(\eta\wedge(d\eta)^{n}\) is a volume form. Sasakian manifolds form the most important class of almost contact metric manifolds, due to their close relation with Kahler manifolds. Indeed, the Riemannian cone of an almost contact metric manifold endowed with the almost complex structure given by (2.1) is Kahler if and only if the structure is Sasakian. Some properties of Sasakian manifolds that we will need in forthcoming sections are stated in the following lemma. **Lemma 2.1**.: _If \((S,\varphi,\xi,\eta,g)\) is a Sasakian manifold with fundamental \(2\)-form \(\Phi(X,Y)=g(X,\varphi Y)\) and \(d\eta=2\Phi\), then:_
* \(\xi\) _is a unit Killing vector field on_ \(S\)_,_
* \(\nabla_{X}\xi=-\varphi X\) _for all_ \(X\in\mathfrak{X}(S)\)_; in particular,_ \(\nabla_{\xi}\xi=0\)_,_
* \(\nabla_{\xi}X=[\xi,X]-\varphi X\) _for all_ \(X\in\mathfrak{X}(S)\)_,_
* \((\nabla_{X}\varphi)Y=g(X,Y)\xi-\eta(Y)X\) _for all_ \(X,Y\in\mathfrak{X}(S)\)_._
The proof of Lemma 2.1 is standard and can be found in [9]. A consequence of this lemma is a generalization of (2.3). More precisely, on a Sasakian manifold we have \[[\xi,X]\in\mathcal{D}\quad\text{for all }X\in\mathfrak{X}(S). \tag{2.5}\] Indeed, decomposing \(X=\eta(X)\xi+X^{\mathcal{D}}\) with \(X^{\mathcal{D}}\in\Gamma(\mathcal{D})\), we have \[[\xi,X]=[\xi,\eta(X)\xi+X^{\mathcal{D}}]=\xi(\eta(X))\xi+[\xi,X^{\mathcal{D}}],\] and \[\xi(\eta(X))=\xi g(\xi,X)=g(\nabla_{\xi}\xi,X)+g(\xi,\nabla_{\xi}X)=-g(\xi,\varphi X)=0,\] where we have used Lemma 2.1(ii) in the third and fourth equalities. Therefore \[[\xi,X]=[\xi,X^{\mathcal{D}}]\in\Gamma(\mathcal{D})\] due to (2.3).

### Transverse geometry of Sasakian manifolds

Let \((S,\varphi,\eta,\xi,g)\) be a Sasakian manifold of dimension \(2n+1\). Recall that \(\mathcal{D}=\operatorname{Ker}\eta=\operatorname{Im}\varphi\) is a subbundle of \(TS\) of rank \(2n\).
The Sasakian structure induces on \(\mathcal{D}\) a natural connection \(\nabla^{T}\), called the transverse Levi-Civita connection which, for any \(U\in\Gamma(\mathcal{D})\), is defined by \[\nabla^{T}_{\xi}U=[\xi,U],\qquad\nabla^{T}_{X}U=(\nabla_{X}U)^{\mathcal{D}}, \quad X\in\Gamma(\mathcal{D}), \tag{2.6}\] where \((\cdot)^{\mathcal{D}}\) denotes the projection onto \(\mathcal{D}\). This is the only connection on \(\mathcal{D}\) that satisfies \[\nabla^{T}_{X}(\varphi|_{\mathcal{D}})=0,\qquad\nabla^{T}_{X}(g|_{\mathcal{D} })=0,\qquad\nabla^{T}_{U}V-\nabla^{T}_{V}U=[U,V]^{\mathcal{D}}, \tag{2.7}\] for any \(X\in\mathfrak{X}(S)\) and \(U,V\in\Gamma(\mathcal{D})\). Note that \[\nabla_{U}V=-\Phi(U,V)\xi+\nabla^{T}_{U}V,\quad U,V\in\Gamma(\mathcal{D}), \tag{2.8}\] which implies \[[U,V]=-2\Phi(U,V)\xi+[U,V]^{\mathcal{D}},\quad U,V\in\Gamma(\mathcal{D}). \tag{2.9}\] Using (2.6)-(2.9) we obtain the following result. **Lemma 2.2**.: _For any \(U,V,W\in\Gamma(\mathcal{D})\) the following identities hold:_ * \(\nabla^{T}_{[U,V]^{\mathcal{D}}}W=\nabla^{T}_{[U,V]}W+2\Phi(U,V)[\xi,W]\)_,_ * \(\nabla_{[U,V]}W=2\Phi(U,V)\varphi W-\Phi([U,V]^{\mathcal{D}},W)\xi+\nabla^{T} _{[U,V]}W\)_,_ * \(R(U,V)W=R^{T}(U,V)W+\Phi(V,W)\varphi U-\Phi(U,W)\varphi V-2\Phi(U,V)\varphi W\)_,_ * \(R(U,V)\xi=0\)_._ Proof.: (i) is a straightforward consequence of (2.9) and (2.6): \[\nabla^{T}_{[U,V]}W =\nabla^{T}_{(-2\Phi(U,V)\xi+[U,V]^{\mathcal{D}})}W\] \[=-2\Phi(U,V)[\xi,W]+\nabla^{T}_{[U,V]^{\mathcal{D}}}W.\] For (ii), using (2.9) and (2.8) we compute \[\nabla_{[U,V]}W =\nabla_{(-2\Phi(U,V)\xi+[U,V]^{\mathcal{D}})}W\] \[=-2\Phi(U,V)([\xi,W]-\varphi W)+\nabla_{[U,V]^{\mathcal{D}}}W\] \[=-2\Phi(U,V)([\xi,W]-\varphi W)-\Phi([U,V]^{\mathcal{D}},W)\xi+ \nabla^{T}_{[U,V]^{\mathcal{D}}}W\] \[\quad+2\Phi(U,V)[\xi,W]\] \[=2\Phi(U,V)\varphi W-\Phi([U,V]^{\mathcal{D}},W)\xi+\nabla^{T}_{[ U,V]}W,\] where we have used (i) in the fourth equality. For (iii), beginning with the definition \(R(U,V)W=\nabla_{U}\nabla_{V}W-\nabla_{V}\nabla_{U}W-\nabla_{[U,V]}W\) and using (2.6), (2.9), Lemma 2.1 and (ii) we obtain \[R(U,V)W =-U(\Phi(V,W))\xi+\Phi(V,W)\varphi U-\Phi(U,\nabla^{T}_{V}W)\xi+ \nabla^{T}_{U}\nabla^{T}_{V}W\] \[\quad+V(\Phi(U,W))\xi-\Phi(U,W)\varphi V+\Phi(V,\nabla^{T}_{U}W) \xi-\nabla^{T}_{V}\nabla^{T}_{U}W\] \[\quad-(2\Phi(U,V)\varphi W-\Phi([U,V]^{\mathcal{D}},W)\xi+\nabla^ {T}_{[U,V]}W)\] Next, using (i), the definition of \(\Phi\) and (2.7) we arrive at \[R(U,V)W =-g(\nabla_{U}^{T}V,\varphi W)\xi-g(V,\varphi\nabla_{U}^{T}W)\xi+ \Phi(V,W)\varphi U-g(U,\varphi\nabla_{V}^{T}W)\xi\] \[\quad+g(\nabla_{V}^{T}U,\varphi W)\xi+g(U,\varphi\nabla_{V}^{T}W) \xi-\Phi(U,W)\varphi V+g(V,\varphi\nabla_{U}^{T}W)\xi\] \[\quad-2\Phi(U,V)\varphi W+\Phi([U,V]^{\mathcal{D}},W)\xi+R^{T}(U, V)W\] \[=R^{T}(U,V)W+\Phi(V,W)\varphi U-\Phi(U,W)\varphi V-2\Phi(U,V) \varphi W,\] and (iii) is proved. For (iv), we compute \[R(U,V)\xi =\nabla_{U}\nabla_{V}\xi-\nabla_{V}\nabla_{U}\xi-\nabla_{[U,V]}\xi\] \[=-\nabla_{U}\varphi V+\nabla_{V}\varphi U+\varphi[U,V]\] \[=\Phi(U,\varphi V)-\nabla_{U}^{T}\varphi V-\Phi(V,\varphi U)\xi+ \nabla_{V}^{T}\varphi U+\varphi[U,V]^{\mathcal{D}}\] \[=-g(U,V)\xi-\varphi\nabla_{U}^{T}V+g(V,U)\xi+\varphi\nabla_{V}^{ T}U+\varphi[U,V]^{\mathcal{D}},\] according to Lemma 2.1(ii). It follows from (2.7) that this last expression vanishes, therefore \(R(U,V)\xi=0\), and the proof is complete. **Corollary 2.3**.: _For any \(U,V\in\Gamma(\mathcal{D})\), each curvature endomorphism \(R(U,V)\) preserves \(\mathcal{D}\). 
Moreover, for \(V=\varphi U\) we have that \(R(U,\varphi U)|_{\mathcal{D}}\) commutes with \(\varphi|_{\mathcal{D}}\)._ ## 3. Hermitian structures on the product of Sasakian manifolds We recall next the following construction, developed independently by Tsukada [57] and Watson [63], both based on a previous construction due to Morimoto [48], using ideas from [15]. With this construction, one can define a Hermitian structure on the product of two manifolds equipped with normal almost contact metric structures. We will focus later on the product of Sasakian manifolds. Let \(M_{1}\) and \(M_{2}\) be differentiable manifolds of dimension \(2n_{1}+1\) and \(2n_{2}+1\) and let \((\varphi_{1},\xi_{1},\eta_{1},g_{1})\) and \((\varphi_{2},\xi_{2},\eta_{2},g_{2})\) be almost contact metric structures on \(M_{1}\) and \(M_{2}\), respectively. For \(a,b\in\mathbb{R}\), \(b\neq 0\), we can induce an almost Hermitian structure \((J_{a,b},g_{a,b})\) on the product manifold \(M:=M_{1}\times M_{2}\) as follows: for \(X_{1}\in\mathfrak{X}(M_{1})\) and \(X_{2}\in\mathfrak{X}(M_{2})\), define an almost complex structure \(J_{a,b}\) on \(M\) by \[J_{a,b}(X_{1}+X_{2}) =\varphi_{1}X_{1}-\left(\frac{a}{b}\eta_{1}(X_{1})+\frac{a^{2}+b^ {2}}{b}\eta_{2}(X_{2})\right)\xi_{1}\] \[\quad+\varphi_{2}X_{2}+\left(\frac{1}{b}\eta_{1}(X_{1})+\frac{a}{ b}\eta_{2}(X_{2})\right)\xi_{2}. \tag{3.1}\] Next, define a Riemannian metric \(g_{a,b}\) on \(M\) by \[g_{a,b}(X_{1}+X_{2},Y_{1}+Y_{2}) =g_{1}(X_{1},Y_{1})+a[\eta_{1}(X_{1})\eta_{2}(Y_{2})+\eta_{1}(Y_{ 1})\eta_{2}(X_{2})]\] \[\quad+g_{2}(X_{2},Y_{2})+(a^{2}+b^{2}-1)\eta_{2}(X_{2})\eta_{2}(Y_ {2}). \tag{3.2}\] It is an easy exercise on quadratic forms to verify that \(g_{a,b}\) is indeed positive definite and \(J_{a,b}\) is Hermitian with respect to \(g_{a,b}\). Regarding \(\mathfrak{X}(M_{1})\) and \(\mathfrak{X}(M_{2})\) as subalgebras of \(\mathfrak{X}(M)\) in a natural manner, (3.1) and (3.2) can be rewritten in the following way, where \(U_{i}\in\Gamma(\mathcal{D}_{i})\): \[J_{a,b}\xi_{1}=-\frac{a}{b}\xi_{1}+\frac{1}{b}\xi_{2},\qquad J_{a,b}U_{1}= \varphi_{1}U_{1},\] \[J_{a,b}\xi_{2}=-\frac{a^{2}+b^{2}}{b}\xi_{1}+\frac{a}{b}\xi_{2},\qquad J_{a,b}U_ {2}=\varphi_{2}U_{2},\] and, for \(X_{i},Y_{i}\in\mathfrak{X}(M_{i})\): \[g_{a,b}(X_{1},Y_{1}) =g_{1}(X_{1},Y_{1}),\qquad g_{a,b}(X_{1},X_{2})=a\eta_{1}(X_{1}) \eta_{2}(X_{2})\] \[g_{a,b}(X_{2},Y_{2}) =g_{2}(X_{2},Y_{2})+(a^{2}+b^{2}-1)\eta_{2}(X_{2})\eta_{2}(Y_{2}),\] Note that \(g_{a,b}\) coincides with \(g_{1}\) on \(M_{1}\) and with \(g_{2}\) on \(\mathcal{D}_{2}\), but it modifies the length of \(\xi_{2}\) by a factor of \(a^{2}+b^{2}\); also, \(\xi_{1}\) and \(\xi_{2}\) are no longer orthogonal whenever \(a\neq 0\). Moreover, \(g_{a,b}\) is the product \(g_{1}\times g_{2}\) if and only if \(a=0,b=\pm 1\). Morimoto's original construction corresponds to the case \(a=0\), \(b=1\). He proved the following result: **Proposition 3.1**.: _[_48_, Proposition 3]_ _Let \((\varphi_{1},\xi_{1},\eta_{1})\) and \((\varphi_{2},\xi_{2},\eta_{2})\) be almost contact structures on \(M_{1}\) and \(M_{2}\), respectively. Then the almost complex structure \(J_{0,1}\) on \(M=M_{1}\times M_{2}\) is integrable if and only if both almost contact structures are normal._ More generally, the following result can be proved in the same way as in [48]: **Proposition 3.2**.: _Let \((\varphi_{1},\xi_{1},\eta_{1})\) and \((\varphi_{2},\xi_{2},\eta_{2})\) be almost contact structures on \(M_{1}\) and \(M_{2}\), respectively. 
If both almost contact structures are normal then the almost complex structure \(J_{a,b}\) is integrable for any \(a\in\mathbb{R}\), \(b\in\mathbb{R}\), \(b\neq 0\)._ From now on, we will deal only with the case when \((S_{1},\varphi_{1},\xi_{1},\eta_{1},g_{1})\) and \((S_{2},\varphi_{2},\xi_{2},\eta_{2},g_{2})\) are Sasakian manifolds. We will denote \(M_{a,b}:=S_{1}\times S_{2}\) equipped with the Hermitian structure \((J_{a,b},g_{a,b})\). Moreover, we will denote simply \(J:=J_{a,b}\), \(g:=g_{a,b}\) since there will be no risk of confusion. In forthcoming sections we will need explicit formulas for the Levi-Civita connection \(\nabla\) on \(M_{a,b}\) associated to \(g\) in terms of the Levi-Civita connections \(\nabla^{1}\) and \(\nabla^{2}\) on \((S_{1},g_{1})\) and \((S_{2},g_{2})\), respectively. We will use the following expressions which appear for instance in [42]: given \(X_{i},Y_{i},Z_{i}\in\mathfrak{X}(S_{i})\), we have that \[\begin{split} g(\nabla_{X_{1}}Y_{1},Z_{1})&=g_{1}( \nabla^{1}_{X_{1}}Y_{1},Z_{1}),\quad g(\nabla_{X_{1}}Y_{1},Z_{2})=a\eta_{1}( \nabla^{1}_{X_{1}}Y_{1})\eta_{2}(Z_{2})\\ g(\nabla_{X_{2}}Y_{2},Z_{1})&=a\eta_{2}(\nabla^{2 }_{X_{2}}Y_{2})\eta_{1}(Z_{1})\\ g(\nabla_{X_{2}}Y_{2},Z_{2})&=g_{2}(\nabla^{2}_{X_ {2}}Y_{2},Z_{2})+(a^{2}+b^{2}-1)[\eta_{2}(\nabla^{2}_{X_{2}}Y_{2})\eta_{2}(Z_{ 2})\\ &\qquad-\eta_{2}(X_{2})g_{2}(\varphi_{2}Y_{2},Z_{2})-\eta_{2}(Y_ {2})g_{2}(\varphi_{2}X_{2},Z_{2})]\\ g(\nabla_{X_{1}}Y_{2},Z_{1})&=-a\eta_{2}(Y_{2})g_{1} (\varphi_{1}X_{1},Z_{1}),\quad g(\nabla_{X_{1}}Y_{2},Z_{2})=-a\eta_{1}(X_{1})g_ {2}(\varphi_{2}Y_{2},Z_{2})\\ g(\nabla_{X_{2}}Y_{1},Z_{1})&=-a\eta_{2}(X_{2})g_{1} (\varphi_{1}Y_{1},Z_{1}),\quad g(\nabla_{X_{2}}Y_{1},Z_{2})=-a\eta_{1}(Y_{1})g_ {2}(\varphi_{2}X_{2},Z_{2})\end{split} \tag{3.3}\] The next result follows from the set of equations (3.3): **Corollary 3.3**.: _With notation as above,_ 1. \(\nabla_{X_{1}}Y_{1}=\nabla^{1}_{X_{1}}Y_{1}\in\mathfrak{X}(S_{1})\)_,_ 2. \(\nabla_{X_{2}}Y_{2}=\nabla^{2}_{X_{2}}Y_{2}-(a^{2}+b^{2}-1)[\eta_{2}(X_{2}) \varphi_{2}Y_{2}+\eta_{2}(Y_{2})\varphi_{2}X_{2}]\in\mathfrak{X}(S_{2})\)_,_ 3. \(\nabla_{X_{1}}Y_{2}=-a[\eta_{2}(Y_{2})\varphi_{1}X_{1}+\eta_{1}(X_{1})\varphi_{2 }Y_{2}]\in\mathfrak{X}(S_{1})\oplus\mathfrak{X}(S_{2})\)_,_ 4. \(\nabla_{X_{2}}Y_{1}=-a[\eta_{2}(X_{2})\varphi_{1}Y_{1}+\eta_{1}(Y_{1})\varphi_{2 }X_{2}]\in\mathfrak{X}(S_{1})\oplus\mathfrak{X}(S_{2})\)_._ _In particular, \(\nabla_{\xi_{1}}\xi_{1}=\nabla_{\xi_{2}}\xi_{2}=\nabla_{\xi_{1}}\xi_{2}=\nabla_ {\xi_{2}}\xi_{1}=0\)._ Using the previous corollary we compute next \(\nabla J\), which will be needed in the proof of Lemma 4.3 below. **Lemma 3.4**.: _For any \(X_{i},Y_{i}\in\mathfrak{X}(S_{i})\), \(i=1,2\),_ 1. \((\nabla_{X_{1}}J)Y_{1}=g_{1}(X_{1},Y_{1})\xi_{1}-\eta_{1}(Y_{1})X_{1}-\frac{a} {b}\Phi_{1}(X_{1},Y_{1})\xi_{1}+\frac{1}{b}\Phi_{1}(X_{1},Y_{1})\xi_{2}\)_,_ 2. \((\nabla_{X_{2}}J)Y_{2}=[g_{2}(X_{2},Y_{2})+(a^{2}+b^{2}-1)\eta_{2}(X_{2})\eta_ {2}(Y_{2})]\xi_{2}-(a^{2}+b^{2})\eta_{2}(Y_{2})X_{2}\)__ \[-\frac{a^{2}+b^{2}}{b}\Phi_{2}(X_{2},Y_{2})\xi_{1}+\frac{a}{b}\Phi_{2}(X_{2},Y _{2})\xi_{2}\] 3. \((\nabla_{X_{1}}J)Y_{2}=a\eta_{2}(Y_{2})\eta_{1}(X_{1})\xi_{1}-a\eta_{2}(Y_{2} )X_{1}+b\eta_{2}(Y_{2})\varphi_{1}X_{1}\)_._ 4. \((\nabla_{X_{2}}J)Y_{1}=a[\eta_{1}(Y_{1})\eta_{2}(X_{2})\xi_{2}-\eta_{1}(Y_{1}) X_{2}]-b\eta_{1}(Y_{1})\varphi_{2}X_{2}\)_._ _In particular, \(\nabla_{\xi_{1}}J=0\) and \(\nabla_{\xi_{2}}J=0\)._ Proof.: We compute \(\nabla J\) using Corollary 3.3 and the definition of \(J\). 
For (i), \[(\nabla_{X_{1}}J)Y_{1}=\nabla_{X_{1}}JY_{1}-J\nabla_{X_{1}}^{1}Y_{1}.\] We will expand each term in detail. On the one hand, \[\nabla_{X_{1}}JY_{1} =\nabla_{X_{1}}(\varphi_{1}Y_{1}-\frac{a}{b}\eta_{1}(Y_{1})\xi_{1}+\frac{1}{b}\eta_{1}(Y_{1})\xi_{2})\] \[=\nabla_{X_{1}}^{1}\varphi_{1}Y_{1}-\frac{a}{b}(X_{1}(\eta_{1}(Y_{1}))\xi_{1}-\eta_{1}(Y_{1})\varphi_{1}X_{1})+\frac{1}{b}(X_{1}(\eta_{1}(Y_{1}))\xi_{2}-a\eta_{1}(Y_{1})\varphi_{1}X_{1}).\] On the other hand, \[-J\nabla_{X_{1}}^{1}Y_{1}=-\varphi_{1}\nabla_{X_{1}}^{1}Y_{1}+\frac{a}{b}\eta_{1}(\nabla_{X_{1}}^{1}Y_{1})\xi_{1}-\frac{1}{b}\eta_{1}(\nabla_{X_{1}}^{1}Y_{1})\xi_{2}.\] Putting these two expressions together we arrive at \[(\nabla_{X_{1}}J)Y_{1} =(\nabla_{X_{1}}^{1}\varphi_{1})Y_{1}-\frac{a}{b}g_{1}(\nabla_{X_{1}}^{1}\xi_{1},Y_{1})\xi_{1}+\frac{1}{b}g_{1}(\nabla_{X_{1}}^{1}\xi_{1},Y_{1})\xi_{2}\] \[=g_{1}(X_{1},Y_{1})\xi_{1}-\eta_{1}(Y_{1})X_{1}-\frac{a}{b}\Phi_{1}(X_{1},Y_{1})\xi_{1}+\frac{1}{b}\Phi_{1}(X_{1},Y_{1})\xi_{2},\] where we have used Lemma 2.1(iv). Next, for (ii), \[(\nabla_{X_{2}}J)Y_{2}=\nabla_{X_{2}}JY_{2}-J(\nabla_{X_{2}}^{2}Y_{2}-(a^{2}+b^{2}-1)[\eta_{2}(X_{2})\varphi_{2}Y_{2}+\eta_{2}(Y_{2})\varphi_{2}X_{2}]).\] The first term is equal to \[\nabla_{X_{2}}JY_{2} =\nabla_{X_{2}}(\varphi_{2}Y_{2}-\frac{a^{2}+b^{2}}{b}\eta_{2}(Y_{2})\xi_{1}+\frac{a}{b}\eta_{2}(Y_{2})\xi_{2})\] \[=\nabla_{X_{2}}^{2}\varphi_{2}Y_{2}-(a^{2}+b^{2}-1)\eta_{2}(X_{2})\varphi_{2}^{2}Y_{2}\] \[\quad-\frac{a^{2}+b^{2}}{b}(X_{2}(\eta_{2}(Y_{2}))\xi_{1}-a\eta_{2}(Y_{2})\varphi_{2}X_{2})\] \[\quad+\frac{a}{b}(X_{2}(\eta_{2}(Y_{2}))\xi_{2}-(a^{2}+b^{2})\eta_{2}(Y_{2})\varphi_{2}X_{2}),\] and the second term is equal to \[-\varphi_{2}(\nabla_{X_{2}}^{2}Y_{2}-(a^{2}+b^{2}-1)[\eta_{2}(X_{2})\varphi_{2}Y_{2}+\eta_{2}(Y_{2})\varphi_{2}X_{2}])\] \[\quad+\frac{a^{2}+b^{2}}{b}\eta_{2}(\nabla_{X_{2}}^{2}Y_{2})\xi_{1}-\frac{a}{b}\eta_{2}(\nabla_{X_{2}}^{2}Y_{2})\xi_{2}.\] Therefore, \[(\nabla_{X_{2}}J)Y_{2} =(\nabla_{X_{2}}^{2}\varphi_{2})Y_{2}+(a^{2}+b^{2}-1)\eta_{2}(Y_{2})\varphi_{2}^{2}X_{2}\] \[\quad-\frac{a^{2}+b^{2}}{b}\Phi_{2}(X_{2},Y_{2})\xi_{1}+\frac{a}{b}\Phi_{2}(X_{2},Y_{2})\xi_{2}\] \[=(g_{2}(X_{2},Y_{2})+(a^{2}+b^{2}-1)\eta_{2}(X_{2})\eta_{2}(Y_{2}))\xi_{2}-(a^{2}+b^{2})\eta_{2}(Y_{2})X_{2}\] \[\quad-\frac{a^{2}+b^{2}}{b}\Phi_{2}(X_{2},Y_{2})\xi_{1}+\frac{a}{b}\Phi_{2}(X_{2},Y_{2})\xi_{2},\] using again Lemma 2.1(iv).
Now, for (iii) and (iv) we compute
\[(\nabla_{X_{1}}J)Y_{2} =\nabla_{X_{1}}(\varphi_{2}Y_{2}-\frac{a^{2}+b^{2}}{b}\eta_{2}(Y_{2})\xi_{1}+\frac{a}{b}\eta_{2}(Y_{2})\xi_{2})\]
\[\quad+aJ(\eta_{2}(Y_{2})\varphi_{1}X_{1}+\eta_{1}(X_{1})\varphi_{2}Y_{2})\]
\[=-a\eta_{1}(X_{1})\varphi_{2}^{2}Y_{2}+b\eta_{2}(Y_{2})\varphi_{1}X_{1}+a(\eta_{2}(Y_{2})\varphi_{1}^{2}X_{1}+\eta_{1}(X_{1})\varphi_{2}^{2}Y_{2})\]
\[=a\eta_{2}(Y_{2})\varphi_{1}^{2}X_{1}+b\eta_{2}(Y_{2})\varphi_{1}X_{1}\]
\[=a\eta_{2}(Y_{2})\eta_{1}(X_{1})\xi_{1}-a\eta_{2}(Y_{2})X_{1}+b\eta_{2}(Y_{2})\varphi_{1}X_{1},\]
\[(\nabla_{X_{2}}J)Y_{1} =\nabla_{X_{2}}(\varphi_{1}Y_{1}-\frac{a}{b}\eta_{1}(Y_{1})\xi_{1}+\frac{1}{b}\eta_{1}(Y_{1})\xi_{2})+aJ(\eta_{2}(X_{2})\varphi_{1}Y_{1}+\eta_{1}(Y_{1})\varphi_{2}X_{2})\]
\[=-a\eta_{2}(X_{2})\varphi_{1}^{2}Y_{1}-\frac{a}{b}\eta_{1}(Y_{1})(-a\varphi_{2}X_{2})-\frac{a^{2}+b^{2}}{b}\eta_{1}(Y_{1})\varphi_{2}X_{2}\]
\[\quad+a(\eta_{2}(X_{2})\varphi_{1}^{2}Y_{1}+\eta_{1}(Y_{1})\varphi_{2}^{2}X_{2})\]
\[=a[\eta_{1}(Y_{1})\eta_{2}(X_{2})\xi_{2}-\eta_{1}(Y_{1})X_{2}]-b\eta_{1}(Y_{1})\varphi_{2}X_{2}.\]
The last statement follows easily from the previous computations.

In the next result \(R\) denotes the curvature tensor of \(\nabla\), while \(R^{i}\) denotes the curvature tensor of \(\nabla^{i}\), \(i=1,2\). We shall compute only the curvature tensors which we will need in the proof of Theorem 4.2. Let us set \(\lambda_{a,b}:=a^{2}+b^{2}-1\) to shorten a little bit the statement and proof of the lemma.

**Lemma 3.5**.: _With notation as above, for \(U_{i},V_{i}\in\Gamma(\mathcal{D}_{i}),Z_{i}\in\mathfrak{X}(S_{i})\),_
* \(R(\xi_{1},\xi_{2})=0\)_,_
* \(R(U_{1},V_{1})Z_{1}=R^{1}(U_{1},V_{1})Z_{1}\) _and_ \(R(U_{1},V_{1})Z_{2}=-2a\Phi_{1}(U_{1},V_{1})\varphi_{2}Z_{2}\)_,_
* \(R(U_{2},V_{2})Z_{1}=-2a\Phi_{2}(U_{2},V_{2})\varphi_{1}Z_{1}\) _and_ \(R(U_{2},V_{2})Z_{2}=R^{2}(U_{2},V_{2})Z_{2}+\lambda_{a,b}[\Phi_{2}(V_{2},Z_{2})\varphi_{2}U_{2}-\Phi_{2}(U_{2},Z_{2})\varphi_{2}V_{2}\)
\[-2\Phi_{2}(U_{2},V_{2})\varphi_{2}Z_{2}].\]
_In particular, \(R(U_{i},V_{i})\xi_{1}=R(U_{i},V_{i})\xi_{2}=0\)._

Proof.: For (i), we compute \(R(\xi_{1},\xi_{2})Z_{i}\) for \(Z_{i}\in\mathfrak{X}(S_{i})\), \(i=1,2\), using Corollary 3.3 and properties of Sasakian manifolds such as Lemma 2.1, (2.2) and (2.5); note that \([\xi_{1},\xi_{2}]=0\), since \(\xi_{1}\) and \(\xi_{2}\) are tangent to different factors, so the term \(\nabla_{[\xi_{1},\xi_{2}]}Z_{i}\) of the curvature vanishes. For \(Z_{1}\) we have
\[R(\xi_{1},\xi_{2})Z_{1} =\nabla_{\xi_{1}}\nabla_{\xi_{2}}Z_{1}-\nabla_{\xi_{2}}\nabla_{\xi_{1}}Z_{1}\]
\[=\nabla_{\xi_{1}}(-a\varphi_{1}Z_{1})-\nabla_{\xi_{2}}([\xi_{1},Z_{1}]-\varphi_{1}Z_{1})\]
\[=-a([\xi_{1},\varphi_{1}Z_{1}]-\varphi_{1}^{2}Z_{1})+a\varphi_{1}([\xi_{1},Z_{1}]-\varphi_{1}Z_{1})\]
\[=0,\]
and for \(Z_{2}\) we have
\[R(\xi_{1},\xi_{2})Z_{2} =\nabla_{\xi_{1}}\nabla_{\xi_{2}}Z_{2}-\nabla_{\xi_{2}}\nabla_{\xi_{1}}Z_{2}\]
\[=\nabla_{\xi_{1}}([\xi_{2},Z_{2}]-\varphi_{2}Z_{2}-(a^{2}+b^{2}-1)\varphi_{2}Z_{2})-\nabla_{\xi_{2}}(-a\varphi_{2}Z_{2})\]
\[=-a(\varphi_{2}[\xi_{2},Z_{2}]-(a^{2}+b^{2})\varphi_{2}^{2}Z_{2})\]
\[\quad+a([\xi_{2},\varphi_{2}Z_{2}]-\varphi_{2}^{2}Z_{2}-(a^{2}+b^{2}-1)\varphi_{2}^{2}Z_{2})\]
\[=0.\]
Hence (i) is proved. For (ii), first note that \(R(U_{1},V_{1})Z_{1}=R^{1}(U_{1},V_{1})Z_{1}\) follows from the fact that \(\nabla_{X_{1}}Y_{1}=\nabla_{X_{1}}^{1}Y_{1}\) for \(X_{1},Y_{1}\in\mathfrak{X}(S_{1})\).
For \(Z_{2}\), we compute using Corollary 3.3
\[R(U_{1},V_{1})Z_{2} =\nabla_{U_{1}}\nabla_{V_{1}}Z_{2}-\nabla_{V_{1}}\nabla_{U_{1}}Z_{2}-\nabla_{[U_{1},V_{1}]}Z_{2}\]
\[=\nabla_{U_{1}}(-a\eta_{2}(Z_{2})\varphi_{1}V_{1})-\nabla_{V_{1}}(-a\eta_{2}(Z_{2})\varphi_{1}U_{1})\]
\[\quad+a[\eta_{2}(Z_{2})\varphi_{1}[U_{1},V_{1}]+\eta_{1}([U_{1},V_{1}])\varphi_{2}Z_{2}]\]
\[=a\eta_{2}(Z_{2})(-\nabla_{U_{1}}\varphi_{1}V_{1}+\nabla_{V_{1}}\varphi_{1}U_{1}+\varphi_{1}[U_{1},V_{1}])+a\eta_{1}([U_{1},V_{1}])\varphi_{2}Z_{2}\]
\[=-a\eta_{2}(Z_{2})(\Phi_{1}(U_{1},\varphi_{1}V_{1})\xi_{1}-\nabla_{U_{1}}^{1,T}\varphi_{1}V_{1}-\Phi_{1}(V_{1},\varphi_{1}U_{1})\xi_{1}\]
\[\quad+\nabla_{V_{1}}^{1,T}\varphi_{1}U_{1}+\varphi_{1}(\nabla_{U_{1}}^{1,T}V_{1}-\nabla_{V_{1}}^{1,T}U_{1}))-2a\Phi_{1}(U_{1},V_{1})\varphi_{2}Z_{2}\]
\[=-2a\Phi_{1}(U_{1},V_{1})\varphi_{2}Z_{2},\]
due to (2.7), where here \(\nabla^{1,T}\) denotes the transverse connection on the Sasakian manifold \(S_{1}\). For (iii), the computation of \(R(U_{2},V_{2})Z_{1}\) is completely analogous to the computation of \(R(U_{1},V_{1})Z_{2}\) in (ii) and so we omit it. Finally, we compute
\[R(U_{2},V_{2})Z_{2} =\nabla_{U_{2}}\nabla_{V_{2}}Z_{2}-\nabla_{V_{2}}\nabla_{U_{2}}Z_{2}-\nabla_{[U_{2},V_{2}]}Z_{2}\]
\[=\nabla_{U_{2}}(\nabla_{V_{2}}^{2}Z_{2}-\lambda_{a,b}\eta_{2}(Z_{2})\varphi_{2}V_{2})-\nabla_{V_{2}}(\nabla_{U_{2}}^{2}Z_{2}-\lambda_{a,b}\eta_{2}(Z_{2})\varphi_{2}U_{2})\]
\[\quad-\nabla_{[U_{2},V_{2}]}^{2}Z_{2}+\lambda_{a,b}[\eta_{2}(Z_{2})\varphi_{2}[U_{2},V_{2}]+\eta_{2}([U_{2},V_{2}])\varphi_{2}Z_{2}]\]
\[=\nabla_{U_{2}}^{2}\nabla_{V_{2}}^{2}Z_{2}-\lambda_{a,b}\eta_{2}(\nabla_{V_{2}}^{2}Z_{2})\varphi_{2}U_{2}\]
\[\quad-\lambda_{a,b}\left(U_{2}(\eta_{2}(Z_{2}))\varphi_{2}V_{2}+\eta_{2}(Z_{2})\nabla_{U_{2}}\varphi_{2}V_{2}\right)\]
\[\quad-\nabla_{V_{2}}^{2}\nabla_{U_{2}}^{2}Z_{2}+\lambda_{a,b}\eta_{2}(\nabla_{U_{2}}^{2}Z_{2})\varphi_{2}V_{2}\]
\[\quad+\lambda_{a,b}\left(V_{2}(\eta_{2}(Z_{2}))\varphi_{2}U_{2}+\eta_{2}(Z_{2})\nabla_{V_{2}}\varphi_{2}U_{2}\right)\]
\[\quad-\nabla_{[U_{2},V_{2}]}^{2}Z_{2}+\lambda_{a,b}[\eta_{2}(Z_{2})\varphi_{2}[U_{2},V_{2}]-2\Phi_{2}(U_{2},V_{2})\varphi_{2}Z_{2}]\]
\[=R^{2}(U_{2},V_{2})Z_{2}-\lambda_{a,b}\left(g_{2}(\varphi_{2}V_{2},Z_{2})\varphi_{2}U_{2}-g_{2}(\varphi_{2}U_{2},Z_{2})\varphi_{2}V_{2}\right)\]
\[\quad-\lambda_{a,b}\eta_{2}(Z_{2})\left(-\Phi_{2}(U_{2},\varphi_{2}V_{2})\xi_{2}+\nabla_{U_{2}}^{2,T}\varphi_{2}V_{2}\right.\]
\[\quad\left.+\Phi_{2}(V_{2},\varphi_{2}U_{2})\xi_{2}-\nabla_{V_{2}}^{2,T}\varphi_{2}U_{2}-\varphi_{2}(\nabla_{U_{2}}^{2,T}V_{2}-\nabla_{V_{2}}^{2,T}U_{2})\right)\]
\[\quad-2\lambda_{a,b}\Phi_{2}(U_{2},V_{2})\varphi_{2}Z_{2}\]
\[=R^{2}(U_{2},V_{2})Z_{2}\]
\[\quad+\lambda_{a,b}(\Phi_{2}(V_{2},Z_{2})\varphi_{2}U_{2}-\Phi_{2}(U_{2},Z_{2})\varphi_{2}V_{2}-2\Phi_{2}(U_{2},V_{2})\varphi_{2}Z_{2}).\]
The last statement follows easily from (ii), (iii) and Lemma 2.2(iv).

## 4. Harmonicity of the complex structure \(J_{a,b}\) with respect to \(g_{a,b}\)

Let \((M,g)\) denote a Riemannian manifold. According to [65], a \(g\)-orthogonal almost complex structure is harmonic if and only if
\[[J,\nabla^{*}\nabla J]=0,\]
where \(\nabla^{*}\nabla J\) is the _rough Laplacian_ of \(J\) defined by \(\nabla^{*}\nabla J=\operatorname{Tr}\nabla^{2}J\).
That is, if \(\{u_{1},\dots,u_{2n}\}\) is a local orthonormal frame on \(M\), then
\[(\nabla^{*}\nabla J)(W)=\sum_{i=1}^{2n}(\nabla^{2}_{u_{i},u_{i}}J)(W),\quad W\in\mathfrak{X}(M),\]
where the second covariant derivative of \(J\) is given by
\[(\nabla^{2}_{U,V}J)(W)=(\nabla_{U}(\nabla_{V}J))(W)-(\nabla_{\nabla_{U}V}J)(W).\]
It is clear that \(\nabla^{*}\nabla J\) is a \((1,1)\)-tensor on \(M\). Let \((M^{2n},J,g)\) be an almost Hermitian manifold. According to [65], the following 2-form \(\rho\) plays a special role when determining if the almost complex structure \(J\) is harmonic:
\[\rho=\mathcal{R}(\omega)\in\Omega^{2}(M),\]
where \(\omega\) is the fundamental 2-form associated to \((J,g)\) and \(\mathcal{R}\) is the curvature operator acting on 2-forms. This 2-form \(\rho\) is a natural generalization of the Chern-Ricci form of a Kahler manifold, although in general it is not closed. It can be seen that the skew-symmetric tensor \(P:TM\to TM\) obtained by contracting \(\rho\) and \(g\), i.e. \(\rho(X,Y)=g(PX,Y)\), is given by
\[P(X)=\frac{1}{2}\sum_{i=1}^{2n}R(e_{i},Je_{i})X,\quad X\in\mathfrak{X}(M), \tag{4.1}\]
where \(\{e_{i}\}\) is any orthonormal local frame of \(M\). Also, let \(\delta J\in\mathfrak{X}(M)\) denote the codifferential of \(J\), that is, the unique vector field on \(M\) satisfying
\[g(\delta J,X)=\delta\omega(X)\qquad\text{for all }X\in\mathfrak{X}(M),\]
where \(\delta\omega\) is the codifferential of \(\omega\). Since \(\delta\omega\) is given by
\[\delta\omega(X)=-\sum_{i=1}^{2n}(\nabla_{e_{i}}\omega)(e_{i},X),\]
for any local orthonormal frame \(\{e_{i}\}\) of \(M\), we obtain the following expression for \(\delta J\):
\[\delta J=\sum_{i=1}^{2n}(\nabla_{e_{i}}J)(e_{i}). \tag{4.2}\]
With all these ingredients we may recall the following result from [65]. In the Appendix we give an elementary proof of this fact.1

Footnote 1: The sign in the formula is different from [65] since there \(\omega=g(J\cdot,\cdot)\) but for us \(\omega=g(\cdot,J\cdot)\).

**Proposition 4.1**.: _[_65_, Theorem 2.8]_ _Let \(J\) be the almost complex structure of a \(2n\)-dimensional almost Hermitian manifold \((M,J,g)\). If \(J\) is integrable then_
\[[J,\nabla^{*}\nabla J]=2(\nabla_{\delta J}J-[J,P]).\]
_In particular, \(J\) is harmonic if and only if \([J,P]=\nabla_{\delta J}J\)._

The main result of this section is the following.

**Theorem 4.2**.: _Let \((S_{1}^{2n_{1}+1},\varphi_{1},\xi_{1},\eta_{1},g_{1})\) and \((S_{2}^{2n_{2}+1},\varphi_{2},\xi_{2},\eta_{2},g_{2})\) be two Sasakian manifolds. If \((J,g):=(J_{a,b},g_{a,b})\) denotes the complex structure on \(M_{a,b}=S_{1}\times S_{2}\) given in (3.1) and (3.2) then \(J\) is harmonic with respect to \(g\), for any \(a,b\in\mathbb{R},\,b\neq 0\)._

First we prove an auxiliary result, which generalizes [57, Lemma 5.4], where the case of Calabi-Eckmann manifolds is considered.

**Lemma 4.3**.: _With the notation of Theorem 4.2, the codifferential \(\delta J\) of the complex structure \(J\) on \(M_{a,b}\) is given by:_
\[\delta J=2n_{1}\xi_{1}+2n_{2}\xi_{2}.\]
_Moreover, \(\nabla_{\delta J}J=0\)._

Proof.: We will compute \(\delta J\) using (4.2). Let us consider a local orthonormal frame on \(M_{a,b}\) of the following form:
\[\left\{\xi_{1},J\xi_{1}=-\frac{a}{b}\xi_{1}+\frac{1}{b}\xi_{2},e_{1},\ldots,e_{2n_{1}},f_{1},\ldots,f_{2n_{2}}\right\},\]
where each \(e_{j}\) is a local section of \(\mathcal{D}_{1}\) and each \(f_{k}\) is a local section of \(\mathcal{D}_{2}\).
With this frame, (4.2) becomes
\[\delta J=(\nabla_{\xi_{1}}J)\xi_{1}+(\nabla_{J\xi_{1}}J)J\xi_{1}+\sum_{j=1}^{2n_{1}}(\nabla_{e_{j}}J)e_{j}+\sum_{k=1}^{2n_{2}}(\nabla_{f_{k}}J)f_{k}.\]
Since \(J\) is integrable, it follows from [34, Corollary 2.2] that \((\nabla_{\xi_{1}}J)\xi_{1}=(\nabla_{J\xi_{1}}J)J\xi_{1}=0\), \((\nabla_{e_{j}}J)e_{j}=\xi_{1}\) and \((\nabla_{f_{k}}J)f_{k}=\xi_{2}\), so that
\[\delta J=\sum_{j=1}^{2n_{1}}\xi_{1}+\sum_{k=1}^{2n_{2}}\xi_{2}=2n_{1}\xi_{1}+2n_{2}\xi_{2},\]
as stated. Finally, the last statement follows from Lemma 3.4.

Proof of Theorem 4.2.: As in Lemma 4.3, we consider a local orthonormal frame on \(M_{a,b}\) of the following form:
\[\left\{\xi_{1},J\xi_{1}=-\frac{a}{b}\xi_{1}+\frac{1}{b}\xi_{2},e_{1},\ldots,e_{2n_{1}},f_{1},\ldots,f_{2n_{2}}\right\}, \tag{4.3}\]
where each \(e_{j}\) is a local section of \(\mathcal{D}_{1}\) and each \(f_{k}\) is a local section of \(\mathcal{D}_{2}\). Since \(J\) is integrable, it follows from Proposition 4.1 and Lemma 4.3 that \(J\) is harmonic if and only if \([J,P]=0\), with \(P\) defined as in (4.1) for the local frame (4.3). We prove next that \(J\) and \(P\) do commute. We begin by computing \(P\):
\[-2P =2R(\xi_{1},J\xi_{1})+\sum_{j}R(e_{j},\varphi_{1}e_{j})+\sum_{k}R(f_{k},\varphi_{2}f_{k})\]
\[=\sum_{j}R(e_{j},\varphi_{1}e_{j})+\sum_{k}R(f_{k},\varphi_{2}f_{k}),\]
due to Lemma 3.5. Next we show that \([J,R(e_{j},\varphi_{1}e_{j})]=[J,R(f_{k},\varphi_{2}f_{k})]=0\). Recalling that \(R(e_{j},\varphi_{1}e_{j})\xi_{i}=R(f_{k},\varphi_{2}f_{k})\xi_{i}=0\), \(i=1,2\), it is enough to show that \([J,R(e_{j},\varphi_{1}e_{j})]\) and \([J,R(f_{k},\varphi_{2}f_{k})]\) vanish when evaluated on sections of \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\). We use Lemma 3.5 for all the computations below. First, for \(U_{1}\in\Gamma(\mathcal{D}_{1})\), we compute
\[[J,R(e_{j},\varphi_{1}e_{j})](U_{1})=J(R^{1}(e_{j},\varphi_{1}e_{j})U_{1})-R^{1}(e_{j},\varphi_{1}e_{j})\varphi_{1}U_{1}=0,\]
due to Corollary 2.3. Now, for \(U_{2}\in\Gamma(\mathcal{D}_{2})\),
\[[J,R(e_{j},\varphi_{1}e_{j})](U_{2}) =2a\Phi_{1}(e_{j},\varphi_{1}e_{j})U_{2}-R(e_{j},\varphi_{1}e_{j})\varphi_{2}U_{2}\]
\[=2a\Phi_{1}(e_{j},\varphi_{1}e_{j})U_{2}-2a\Phi_{1}(e_{j},\varphi_{1}e_{j})U_{2}\]
\[=0.\]
Similarly,
\[[J,R(f_{k},\varphi_{2}f_{k})](U_{1})=2a\Phi_{2}(f_{k},\varphi_{2}f_{k})U_{1}-2a\Phi_{2}(f_{k},\varphi_{2}f_{k})U_{1}=0.\]
Finally,
\[[J,R(f_{k},\varphi_{2}f_{k})](U_{2}) =J(R(f_{k},\varphi_{2}f_{k})U_{2})-R(f_{k},\varphi_{2}f_{k})\varphi_{2}U_{2}\]
\[=J(R^{2}(f_{k},\varphi_{2}f_{k})U_{2})-R^{2}(f_{k},\varphi_{2}f_{k})\varphi_{2}U_{2}\]
\[\quad+\lambda_{a,b}(-\Phi_{2}(\varphi_{2}f_{k},U_{2})f_{k}+\Phi_{2}(f_{k},U_{2})\varphi_{2}f_{k}\]
\[\qquad\qquad+2\Phi_{2}(f_{k},\varphi_{2}f_{k})U_{2})\]
\[\quad-\lambda_{a,b}(\Phi_{2}(\varphi_{2}f_{k},\varphi_{2}U_{2})\varphi_{2}f_{k}+\Phi_{2}(f_{k},\varphi_{2}U_{2})f_{k}\]
\[\qquad\qquad+2\Phi_{2}(f_{k},\varphi_{2}f_{k})U_{2})\]
\[=0,\]
since \(J(R^{2}(f_{k},\varphi_{2}f_{k})U_{2})=\varphi_{2}(R^{2}(f_{k},\varphi_{2}f_{k})U_{2})\) due to Corollary 2.3. It follows that \([J,P]=0\) and thus the complex structure \(J=J_{a,b}\) is harmonic with respect to the metric \(g=g_{a,b}\).

_Remark 4.4_.: As mentioned before, Wood proved that the complex structures of Calabi-Eckmann manifolds are harmonic with respect to the product of round metrics.
Moreover, he showed that, with the possible exception of \(\mathbb{S}^{1}\times\mathbb{S}^{3}\) and \(\mathbb{S}^{3}\times\mathbb{S}^{3}\), Calabi-Eckmann manifolds are _unstable_ (see [65, §7]), that is, they do not correspond to a local minimum of the variational problem (equivalently, the second variation of the energy fails to be non-negative, see [64]). However, his proof depends strongly on geometric properties of the spheres with the round metrics. This suggests that the study of stability on a general product of Sasakian manifolds is more involved; it will be pursued in a future paper.

## 5. Further properties of the Hermitian structures \((J_{a,b},g_{a,b})\)

In this section we will focus on products of Sasakian manifolds which admit special Hermitian structures. Let \((M,J,g)\) be a Hermitian manifold with \(\dim_{\mathbb{C}}M=n\). The complex structure \(J\) can be extended naturally to differential forms on \(M\) as follows: for a \(p\)-form \(\alpha\), the \(p\)-form \(J\alpha\) is given by
\[J\alpha=\alpha,\quad p=0,\qquad(J\alpha)(\cdot,\ldots,\cdot)=\alpha(J\cdot,\ldots,J\cdot),\quad p>0.\]
The real differential operator \(d^{c}\) is then defined by
\[d^{c}\alpha=-J^{-1}dJ\alpha=(-1)^{p}JdJ\alpha,\quad\alpha\in\Omega^{p}(M).\]
It is well known that \(dd^{c}=2\sqrt{-1}\partial\bar{\partial}\). If the fundamental 2-form \(\omega(\cdot,\cdot)=g(\cdot,J\cdot)\) is closed, then the metric \(g\) is Kahler. The topology of compact Kahler manifolds is well understood, and there are several topological obstructions for the existence of a Kahler metric. In the literature other conditions on the fundamental 2-form which are weaker than being closed have been introduced. We recall next some of these non-Kahler Hermitian conditions. If \((M,J,g)\) is a Hermitian manifold with \(\dim_{\mathbb{C}}M=n\), then the metric \(g\) is said to be:

1. _Balanced_ if its fundamental form \(\omega\) satisfies \(d\omega^{n-1}=0\), or equivalently \(\delta\omega=0\), where \(\delta\) is the codifferential associated to \(g\).
2. _Locally conformally Kahler_ (LCK) if there exists an open covering \(\{U_{i}\}_{i}\) of \(M\) and smooth functions \(f_{i}\) on each \(U_{i}\) such that \(e^{-f_{i}}g\) is Kahler. This definition is equivalent to the existence of a closed 1-form \(\theta\) such that \(d\omega=\theta\wedge\omega\). The 1-form \(\theta\) coincides with the _Lee form_ associated to \((J,g)\), which is defined (on any Hermitian manifold) by
\[\theta=\frac{1}{n-1}(\delta\omega)\circ J. \tag{5.1}\]
If the Lee form of an LCK structure is parallel with respect to the Levi-Civita connection of \(g\), the LCK metric is called _Vaisman_.
3. _Strong Kahler with torsion_ (SKT), also called _pluriclosed_, if its fundamental form satisfies \(\partial\bar{\partial}\omega=0\), or equivalently, \(dd^{c}\omega=0\). SKT metrics have applications in type II string theory and in 2-dimensional supersymmetric \(\sigma\)-models [28, 39, 54]. Moreover, they are also related to generalized Kahler geometry (see for instance [5, 36]).
4. _Astheno-Kahler_ if its fundamental form satisfies \(dd^{c}\omega^{n-2}=0\). Clearly this notion only makes sense for \(n\geq 3\). Jost and Yau introduced these metrics in [40] to study Hermitian harmonic maps and to extend Siu's rigidity theorem to non-Kahler manifolds.
5. _Gauduchon_ if its fundamental form satisfies \(dd^{c}\omega^{n-1}=0\).
Any compact Hermitian manifold admits a Gauduchon metric in its conformal class, and it is unique in this class up to homotheties, due to a renowned result by Gauduchon [29].
6. _k-Gauduchon_, for \(1\leq k\leq n-1\), if its fundamental form \(\omega\) satisfies
\[dd^{c}\omega^{k}\wedge\omega^{n-k-1}=0.\]
These metrics were introduced in [26] and they generalize the notion of Gauduchon metric, which corresponds precisely to \(k=n-1\). Moreover, SKT and astheno-Kahler metrics are respectively 1-Gauduchon and \((n-2)\)-Gauduchon.

_Remark 5.1_.: Notice that when \(n=3\) a Hermitian metric is SKT if and only if it is astheno-Kahler, and in this case it is also 1-Gauduchon.

As in previous sections, \(S_{1}\) and \(S_{2}\) will be Sasakian manifolds with \(\dim S_{i}=2n_{i}+1\), \(n_{i}\in\mathbb{N}_{0}\), \(i=1,2\), and the product manifold \(M_{a,b}=S_{1}\times S_{2}\) will be equipped with the Hermitian structure \((J_{a,b},g_{a,b})\) defined in (3.1) and (3.2). The fundamental 2-form \(\omega_{a,b}=g_{a,b}(\cdot,J_{a,b}\cdot)\) associated to \((J_{a,b},g_{a,b})\) is given by
\[\omega_{a,b}=\Phi_{1}+\Phi_{2}-b\eta_{1}\wedge\eta_{2}, \tag{5.2}\]
where \(\Phi_{i}\) and \(\eta_{i}\) have been extended to the product in a natural way. Since both manifolds are Sasakian, the exterior derivative of \(\omega_{a,b}\) is given by
\[d\omega_{a,b}=-2b(\Phi_{1}\wedge\eta_{2}-\eta_{1}\wedge\Phi_{2}). \tag{5.3}\]
Since \(b\neq 0\), it is clear that \(d\omega_{a,b}=0\) if and only if \(\Phi_{1}=\Phi_{2}=0\), which can only happen when \(n_{1}=n_{2}=0\), that is, \(\dim S_{1}=\dim S_{2}=1\), hence \(\dim M_{a,b}=2\). As we are interested in the non-Kahler setting, from now on we will assume that \(n_{1}+n_{2}\geq 1\), so that \(\dim M_{a,b}\geq 4\). Regarding the Hermitian structures \((J_{a,b},g_{a,b})\), Matsuo proved in [46] that such a Hermitian structure is astheno-Kahler if and only if the following condition holds:
\[n_{1}(n_{1}-1)+2an_{1}n_{2}+n_{2}(n_{2}-1)(a^{2}+b^{2})=0, \tag{5.4}\]
provided that \(n_{1}+n_{2}\geq 2\), that is, \(\dim_{\mathbb{C}}(S_{1}\times S_{2})\geq 3\). Later, in [24], Fino and Ugarte studied when the Hermitian structures \((J_{a,b},g_{a,b})\) are 1-Gauduchon and they obtained that this happens if and only if (5.4) holds, that is, if and only if they are astheno-Kahler.

_Remark 5.2_.: For any \(n_{1}\) and \(n_{2}\) such that \(n_{1}+n_{2}\geq 2\) and \(n_{2}\geq 1\) there are values of \(a\) and \(b\) such that \((J_{a,b},g_{a,b})\) is astheno-Kahler. Indeed, if \(n_{2}=n_{1}=1\) then \(a=0\), and if \(n_{2}=1\) and \(n_{1}\geq 2\), then \(a=-\frac{n_{1}-1}{2}\); in both cases \(b\) is arbitrary. Finally, if \(n_{2}\geq 2\) the possible values of \(a,b\) satisfy
\[a\in\left(-\frac{n_{1}}{n_{2}-1}-\frac{\sqrt{n_{1}n_{2}(n_{1}+n_{2}-1)}}{n_{2}(n_{2}-1)},-\frac{n_{1}}{n_{2}-1}+\frac{\sqrt{n_{1}n_{2}(n_{1}+n_{2}-1)}}{n_{2}(n_{2}-1)}\right),\]
and
\[b^{2}=-a^{2}-\frac{2n_{1}n_{2}a+n_{1}(n_{1}-1)}{n_{2}(n_{2}-1)}>0.\]
Note that if the Hermitian structure \((J_{a,b},g_{a,b})\) is astheno-Kahler then \(a\leq 0\), and moreover, \(a=0\) if and only if \(n_{1}=n_{2}=1\). In particular, Morimoto's structure \((J_{0,1},g_{0,1})\) satisfies (5.4) if and only if \(n_{1}=n_{2}=1\).

In the next results we study when \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) lies in one of the special classes of Hermitian structures mentioned above.
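To fix ideas, let us first evaluate (5.4) in the two lowest cases of Remark 5.2. For \(n_{1}=n_{2}=1\), condition (5.4) reduces to
\[0+2a+0=0,\]
so \(a=0\) and any \(b\neq 0\) works; in particular, Morimoto's structure \((J_{0,1},g_{0,1})\) is astheno-Kahler precisely in this case. For \(n_{1}=2\), \(n_{2}=1\), condition (5.4) reads \(2+4a=0\), that is, \(a=-\frac{1}{2}=-\frac{n_{1}-1}{2}\), with \(b\neq 0\) arbitrary, in agreement with Remark 5.2.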
**Proposition 5.3**.: _The Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) with \(n_{1}+n_{2}\geq 1\) is not balanced._

Proof.: The Hermitian structure is balanced if and only if \(\delta\omega_{a,b}=0\), where \(\omega_{a,b}\) is the associated fundamental 2-form and \(\delta\) is the codifferential. Equivalently, \(\delta J_{a,b}=0\), where \(\delta J_{a,b}\) is the vector field on \(M_{a,b}\) dual to \(\delta\omega_{a,b}\). However, it follows from Lemma 4.3 that \(\delta J_{a,b}=2n_{1}\xi_{1}+2n_{2}\xi_{2}\neq 0\) since \(n_{1}+n_{2}\geq 1\).

_Remark 5.4_.: Some products \(S_{1}\times S_{2}\) of Sasakian manifolds carry balanced Hermitian structures, different from any \((J_{a,b},g_{a,b})\). For instance, if \(H_{3}\) is the 3-dimensional Heisenberg group, it is well known that \(H_{3}\) carries a natural left invariant Sasakian structure (see Table 1 in §6.1 below). On the other hand, it was shown in [22, 59] that \(G:=H_{3}\times H_{3}\) carries a left invariant balanced Hermitian structure. If \(\Gamma_{1}\) and \(\Gamma_{2}\) are co-compact discrete subgroups of \(H_{3}\) then \(\Gamma:=\Gamma_{1}\times\Gamma_{2}\) is a discrete co-compact subgroup of \(G\). Therefore the nilmanifold \(\Gamma\backslash G\cong(\Gamma_{1}\backslash H_{3})\times(\Gamma_{2}\backslash H_{3})\) is a product of Sasakian 3-manifolds and carries a balanced Hermitian structure.

**Proposition 5.5**.: _The Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) is non-Kahler LCK if and only if \(\dim S_{1}=1\) and \(\dim S_{2}\geq 3\) or \(\dim S_{2}=1\) and \(\dim S_{1}\geq 3\). Moreover, the LCK structure is Vaisman._

Proof.: According to (5.1), the Lee form \(\theta\) associated to \((J_{a,b},g_{a,b})\) is given by
\[\theta=\frac{1}{n_{1}+n_{2}}\delta\omega_{a,b}\circ J_{a,b}.\]
Due to Lemma 4.3, together with (3.1) and (3.2), we arrive easily at the following expression for \(\theta\):
\[\theta=\frac{2b}{n_{1}+n_{2}}(n_{2}\eta_{1}-n_{1}\eta_{2}). \tag{5.5}\]
Replacing (5.2), (5.3) and (5.5) in the LCK condition \(d\omega=\theta\wedge\omega\), we arrive at
\[-n_{2}\Phi_{1}\wedge\eta_{2}+n_{1}\eta_{1}\wedge\Phi_{2}=n_{2}\eta_{1}\wedge\Phi_{1}-n_{1}\eta_{2}\wedge\Phi_{2}.\]
Comparing the components in \(S_{1}\) and \(S_{2}\) of each term in this equation, we observe that all of them have to vanish, so that
\[n_{2}\Phi_{1}\wedge\eta_{2}=n_{1}\eta_{1}\wedge\Phi_{2}=n_{2}\eta_{1}\wedge\Phi_{1}=n_{1}\eta_{2}\wedge\Phi_{2}=0.\]
If \(n_{2}\neq 0\) then \(\Phi_{1}\wedge\eta_{2}=\eta_{1}\wedge\Phi_{1}=0\) and this happens if and only if \(\Phi_{1}=0\), or equivalently \(\dim S_{1}=1\). In a similar way, if \(n_{1}\neq 0\) then \(\dim S_{2}=1\). The fact that the LCK structure is actually Vaisman, that is, \(\nabla\theta=0\), follows immediately from Corollary 3.3.

Next we will analyze the Hermitian conditions which involve the operator \(d^{c}\). First, it was shown in [46] that \(J_{a,b}\Phi_{1}=\Phi_{1}\) and \(J_{a,b}\Phi_{2}=\Phi_{2}\).
On the other hand, it is easy to verify that
\[J_{a,b}\eta_{1}=\frac{a}{b}\eta_{1}+\frac{a^{2}+b^{2}}{b}\eta_{2},\quad J_{a,b}\eta_{2}=-\frac{1}{b}\eta_{1}-\frac{a}{b}\eta_{2}.\]
Using this, together with (5.3) and the definition of \(d^{c}\), the following expressions are obtained:
\[\begin{split} d^{c}\omega_{a,b}&=2[\Phi_{1}\wedge(\eta_{1}+a\eta_{2})+\Phi_{2}\wedge(a\eta_{1}+(a^{2}+b^{2})\eta_{2})],\\ dd^{c}\omega_{a,b}&=4[\Phi_{1}^{2}+2a\Phi_{1}\wedge\Phi_{2}+(a^{2}+b^{2})\Phi_{2}^{2}],\\ d\omega_{a,b}\wedge d^{c}\omega_{a,b}&=4b[\Phi_{1}^{2}+2a\Phi_{1}\wedge\Phi_{2}+(a^{2}+b^{2})\Phi_{2}^{2}]\wedge\eta_{1}\wedge\eta_{2}\\ &=b\,dd^{c}\omega_{a,b}\wedge\eta_{1}\wedge\eta_{2}.\end{split} \tag{5.6}\]

**Proposition 5.6**.: _The Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) is non-Kahler SKT if and only if:_
1. \(\dim M_{a,b}=4\)_, or_
2. \(\dim S_{1}=\dim S_{2}=3\) _and_ \(a=0\)_._

Proof.: The Hermitian structure is SKT if and only if \(dd^{c}\omega_{a,b}=0\). It follows from (5.6) that this happens if and only if
\[\Phi_{1}^{2}+2a\Phi_{1}\wedge\Phi_{2}+(a^{2}+b^{2})\Phi_{2}^{2}=0.\]
Comparing the components in \(S_{1}\) and \(S_{2}\) of each term in this equation, we deduce that the SKT condition is equivalent to
\[\Phi_{1}^{2}=0,\quad\Phi_{2}^{2}=0\quad\text{and}\quad a\Phi_{1}\wedge\Phi_{2}=0. \tag{5.7}\]
If (5.7) holds, from the first two equalities we obtain that \(\dim S_{i}\leq 3\) for \(i=1,2\). Since the metric is non-Kahler, \(\dim S_{i}=3\) for \(i=1\) or \(i=2\). If both \(\dim S_{1}=\dim S_{2}=3\), it follows from \(a\Phi_{1}\wedge\Phi_{2}=0\) that \(a=0\). Conversely, if the conditions in the statement hold then it is clear that (5.7) is satisfied.

_Remark 5.7_.: It follows from Proposition 5.6 that if \(S_{1}\times S_{2}\) admits a non-Kahler SKT structure of the form \((J_{a,b},g_{a,b})\) then at least one of the factors has dimension \(3\). According to [30], any compact \(3\)-dimensional Sasakian manifold is diffeomorphic to one of the following manifolds:
\[\mathbb{S}^{3}/\Gamma,\qquad H_{3}/\Gamma,\qquad\widetilde{\mathrm{SL}}(2,\mathbb{R})/\Gamma,\]
where \(\Gamma\) is a discrete subgroup of the isometry group of the corresponding canonical Sasakian metric.

_Remark 5.8_.: Note that according to Proposition 5.5 and Proposition 5.6, the Hermitian structures \((J_{a,b},g_{a,b})\) on the complex surfaces \(S_{1}\times S_{2}\) with \(\dim S_{1}=1\) and \(\dim S_{2}=3\) are both Vaisman and SKT. This is not a surprise since it was proved in [23, Theorem A] that a Hermitian metric on a complex surface is Vaisman if and only if the metric is SKT and the Bismut connection satisfies the first Bianchi identity. The fact that the Bismut connection associated to \((J_{a,b},g_{a,b})\) on the complex surface \(S_{1}\times S_{2}\) satisfies the first Bianchi identity follows from [23, Theorem 3.2] and [7, Proposition 3.2].

In the following theorem we characterize when the Hermitian structure \((J_{a,b},g_{a,b})\) is \(k\)-Gauduchon for \(\dim_{\mathbb{C}}M_{a,b}\geq 4\). Since the case \(k=1\) was already done in [24], we will restrict to the case \(2\leq k\leq n-1\). We obtain that the metric \(g_{a,b}\) is always Gauduchon and, furthermore, it is \(k\)-Gauduchon with \(2\leq k\leq n-2\) if and only if it is astheno-Kahler, just as in the case \(k=1\).

**Theorem 5.9**.: _Let \(M_{a,b}=S_{1}\times S_{2}\) be a product of Sasakian manifolds with \(n:=n_{1}+n_{2}+1\geq 4\).
Then the Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}\) is \(k\)-Gauduchon with \(2\leq k\leq n-1\) if and only if the constants \(a\) and \(b\) satisfy_ \[(n-1-k)[n_{1}(n_{1}-1)+2an_{1}n_{2}+n_{2}(n_{2}-1)(a^{2}+b^{2})]=0. \tag{5.8}\] _In particular, \((J_{a,b},g_{a,b})\) is Gauduchon and it is \(k\)-Gauduchon with \(2\leq k\leq n-2\) if and only if it is astheno-Kahler._ Proof.: Let us denote \(J=J_{a,b}\) and \(\omega=\omega_{a,b}\). We follow the lines of the proof of [46, Theorem 4.1]. We will use equations (5.6) and the fact that \(J\omega^{k}=\omega^{k}\) for all \(k\) since \(\omega\) is a \((1,1)\)-form on \(M\). For \(2\leq k\leq n-1\), we compute \[dd^{c}\omega^{k}\wedge\omega^{n-k-1} =d(JdJ\omega^{k})\wedge\omega^{n-k-1}\] \[=k\,d(J(\omega^{k-1}\wedge d\omega))\wedge\omega^{n-k-1}\] \[=k\,d(\omega^{k-1}\wedge d^{c}\omega)\wedge\omega^{n-k-1}\] \[=k[(k-1)\omega^{k-2}\wedge d\omega\wedge d^{c}\omega+\omega^{k-1} \wedge dd^{c}\omega]\wedge\omega^{n-k-1}\] \[=k[(k-1)d\omega\wedge d^{c}\omega+\omega\wedge dd^{c}\omega] \wedge\omega^{n-3}\] \[=k\,dd^{c}\omega\wedge[b(k-2)\eta_{1}\wedge\eta_{2}+\Phi_{1}+ \Phi_{2}]\wedge\omega^{n-3}.\] Since \((\eta_{1}\wedge\eta_{2})^{2}=0\), it follows from the binomial theorem that \[\omega^{n-3} =(\Phi_{1}+\Phi_{2}-b\,\eta_{1}\wedge\eta_{2})^{n-3}\] \[=(\Phi_{1}+\Phi_{2})^{n-3}-(n-3)(\Phi_{1}+\Phi_{2})^{n-4}\wedge(b \,\eta_{1}\wedge\eta_{2}),\] and therefore \[dd^{c}\omega^{k}\wedge\omega^{n-k-1} =k\,dd^{c}\omega\wedge[b(k-n+1)(\Phi_{1}+\Phi_{2})^{n-3}\wedge \eta_{1}\wedge\eta_{2}+(\Phi_{1}+\Phi_{2})^{n-2}]\] \[=k\,dd^{c}\omega\wedge[b(k-n+1)\eta_{1}\wedge\eta_{2}+(\Phi_{1}+ \Phi_{2})]\wedge(\Phi_{1}+\Phi_{2})^{n-3}.\] Given that \(\Phi_{1}^{p}=0\) when \(p>n_{1}\) and \(\Phi_{2}^{p}=0\) when \(p>n_{2}\), an index \(j\) satisfies \(j\leq n_{1}\) and \(n-3-j\leq n_{2}\) only when \(n_{1}-2\leq j\leq n_{1}\). Hence, \[(\Phi_{1}+\Phi_{2})^{n-3}=\binom{n-3}{n_{1}-2}\Phi_{1}^{n_{1}-2}\wedge\Phi_{2 }^{n_{2}}+\binom{n-3}{n_{1}-1}\Phi_{1}^{n_{1}-1}\wedge\Phi_{2}^{n_{2}-1}+ \binom{n-3}{n_{1}}\Phi_{1}^{n_{1}}\wedge\Phi_{2}^{n_{2}-2}.\] Therefore, using the expression for \(dd^{c}\omega\) given in (5.6), \[dd^{c}\omega^{k}\wedge\omega^{n-k-1}=4bk(k-n+1)C(n,n_{1})\Phi_{1}^{n_{1}} \wedge\Phi_{2}^{n_{2}}\wedge\eta_{1}\wedge\eta_{2},\] where \(C(n,n_{1})=\binom{n-3}{n_{1}-2}+2a\binom{n-3}{n_{1}-1}+(a^{2}+b^{2})\binom{n- 3}{n_{1}}\). Since \(4bk\neq 0\) and \(\Phi_{1}^{n_{1}}\wedge\Phi_{2}^{n_{2}}\wedge\eta_{1}\wedge\eta_{2}\) is a volume form on \(M_{a,b}\), we get that \(dd^{c}\omega^{k}\wedge\omega^{n-k-1}=0\) if and only if \((k-n+1)C(n,n_{1})=0\), and this is equivalent to (5.8). Let us now analyze the \(k\)-Gauduchon condition in the case \(\dim_{\mathbb{C}}M_{a,b}=3\). In [24] it was proved that the metric \((J_{a,b},g_{a,b})\) is \(1\)-Gauduchon if and only if it is SKT if and only if \(a=0\). We deal next with the only missing case \(k=2\), which corresponds to Gauduchon metrics. **Proposition 5.10**.: _Let \(M_{a,b}=S_{1}\times S_{2}\) be a product of Sasakian manifolds with \(\dim_{\mathbb{C}}M_{a,b}=3\). Then the Hermitian structure \((J_{a,b},g_{a,b})\) is Gauduchon._ Proof.: It follows from the computations in the proof of Theorem 5.9 that \[dd^{c}\omega^{2} =2\,dd^{c}\omega\wedge(\Phi_{1}+\Phi_{2})\] \[=8[\Phi_{1}^{3}+(2a+1)\Phi_{1}^{2}\wedge\Phi_{2}+(a^{2}+b^{2}+2a) \Phi_{1}\wedge\Phi_{2}^{2}+(a^{2}+b^{2})\Phi_{2}^{3}],\] using (5.6). 
We have to analyze two cases: (i) \(\dim S_{1}=\dim S_{2}=3\) and (ii) \(\dim S_{1}=1\) and \(\dim S_{2}=5\). For (i), note that \(\Phi_{i}^{2}=0\) for \(i=1,2\) and therefore \(dd^{c}\omega^{2}=0\). For (ii), we have that \(\Phi_{1}=0\) and \(\Phi_{2}^{3}=0\) and therefore again \(dd^{c}\omega^{2}=0\). ## 6. The Bismut connection on \(M_{a,b}=S_{1}\times S_{2}\) In this section we exhibit an explicit formula for the Bismut connection \(\nabla^{B}\) on \(M_{a,b}=S_{1}\times S_{2}\) associated to the Hermitian structure \((J_{a,b},g_{a,b})\), in terms of the characteristic connections of the Sasakian manifolds \(S_{1}\) and \(S_{2}\). As an application we study the Ricci curvature \(\operatorname{Ric}^{B}\) and the Ricci form \(\rho^{B}\) associated to \(\nabla^{B}\). In particular, we characterize the Hermitian structures such that \(\operatorname{Ric}^{B}=0\) and those known as Calabi-Yau with torsion, i.e. \(\rho^{B}=0\), in terms of Sasakian \(\eta\)-Einstein manifolds. We use this characterization to provide examples of such manifolds. Let us recall some basic facts about metric connections with totally skew-symmetric torsion. For more details we refer to [1]. A metric connection \(D\) with torsion \(T\) on a Riemannian manifold \((M,g)\) is said to have _totally skew-symmetric torsion_, or _skew torsion_ for short, if the \((0,3)\)-tensor field \(T\) defined by \[T(X,Y,Z)=g(T(X,Y),Z)\] is a \(3\)-form. The relation between \(D\) and the Levi-Civita connection \(\nabla\) is then given by \[D_{X}Y=\nabla_{X}Y+\frac{1}{2}T(X,Y). \tag{6.1}\] It is well known that \(D\) has the same geodesics as \(\nabla\). Both on Hermitian manifolds and Sasakian manifolds there is a distinguished metric connection with skew-symmetric torsion: 1. On any Hermitian manifold \((M,J,g)\) there exists a unique metric connection \(\nabla^{B}\) with skew-symmetric torsion such that \(\nabla^{B}J=0\) (see [8]). This connection is known as the _Bismut_ connection (or _Strominger_) and its torsion \(3\)-form is given by \(T^{B}(X,Y,Z)=d^{c}\omega(X,Y,Z)=-d\omega(JX,JY,JZ)\), where \(\omega=g(\cdot,J\cdot)\) is the fundamental \(2\)-form. 2. On any Sasakian manifold \((S,\varphi,\xi,\eta,g)\) there exists a unique metric connection \(\nabla^{C}\) with skew-symmetric torsion such that \(\nabla^{C}\varphi=\nabla^{C}\eta=\nabla^{C}\xi=0\) (see [25]). It is known as the _characteristic_ connection2 and its torsion is given by \(T^{C}=\eta\wedge d\eta\). If we consider \(T^{C}\) as the usual \((1,2)\)-tensor \(T^{C}(X,Y)=\nabla^{C}_{X}Y-\nabla^{C}_{Y}X-[X,Y]\), we obtain that \(T^{C}\) is given by Footnote 2: In fact, it was proved in [25] that the characteristic connection exists for a larger class of almost contact metric manifolds, namely, those which satisfy that \(N_{\varphi}\) is totally skew-symmetric and \(\xi\) is a Killing vector field. \[T^{C}(X,Y)=2(-\eta(X)\varphi Y+\eta(Y)\varphi X+\Phi(X,Y)\xi). \tag{6.2}\] We compute now the Bismut connection on \(M_{a,b}=S_{1}\times S_{2}\) associated to \((J_{a,b},g_{a,b})\). **Proposition 6.1**.: _The Bismut connection \(\nabla^{B}\) associated to the Hermitian structure \((J_{a,b},g_{a,b})\) on the product \(M_{a,b}=S_{1}\times S_{2}\) of two Sasakian manifolds is given by:_ 1. \(\nabla^{B}_{X_{1}}Y_{1}=\nabla^{1,C}_{X_{1}}Y_{1}\in\mathfrak{X}(S_{1})\)_,_ 2. \(\nabla^{B}_{X_{2}}Y_{2}=\nabla^{2,C}_{X_{2}}Y_{2}-2(a^{2}+b^{2}-1)\eta_{2}(X_ {2})\varphi_{2}Y_{2}\in\mathfrak{X}(S_{2})\)_,_ 3. 
\(\nabla^{B}_{X_{1}}Y_{2}=-2a\eta_{1}(X_{1})\varphi_{2}Y_{2}\in\mathfrak{X}(S_{2})\)_,_
4. \(\nabla^{B}_{X_{2}}Y_{1}=-2a\eta_{2}(X_{2})\varphi_{1}Y_{1}\in\mathfrak{X}(S_{1})\)_,_
_where \(\nabla^{i,C}\) is the characteristic connection on the Sasakian manifold \(S_{i}\), \(i=1,2\)._
_In particular, \(\nabla^{B}\xi_{1}=\nabla^{B}\xi_{2}=0\). Moreover, the Lee form \(\theta\) associated to \((J_{a,b},g_{a,b})\) is \(\nabla^{B}\)-parallel, \(\nabla^{B}\theta=0\)._

Proof.: From (5.3) we obtain the following formula for \(T^{B}\):
\[T^{B}=2[\Phi_{1}\wedge(\eta_{1}+a\eta_{2})+\Phi_{2}\wedge(a\eta_{1}+(a^{2}+b^{2})\eta_{2})]. \tag{6.3}\]
Replacing \(T^{B}\) in (6.1) and using the expressions for the Levi-Civita connection obtained in Corollary 3.3 we obtain (i)-(iv). It is immediate to verify from (i)-(iv) that both \(\xi_{1}\) and \(\xi_{2}\) are \(\nabla^{B}\)-parallel. From this, together with the fact that \(\eta\) is \(\nabla^{C}\)-parallel on any Sasakian manifold, we obtain that \(\nabla^{B}\eta_{1}=\nabla^{B}\eta_{2}=0\). Then it follows from (5.5) that \(\nabla^{B}\theta=0\).

**Corollary 6.2**.: _On \(M_{0,1}=S_{1}\times S_{2}\) equipped with Morimoto's structure \((J_{0,1},g_{0,1})\), the associated Bismut connection satisfies_
\[\nabla^{B}_{X_{1}+X_{2}}(Y_{1}+Y_{2})=\nabla^{1,C}_{X_{1}}Y_{1}+\nabla^{2,C}_{X_{2}}Y_{2}.\]

In what follows we will study the vanishing of the Ricci curvatures associated to \(\nabla^{B}\). Let us recall first that on a Hermitian manifold \((M^{2n},J,g)\) the Ricci tensor \(\operatorname{Ric}^{B}\) and the Ricci form \(\rho^{B}\) of the Bismut connection \(\nabla^{B}\) are defined by
\[\operatorname{Ric}^{B}(X,Y)=\sum_{i=1}^{2n}g(R^{B}(u_{i},X)Y,u_{i}),\quad\rho^{B}(X,Y)=\frac{1}{2}\sum_{i=1}^{2n}g(R^{B}(X,Y)u_{i},Ju_{i}), \tag{6.4}\]
where \(\{u_{1},\dots,u_{2n}\}\) is a local orthonormal frame of \(M\). We will compute \(\operatorname{Ric}^{B}\) and \(\rho^{B}\) using formulas appearing in [39], and as a consequence we will have no need to determine explicitly the Bismut curvature tensor \(R^{B}\). In this article we will use the following convention, as in [32, 33]. We will say that the Hermitian structure \((J,g)\) on \(M\) is _Calabi-Yau with torsion_ (CYT, for short) if the Ricci form \(\rho^{B}\) associated to the Bismut connection vanishes identically, or equivalently, the (restricted) holonomy group of the Bismut connection is contained in \(\operatorname{SU}(n)\). Resuming our study of the Bismut connection on the product of Sasakian manifolds, we recall the following result in [7], which can also be verified using Proposition 6.1.

**Proposition 6.3**.: _[_7_, Proposition 3.2]_ _The Bismut connection \(\nabla^{B}\) associated to \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) has parallel torsion, i.e., \(\nabla^{B}T^{B}=0\)._

As a consequence, it follows from [17] that the Bismut curvature \(R^{B}\) associated to \((J_{a,b},g_{a,b})\) satisfies
\[g(R^{B}(X,Y)Z,W)=g(R^{B}(Z,W)X,Y)\]
and from this it can be readily seen that the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) is symmetric. According to [39] this is equivalent to
\[\delta T^{B}=0, \tag{6.5}\]
where \(\delta\) is the codifferential and \(T^{B}\) is the torsion \(3\)-form.

_Remark 6.4_.: Proposition 6.3 allows us to determine which of the Hermitian structures \((J_{a,b},g_{a,b})\) on \(M_{a,b}\) are _Kahler-like_, i.e.
they satisfy, for any vector fields \(X,Y,Z,W\), * (First Bianchi identity) \(R^{B}(X,Y,Z)+R^{B}(Y,Z,X)+R^{B}(Z,X,Y)=0\), * \(R^{B}(X,Y,Z,W)=R^{B}(JX,JY,Z,W)\). In fact, Proposition 6.3 implies the second condition due to [4, Lemma 3.13]. Moreover, due to [23, Theorem 3.2], the Bismut connection associated to \((J_{a,b},g_{a,b})\) on \(M_{a,b}\) satisfies the first Bianchi identity if and only if \((J_{a,b},g_{a,b})\) is SKT. Therefore, \((J_{a,b},g_{a,b})\) is Kahler-like if and only if it is SKT. We proceed now to study when the Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) satisfies either \(\operatorname{Ric}^{B}=0\) or \(\rho^{B}=0\). We will show that these structures are closely related to a special family of Sasakian manifolds, namely, the \(\eta\)-Einstein ones. A \((2n+1)\)-dimensional Sasakian manifold \((S,\varphi,\xi,\eta,g)\) is said to be \(\eta\)_-Einstein_ if the Ricci curvature tensor of the metric \(g\) satisfies the equation \[\operatorname{Ric}=\lambda g+\nu\eta\otimes\eta, \tag{6.6}\] for some constants \(\lambda,\nu\in\mathbb{R}\). It is well known that on any Sasakian manifold \(S\) the Riemannian curvature tensor \(R\) satisfies \[R(X,Y)\xi=\eta(Y)X-\eta(X)Y,\] for all \(X,Y\in\mathfrak{X}(S)\). It follows easily from this equation that \(\operatorname{Ric}(\xi,X)=2n\eta(X)\) for any vector field \(X\) on \(S\) and using this fact we obtain: * \(\lambda+\nu=2n\), * the \(\eta\)-Einstein condition (6.6) with constants \((\lambda,2n-\lambda)\) is equivalent to (6.7) \[\operatorname{Ric}(U,V)=\lambda g(U,V),\qquad U,V\in\Gamma(\mathcal{D}).\] Another immediate consequence of the definition is that every \(\eta\)-Einstein manifold is necessarily of constant scalar curvature \(s=2n(\lambda+1)\). In the next results we characterize the Hermitian structures \((J_{a,b},g_{a,b})\) such that \(\operatorname{Ric}^{B}=0\). We will need an explicit expression for the (1,2)-tensor \(T^{B}\), which we summarize in the following lemma, whose proof follows from Proposition 6.1 and equation (6.2). **Lemma 6.5**.: _Let \(X_{i}\in\mathfrak{X}(S_{i})\) and \(\{\xi_{1},J\xi_{1},e_{1},\ldots,e_{2n_{1}},f_{1},\ldots,f_{2n_{2}}\}\) be a local orthonormal frame on \(M_{a,b}=S_{1}\times S_{2}\) as in (4.3). 
Then,_
\[T^{B}(X_{1},\xi_{1}) =2\varphi_{1}X_{1},\quad T^{B}(X_{1},J\xi_{1})=0\]
\[T^{B}(X_{1},e_{j}) =-2(\eta_{1}(X_{1})\varphi_{1}e_{j}+\Phi_{1}(e_{j},X_{1})\xi_{1}),\]
\[T^{B}(X_{1},f_{k}) =-2a\eta_{1}(X_{1})\varphi_{2}f_{k},\]
\[T^{B}(X_{2},\xi_{1}) =2a\varphi_{2}X_{2},\quad T^{B}(X_{2},J\xi_{1})=2b\varphi_{2}X_{2},\]
\[T^{B}(X_{2},e_{j}) =-2a\eta_{2}(X_{2})\varphi_{1}e_{j},\]
\[T^{B}(X_{2},f_{k}) =-2(a^{2}+b^{2})\eta_{2}(X_{2})\varphi_{2}f_{k}+2\Phi_{2}(X_{2},f_{k})\xi_{2}.\]
_In particular, for \(X_{i},Y_{i}\in\Gamma(\mathcal{D}_{i})\) we obtain_
\[T^{B}(X_{i},Y_{i})=2\Phi_{i}(X_{i},Y_{i})\xi_{i},\quad\text{and}\quad T^{B}(X_{1},Y_{2})=0.\]

**Theorem 6.6**.: _The Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) associated to the Hermitian structure \((J_{a,b},g_{a,b})\) is given by:_
\[\operatorname{Ric}^{B}(X_{1},Y_{1}) =\operatorname{Ric}^{1}(X_{1},Y_{1})-2g_{1}(X_{1},Y_{1})-(2n_{1}-2)\eta_{1}(X_{1})\eta_{1}(Y_{1})\]
\[\operatorname{Ric}^{B}(X_{1},Y_{2}) =0\]
\[\operatorname{Ric}^{B}(X_{2},Y_{2}) =\operatorname{Ric}^{2}(X_{2},Y_{2})-2(2a^{2}+2b^{2}-1)g_{2}(X_{2},Y_{2})\]
\[\quad+[2(2a^{2}+2b^{2}-1)-2n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2}),\]
_where \(X_{i},Y_{i}\in\mathfrak{X}(S_{i})\), for \(i=1,2\) and \(\operatorname{Ric}^{i}\) denotes the Ricci curvature associated to the Levi-Civita connection \(\nabla^{i}\) on \(S_{i}\). In particular, \(\operatorname{Ric}^{B}=0\) if and only if both \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein _with constants_
\[(\lambda_{1},\nu_{1}) =(2,2n_{1}-2),\]
\[(\lambda_{2},\nu_{2}) =(2(2a^{2}+2b^{2}-1),2n_{2}-2(2a^{2}+2b^{2}-1)),\]
_respectively._

Proof.: To compute the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) associated to \((J_{a,b},g_{a,b})\) we will use the formula for \(\operatorname{Ric}^{B}\) on a general Hermitian manifold \((M^{2n},g)\) obtained in [39, Proposition 3.1]:
\[\operatorname{Ric}(X,Y)=\operatorname{Ric}^{B}(X,Y)+\frac{1}{2}\delta T^{B}(X,Y)+\frac{1}{4}\sum_{i=1}^{2n}g(T^{B}(X,u_{i}),T^{B}(Y,u_{i})),\]
where \(\operatorname{Ric}\) denotes the Ricci tensor of \(g\) and \(\{u_{i}\}_{i=1}^{2n}\) is a local orthonormal frame. Using that \(\delta T^{B}=0\) for the Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\), in the local orthonormal frame (4.3) the expression of \(\operatorname{Ric}^{B}\) simplifies to
\[\operatorname{Ric}^{B}(X,Y) =\operatorname{Ric}(X,Y)-\frac{1}{4}\big{[}g(T^{B}(X,\xi_{1}),T^{B}(Y,\xi_{1}))+g(T^{B}(X,J\xi_{1}),T^{B}(Y,J\xi_{1}))\]
\[\quad+\sum_{j=1}^{2n_{1}}g(T^{B}(X,e_{j}),T^{B}(Y,e_{j}))+\sum_{k=1}^{2n_{2}}g(T^{B}(X,f_{k}),T^{B}(Y,f_{k}))\big{]}, \tag{6.8}\]
for \(X,Y\in\mathfrak{X}(M_{a,b})\). In [42] the Ricci curvature of the metric \(g_{a,b}\) on \(M_{a,b}\) has been computed:
\[\operatorname{Ric}(X_{1},Y_{1}) =\operatorname{Ric}^{1}(X_{1},Y_{1})+2a^{2}n_{2}\eta_{1}(X_{1})\eta_{1}(Y_{1})\]
\[\operatorname{Ric}(X_{1},Y_{2}) =2a(n_{1}+n_{2}(a^{2}+b^{2}))\eta_{1}(X_{1})\eta_{2}(Y_{2})\]
\[\operatorname{Ric}(X_{2},Y_{2}) =\operatorname{Ric}^{2}(X_{2},Y_{2})-2\lambda_{a,b}g_{2}(X_{2},Y_{2})+2[n_{1}a^{2}+\lambda_{a,b}+n_{2}(a^{2}+b^{2})^{2}-n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2}),\]
where \(\lambda_{a,b}=a^{2}+b^{2}-1\). Next we compute \(\operatorname{Ric}^{B}\) using (6.8), the formulas for \(\operatorname{Ric}\) above and Lemma 6.5.
For \(X_{1},Y_{1}\in\mathfrak{X}(S_{1})\), we have that
\[\operatorname{Ric}^{B}(X_{1},Y_{1})=\operatorname{Ric}^{1}(X_{1},Y_{1})+2a^{2}n_{2}\eta_{1}(X_{1})\eta_{1}(Y_{1})-\frac{1}{4}\Big{[}4g_{1}(\varphi_{1}X_{1},\varphi_{1}Y_{1})\]
\[\quad+4\sum_{j=1}^{2n_{1}}g_{1}(\eta_{1}(X_{1})\varphi_{1}e_{j}+\Phi_{1}(e_{j},X_{1})\xi_{1},\eta_{1}(Y_{1})\varphi_{1}e_{j}+\Phi_{1}(e_{j},Y_{1})\xi_{1})\]
\[\quad+\sum_{k=1}^{2n_{2}}g_{2}(-2a\eta_{1}(X_{1})\varphi_{2}f_{k},-2a\eta_{1}(Y_{1})\varphi_{2}f_{k})\Big{]}\]
\[=\operatorname{Ric}^{1}(X_{1},Y_{1})+2a^{2}n_{2}\eta_{1}(X_{1})\eta_{1}(Y_{1})-\frac{1}{4}\Big{[}4g_{1}(\varphi_{1}X_{1},\varphi_{1}Y_{1})+8n_{1}\eta_{1}(X_{1})\eta_{1}(Y_{1})\]
\[\quad+4\sum_{j=1}^{2n_{1}}g_{1}(e_{j},\varphi_{1}X_{1})g_{1}(e_{j},\varphi_{1}Y_{1})+8n_{2}a^{2}\eta_{1}(X_{1})\eta_{1}(Y_{1})\Big{]}\]
\[=\operatorname{Ric}^{1}(X_{1},Y_{1})-g_{1}(\varphi_{1}X_{1},\varphi_{1}Y_{1})-2n_{1}\eta_{1}(X_{1})\eta_{1}(Y_{1})-g_{1}(\varphi_{1}X_{1},\varphi_{1}Y_{1})\]
\[=\operatorname{Ric}^{1}(X_{1},Y_{1})-2g_{1}(X_{1},Y_{1})-(2n_{1}-2)\eta_{1}(X_{1})\eta_{1}(Y_{1}),\]
where we have used (2.4) in the third equality. For \(X_{1}\in\mathfrak{X}(S_{1}),Y_{2}\in\mathfrak{X}(S_{2})\),
\[\operatorname{Ric}^{B}(X_{1},Y_{2})=2a[n_{1}+n_{2}(a^{2}+b^{2})]\eta_{1}(X_{1})\eta_{2}(Y_{2})-\frac{1}{4}\big{[}\underbrace{g(2\varphi_{1}X_{1},2a\varphi_{2}Y_{2})}_{=0}\]
\[+\sum_{j=1}^{2n_{1}}g_{1}(-2(\eta_{1}(X_{1})\varphi_{1}e_{j}+\Phi_{1}(e_{j},X_{1})\xi_{1}),-2a\eta_{2}(Y_{2})\varphi_{1}e_{j})\]
\[+\sum_{k=1}^{2n_{2}}g_{2}(-2a\eta_{1}(X_{1})\varphi_{2}f_{k},-2(a^{2}+b^{2})\eta_{2}(Y_{2})\varphi_{2}f_{k}+2\Phi_{2}(Y_{2},f_{k})\xi_{2})\big{]}\]
\[=2a[n_{1}+n_{2}(a^{2}+b^{2})]\eta_{1}(X_{1})\eta_{2}(Y_{2})-\frac{1}{4}\big{[}8an_{1}\eta_{1}(X_{1})\eta_{2}(Y_{2})+8n_{2}a(a^{2}+b^{2})\eta_{1}(X_{1})\eta_{2}(Y_{2})\big{]}\]
\[=0.\]
Finally, for \(X_{2},Y_{2}\in\mathfrak{X}(S_{2})\),
\[\operatorname{Ric}^{B}(X_{2},Y_{2})=\operatorname{Ric}^{2}(X_{2},Y_{2})-2\lambda_{a,b}g_{2}(X_{2},Y_{2})+2[n_{1}a^{2}+\lambda_{a,b}+n_{2}(a^{2}+b^{2})^{2}-n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
\[-\frac{1}{4}\big{[}4(a^{2}+b^{2})g_{2}(\varphi_{2}X_{2},\varphi_{2}Y_{2})+\sum_{j=1}^{2n_{1}}g_{1}(-2a\eta_{2}(X_{2})\varphi_{1}e_{j},-2a\eta_{2}(Y_{2})\varphi_{1}e_{j})\]
\[+\sum_{k=1}^{2n_{2}}\big{(}g(-2(a^{2}+b^{2})\eta_{2}(X_{2})\varphi_{2}f_{k},-2(a^{2}+b^{2})\eta_{2}(Y_{2})\varphi_{2}f_{k})+g(2\Phi_{2}(X_{2},f_{k})\xi_{2},2\Phi_{2}(Y_{2},f_{k})\xi_{2})\big{)}\big{]}\]
\[=\operatorname{Ric}^{2}(X_{2},Y_{2})-2\lambda_{a,b}g_{2}(X_{2},Y_{2})+2[n_{1}a^{2}+\lambda_{a,b}+n_{2}(a^{2}+b^{2})^{2}-n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
\[-\frac{1}{4}\big{[}4(a^{2}+b^{2})g_{2}(\varphi_{2}X_{2},\varphi_{2}Y_{2})+8n_{1}a^{2}\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
\[+8n_{2}(a^{2}+b^{2})^{2}\eta_{2}(X_{2})\eta_{2}(Y_{2})+4\sum_{k=1}^{2n_{2}}(a^{2}+b^{2})g_{2}(f_{k},\varphi_{2}X_{2})g_{2}(f_{k},\varphi_{2}Y_{2})\big{]}\]
\[=\operatorname{Ric}^{2}(X_{2},Y_{2})-2\lambda_{a,b}g_{2}(X_{2},Y_{2})+2[n_{1}a^{2}+\lambda_{a,b}+n_{2}(a^{2}+b^{2})^{2}-n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
\[-2(a^{2}+b^{2})g_{2}(\varphi_{2}X_{2},\varphi_{2}Y_{2})-2n_{1}a^{2}\eta_{2}(X_{2})\eta_{2}(Y_{2})-2n_{2}(a^{2}+b^{2})^{2}\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
\[=\operatorname{Ric}^{2}(X_{2},Y_{2})-2(2a^{2}+2b^{2}-1)g_{2}(X_{2},Y_{2})+[2(2a^{2}+2b^{2}-1)-2n_{2}]\eta_{2}(X_{2})\eta_{2}(Y_{2})\]
where we have used that \(g(\xi_{2},\xi_{2})=a^{2}+b^{2}\) in the second equality and (2.4) in the last one.
The last statement concerning \(\operatorname{Ric}^{B}=0\) is clear.

**Corollary 6.7**.: _On \(M_{0,1}=S_{1}\times S_{2}\) equipped with Morimoto's structure \((J_{0,1},g_{0,1})\), the Bismut-Ricci tensor \(\operatorname{Ric}^{B}\) vanishes if and only if \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein with constants_
\[(\lambda_{1},\nu_{1}) =(2,2n_{1}-2),\]
\[(\lambda_{2},\nu_{2}) =(2,2n_{2}-2),\]
_respectively._

_Remark 6.8_.: It follows from the expressions for \(\operatorname{Ric}^{B}\) obtained in Theorem 6.6 that
\[\operatorname{Ric}^{B}(\xi_{1},X)=\operatorname{Ric}^{B}(\xi_{2},X)=0,\]
for any \(X\in\mathfrak{X}(M_{a,b})\).

_Remark 6.9_.: We note that the name Bismut-Ricci flat has been used recently to denote metric connections with skew-symmetric torsion such that the torsion \(3\)-form is closed and the associated Ricci tensor vanishes (see for instance [27, 49, 50]). In our context, this reduces to Hermitian structures \((J_{a,b},g_{a,b})\) which are SKT (since \(dT^{B}=dd^{c}\omega=0\)) and satisfy \(\operatorname{Ric}^{B}=0\). It follows from Proposition 5.6 that, in the SKT case, \(\dim(S_{1}\times S_{2})\leq 6\) and, in dimension \(6\), the only Bismut-Ricci flat structures of the form \((J_{a,b},g_{a,b})\) occur on products \(S_{1}\times S_{2}\) with \(\dim S_{1}=\dim S_{2}=3\) and \(a=0\). Moreover, we will see in §6.1 that Theorem 6.6 implies that both \(S_{1}\) and \(S_{2}\) are quotients of the form \(\mathbb{S}^{3}/\Gamma\) with \(\Gamma\) a finite subgroup of \(\operatorname{SU}(2)\), which is identified with \(\mathbb{S}^{3}\).

As an application of Theorem 6.6, using a result in [21], we obtain information about the canonical bundle of the compact complex manifold \((M_{a,b},J_{a,b})\) when \(\operatorname{Ric}^{B}=0\).

**Proposition 6.10**.: _The product \(M_{a,b}=S_{1}\times S_{2}\) of two compact Sasakian manifolds equipped with the Hermitian structure \((J_{a,b},g_{a,b})\) such that \(\operatorname{Ric}^{B}=0\) does not have holomorphically trivial canonical bundle, provided \(n_{1}\geq 1\) and \(n_{2}\geq 1\)._

Proof.: According to [21, Theorem 4.1], if \((M_{a,b},J_{a,b})\) admitted a non-vanishing holomorphic \((n_{1}+n_{2}+1,0)\)-form then the fact that \(\operatorname{Ric}^{B}=0\) would imply that \((M_{a,b},J_{a,b},g_{a,b})\) is conformally balanced. This would mean that \(d\theta_{a,b}=0\); however, it follows from (5.5) that
\[d\theta_{a,b}=\frac{4b}{n_{1}+n_{2}}(n_{2}\Phi_{1}-n_{1}\Phi_{2}),\]
which is non-zero since \(n_{1}\geq 1\) and \(n_{2}\geq 1\). Therefore the canonical bundle of \((M_{a,b},J_{a,b})\) is not trivial.

**Corollary 6.11**.: _Let \(S\) be a Sasakian \(\eta\)-Einstein manifold of dimension \(\geq 3\) with constants \((\lambda,\nu)\), \(\lambda>-2\). Then, \((\mathbb{S}^{3}\times S,J_{a,b})\) does not have trivial canonical bundle, for \(a,b\) such that \(a^{2}+b^{2}=\frac{\lambda+2}{4}\)._

Examples of Sasakian \(\eta\)-Einstein manifolds with \(\lambda>-2\) will appear in §6.1. We analyze in the next result the CYT condition on \(M_{a,b}=S_{1}\times S_{2}\), namely, \(\rho^{B}=0\).
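Before doing so, let us record a consistency check of Theorem 6.6 in the lowest-dimensional case \(n_{1}=n_{2}=1\). If \(S_{1}=S_{2}=\mathbb{S}^{3}\) with the standard Sasaki-Einstein structures, then \((\lambda_{i},\nu_{i})=(2,0)\) for \(i=1,2\); the first condition \((\lambda_{1},\nu_{1})=(2,2n_{1}-2)=(2,0)\) holds automatically, while the second one becomes
\[2=2(2a^{2}+2b^{2}-1),\quad\text{that is,}\quad a^{2}+b^{2}=1.\]
Hence \(\operatorname{Ric}^{B}=0\) for any \((a,b)\) on the unit circle with \(b\neq 0\); for \((a,b)=(0,1)\) this recovers the statement of Corollary 6.7 for Morimoto's structure. Note also that \(a^{2}+b^{2}=\frac{\lambda+2}{4}\) with \(\lambda=2\), which is precisely the relation appearing in Corollary 6.11.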
**Theorem 6.12**.: _Assuming \(n_{1}\geq 1\) and \(n_{2}\geq 1\), the Bismut-Ricci form \(\rho^{B}\) associated to the Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) is given by:_
\[\rho^{B}(X_{1},Y_{1}) =\operatorname{Ric}^{1}(X_{1},\varphi_{1}Y_{1})-2(2n_{1}+2an_{2}-1)\Phi_{1}(X_{1},Y_{1}),\]
\[\rho^{B}(X_{1},Y_{2}) =0,\]
\[\rho^{B}(X_{2},Y_{2}) =\operatorname{Ric}^{2}(X_{2},\varphi_{2}Y_{2})-2[2an_{1}+2(a^{2}+b^{2})n_{2}-1]\Phi_{2}(X_{2},Y_{2}).\]
_In particular, \((J_{a,b},g_{a,b})\) is CYT if and only if both \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein with constants \((\lambda_{1},\nu_{1})\) and \((\lambda_{2},\nu_{2})\) respectively, where_
\[\lambda_{1}=4(n_{1}+an_{2})-2,\quad\text{and}\quad\lambda_{2}=4(an_{1}+(a^{2}+b^{2})n_{2})-2. \tag{6.9}\]

Proof.: We start by recalling a formula for \(\rho^{B}\) on a \(2n\)-dimensional Hermitian manifold equipped with the Bismut connection, due to [39]:
\[\rho^{B}(X,Y)=\operatorname{Ric}^{B}(X,JY)+(\nabla^{B}_{X}\theta)JY+\frac{1}{4}\lambda^{\omega}(X,Y),\]
where \(\theta\) denotes the Lee form and the \(2\)-form \(\lambda^{\omega}\) is defined by
\[\lambda^{\omega}(X,Y)=\sum_{i=1}^{2n}dT^{B}(X,Y,u_{i},Ju_{i}),\]
in a local orthonormal frame \(\{u_{i}\}_{i=1}^{2n}\). In our case of a Sasakian product \(M_{a,b}=S_{1}\times S_{2}\), we will use the local orthonormal frame (4.3). Note that, according to Proposition 6.1, we have that \(\nabla^{B}\theta=0\). We compute first \(\lambda^{\omega}\) (note that these computations only make sense when \(n_{i}\geq 1\), \(i=1,2\)). From (6.3) we obtain
\[dT^{B}=4[\Phi_{1}^{2}+2a\Phi_{1}\wedge\Phi_{2}+(a^{2}+b^{2})\Phi_{2}^{2}].\]
This says that \(dT^{B}(X,Y,Z,W)=0\) when one of \(X,Y,Z,W\) is \(\xi_{1}\) or \(\xi_{2}\). Thus,
\[\iota_{\xi_{1}}\lambda^{\omega}=\iota_{\xi_{2}}\lambda^{\omega}=0, \tag{6.10}\]
and for \(X,Y\in\mathfrak{X}(M_{a,b})\),
\[\lambda^{\omega}(X,Y)=\sum_{j=1}^{2n_{1}}dT^{B}(X,Y,e_{j},\varphi_{1}e_{j})+\sum_{k=1}^{2n_{2}}dT^{B}(X,Y,f_{k},\varphi_{2}f_{k}).\]
To compute these terms we will use a formula for \(dT^{B}\) in terms of \(T^{B}\) that appears in the proof of [39, Proposition 3.1]. Given that \(\nabla^{B}T^{B}=0\), this formula simplifies to
\[dT^{B}(X,Y,Z,W)=\mathop{\mathfrak{S}}_{X,Y,Z}2g(T^{B}(X,Y),T^{B}(Z,W)),\]
where \(\mathop{\mathfrak{S}}_{X,Y,Z}\) denotes the cyclic sum of \(X,Y,Z\).
Then, it follows from Lemma 6.5 that, for \(X_{1},Y_{1}\in\Gamma(\mathcal{D}_{1})\),
\[dT^{B}(X_{1},Y_{1},e_{j},\varphi_{1}e_{j}) =2\mathop{\mathfrak{S}}_{X_{1},Y_{1},e_{j}}g(T^{B}(X_{1},Y_{1}),T^{B}(e_{j},\varphi_{1}e_{j}))\]
\[=8\mathop{\mathfrak{S}}_{X_{1},Y_{1},e_{j}}g(\Phi_{1}(X_{1},Y_{1})\xi_{1},\Phi_{1}(e_{j},\varphi_{1}e_{j})\xi_{1})\]
\[=8[\Phi_{1}(X_{1},Y_{1})\Phi_{1}(e_{j},\varphi_{1}e_{j})+\Phi_{1}(Y_{1},e_{j})\Phi_{1}(X_{1},\varphi_{1}e_{j})\]
\[\quad+\Phi_{1}(e_{j},X_{1})\Phi_{1}(Y_{1},\varphi_{1}e_{j})]\]
\[=8[-\Phi_{1}(X_{1},Y_{1})+g_{1}(e_{j},\varphi_{1}Y_{1})g_{1}(e_{j},X_{1})-g_{1}(e_{j},\varphi_{1}X_{1})g_{1}(e_{j},Y_{1})]\]
and
\[dT^{B}(X_{1},Y_{1},f_{k},\varphi_{2}f_{k}) =2[g(T^{B}(X_{1},Y_{1}),T^{B}(f_{k},\varphi_{2}f_{k}))+g(T^{B}(Y_{1},f_{k}),T^{B}(X_{1},\varphi_{2}f_{k}))\]
\[\quad+g(T^{B}(f_{k},X_{1}),T^{B}(Y_{1},\varphi_{2}f_{k}))]\]
It follows from Lemma 6.5 that \(T^{B}(U_{1},U_{2})=0\) for \(U_{i}\in\Gamma(\mathcal{D}_{i})\), so we arrive at
\[dT^{B}(X_{1},Y_{1},f_{k},\varphi_{2}f_{k}) =8g(\Phi_{1}(X_{1},Y_{1})\xi_{1},\Phi_{2}(f_{k},\varphi_{2}f_{k})\xi_{2})\]
\[=-8a\Phi_{1}(X_{1},Y_{1}).\]
Therefore
\[\lambda^{\omega}(X_{1},Y_{1}) =\sum_{j=1}^{2n_{1}}dT^{B}(X_{1},Y_{1},e_{j},\varphi_{1}e_{j})+\sum_{k=1}^{2n_{2}}dT^{B}(X_{1},Y_{1},f_{k},\varphi_{2}f_{k})\]
\[=8[-2n_{1}\Phi_{1}(X_{1},Y_{1})+2\Phi_{1}(X_{1},Y_{1})]-16an_{2}\Phi_{1}(X_{1},Y_{1})\]
\[=-16(n_{1}+an_{2}-1)\Phi_{1}(X_{1},Y_{1}).\]
For \(X_{1}\in\Gamma(\mathcal{D}_{1}),Y_{2}\in\Gamma(\mathcal{D}_{2})\), it follows from \(T^{B}(U_{1},U_{2})=0\), when \(U_{i}\in\Gamma(\mathcal{D}_{i})\), that
\[\lambda^{\omega}(X_{1},Y_{2})=0.\]
Finally, for \(X_{2},Y_{2}\in\Gamma(\mathcal{D}_{2})\),
\[dT^{B}(X_{2},Y_{2},e_{j},\varphi_{1}e_{j}) =8g(\Phi_{2}(X_{2},Y_{2})\xi_{2},\Phi_{1}(e_{j},\varphi_{1}e_{j})\xi_{1})\]
\[=8a\Phi_{2}(X_{2},Y_{2})\Phi_{1}(e_{j},\varphi_{1}e_{j})\]
\[=-8a\Phi_{2}(X_{2},Y_{2}),\]
and
\[dT^{B}(X_{2},Y_{2},f_{k},\varphi_{2}f_{k}) =2\mathop{\mathfrak{S}}_{X_{2},Y_{2},f_{k}}g(T^{B}(X_{2},Y_{2}),T^{B}(f_{k},\varphi_{2}f_{k}))\]
\[=8\mathop{\mathfrak{S}}_{X_{2},Y_{2},f_{k}}g(\Phi_{2}(X_{2},Y_{2})\xi_{2},\Phi_{2}(f_{k},\varphi_{2}f_{k})\xi_{2})\]
\[=8(a^{2}+b^{2})[\Phi_{2}(X_{2},Y_{2})\Phi_{2}(f_{k},\varphi_{2}f_{k})+\Phi_{2}(Y_{2},f_{k})\Phi_{2}(X_{2},\varphi_{2}f_{k})\]
\[\quad+\Phi_{2}(f_{k},X_{2})\Phi_{2}(Y_{2},\varphi_{2}f_{k})]\]
\[=8(a^{2}+b^{2})[-\Phi_{2}(X_{2},Y_{2})+g_{2}(f_{k},\varphi_{2}Y_{2})g_{2}(f_{k},X_{2})\]
\[\quad-g_{2}(f_{k},\varphi_{2}X_{2})g_{2}(f_{k},Y_{2})].\]
Hence,
\[\lambda^{\omega}(X_{2},Y_{2}) =\sum_{j=1}^{2n_{1}}dT^{B}(X_{2},Y_{2},e_{j},\varphi_{1}e_{j})+\sum_{k=1}^{2n_{2}}dT^{B}(X_{2},Y_{2},f_{k},\varphi_{2}f_{k})\]
\[=-16an_{1}\Phi_{2}(X_{2},Y_{2})+8(a^{2}+b^{2})[-2n_{2}\Phi_{2}(X_{2},Y_{2})+2\Phi_{2}(X_{2},Y_{2})]\]
\[=-16[an_{1}+(a^{2}+b^{2})(n_{2}-1)]\Phi_{2}(X_{2},Y_{2}).\]
Now we proceed to compute \(\rho^{B}\). First, note that \(\rho^{B}(X_{1},Y_{2})=0\) when \(X_{1}\in\mathfrak{X}(S_{1})\) and \(Y_{2}\in\mathfrak{X}(S_{2})\) since \(\operatorname{Ric}^{B}(X_{1},Y_{2})=\lambda^{\omega}(X_{1},Y_{2})=0\). Moreover, it follows easily from Remark 6.8 and (6.10) that
\[\rho^{B}(\xi_{i},X)=0,\quad X\in\mathfrak{X}(M_{a,b}). \tag{6.11}\]
Therefore, it is enough to compute \(\rho^{B}(X_{i},Y_{i})\), \(i=1,2\), for \(X_{i},Y_{i}\in\Gamma(\mathcal{D}_{i})\).
From the computations above we obtain that, for \(X_{1},Y_{1}\in\Gamma(\mathcal{D}_{1})\),
\[\rho^{B}(X_{1},Y_{1}) =\operatorname{Ric}^{B}(X_{1},\varphi_{1}Y_{1})+\frac{1}{4}\lambda^{\omega}(X_{1},Y_{1})\]
\[=\operatorname{Ric}^{1}(X_{1},\varphi_{1}Y_{1})-2\Phi_{1}(X_{1},Y_{1})-4(n_{1}+an_{2}-1)\Phi_{1}(X_{1},Y_{1})\]
\[=\operatorname{Ric}^{1}(X_{1},\varphi_{1}Y_{1})-2(2n_{1}+2an_{2}-1)\Phi_{1}(X_{1},Y_{1}),\]
where we have used Theorem 6.6 in the second equality. For \(X_{2},Y_{2}\in\Gamma(\mathcal{D}_{2})\),
\[\rho^{B}(X_{2},Y_{2}) =\operatorname{Ric}^{B}(X_{2},\varphi_{2}Y_{2})-4(an_{1}+(a^{2}+b^{2})(n_{2}-1))\Phi_{2}(X_{2},Y_{2})\]
\[=\operatorname{Ric}^{2}(X_{2},\varphi_{2}Y_{2})-2(2a^{2}+2b^{2}-1)\Phi_{2}(X_{2},Y_{2})\]
\[\quad-4(an_{1}+(a^{2}+b^{2})(n_{2}-1))\Phi_{2}(X_{2},Y_{2})\]
\[=\operatorname{Ric}^{2}(X_{2},\varphi_{2}Y_{2})-2[2an_{1}+2(a^{2}+b^{2})n_{2}-1]\Phi_{2}(X_{2},Y_{2}),\]
where we have used again Theorem 6.6 in the second equality. Therefore, according to (6.7) and using that \(\varphi_{i}\) is an isomorphism on \(\mathcal{D}_{i}\), \(i=1,2\), we arrive at
\[\rho^{B}\equiv 0\iff\begin{cases}\operatorname{Ric}^{1}=\lambda_{1}g_{1}+(2n_{1}-\lambda_{1})\eta_{1}\otimes\eta_{1},\\ \operatorname{Ric}^{2}=\lambda_{2}g_{2}+(2n_{2}-\lambda_{2})\eta_{2}\otimes\eta_{2},\end{cases}\]
where \(\lambda_{1}=4(n_{1}+an_{2})-2\) and \(\lambda_{2}=4(an_{1}+(a^{2}+b^{2})n_{2})-2\), and this finishes the proof.

**Corollary 6.13**.: _Assuming \(n_{1}\geq 1,n_{2}\geq 1\), the product manifold \(M_{0,1}=S_{1}\times S_{2}\) equipped with Morimoto's structure \((J_{0,1},g_{0,1})\) is CYT if and only if \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein with constants \((\lambda_{1},\nu_{1})\) and \((\lambda_{2},\nu_{2})\) respectively, where_
\[\lambda_{1}=4n_{1}-2,\quad\text{and}\quad\lambda_{2}=4n_{2}-2.\]

_Remark 6.14_.: We analyze here the missing cases \(n_{1}=0\) or \(n_{2}=0\) (with \(n_{1}+n_{2}\geq 1\)). Following the lines of the proof of Theorem 6.12 we get:
* When \(n_{2}=0\), the Hermitian structure \((J_{a,b},g_{a,b})\) is CYT if and only if \(S_{1}\) is \(\eta\)-Einstein with constants \((\lambda_{1},\nu_{1})\), \(\lambda_{1}=4n_{1}-2\).
* When \(n_{1}=0\), the Hermitian structure \((J_{a,b},g_{a,b})\) is CYT if and only if \(S_{2}\) is \(\eta\)-Einstein with constants \((\lambda_{2},\nu_{2})\), \(\lambda_{2}=4(a^{2}+b^{2})n_{2}-2\).

Using the expression for the Bismut-Ricci form \(\rho^{B}\) obtained in Theorem 6.12, we can determine when the Hermitian structure \((J_{a,b},g_{a,b})\) on a Sasakian product \(M_{a,b}=S_{1}\times S_{2}\) is _static_. This notion was introduced by Streets and Tian in [52, 53]: an SKT Hermitian metric \(g\) on a complex manifold \((M^{2n},J)\) is called _static_ if its Bismut-Ricci form satisfies
\[(\rho^{B})^{1,1}=\alpha\omega,\quad\alpha\in\mathbb{R}, \tag{6.12}\]
where \((\rho^{B})^{1,1}\) denotes the \((1,1)\)-component of \(\rho^{B}\) given by \((\rho^{B})^{1,1}(\cdot,\cdot)=\frac{1}{2}(\rho^{B}(\cdot,\cdot)+\rho^{B}(J\cdot,J\cdot))\). Static metrics are closely related to the _pluriclosed flow_, introduced in [52], which is the parabolic flow for SKT metrics defined by
\[\frac{\partial}{\partial t}\omega=-(\rho^{B})^{1,1},\quad\omega(0)=\omega_{0}.\]
Thus, static metrics are to the pluriclosed flow what Einstein metrics are to the Ricci flow. Furthermore, when \(\alpha=0\) these metrics are fixed points of the pluriclosed flow.
The following relation between \(\rho^{B}(JX,JY)\) and \(\rho^{B}(X,Y)\) was proved in [39, Corollary 3.2]: \[\rho^{B}(JX,JY)-\rho^{B}(X,Y)=\delta T^{B}(JX,Y)-(\nabla^{B}_{JX}\theta)Y+(\nabla^{B}_{Y}\theta)JX. \tag{6.13}\] Using (6.5), (6.13) and \(\nabla^{B}\theta=0\) (see Proposition 6.1) we obtain that \(\rho^{B}(JX,JY)=\rho^{B}(X,Y)\) and as a consequence, \[(\rho^{B})^{1,1}=\rho^{B}, \tag{6.14}\] that is, \(\rho^{B}\) is \(J\)-invariant. Due to (6.14) and taking (5.2) into account, condition (6.12) becomes \[\rho^{B}=\alpha\,\omega_{a,b}=\alpha(\Phi_{1}+\Phi_{2}-b\eta_{1}\wedge\eta_{2}).\] Since \(\rho^{B}(\xi_{1},\xi_{2})=0\) (due to (6.11)) and \(\omega_{a,b}(\xi_{1},\xi_{2})=-b\neq 0\), we obtain that \(\rho^{B}(\xi_{1},\xi_{2})=\alpha\,\omega_{a,b}(\xi_{1},\xi_{2})\) if and only if \(\alpha=0\). Therefore, condition (6.12) reduces to the CYT condition. To sum up, the Hermitian structure \((J_{a,b},g_{a,b})\) on a product of Sasakian manifolds \(M_{a,b}=S_{1}\times S_{2}\) satisfies (6.12) if and only if \(\alpha=0\) and \((J_{a,b},g_{a,b})\) is CYT. Let us recall that \((J_{a,b},g_{a,b})\) is SKT only in dimensions 4 and 6: any such structure is SKT in dimension 4, and in dimension 6, \(\dim S_{1}=\dim S_{2}=3\) and \(a=0\) (see Proposition 5.6). In dimension 4, we will see in §6.1 that Remark 6.14 implies that one factor is one-dimensional and the other one is a quotient of the form \(\mathbb{S}^{3}/\Gamma\) with \(\Gamma\) a finite subgroup of \(\operatorname{SU}(2)\simeq\mathbb{S}^{3}\). In dimension 6, we will see in §6.1 that Theorem 6.12 implies that both \(S_{1}\) and \(S_{2}\) are quotients of this form. To sum up, **Proposition 6.15**.: _The Hermitian structure \((J_{a,b},g_{a,b})\) on \(M_{a,b}=S_{1}\times S_{2}\) is static if and only if_ * \(\dim M_{a,b}=4\)_, one of the Sasakian factors is 1-dimensional and the other one is a quotient of_ \(\mathbb{S}^{3}\) _by a finite subgroup, or_ * \(\dim S_{1}=\dim S_{2}=3\)_,_ \(a=0\) _and both_ \(S_{1}\) _and_ \(S_{2}\) _are quotients of_ \(\mathbb{S}^{3}\) _by finite subgroups._ _Moreover, these metrics are fixed points of the pluriclosed flow._ _Remark 6.16_.: The fact that \(\mathbb{S}^{3}\times\mathbb{S}^{3}\) carries a CYT structure is well known (see [32]). ### Examples In order to exhibit examples of Sasakian \(\eta\)-Einstein manifolds that fulfill the conditions in Theorems 6.6 and 6.12, we recall the notion of \(\mathcal{D}\)-homothetic deformations (or simply \(\mathcal{D}\)-homotheties), introduced by Tanno in [55]. Given a Sasakian manifold \((S,\varphi,\xi,\eta,g)\), consider the transformation \[\varphi^{\prime}=\varphi,\quad\xi^{\prime}=s^{-1}\xi,\quad\eta^{\prime}=s\eta,\quad g^{\prime}=sg+s(s-1)\eta\otimes\eta,\] for any real constant \(s>0\). Then \((\varphi^{\prime},\xi^{\prime},\eta^{\prime},g^{\prime})\) is again a Sasakian structure on \(S\). Moreover, in the case of Sasakian \(\eta\)-Einstein manifolds, there is the following result: **Proposition 6.17**.: _[_14_, Proposition 18]_ _Let \((S,\varphi,\xi,\eta,g)\) be a \((2n+1)\)-dimensional Sasakian \(\eta\)-Einstein manifold with constants \((\lambda,\nu)\), and consider a \(\mathcal{D}\)-homothetic structure \((\varphi^{\prime},\xi^{\prime},\eta^{\prime},g^{\prime})\) as above._
Then, \((S,\varphi^{\prime},\xi^{\prime},\eta^{\prime},g^{\prime})\) is also \(\eta\)-Einstein with constants_ \[\lambda^{\prime}=\frac{\lambda+2-2s}{s},\qquad\nu^{\prime}=2n-\frac{\lambda+2-2s}{s}.\] In this article we will use the following terminology: a Sasakian \(\eta\)-Einstein manifold \(S\) will be called\({}^{3}\) _positive_ if \(\lambda>-2\), _null_ if \(\lambda=-2\) and _negative_ if \(\lambda<-2\). It follows from Proposition 6.17 that \(\mathcal{D}\)-homotheties preserve positive, null and negative \(\eta\)-Einstein manifolds, respectively (indeed, \(\lambda^{\prime}+2=(\lambda+2)/s\) with \(s>0\), so the sign of \(\lambda+2\) is preserved). Positive \(\eta\)-Einstein manifolds include the well-known family of Sasaki-Einstein manifolds\({}^{4}\), which is precisely the case when \(\lambda=\dim S-1\) and \(\nu=0\). For instance, the odd-dimensional spheres with the standard Sasakian structure are Einstein (in fact, it was recently proved in [43] that there are infinitely many families of Sasaki-Einstein metrics on every odd-dimensional standard sphere of dimension at least \(5\)). It follows from the Bonnet-Myers theorem that a manifold admitting a positive \(\eta\)-Einstein structure is compact and has finite fundamental group. Footnote 3: The notions of positive, negative and null are defined in [14] for general Sasakian manifolds in terms of the basic first Chern class, and they reduce to the stated inequalities for \(\lambda\) in the case of \(\eta\)-Einstein manifolds. Footnote 4: The literature on Sasaki-Einstein metrics is vast, for instance the whole Chapter 11 of [12] is devoted to Sasaki-Einstein metrics (see also [51]). We can rephrase Theorems 6.6 and 6.12 in terms of \(S_{1}\) and \(S_{2}\) being positive, null or negative Sasakian \(\eta\)-Einstein. Since \(\lambda_{1}=2\) and \(\lambda_{2}=2(2a^{2}+2b^{2})-2>-2\), we obtain **Theorem 6.18**.: _Let \(M=S_{1}\times S_{2}\) be the product of two Sasakian manifolds. Then, after possibly applying a \(\mathcal{D}\)-homothety to each Sasakian structure, \(M\) admits a Hermitian structure of the form \((J_{a,b},g_{a,b})\) for some \(a,b\in\mathbb{R}\), \(b\neq 0\), such that \(\operatorname{Ric}^{B}=0\) if and only if \(S_{1}\) and \(S_{2}\) admit Sasaki-Einstein metrics._ To rephrase Theorem 6.12 let us analyze when \[\lambda_{1}=4(n_{1}+an_{2})-2\quad\text{and}\quad\lambda_{2}=4(an_{1}+(a^{2}+b^{2})n_{2})-2\] are \(<-2,=-2\) or \(>-2\), respectively. Note that \[\begin{cases}\lambda_{1}\geq-2\iff n_{1}+an_{2}\geq 0\quad\text{and}\\ \lambda_{2}\geq-2\iff an_{1}+(a^{2}+b^{2})n_{2}\geq 0.\end{cases}\] Assuming \(n_{1}\geq 1\) and \(n_{2}\geq 1\) we analyze all the combinations for \(\lambda_{1}\) and \(\lambda_{2}\). Case (i): When \(\lambda_{1}=-2\) we obtain that \(a=-\frac{n_{1}}{n_{2}}\) and thus \(\lambda_{2}=4b^{2}n_{2}-2\). Given that \(b\neq 0\), it is clear that \(\lambda_{2}>-2\), and in this case any \(b\neq 0\) works. Case (ii): When \(\lambda_{1}>-2\), \(\lambda_{2}\) can be \(<-2\), \(=-2\) or \(>-2\). Indeed, by possibly performing a \(\mathcal{D}\)-homothety to \(S_{1}\) we may assume that \(\lambda_{1}=2n_{1}-2\), so \(a=-\frac{n_{1}}{2n_{2}}\) and thus \(\lambda_{2}=\frac{4b^{2}n_{2}^{2}-n_{1}^{2}}{n_{2}}-2\). If \(\lambda_{2}=-2\), we can choose \(b=\frac{n_{1}}{2n_{2}}\). If \(\lambda_{2}>-2\), after possibly performing a \(\mathcal{D}\)-homothety to \(S_{2}\) we may assume that \(\lambda_{2}=\frac{3n_{1}^{2}}{n_{2}}-2\) and choose \(b=\frac{n_{1}}{n_{2}}\). Analogously, if \(\lambda_{2}<-2\), we may assume that \(\lambda_{2}=-\frac{3n_{1}^{2}}{4n_{2}}-2\) and choose \(b=\frac{n_{1}}{4n_{2}}\).
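Before moving on to case (iii), the three choices in case (ii) can be checked mechanically. The following snippet is an editor's addition (it assumes the sympy library and is not part of the original text):

```python
# Symbolic check of the case (ii) choices above: with a = -n1/(2 n2)
# (so that lambda_1 = 2 n1 - 2), verify lambda_2 for the three values of b.
import sympy as sp

n1, n2 = sp.symbols('n1 n2', positive=True)
a = -n1 / (2 * n2)

def lam2(b):
    # lambda_2 = 4(a n1 + (a^2 + b^2) n2) - 2, as in Theorem 6.12
    return sp.simplify(4 * (a * n1 + (a**2 + b**2) * n2) - 2)

assert sp.simplify(lam2(n1 / (2 * n2)) + 2) == 0                         # lambda_2 = -2
assert sp.simplify(lam2(n1 / n2) - (3 * n1**2 / n2 - 2)) == 0            # lambda_2 > -2
assert sp.simplify(lam2(n1 / (4 * n2)) + 3 * n1**2 / (4 * n2) + 2) == 0  # lambda_2 < -2
print("case (ii) choices verified")
```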
Case (iii): When \(\lambda_{1}<-2\), we have that \(a<-\frac{n_{1}}{n_{2}}\). Then we have to determine the sign of \[P(a):=an_{1}+(a^{2}+b^{2})n_{2}=n_{2}\left[\left(a+\frac{n_{1}}{2n_{2}}\right)^{ 2}+b^{2}-\frac{n_{1}^{2}}{4n_{2}^{2}}\right].\] If \(b^{2}-\frac{n_{1}^{2}}{4n_{2}^{2}}\geq 0\) then \(P(a)>0\) provided \(a\neq-\frac{n_{1}}{2n_{2}}\). In particular \(P(a)>0\) for \(a<-\frac{n_{1}}{n_{2}}\). If \(b^{2}-\frac{n_{1}^{2}}{4n_{2}^{2}}<0\), then \(P(a)\leq 0\) if and only if \(\frac{-n_{1}-\sqrt{n_{1}^{2}-4b^{2}n_{2}^{2}}}{2n_{2}}\leq a\leq\frac{-n_{1}+ \sqrt{n_{1}^{2}-4b^{2}n_{2}^{2}}}{2n_{2}}\). Since \(-\frac{n_{1}}{n_{2}}<\frac{-n_{1}-\sqrt{n_{1}^{2}-4b^{2}n_{2}^{2}}}{2n_{2}}\), we arrive at \(P(a)>0\) for \(a<-\frac{n_{1}}{n_{2}}\). To sum up, \(\lambda_{1}<-2\) implies \(\lambda_{2}>-2\). In this case, after possibly applying a \(\mathcal{D}\)-homothety to \(S_{1}\) and \(S_{2}\) we can assume that \(\lambda_{1}=-n_{1}-2\) and \(\lambda_{2}=\frac{3n_{1}^{2}}{2n_{2}}-2\). Then, it is easily seen that \(a=-\frac{5n_{1}}{4n_{2}}\) and \(b=\frac{n_{1}}{4n_{2}}\) work. Now we can rephrase Theorem 6.12 as follows. **Theorem 6.19**.: _Let \(M=S_{1}\times S_{2}\) be the product of two Sasakian manifolds and assume \(n_{1}\geq 1\), \(n_{2}\geq 1\). Then, after possibly applying a \(\mathcal{D}\)-homothety to each Sasakian structure, \(M\) admits a CYT Hermitian structure of the form \((J_{a,b},g_{a,b})\) for some \(a,b\in\mathbb{R}\), \(b\neq 0\) if and only if \(S_{1}\) and \(S_{2}\) are \(\eta\)-Einstein and one of the following holds:_ 1. \(S_{1}\) _is positive and_ \(S_{2}\) _is arbitrary,_ 2. \(S_{1}\) _is negative or null, and_ \(S_{2}\) _is positive._ _Remark 6.20_.: According to Remark 6.14, when one of the Sasakian factors is one-dimensional, the CYT condition reduces to the other \(\eta\)-Einstein factor being positive. _Remark 6.21_.: Since odd-dimensional spheres equipped with their usual structure are Sasaki-Einstein, Theorem 6.19 shows the existence of CYT Hermitian structures on Calabi-Eckmann manifolds. Note that this was already proved in [6, Corollary 4.8] where, more generally, the existence of CYT structures on principal bundles over Hermitian manifolds with complex tori as fibers is proved. We point out that if we begin with two Sasaki-Einstein manifolds there is no need to perform any \(\mathcal{D}\)-homothety on the factors in order to obtain a CYT structure on their product. Indeed, we have the following result: **Proposition 6.22**.: _Let \(S_{1}\) and \(S_{2}\) be two Sasaki-Einstein manifolds with \(\dim S_{i}=2n_{i}+1\), \(n_{i}\geq 1\), for \(i=1,2\). Then \(S_{1}\times S_{2}\) admits a CYT structure \((J_{a,b},g_{a,b})\) with_ \[a=-\frac{n_{1}-1}{2n_{2}},\quad b^{2}=\frac{(n_{1}-1)(n_{1}+1)+2n_{2}(n_{2}+1 )}{4n_{2}^{2}}.\] Proof.: The proof follows by solving for \(a\) and \(b\) in (6.9), with \(\lambda_{1}=2n_{1}\) and \(\lambda_{2}=2n_{2}\). **Example 6.23** (Left invariant \(\eta\)-Einstein structures on Lie groups).: According to [30], a 3-dimensional compact Sasakian manifold is diffeomorphic to \(\mathbb{S}^{3}/\Gamma\), \(H_{3}/\Gamma\) or \(\widetilde{\mathrm{SL}}(2,\mathbb{R})/\Gamma\), where in each case \(\Gamma\) is a uniform lattice (i.e., a co-compact discrete subgroup). It is known that these 3 model geometries correspond precisely to positive, null or negative Sasakian \(\eta\)-Einstein structures. 
In the table below we show an explicit \(\eta\)-Einstein structure on the corresponding Lie algebras \(\mathfrak{sl}(2,\mathbb{R})\), \(\mathfrak{h}_{3}\) and \(\mathfrak{su}(2)\), all of them spanned by an orthonormal basis \(\{e_{1},e_{2},e_{3}\}\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline Lie algebra & Lie brackets & Sasakian structure & \(\lambda\) \\ \hline \(\mathfrak{su}(2)\) & \([e_{1},e_{2}]=2e_{3},[e_{2},e_{3}]=2e_{1},[e_{3},e_{1}]=2e_{2}\) & \(\xi=e_{3},\eta=e^{3},\varphi e_{1}=e_{2}\) & 2 \\ \(\mathfrak{h}_{3}\) & \([e_{1},e_{2}]=2e_{3}\) & \(\xi=e_{3},\eta=e^{3},\varphi e_{1}=e_{2}\) & \(-2\) \\ \(\mathfrak{sl}(2,\mathbb{R})\) & \([e_{1},e_{2}]=2e_{3},[e_{2},e_{3}]=-e_{1},[e_{3},e_{1}]=-e_{2}\) & \(\xi=e_{3},\,\eta=e^{3},\,\varphi e_{1}=e_{2}\) & \(-4\) \\ \hline \end{tabular} \end{table} Table 1. 3-dimensional Sasakian Lie algebras Using Theorems 6.6 and 6.12 we recover the Hermitian structure \((J_{0,1},g_{0,1})\) on \(\mathfrak{su}(2)\times\mathfrak{su}(2)\), which has the special feature that \(\nabla^{B}\equiv 0\) on left invariant vector fields, so it is Bismut flat and in particular \(\operatorname{Ric}^{B}=0\) and \(\rho^{B}=0\). Moreover, we obtain the following CYT Hermitian structures \((J_{a,b},g_{a,b})\): * on \(\mathfrak{h}_{3}\times\mathfrak{su}(2)\), with \((a,b)=(-1,1)\), * on \(\mathfrak{sl}(2,\mathbb{R})\times\mathfrak{su}(2)\), with \((a,b)=(-\frac{3}{2},\frac{1}{2})\), * on \(\mathfrak{su}(2)\times\mathfrak{h}_{3}\) (after applying a \(\mathcal{D}\)-homothety to \(\mathfrak{su}(2)\) so that \(\lambda_{1}=0\)), with \((a,b)=(-\frac{1}{2},\frac{1}{2})\), * on \(\mathfrak{su}(2)\times\mathfrak{sl}(2,\mathbb{R})\) (after applying a \(\mathcal{D}\)-homothety to \(\mathfrak{su}(2)\) and \(\mathfrak{sl}(2,\mathbb{R})\) so that \(\lambda_{1}=0\), \(\lambda_{2}=-\frac{11}{4}\)), with \((a,b)=(-\frac{1}{2},\frac{1}{4})\). The Sasakian manifolds obtained as quotients of \(\operatorname{SU}(2)\), \(\widetilde{\operatorname{SL}}(2,\mathbb{R})\) and \(H_{3}\) by a uniform lattice carry induced \(\eta\)-Einstein structures with the same constant \(\lambda\), so that their products admit induced structures such that \(\operatorname{Ric}^{B}=0\) (in the case of \(\operatorname{SU}(2)\times\operatorname{SU}(2)\)) and CYT structures (in all the cases above). In higher dimensions, a family of null \(\eta\)-Einstein Lie algebras is given by the \((2n+1)\)-dimensional Heisenberg Lie algebra \(\mathfrak{h}_{2n+1}\). In fact, it can be seen in the same way as for \(\mathfrak{h}_{3}\) that \(\mathfrak{h}_{2n+1}\), spanned by \(\{X_{1},\ldots,X_{2n},\xi\}\) with brackets \([X_{2i-1},X_{2i}]=2\xi\) for all \(1\leq i\leq n\), is an example of a null Sasakian Lie algebra with \(\lambda=-2\). Therefore \(\mathfrak{h}_{2n+1}\times\mathfrak{su}(2)\) admits a CYT structure given by \((J_{a,b},g_{a,b})\) with \((a,b)=(-n,1)\), while \(\mathfrak{su}(2)\times\mathfrak{h}_{2n+1}\) admits a CYT structure given by \((a,b)=(-\frac{1}{2n},\frac{1}{2n})\) (after applying a \(\mathcal{D}\)-homothety to \(\mathfrak{su}(2)\) so that \(\lambda_{1}=0\)). The associated simply-connected Lie groups \(H_{2n+1}\times\operatorname{SU}(2)\) and \(\operatorname{SU}(2)\times H_{2n+1}\) admit uniform lattices, so that we obtain compact CYT manifolds. In contrast, there is just one example of a positive \(\eta\)-Einstein Lie algebra, which is given by \(\mathfrak{su}(2)\). 
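(Editor's addition: before the justification of this last claim, which continues below, the \((a,b)\) pairs listed above can be verified numerically against the constants of Theorem 6.12. The sketch is plain Python; all values involved are exact in binary floating point.)

```python
# Check that each (a, b) from Example 6.23 reproduces the eta-Einstein
# constants (lambda_1, lambda_2) of the two factors (here n1 = n2 = 1;
# the Heisenberg cases check in the same way symbolically).
def constants(a, b, n1=1, n2=1):
    lam1 = 4 * (n1 + a * n2) - 2
    lam2 = 4 * (a * n1 + (a**2 + b**2) * n2) - 2
    return lam1, lam2

cases = [
    ("h3 x su(2)",      (-1.0, 1.0),  (-2.0, 2.0)),
    ("sl(2,R) x su(2)", (-1.5, 0.5),  (-4.0, 2.0)),
    ("su(2) x h3",      (-0.5, 0.5),  (0.0, -2.0)),    # after D-homothety
    ("su(2) x sl(2,R)", (-0.5, 0.25), (0.0, -2.75)),   # after D-homotheties
]
for name, (a, b), expected in cases:
    assert constants(a, b) == expected, name
print("all Example 6.23 pairs verified")
```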
Indeed, a Lie group \(G\) admitting a left invariant positive \(\eta\)-Einstein structure (or equivalently, a Sasaki-Einstein structure) is compact and has finite fundamental group, therefore \(G\) is semisimple. According to [10], the only semisimple Lie algebras carrying a contact form are \(\mathfrak{su}(2)\) and \(\mathfrak{sl}(2,\mathbb{R})\); hence the Lie algebra of \(G\) is \(\mathfrak{su}(2)\). We do not know yet of other examples, besides \(\widetilde{\operatorname{SL}}(2,\mathbb{R})\), of left invariant negative \(\eta\)-Einstein structures on Lie groups which admit lattices. Note that by a result in [2], the only \(5\)-dimensional Sasakian \(\eta\)-Einstein Lie algebra whose associated simply-connected Lie group admits lattices is \(\mathfrak{h}_{5}\) (and this is null). **Example 6.24** (Positive \(\eta\)-Einstein structures).: Tanno was the first to observe that by applying a suitable \(\mathcal{D}\)-homothety in the positive case one obtains a Sasaki-Einstein structure, and he used this to prove that the unit tangent bundle of \(\mathbb{S}^{n}\) has a homogeneous Sasaki-Einstein structure [56]. In particular, \(\mathbb{S}^{2}\times\mathbb{S}^{3}\) has a Sasaki-Einstein structure. Over the last decades infinitely many Sasaki-Einstein structures were shown to exist on connected sums of \(\mathbb{S}^{2}\times\mathbb{S}^{3}\) and also on \(5\)-dimensional manifolds which are not connected sums of \(\mathbb{S}^{2}\times\mathbb{S}^{3}\), including infinitely many rational homology \(5\)-spheres. Similar results hold also in higher dimensions (see [51] and references therein). A special class of Sasaki-Einstein manifolds is given by _3-Sasakian manifolds_ which are those Riemannian manifolds whose metric cone is hyperKahler. In particular, a \(3\)-Sasakian manifold has dimension \(4n+3\), with \(n\geq 0\), and it carries \(3\) different compatible Sasakian structures. They were introduced by C. Udriste [58] and Y. Kuo [41] in 1969 and 1970, respectively. According to Theorem 6.18 the product of any two of these manifolds can be endowed with a Hermitian structure \((J_{a,b},g_{a,b})\) satisfying \(\operatorname{Ric}^{B}=0\) or \(\rho^{B}=0\), choosing properly the values of \(a\) and \(b\neq 0\), after possibly applying a \(\mathcal{D}\)-homothety. To finish this section we briefly mention another construction which furnishes many examples of \(\eta\)-Einstein metrics, especially negative and null. According to [14], many interesting Sasakian examples can be found on links of isolated hypersurface singularities. Moreover, some of them carry \(\eta\)-Einstein structures. For more details on this construction see for instance [12, Chapter 9], [14, Section 6] and [51, Section 3.4]. **Example 6.25** (\(\eta\)-Einstein structures on links).: Consider the affine space \(\mathbb{C}^{n+1}\) together with a weighted \(\mathbb{C}^{\star}\)-action given by \[(z_{0},\ldots,z_{n})\mapsto(\lambda^{w_{0}}z_{0},\ldots,\lambda^{w_{n}}z_{n}),\] where the weights \(w_{j}\) are positive integers such that \(\gcd(w_{0},\ldots,w_{n})=1\). A weighted homogeneous polynomial with weights \(w=(w_{0},\ldots,w_{n})\in\mathbb{N}^{n+1}\) of degree \(d\) is a polynomial \(f\in\mathbb{C}[z_{0},\ldots,z_{n}]\) such that \[f(\lambda^{w_{0}}z_{0},\ldots,\lambda^{w_{n}}z_{n})=\lambda^{d}f(z_{0},\ldots,z_{n}).\] Assume that the origin is an isolated singularity of \(\{f=0\}\). 
Then, the _link_ of \(f\) is defined by \[L_{f}=\{f=0\}\cap\mathbb{S}^{2n+1},\] where \(\mathbb{S}^{2n+1}\) is the unit sphere in \(\mathbb{C}^{n+1}\), and it is a smooth manifold of dimension \(2n-1\) which by the Milnor Fibration Theorem is \((n-2)\)-connected. The link \(L_{f}\) is endowed with a natural Sasakian structure \(S_{w,f}=S_{w}|_{L_{f}}\) inherited as a Sasakian submanifold of \(\mathbb{S}^{2n+1}\) with its weighted Sasakian structure \((\Phi_{w},\xi_{w},\eta_{w},g_{w})\) which in the standard coordinates on \(\mathbb{C}^{n+1}\equiv\mathbb{R}^{2n+2}\) is determined by \[\eta_{w}=\frac{\sum_{i=0}^{n}(x_{i}dy_{i}-y_{i}dx_{i})}{\sum_{i=0}^{n}w_{i}(x_{i}^{2}+y_{i}^{2})},\quad\xi_{w}=\sum_{i=0}^{n}w_{i}(x_{i}\partial y_{i}-y_{i}\partial x_{i}).\] Regarding the existence of \(\eta\)-Einstein structures on these links, there is the following result in [13] that establishes the existence of negative or null \(\eta\)-Einstein structures. **Theorem 6.26**.: _[_13_]_ _Let \(f\) be a non-degenerate weighted homogeneous polynomial of degree \(d\) and weight vector \(w\), and let \(|w|=w_{0}+\cdots+w_{n}\). Consider the induced Sasakian structure \(S_{w,f}\) on the link \(L_{f}\)._ 1. _If_ \(|w|=d\)_, then there exists a null_ \(\eta\)_-Einstein structure on_ \(L_{f}\)_,_ 2. _If_ \(|w|<d\)_, then there exists a negative_ \(\eta\)_-Einstein structure on_ \(L_{f}\)_._ _The \(\eta\)-Einstein structures are obtained by deforming suitably the induced Sasakian structure \(S_{w,f}\)._ _Remark 6.27_.: In the case \(|w|>d\), there are obstructions for the existence of positive \(\eta\)-Einstein structures on \(L_{f}\). One well-known example is the _Brieskorn-Pham_ link \(L(a_{0},\ldots,a_{n})\) associated to the polynomial \(f(z)=z_{0}^{a_{0}}+\cdots+z_{n}^{a_{n}}\). It can be seen that the weighted degree of \(f\) is \(d=\operatorname{lcm}(a_{0},\ldots,a_{n})\) and the weights are \(w_{j}=\frac{d}{a_{j}}\). Then by Theorem 6.26, * when \(\sum_{i}\frac{1}{a_{i}}=1\) there is a null \(\eta\)-Einstein structure on \(L(a_{0},\ldots,a_{n})\), * when \(\sum_{i}\frac{1}{a_{i}}<1\) there is a negative \(\eta\)-Einstein structure on \(L(a_{0},\ldots,a_{n})\). According to Theorem 6.19, the product \(\mathbb{S}^{2m+1}\times L(a_{0},\ldots,a_{n})\) admits CYT structures of the form \((J_{a,b},g_{a,b})\) as long as \(\sum_{i}\frac{1}{a_{i}}\leq 1\), by possibly applying a \(\mathcal{D}\)-homothety. ## 7. Appendix As mentioned before, we provide here a proof of Proposition 4.1, for the sake of completeness. Proof of Proposition 4.1.: Let \(\{e_{1},\ldots,e_{2n}\}\) be an orthonormal local frame satisfying \(Je_{2i-1}=e_{2i}\) for \(1\leq i\leq n\). Using this frame we compute by definition both sides of the equality we want to prove. For \(X\in\mathfrak{X}(M)\), using that \(J(\nabla_{U}J)=-(\nabla_{U}J)J\) for all \(U\in\mathfrak{X}(M)\), we have \[[J,\nabla^{*}\nabla J](X)=\sum_{i=1}^{2n}\underbrace{J(\nabla_{e_{i}}(\nabla_{e_{i}}J))(X)}_{\text{\textcircled{1}}}-\underbrace{(\nabla_{e_{i}}(\nabla_{e_{i}}J))(JX)}_{\text{\textcircled{2}}}-2J(\nabla_{\nabla_{e_{i}}e_{i}}J)(X).\] Recall that the integrability of \(J\) is equivalent to \(\nabla_{JU}J=J(\nabla_{U}J)\) for any \(U\in\mathfrak{X}(M)\).
Using this fact when writing \((\nabla_{e_{i}}J)=-(\nabla_{J^{2}e_{i}}J)\), we obtain \[\text{\textcircled{1}} =-J(\nabla_{e_{i}}J(\nabla_{Je_{i}}J))(X)\] \[=-J\nabla_{e_{i}}(J(\nabla_{Je_{i}}J)X)-(\nabla_{Je_{i}}J)(\nabla_{e_{i}}X)\] \[=-J\nabla_{e_{i}}J\nabla_{Je_{i}}JX-J\nabla_{e_{i}}\nabla_{Je_{i}}X-\nabla_{Je_{i}}J\nabla_{e_{i}}X+J\nabla_{Je_{i}}\nabla_{e_{i}}X\] \[=-J\nabla_{e_{i}}J\nabla_{Je_{i}}JX-\nabla_{Je_{i}}J\nabla_{e_{i}}X-JR(e_{i},Je_{i})X-J\nabla_{[e_{i},Je_{i}]}X,\] \[\text{\textcircled{2}} =-(\nabla_{e_{i}}J(\nabla_{Je_{i}}J))(JX)\] \[=-\nabla_{e_{i}}(J(\nabla_{Je_{i}}J)(JX))+J(\nabla_{Je_{i}}J)(\nabla_{e_{i}}JX)\] \[=\nabla_{e_{i}}J\nabla_{Je_{i}}X-\nabla_{e_{i}}\nabla_{Je_{i}}JX+J\nabla_{Je_{i}}J\nabla_{e_{i}}JX+\nabla_{Je_{i}}\nabla_{e_{i}}JX\] \[=\nabla_{e_{i}}J\nabla_{Je_{i}}X+J\nabla_{Je_{i}}J\nabla_{e_{i}}JX-R(e_{i},Je_{i})JX-\nabla_{[e_{i},Je_{i}]}JX.\] Hence, \[[J,\nabla^{*}\nabla J](X) =-2[J,P](X)+\sum_{i=1}^{2n}(\nabla_{[e_{i},Je_{i}]}J)X-2\sum_{i=1}^{2n}J(\nabla_{\nabla_{e_{i}}e_{i}}J)(X)\] \[\quad-\sum_{i=1}^{2n}(J\nabla_{e_{i}}J\nabla_{Je_{i}}JX+\nabla_{Je_{i}}J\nabla_{e_{i}}X+\nabla_{e_{i}}J\nabla_{Je_{i}}X+J\nabla_{Je_{i}}J\nabla_{e_{i}}JX).\] Note that in the chosen \(J\)-adapted frame \(\{e_{i}\}\) the last sum equals zero, since replacing \(e_{i}\) by \(Je_{i}\) gives the same terms with opposite sign. Thus, \[[J,\nabla^{*}\nabla J](X)=-2[J,P](X)+\sum_{i=1}^{2n}(\nabla_{[e_{i},Je_{i}]}J)X-2\sum_{i=1}^{2n}J(\nabla_{\nabla_{e_{i}}e_{i}}J)(X). \tag{7.1}\] Now, using (4.2) with our \(J\)-adapted frame we get \[2(\nabla_{\delta J}J)(X) =2\sum_{i=1}^{2n}(\nabla_{(\nabla_{e_{i}}J)e_{i}}J)(X)\] \[=2\sum_{i=1}^{2n}(\nabla_{\nabla_{e_{i}}Je_{i}}J)(X)-2\sum_{i=1}^{2n}(\nabla_{J\nabla_{e_{i}}e_{i}}J)(X)\] \[=\sum_{i=1}^{2n}(\nabla_{\nabla_{e_{i}}Je_{i}}J+\nabla_{\nabla_{Je_{i}}e_{i}}J)(X)+\sum_{i=1}^{2n}(\nabla_{[e_{i},Je_{i}]}J)(X)-2\sum_{i=1}^{2n}J(\nabla_{\nabla_{e_{i}}e_{i}}J)(X).\] Replacing \(e_{i}\) by \(Je_{i}\) in the first sum we obtain the same terms with opposite sign, and thus this sum equals zero. Therefore, \[2(\nabla_{\delta J}J)(X)=\sum_{i=1}^{2n}(\nabla_{[e_{i},Je_{i}]}J)(X)-2\sum_{i=1}^{2n}J(\nabla_{\nabla_{e_{i}}e_{i}}J)(X). \tag{7.2}\] Comparing (7.1) with (7.2) we obtain \([J,\nabla^{*}\nabla J]=2\nabla_{\delta J}J-2[J,P]\), as we wanted to prove.
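_Editor's note_.: The frame-substitution cancellation used twice in the proof above can be made explicit. In a \(J\)-adapted frame, \(\{Je_{i}\}\) is again an orthonormal frame (indeed \(Je_{2i-1}=e_{2i}\) and \(Je_{2i}=-e_{2i-1}\), a signed permutation of \(\{e_{i}\}\)), and each summand contains the frame vector twice, so the signs cancel and one may reindex \(e_{i}\mapsto Je_{i}\) without changing the sum. For instance, \[\sum_{i=1}^{2n}\nabla_{Je_{i}}J\nabla_{e_{i}}X=\sum_{i=1}^{2n}\nabla_{J^{2}e_{i}}J\nabla_{Je_{i}}X=-\sum_{i=1}^{2n}\nabla_{e_{i}}J\nabla_{Je_{i}}X,\] so this pair of terms cancels; the remaining pair cancels in the same way, and the analogous reindexing kills the first sum in the computation of \(2(\nabla_{\delta J}J)(X)\).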
2302.10707
Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios
In order to reveal the rationale behind model predictions, many works have exploited providing explanations in various forms. Recently, to further guarantee readability, more and more works turn to generate sentence-level human language explanations. However, current works pursuing sentence-level explanations rely heavily on annotated training data, which limits the development of interpretability to only a few tasks. As far as we know, this paper is the first to explore this problem smoothly from weak-supervised learning to unsupervised learning. Besides, we also notice the high latency of autoregressive sentence-level explanation generation, which leads to asynchronous interpretability after prediction. Therefore, we propose a non-autoregressive interpretable model to facilitate parallel explanation generation and simultaneous prediction. Through extensive experiments on Natural Language Inference task and Spouse Prediction task, we find that users are able to train classifiers with comparable performance $10-15\times$ faster with parallel explanation generation using only a few or no annotated training data.
Yan Liu, Xiaokang Chen, Qi Dai
2023-02-21T14:52:21Z
http://arxiv.org/abs/2302.10707v1
# Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios ###### Abstract In order to reveal the rationale behind model predictions, many works have exploited providing explanations in various forms. Recently, to further guarantee readability, more and more works turn to generate sentence-level human language explanations. However, current works pursuing sentence-level explanations rely heavily on annotated training data, which limits the development of interpretability to only a few tasks. As far as we know, this paper is the first to explore this problem smoothly from weak-supervised learning to unsupervised learning. Besides, we also notice the high latency of autoregressive sentence-level explanation generation, which leads to asynchronous interpretability after prediction. Therefore, we propose a non-autoregressive interpretable model to facilitate parallel explanation generation and simultaneous prediction. Through extensive experiments on Natural Language Inference task and Spouse Prediction task, we find that users are able to train classifiers with comparable performance \(10-15\times\) faster with parallel explanation generation using only a few or no annotated training data. Yan Liu\({}^{1}\), Xiaokang Chen\({}^{2}\), Qi Dai\({}^{1}\) \({}^{1}\)Microsoft Research Asia \({}^{2}\)School of Intelligence Science and Technology, Peking University Keywords: interpretability, parallel explanation generation, low-resource scenarios ## 1 Introduction Recently, deep learning has developed rapidly [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. The interpretability of black-box neural networks has aroused much attention and the importance of interpreting model predictions has been widely acknowledged. Previous interpretation works provide explanations in various forms as the rationale lying behind model decisions, such as attention distribution [14], heatmap [15], input keywords [16], etc. Due to their better human readability, many works exploit generating sentence-level human language explanations to better interpret model predictions and have achieved promising performance [17, 18]. However, sentence-level explanations are hard to achieve in real-world scenarios due to the high latency of autoregressive explanation generation and the severe reliance on human-annotated explanations. For instance, e-INFERSENT[17] autoregressively generates every explanation token, leading to much higher inference latency. In comparison, although some explanations lack readability[19], such as the attention-based heatmap explanation and post-hoc alignment map explanation, these explanations can be generated almost simultaneously with predictions. Moreover, despite their readability, previous works that generate sentence-level explanations rely heavily on numerous human-annotated explanations during training. Nevertheless, datasets containing human-annotated explanations are rare due to the high cost. To alleviate these problems, in this work, we introduce the Classification Non-Autoregressive Transformer (C-NAT) framework for simultaneous classification and parallel sentence-level explanation generation with weakly-supervised and unsupervised learning strategies. To accelerate the explanation generation, we adopt the architecture of the non-autoregressive generation model NAT [20] to generate all tokens in parallel. We also equip the non-autoregressive generation model with a label predictor for simultaneous label prediction.
Besides, to better accommodate real-world low-resource scenarios, we propose our weakly-supervised learning and unsupervised learning strategies. Specifically, inspired by [21], we first extract a set of labeling functions and the corresponding explanation templates from a small number of human-annotated samples, and then use these labeling functions and explanation templates to produce pseudo labels and explanations for a large amount of unlabeled data. For the unsupervised learning scenario, we utilize the back-translation mechanism to paraphrase the input sequences as the pseudo explanation targets, and apply a pre-trained language model to refine the predicted explanations during training. We verify the effectiveness of our approach on the Natural Language Inference (NLI) task and the Spouse Prediction (SP) task. Main contributions of this work are three-fold: * We propose novel weakly supervised learning and unsupervised learning strategies to accommodate interpretable models to real-world low-resource scenarios. * We introduce our C-NAT to support parallel explanation generation and simultaneous prediction. We also propose to leverage a pre-trained language model as a discriminator to generate more fluent explanations. * Experimental results show that our C-NAT can generate parallel fluent explanations and improve classification performance with significant inference speedup, even with few or no human annotations. ## 2 Model Architecture In this section, we introduce the architecture of C-NAT, which modifies the non-autoregressive generation model NAT [20] to support simultaneous label prediction and parallel sentence-level explanation generation. As shown in figure 1, C-NAT consists of the following five modules: an encoder stack, a decoder stack, a fertility predictor, an explanation predictor for parallel explanation tokens generation, and a label predictor for simultaneous label prediction. ### Encoder and Decoder We adopt the Transformer[22] as the backbone. To enable non-autoregressive interpretation, following [20], the decoder is modified in three aspects: input sequence, self-attention mask, and positional encoding. For input sequence modification, because previously generated tokens are unavailable under the non-autoregressive setting, we use a fertility predictor first to predict the length of the target explanation and produce decoder input with the tokens copied from the encoder input. For the modification of the self-attention mask, because the decoder input is the copied sequence of encoder input, the self-attention module is allowed to attend all positions, rather than only left positions in the conventional Transformer decoder. Therefore, the self-attention mask is replaced with a non-causal mask in our non-autoregressive decoder. For positional encoding modification, different from the self-attention module, the positional attention module uses positional encoding as the query and key, and the hidden representations from the previous layer as the value. ### Fertility predictor To generate the decoder input sequence for non-autoregressive interpretation, we copy and repeat the tokens from the encoder input. The fertility predictor is used to predict the number of times each token is copied, referred to as the _fertility_ of each corresponding token [20]. 
Specifically, given the input sentence of the encoder \(X=\{x_{1},x_{2},...,x_{S}\}\), the fertility predictor is fed with the encoded feature \(H=\{h_{1},h_{2},...,h_{S}\}\), and generates the fertility sequence \(F=\{f_{1},f_{2},...,f_{S}\}\). Finally, the input sequence of the non-autoregressive decoder is \(Y=\{y_{1},y_{2},...,y_{T}\}=\{\{x_{1}\}_{i=1}^{f_{1}},\{x_{2}\}_{i=1}^{f_{2}},...,\{x_{S}\}_{i=1}^{f_{S}}\}\) with length \(T=f_{1}+f_{2}+...+f_{S}\), where \(\{x_{s}\}_{i=1}^{f_{s}}\) denotes that the token \(x_{s}\) is repeated \(f_{s}\) times. ### Explanation Predictor and Label Predictor The explanation predictor and label predictor are used to generate each token of the explanation sentence and the classification label simultaneously. Given the output hidden states of the decoder stack \(H^{d}=\{h_{1}^{d},h_{2}^{d},...,h_{T}^{d}\}\), each explanation token \(e_{t}\) is generated with the probability \(p_{E}(e_{t})=\text{Softmax}(h_{t}^{d})\), and the explanation sentence \(E=\{e_{1},e_{2},...,e_{T}\}\) is generated in parallel with the probability \(p_{E}(E|X;\theta)=\prod_{t=1}^{T}p_{E}(e_{t}|X;\theta)\). Meanwhile, the label predictor projects the hidden states with an MLP layer and the mean pooling operation, resulting in the label prediction \(L\) with the probability \(p_{L}(L|X;\theta)\). Figure 1: The overall architecture of our C-NAT. ## 3 Training Strategy ### Fully-supervised Learning In the scenario where annotated explanations are available, the fully-supervised training objective function of our model is the combination of the label prediction loss, the explanation prediction loss, and the fertility prediction loss. Besides, we also apply the pre-trained language model as an extra constraint on the objective function to encourage generating explanations of more fluency and diversity. Then the pre-trained language model with parameters \(\theta_{LM}\) estimates the log-likelihood of each predicted explanation sentence \(E^{\prime}\) as \(\log p_{LM}(E^{\prime};\theta_{LM})\). To enable the gradient backpropagation from the pre-trained language model to the C-NAT model, the product of the predicted probability distribution \(p_{E}(e_{t}|X;\theta)\) and the word embedding vectors is used as the input embedding of the explanation token \(e^{\prime}_{t}\) in the pre-trained language model. The additional loss term \(\mathcal{L}_{LM}\) is adopted to optimize the explanation generation by maximizing the estimated log-likelihood of the pre-trained language model over the training dataset. Finally, the fully-supervised training objective function of our C-NAT model is formulated as: \[\mathcal{L}=\mathcal{L}_{L}+\lambda_{E}\mathcal{L}_{E}+\lambda_{F}\mathcal{L}_{F}+\lambda_{LM}\mathcal{L}_{LM} \tag{1}\] where \(\lambda_{E}\), \(\lambda_{F}\) and \(\lambda_{LM}\) are hyperparameters weighting each loss term. ### Weakly-supervised Learning In the more practical scenario, where only a few human-annotated explanations are available, we introduce the weakly-supervised learning strategy to generate the pseudo explanations and pseudo labels for the large-scale unlabeled data. Firstly, we extract the labeling functions along with the explanation templates from a small number of human-annotated samples. Then, we use the labeling functions and the explanation templates to annotate the pseudo labels and explanations for the large-scale unlabeled data.
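To make this pipeline concrete, here is a minimal sketch of a labeling function paired with an explanation template for the SP task. This is an editor's illustration: the cue list, the names and the template wording are invented and not taken from the paper.

```python
# A toy labeling function for the Spouse Prediction task: it votes for the
# "spouses" label and emits a templated explanation, or abstains (None).
SPOUSE_CUES = {"husband", "wife", "married", "spouse"}

def lf_spouse_cue(sentence, person_a, person_b):
    between = sentence.split(person_a)[-1].split(person_b)[0].lower()
    if any(cue in between for cue in SPOUSE_CUES):
        template = "{a} and {b} are spouses because the text says they are married."
        return 1, template.format(a=person_a, b=person_b)
    return None  # abstain

def annotate(sentence, a, b, labeling_functions):
    """Collect (pseudo label, pseudo explanation) votes from all LFs."""
    votes = (lf(sentence, a, b) for lf in labeling_functions)
    return [v for v in votes if v is not None]

print(annotate("Ann is married to her husband Bob.", "Ann", "Bob", [lf_spouse_cue]))
# -> [(1, 'Ann and Bob are spouses because the text says they are married.')]
```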
Due to the wide divergence in accuracy and coverage of the labeling functions, the data programming method [23] is applied for label aggregation [21], where a learnable accuracy weight \(w_{m}\) is assigned to each labeling function \(f_{m}(\cdot)\), and the final pseudo label is selected as the label with the largest aggregated accuracy weight. As for the labeling function with the highest contribution to the pseudo label, we select the corresponding explanation template \(E_{m}^{\text{temp}}\) and generate the pseudo natural language explanation \(\{E^{\text{pseudo}}\}\). Finally, the training data \(\mathcal{D}\) is a combination of the small amount of human-annotated data, and a large amount of data with pseudo labels and explanations. We optimize our C-NAT with the fully-supervised training objective function on the combined training dataset. ### Unsupervised Learning For the real-world scenario where no human annotated explanations are available, we also explore the unsupervised learning strategy for our C-NAT model training. Different from the autoregressive interpretation approach, golden explanations are only used as the training target but not the decoder input for the non-autoregressive interpretation approach. To mimic the human-annotated training targets, we utilize the back-translation mechanism to generate pseudo explanations as the noisy training targets, and keep refining the explanation generation with a pre-trained language model during training. ## 4 Experiments ### Tasks and Datasets To verify the effectiveness of our approach, we conduct experiments on the Natural Language Inference (NLI) and Spouse Prediction (SP) tasks. NLI task aims to predict the entailment relationship between two sentences. SP task is to predict whether two people in the given sentence are spouses. We use three datasets as our testbeds for **fully-supervised**, **weakly-supervised** and **unsupervised** learning respectively: **e-SNLI**[17], **SP**[21], and **SNLI**[24]. SNLI is a standard benchmark for the NLI task, while e-SNLI extends it with human-annotated natural language explanations for each sentence pair. Therefore, we use the e-SNLI dataset to generate explanations with full-supervision, while using the SNLI dataset for unsupervised explanation generation. SP dataset has only 30 samples annotated with human explanations, which we thus adopt for weakly-supervised explanation generation. Besides, as introduced in Section 3.2 and 3.3, we propose two methods to generate pseudo data in low-resource scenarios. For the SP dataset, we extract templates from 30 human explanations, which are then used to generate pseudo explanations and form our **SP-Pseudo dataset**. For the SNLI dataset without human annotated explanations at all, we propose to use a pre-trained NMT model to generate pseudo explanations, which form our **SNLI-Pseudo dataset**. The statistics of all datasets and pseudo data are shown in Table 1. ### Metrics To evaluate classification performance and explanation quality, we report **NE-Acc**(classification accuracy without generating explanations), **Acc** (classification accuracy), **BLEU** (similarity between generation and ground truth, if any), **PPL** (fluency of generated explanations), **Inter-Rep** (diversity of generated explanations), and **Rationality** (rationality of explanations). 
Specifically, the Rationality metric is a model-based evaluation metric, which utilizes a pre-trained classifier to evaluate whether the generated explanation is reasonable for the corresponding input and prediction. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Datasets** & **Train** & **Val** & **Test** & **Annotated/Total** \\ \hline e-SNLI & 549367 & 9842 & 9824 & 570K/570K \\ SP-Pseudo & 22195 & 2796 & 2796 & 30/22195 \\ SNLI-Pseudo & 549367 & 9842 & 9824 & 0/570K \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of datasets. ### Implementation Details We set the embedding size and the hidden size as 512, and use 8 attention heads. The numbers of encoder and decoder layers are both set to 6. We use Adam [25] for optimization with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and \(\epsilon=10^{-8}\). The learning rate is set to 0.00004, and the dropout rate is set to 0.3. ### Results of Fully-Supervised Learning Evaluation results on e-SNLI in the fully-supervised learning scenario are shown in Table 2. We observe that our C-NAT can achieve comparable performance of explanation generation and label prediction with more than \(20\times\) speedup compared to the baseline autoregressive models. We also conduct an ablation study to evaluate the effectiveness of each component. We find that the BLEU score drops and the PPL score rises significantly with the LM discriminator removed, but the prediction accuracy remains unchanged. It indicates that the pretrained language model can effectively improve the fluency of generated explanations. If we modify C-NAT for autoregressive generation, much higher inference latency would be witnessed, and the performance would also degrade due to the exposure bias problem. Besides, we notice that the classification performance drops from NE-Acc to Acc for the baseline models, while our C-NAT achieves a 2.07 absolute improvement. This demonstrates that our method can improve the inference ability of the classifier with model interpretability increased, instead of improving interpretability at the cost of classification performance. ### Results of Weakly-Supervised Learning Table 3 shows the results of our C-NAT model with the weakly-supervised learning strategy on the Spouse Prediction dataset that has only 30 human-annotated explanations. We augment with pseudo data generated by our template-based approach. Because there are no previous works exploring the weakly-supervised learning method for explanation generation, we choose the modified Transformer model supporting classification as our baseline. Despite the small amount of human-annotated data, with the pseudo labels and explanations, we can still achieve improvement compared to the baseline model on all metrics. ### Results of Unsupervised Learning We conduct experiments in the Natural Language Inference task under the unsupervised learning scenario where no human-annotated explanations are available. Table 4 shows the experimental results of applying our approach in such a scenario. We observe that the LM clearly affects the rationality and the fluency of the explanations. Moreover, we also notice that the performance drops a lot without using the unsupervised learning strategy, which confirms the effectiveness of our unsupervised learning approach. ## 5 Conclusion In this paper, we explore the important problem of generating human-friendly sentence-level explanations in low-resource scenarios.
To solve the high inference latency problem of previous interpretable models, we propose our C-NAT to support parallel explanation generation and simultaneous prediction. We conduct extensive experiments in the Natural Language Inference task and Spouse Prediction task in the fully-supervised learning, weakly-supervised learning, and unsupervised learning scenarios. Experimental results reveal that our C-NAT can generate fluent and diverse explanations with classification performance also improved. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Methods** & **BLEU\({}^{\dagger}\)** & **Rationality\({}^{\dagger}\)** & **PPL\({}^{\ddagger}\)** & **Inter-Rep\({}^{\ddagger}\)** & **NE-Acc\({}^{\dagger}\)** & **Acc\({}^{\dagger}\)** & **Latency\({}^{\ddagger}\)** & **Speedup\({}^{\dagger}\)** \\ \hline Dataset\({}^{\dagger}\) & 22.51 & 100.00 & 30.00 & 0.40 & 100.00 & 100.00 & - & - \\ Transformer(AT) & 20.33 & 80.16 & 27.04 & 0.51 & 80.62 & 79.46 & 793ms & \(1.27\times\) \\ e-INFERSENT(AT) & **22.40** & 84.79 & **10.58** & 0.72 & **84.01** & 83.96 & 1006ms & \(1.00\times\) \\ \hline C-NAT & 21.19 & **85.10** & 34.71 & **0.30** & 82.41 & **85.23** & **47ms** & \(\mathbf{21.40\times}\) \\ w/o LM & 20.87 & 84.51 & 46.77 & 0.32 & 82.41 & 85.19 & 47ms & \(21.40\times\) \\ w/o NAR & 19.33 & 82.06 & 61.14 & 0.47 & 82.41 & 80.47 & 734ms & \(1.37\times\) \\ w/o LCE & 21.04 & 84.16 & 36.21 & 0.33 & 36.68 & 39.31 & 47ms & \(21.40\times\) \\ \hline \hline \end{tabular} \end{table} Table 2: Automatic evaluation results in NLI task with full-supervision. The higher\({}^{\dagger}\)(or smaller\({}^{\ddagger}\)), the better. \({}^{\dagger}\)We evaluate the ground truth with our metrics. Latency is computed as the time to decode a single output sequence without mini batching, averaged over the whole test set. At the bottom, we present the results of the ablation study. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Methods** & **Rationality\({}^{\dagger}\)** & **PPL\({}^{\ddagger}\)** & **NE-Acc\({}^{\dagger}\)** & **Acc\({}^{\dagger}\)** & **Latency\({}^{\ddagger}\)** & **Speedup\({}^{\dagger}\)** \\ \hline Transformer(AT) & 76.51 & 31.74 & 81.72 & 84.11 & 560ms & \(1.00\times\) \\ \hline C-NAT & **77.41** & 30.61 & **85.01** & **87.14** & **34ms** & \(\mathbf{16.65\times}\) \\ w/o LM & 75.79 & 44.31 & 85.01 & 86.95 & 34ms & \(16.65\times\) \\ w/o NAR & 71.78 & 47.24 & 85.01 & 84.13 & 518ms & \(1.09\times\) \\ w/o LCE & 76.09 & **30.24** & 42.65 & 47.15 & 34ms & \(16.65\times\) \\ \hline \hline \end{tabular} \end{table} Table 3: Automatic evaluation results in SP task with the weakly-supervised learning strategy. The higher\({}^{\dagger}\)(or smaller\({}^{\ddagger}\)), the better. At the bottom, we present the results of the ablation study. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Methods** & **Rationality\({}^{\dagger}\)** & **PPL\({}^{\ddagger}\)** & **Inter-Rep\({}^{\ddagger}\)** & **Acc\({}^{\dagger}\)** \\ \hline C-NAT & **72.69** & 46.32 & **0.49** & **83.12** \\ w/o LM & 70.49 & 57.46 & 0.52 & 83.00 \\ w/o SSup & 4.68 & **37.26** & 0.89 & 82.36 \\ \hline \hline \end{tabular} \end{table} Table 4: Automatic evaluation results in NLI task with the unsupervised learning strategy. The higher\({}^{\dagger}\)(or smaller\({}^{\ddagger}\)), the better. At the bottom, we present the results of ablation study.
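As a closing illustration of the fertility-based copy step described in Section 2.2, here is a minimal sketch (an editor's addition; the tokens and fertility values are invented):

```python
from typing import List

def copy_by_fertility(tokens: List[str], fertilities: List[int]) -> List[str]:
    """Build the non-autoregressive decoder input by repeating each source
    token x_s exactly f_s times, so the target length is T = f_1 + ... + f_S."""
    assert len(tokens) == len(fertilities)
    decoder_input: List[str] = []
    for token, f in zip(tokens, fertilities):
        decoder_input.extend([token] * f)  # f = 0 simply drops the token
    return decoder_input

print(copy_by_fertility(["a", "dog", "runs"], [1, 2, 1]))
# -> ['a', 'dog', 'dog', 'runs'], i.e. T = 4
```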
2308.05437
Determination of Thermal Conductivity of phase pure 10H-SiC Thin Films by non-destructive Raman Thermometry
10 H SiC thin films are potential candidates for devices that can be used in high temperature and high radiation environment. Measurement of thermal conductivity of thin films by a non-invasive method is very useful for such device fabrication. Micro-Raman method serves as an important tool in this aspect and is known as Raman Thermometry. It utilises a steady-state heat transfer model in a semi-infinite half space and provides for an effective technique to measure thermal conductivity of films as a function of film thickness and laser spot size. This method has two limiting conditions i.e. thick film limit and thin film limit. The limiting conditions of this model was explored by simulating the model for different film thicknesses at constant laser spot size. 10H SiC films of three different thicknesses i.e. 104, 135 and 156 nm were chosen to validate the thin film limiting condition. It was found that the ideal thickness at which this method can be utilised for calculating thermal conductivity is 156 nm. Thermal conductivity of 156 nm film is found to be 102.385 $(Wm^{-1}K^{-1})$.
Madhusmita Sahoo, Kalyan Ghosh, Swayamprakash Sahoo, Pratap K. Sahoo, Tom Mathews, Sandip Dhara
2023-08-10T08:50:15Z
http://arxiv.org/abs/2308.05437v1
Determination of Thermal Conductivity of phase pure 10H-SiC Thin Films by non-destructive Raman Thermometry ###### Abstract 10 H SiC thin films are potential candidates for devices that can be used in high temperature and high radiation environment. Measurement of thermal conductivity of thin films by a non-invasive method is very useful for such device fabrication. Micro-Raman method serves as an important tool in this aspect and is known as Raman Thermometry. It utilises a steady-state heat transfer model in a semi-infinite half space and provides for an effective technique to measure thermal conductivity of films as a function of film thickness and laser spot size. This method has two limiting conditions i.e. thick film limit and thin film limit. The limiting conditions of this model was explored by simulating the model for different film thicknesses at constant laser spot size. 10H SiC films of three different thicknesses i.e. 104, 135 and 156 nm were chosen to validate the thin film limiting condition. It was found that the ideal thickness at which this method can be utilised for calculating thermal conductivity is 156 nm. Thermal conductivity of 156 nm film is found to be 102.385 (\(Wm^{-1}K^{-1}\)). ## I Introduction Energy harvesting utilising ionising and non-ionising radiation sources has generated a lot of interest in the scientific community in recent years. Amongst various methods and materials, it is particularly challenging to find a suitable material for utilising ionising radiation. SiC has been considered as a suitable material in this regard due to its ability to perform in high temperature and high radiation conditions where conventional semiconductor devices cannot perform adequately.[1] It is important that the thermal conductivity is measured by a non-destructive method after the whole device is assembled. It becomes all the more important for a radioactive environment, where one would need necessary provisions and methodology to measure thermal conductivity intermittently to know the health of the device. While many studies have been done for the more common polytypes like 3C, 4H, and 6H, higher hexagonal polytypes of SiC remain unexplored.[2; 3; 4] Hence, it is desirable that a phase pure SiC is synthesized so that defect-phonon interaction can be minimised, leading to higher thermal conductivity.[2] In this study, we have focused on determining the thermal conductivity of phase pure 10H SiC. Several methods have been reported so far to determine thermal conductivity of dielectric materials. These methods are the steady state method,[5] the 3\(\omega\) method,[6] the photo acoustic method,[7] the thermal microscopy method[8] and the thermo-reflectance method.[9; 10] These methods are invasive in nature, wherein the original sample is either damaged or extensive sample preparation and data analysis is required. Hence, Perichon et al. demonstrated a non-invasive and non-destructive micro-Raman method to determine thermal conductivity of thick films.[11] However, a necessary condition for this method is that the sample thickness must be one order of magnitude higher than the laser diameter. This implied that only the films having thickness in the order of microns were suitable for the method proposed by Perichon et al.[11] Huang et al. modified the methodology to evaluate thermal conductivity of thin films having thickness in the submicrometer to nanometer range.[12] Prior to this study, only a few researchers have used the method proposed by Huang et al.
for determining thermal conductivity of Si based devices, 2D graphene and biological samples.[13; 14; 15] In this report, we have used the method proposed by Huang et al. for determining thermal conductivity of 10H SiC thin films in the range of 104-156 nm, which is being reported for the first time to the best of our knowledge. ## II Experimental details SiC thin films were deposited on cleaned Si substrates using RF magnetron sputtering. The deposition parameters such as RF power, deposition time, gas flow, and target to substrate distance are as follows: RF power = 90 Watt, reflected power = 0 Watt, deposition time = 30 mins, gas flow (Ar) = 15 sccm and target to substrate distance = 3.5 inch. Thickness of the films was varied by changing only the chamber pressure during deposition. Three different thicknesses of 104 nm, 135 nm and 156 nm were obtained at a chamber pressure of 1*10\({}^{-2}\), 2*10\({}^{-2}\) and 5*10\({}^{-2}\) mbar, respectively. The thickness of the films was measured by cross sectional field emission scanning electron microscopy (FESEM) (SIGMA model, Carl Zeiss). Crystallographic phase of the films was determined from GIXRD by using a Rigaku Smart Lab X-ray diffractometer with monochromatic Cu K-\(\alpha\) radiation (\(\lambda\) = 1.5418 Å). Raman spectra of the samples were acquired using a 532 nm laser as the excitation source (Jobin-Yvon LabRam HR Evolution, Horiba). The spectra were collected using a spectrometer coupled with a CCD based detector in back-scattered geometry with a 50x lens. Temperature dependent Raman spectra were recorded using a high temperature Linkam stage. ## III Crystallographic and Morphological Studies The crystallographic phase of the RF magnetron sputtered films was determined by matching peaks of the X-ray diffraction pattern with peaks from ICDD card no 89-2214. A representational grazing incidence X-ray diffraction pattern of 156 nm thick 10H SiC is shown in figure 1. The peaks at 44.5\({}^{\circ}\) and 28.3\({}^{\circ}\) correspond to the (018) and (008) planes of 10H SiC. Surface morphology of the 156 nm thick 10H SiC film is shown in fig 2, which indicated smooth and continuous film formation. All other films were also found to possess similar morphological features. Cross sectional FESEM is shown in the inset of fig 2. It reveals uniform thickness across the substrate. The thicknesses measured from cross-sectional FESEM are used in the calculation of thermal conductivity of the film (k\({}_{f}\)), which is discussed in detail in Section IV. ## IV Raman Thermometry ### The model It has been a general practice for researchers to combine temperature dependent Raman peak shift within the ambit of thermal scanning probe microscopy and calculate thermal conductivity. However, the heating sources in the two methods are different. Therefore, in this work we have followed the methodology developed by Huang et al. that involves a separate mathematical model to account for both the local heat induced due to a Gaussian laser beam and the thickness of the sample.[12] In this method, a Gaussian laser beam is considered to be incident on a relatively thick substrate. It is assumed that heat transfer is complete and heat loss is minimum. This reduces the problem to a steady state heat transfer problem from a circular region in a semi-infinite space.
The temperature across the film sample then satisfies the Laplacian \[\triangledown^{2}t(r,z)=0 \tag{1}\] Applying the necessary boundary conditions, Huang et al. deduced the formula for the thermal conductivity of the thick film to be \[k=\frac{\left[I_{0}\left(1\right)+I_{1}\left(1\right)\right]\cdot P}{\sqrt{2\pi}er_{0}\triangle T} \tag{2}\] where \(I_{0}\) and \(I_{1}\) are modified Bessel functions of zeroth and first order respectively, which were derived by Dryden.[16] Here, P represents the power of the laser, \(r_{0}\) is the radius of the laser beam and \(\triangle T\) is the induced temperature rise in the sample due to laser irradiation. The presence of the laser beam as a heating source is quintessential to the model proposed. Hence, it becomes necessary to know the exact spot size of the laser beam on the sample. The spot size for a Gaussian laser beam is calculated by equation 3, which is given below. \[r_{o}=\frac{\lambda}{\pi\cdot N.A.} \tag{3}\] where \(\lambda\) represents the wavelength of the laser beam, N.A. is the numerical aperture of the lens, and \(r_{o}\) is the required radius of the laser spot. However, the above basic equation focuses on bulk films with thickness at least one order of magnitude higher than the laser beam diameter.

Figure 1: Grazing incidence X-ray diffraction spectra of 156 nm thick 10H SiC film

Figure 2: Surface morphology of SiC samples using FESEM (Inset) Cross-sectional thickness of a deposited SiC sample.

Extending this to a thin film model, the thermal conductivity of the film in eq. 2 becomes the apparent thermal conductivity (\(k_{app}\)) of the entire sample as given below.
This can be explained by the fact that the \(\delta/r_{o}\) ratio increases with increase in film thickness. As a result, heat flow across the film increases, thereby increasing the significance of the \(k_{f}\) parameter. In very thin films, heat flow occurs beyond the film in the entire sample, making the \(k_{app}\) value significant. Conversely, for bulk films, heat transfer is limited mostly to the film, making the \(k_{f}\) value significant. Simulation of equation 5 provides us a plot to observe this transition as we move from the thin to the bulk region, as presented in figure 3. ### Calculation of thermal conductivity of 10H SiC films Thermal conductivity of the 10H SiC films was calculated by utilising equations 4 and 5 of section IV A. It is known that thermal conductivity is a function of temperature of the film when it is subjected to a heat gradient. In the current study, the shift in Raman peak position is indicative of the thermal gradient of the film. The thermal gradient was achieved by irradiating the film with a 532 nm laser at two different laser powers, i.e., 1.25 mW and 5 mW. Raman spectra of the films were first measured by keeping the laser power at 1.25 mW, wherein laser induced sample heating is presumed to be negligible. Sample heating was then carried out by increasing the temperature from 298 K to 773 K using a Linkam stage and measuring the corresponding Raman spectrum. Raman spectra thus obtained had a Raman peak at 960 cm\({}^{-1}\), which is a characteristic of the hexagonal SiC polytype.[17] The shift in this 960 cm\({}^{-1}\) peak for the 156 nm 10H SiC film with respect to change in temperature is shown in fig 4. It is observed that with increase in temperature, the peak shifted towards lower wavenumber (red shift). The red shift is due to anharmonicity induced in the film with increase in temperature. A similar red shift was also observed for the other two films. Subsequently, a calibration plot was generated by plotting Raman peak shift against temperature and is shown in fig 5. From the linear plot one can observe that with increase in substrate temperature the Raman peak shift reduced. The slope became shallower with higher thickness, indicating a smaller temperature difference between two consecutive points. Subsequently, with higher thickness the value of \(\Delta\)T reduces and the value of \(k_{app}\) increases. A separate set of Raman spectra was recorded at room temperature by keeping the laser power at 5 mW for deducing the apparent thermal conductivity (\(k_{app}\)). The sample heating due to the change in laser power was determined by observing the change in peak position and subsequently utilising the calibration plot to find the corresponding temperature change in the film, which is \(\Delta\)T. This temperature difference is then used in equation 4 of section IV A to calculate the apparent thermal conductivity of the sample (\(k_{app}\)), which includes the thermal conductivity of film and substrate. Since the thickness (\(\delta\)) of each film is known from cross-sectional FESEM images, we calculated the thermal conductivity of the film (\(k_{f}\)) by using equation 5 of section IV A. Two roots were obtained for the quadratic equation in each case. The negative root was ignored on account of physical impossibility and only the positive root for \(k_{f}\) is reported in Table 1. As presented in Table 1, the \(\delta/r_{o}\) values for films of thickness 104, 135 and 156 nm are 0.18, 0.23 and 0.26, respectively. These values lie in the thin film limit as indicated in fig 3.
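The extraction of \(k_{f}\) just described can be sketched numerically as follows (an editor's illustration assuming numpy/scipy; the inputs P, \(\Delta\)T, \(k_{s}\) and the numerical aperture below are placeholders, not the measured values of this work):

```python
import numpy as np
from scipy.special import iv        # modified Bessel functions I_n
from scipy.optimize import brentq

def k_app(P, dT, r0):
    """Apparent thermal conductivity, eq. (4)."""
    return (iv(0, 1) + iv(1, 1)) * P / (np.sqrt(2 * np.pi) * np.e * r0 * dT)

def k_film(k_s, k_a, delta, r0):
    """Positive root k_f of eq. (5), using
    (k_s/k_f) * (1 - (k_f/k_s)**2) = k_s/k_f - k_f/k_s."""
    c = np.sqrt(2 / np.pi) * (np.e - 1 / np.e) / (iv(0, 1) + iv(1, 1)) * delta / r0
    f = lambda kf: 1 + c * (k_s / kf - kf / k_s) - k_s / k_a
    return brentq(f, 1e-3 * k_s, k_s)  # f is monotone in kf, so the root is unique

r0 = 532e-9 / (np.pi * 0.5)            # eq. (3) with lambda = 532 nm, N.A. = 0.5 (assumed)
k_a = k_app(P=5e-3, dT=40.0, r0=r0)    # placeholder laser power and temperature rise
print(k_a, k_film(k_s=150.0, k_a=k_a, delta=156e-9, r0=r0))
```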
It can be observed in figure 3 that as \(\delta/r_{0}\) varies from 0.2 to 0.3, the relationship between \(k_{app}\) and \(k_{f}\) tends to become linear, implying that \(k_{app}\) approaches \(k_{f}\). This is corroborated by our calculation. It can be observed in Table 1 that as the \(\delta/r_{0}\) value increases from 0.18 to 0.26, \(k_{app}\) becomes comparable to \(k_{f}\) for the 156 nm film. This also indicates that within the thin-film limit, the thermal conductivity can vary within the range 11-103 \((Wm^{-1}K^{-1})\). This gives us an idea of the range of thermal conductivity values for 10H SiC films below 200 nm thickness. There have been various reports on the thermal conductivity of SiC. Slack et al. observed that the thermal conductivity of a SiC monocrystal at room temperature is 490 \((Wm^{-1}K^{-1})\).[3] SiC composite materials were found to have a reduced thermal conductivity of 252-270 \((Wm^{-1}K^{-1})\) compared to the monocrystals studied by Slack et al.[18] However, there is no report on the thermal conductivity of phase-pure SiC thin films of any polytype. There is therefore no reference against which to compare the thin-film thermal conductivity (\(k_{f}\)), and we hereby report an effective way to determine the thermal conductivity of phase-pure SiC thin films by a nondestructive method for the first time. As per our observation, the thermal conductivity of 10H SiC is 102.36 \((Wm^{-1}K^{-1})\). ## V Conclusion In this work, we have calculated the thermal conductivity of 10H SiC films of three thicknesses by utilising Raman thermometry. A novel thin-film limit was worked out for films in the nanometre range by extrapolating the model given by Huang et al., and the same was applied to calculate the thermal conductivity of the 10H SiC films. Our observations confirm the thickness dependence of the thermal conductivity, which can vary from 11-102 \((Wm^{-1}K^{-1})\). It was found that 156 nm is the ideal thickness for 10H SiC films for calculating the thermal conductivity by this method, as its \(k_{app}\) is comparable to \(k_{f}\).

Figure 4: SiC Raman band shift with temperature of the 156 nm thick 10H-SiC film, measured while irradiated with 1.25 mW laser power

Figure 5: Raman shift vs. temperature calibration plots of the 104, 135, and 156 nm 10H-SiC films

###### Acknowledgements. Madhusmita Sahoo acknowledges Dr Shaju Albert for his support in carrying out the project. The authors from NISER acknowledge the Department of Atomic Energy (DAE), India for supporting this work through the project RIN-4001. ## Statements and declarations The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2302.08148
Empirical Investigation of Neural Symbolic Reasoning Strategies
Neural reasoning accuracy improves when generating intermediate reasoning steps. However, the source of this improvement is yet unclear. Here, we investigate and factorize the benefit of generating intermediate steps for symbolic reasoning. Specifically, we decompose the reasoning strategy w.r.t. step granularity and chaining strategy. With a purely symbolic numerical reasoning dataset (e.g., A=1, B=3, C=A+3, C?), we found that the choice of reasoning strategies significantly affects the performance, with the gap becoming even larger as the extrapolation length becomes longer. Surprisingly, we also found that certain configurations lead to nearly perfect performance, even in the case of length extrapolation. Our results indicate the importance of further exploring effective strategies for neural reasoning models.
Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
2023-02-16T08:49:47Z
http://arxiv.org/abs/2302.08148v1
# Empirical Investigation of Neural Symbolic Reasoning Strategies ###### Abstract Neural reasoning accuracy improves when generating intermediate reasoning steps. However, the source of this improvement is yet unclear. Here, we investigate and factorize the benefit of generating intermediate steps for symbolic reasoning. Specifically, we decompose the reasoning strategy w.r.t. step granularity and chaining strategy. With a purely symbolic numerical reasoning dataset (e.g., A=1, B=3, C=A+3, C?), we found that the choice of reasoning strategies significantly affects the performance, with the gap becoming even larger as the extrapolation length becomes longer. Surprisingly, we also found that certain configurations lead to nearly perfect performance, even in the case of length extrapolation. Our results indicate the importance of exploring effective strategies for neural reasoning models. 1 Footnote 1: Code available at: [https://github.com/ao1neko/reasoning-strategy](https://github.com/ao1neko/reasoning-strategy) ## 1 Introduction Artificial intelligence researchers have been attempting neural-symbolic integration for a long time (d'Avila Garcez and Lamb, 2020; Hamilton et al., 2022). Neural models tend to perform better when generating intermediate reasoning steps in addition to the answer. This phenomenon was seen across various reasoning tasks, such as math word problems (Wei et al., 2022; Cobbe et al., 2021; Kojima et al., 2022; Recchia, 2021; Lewkowycz et al., 2022), commonsense reasoning (Wei et al., 2022; Wang et al., 2022), and symbolic reasoning (Wei et al., 2022; Kojima et al., 2022). However, it is yet unclear which factors in the intermediate step generation bring the benefit. Previous studies often used different strategies for step generation in an ad-hoc manner. To investigate this, we break down the neural reasoning process into two strategies: _output strategy_ and _chaining strategy_ (Figure 1). The output strategy (§2.1) determines the granularity of intermediate reasoning step generation (all at once vs. step-by-step vs. token-by-token). Some studies trained the models to generate reasoning steps and a conclusion derived from them at once (Nye et al., 2021; Lewkowycz et al., 2022; Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Recchia, 2021), some generated a single reasoning step given the input and iterated this process until achieving a conclusion (Sanyal et al., 2022; Picco et al., 2021; Tafjord et al., 2021), and others iteratively generated sub-goals as well as reasoning steps (Liang et al., 2021; Shwartz et al., 2020). In turn, the chaining strategy (§2.2) defines the reasoning path direction (shortest path vs. exhaustive path vs. backward path). For example, some studies used a backward chaining process (Picco et al., 2021; Rocktaschel and Riedel, 2017; Cingillioglu and Russo, 2019), while others adopted exhaustive searches (Tafjord et al., 2021; Liang et al., 2021; Yang et al., 2022). To compare the strategies, we prepared a test bed of numerical reasoning problems in a simplified language (Figure 1). This format allows for more controlled testing while serving as a necessary condition--should a model fail to solve it, it cannot be expected to adequately generalize to more complex math word problems.

Figure 1: In a controlled setting, we found that output and chaining strategy choice significantly impact performance when conducting multi-step reasoning.

We found that both strategies substantially affect the symbolic reasoning performance of neural
seq2seq learners. Overall, iterative generation outperformed all-at-once outputting, and roughly granular reasoning steps (i.e., shortest-path chaining) lagged behind finely granular steps (i.e., exhaustive and backward chaining). Surprisingly, some settings had near-perfect performance even in generalization tests which extrapolate over greater reasoning depths and unseen numbers during training. ## 2 Experimental settings Problem definition. We evaluated the models' ability to iteratively perform arithmetic operations over given symbols. Given a series of equations, the task is to answer the value of a target variable (Figure 1). Each question also has a certain reasoning depth--the number of _necessary_ equations to reach the answer. For example, the depth of the question A=1, B=2+A, C=3+B, D=2, C? is 3 (A=1, B=2+A, C=3+B). Each equation defines either an assignment (e.g., A=1) or a modular addition and an assignment (e.g., B=3+1). The addition is mod 100. The question contexts also contain distractors that are not necessary to calculate the answer (e.g., D=A+2 in Figure 1). A value assigned to a particular variable is typically referred to in different equations (e.g., A=1, B=A+1). Numbers, variables, and the ordering of equations are randomly assigned. Motivation for using artificial data. There are mainly three advantages to this dataset. First, the symbolic format allows easier control of reasoning depth for generalization tests. Specifically, we trained a model using instances with shallow (1-5) depths and evaluated them with instances with shallow/deep (1-12) depths. On the other hand, math word problems are harder to control for reasoning depth (e.g., it is not easy to come up with various instances which have a reasoning depth of 10). Second, we wanted to avoid the "spurious bias" that natural (math word) texts implicitly bring into the model (Gururangan et al., 2018; Gupta et al., 2021; Al-Negheimish et al., 2021; Sugawara et al., 2018; Jia and Liang, 2017; McCoy et al., 2019). Third, we assume that our setting is a necessary condition for solving math word problems. It is unreasonable to expect that a model that cannot solve this pure numerical reasoning task can solve more complex tasks. In total, we prepared 5K instances for training and 2.4K for testing. ### Output strategies We compared three configurations: all-at-once, step-by-step, and token-by-token (Figure 2(a)). **All-at-once:** The model outputs the entire reasoning chain and the final answer in a single call (i.e., _chain-of-thought_ style) (Wei et al., 2022; Cobbe et al., 2021; Yavuz et al., 2022; Shwartz et al., 2020). In this setting, the more reasoning steps, the longer the sequence the decoder must generate at once.

Figure 2: Overview of (a) output and (b) chaining strategies given the INPUT: D=A+2, A=1, B=A+1, C=3+B, C?

**Step-by-step:** The model outputs a single reasoning step per call. Each generated step is concatenated to the past input, and the model again generates the next step (i.e., _proofwriter_ style) (Liang et al., 2021; Sanyal et al., 2022; Picco et al., 2021; Tafjord et al., 2021; Shwartz et al., 2020). This process is iterated until the model outputs the answer or until a set maximum number of iterations is reached (\(100\)). **Token-by-token:** This is the same as step-by-step chaining, but the decoder outputs only a single _token_ per call. We set the maximum number of steps to \(500\).
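The task format above can be made concrete with a small generator. The sketch below follows the stated construction (random variable names, mod-100 addition, shuffled equations, distractors); since the authors' exact generator lives in their released code, details such as the distractor form are assumptions.

```python
import random
import string

def make_instance(depth, n_distractors=1):
    """Generate one symbolic question with a `depth`-step dependency chain."""
    names = random.sample(string.ascii_uppercase, depth + n_distractors)
    chain, value = [], random.randint(0, 99)
    chain.append(f"{names[0]}={value}")                      # initial assignment
    for i in range(1, depth):                                # modular additions
        add = random.randint(0, 99)
        value = (value + add) % 100
        chain.append(f"{names[i]}={add}+{names[i-1]}")
    for j in range(n_distractors):                           # unused equations
        chain.append(f"{names[depth + j]}={random.randint(0, 99)}+{names[0]}")
    random.shuffle(chain)
    return ", ".join(chain) + f", {names[depth-1]}?", value

question, answer = make_instance(depth=3)
print(question, "->", answer)   # e.g. "C=3+B, A=1, D=7+A, B=2+A, C?" -> 6
```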
Comparing _all-at-once_ and the others reveals the effect of changing the sequence length that the decoder outputs in a single call. In addition, comparing _step-by-step_ and _token-by-token_ quantifies the advantage of breaking a problem into meaningful units. ### Chaining strategies Particular variables sometimes depend on another variable; the key to reaching the correct answer is determining the order in which the equations are referred to. Following existing studies, we compared three chaining strategies: _shortest-path_, _exhaustive_, and _backward_ chaining (Figure 2(b)). **Shortest-path chaining:** The model straightforwardly solves the equations starting from the first solvable one (i.e., involving a known value) and ending with the target (Wei et al., 2022; Cobbe et al., 2021; Yavuz et al., 2022; Shwartz et al., 2020). Here, the reasoning behind determining the shortest path is not output by the model. **Exhaustive chaining:** The model greedily solves all given equations until the target value is reached (Tafjord et al., 2021; Liang et al., 2021; Yang et al., 2022). Specifically, the model calculates the left-most solvable equation in each step. Note that this strategy typically derives a long reasoning chain; from an engineering perspective, it is inefficient. **Backward chaining:** The model starts from the equation for the target variable and backtracks over the dependent equations until it reaches a known value (Picco et al., 2021; Rocktaschel and Riedel, 2017; Cingillioglu and Russo, 2019). Then, it solves each equation in order by inserting known or calculated values until the target one is reached. **No chaining:** As a baseline, we also examined the setting where the model was trained to directly output the answer. (An illustrative code sketch of the exhaustive and backward strategies is given below.) ## 3 Results **Models:** We used the pre-trained T5-base, T5-large 2(Raffel et al., 2020), and BART-base 3(Lewis et al., 2020). Results of BART-base are in Appendix C. Footnote 2: [https://huggingface.co/docs/transformers/model_doc/T5](https://huggingface.co/docs/transformers/model_doc/T5) Footnote 3: [https://huggingface.co/docs/transformers/model_doc/bart](https://huggingface.co/docs/transformers/model_doc/bart) Note that their pre-defined tokenizers have all the numbers from 0 to 9, and the numerical values in our dataset are divided into digits (e.g., "12" should be "@@1 @2") in advance, following Kim et al. (2021). **Training:** The models were first pre-trained using a 10K _simple_ dataset for 30 epochs, then trained with the 5K training set (1K training instances for each reasoning depth) for 2000 epochs. The experiment setting details are in Appendix A. In addition, we prepared 0.2K test instances for each reasoning depth. This pre-training is intended to teach the models primitive operations (i.e., assignment, reference, and addition). The pre-training dataset contains two types of single-depth instances: _assign-refer_ type (e.g., A=1,A?) and _operate-assign-refer_ type (e.g., A=1+3, A?). All the results in the paper are averages over three different seeds. ### Output strategies We compared the output strategies while fixing the chaining strategy to the shortest path. Figure 4(a) shows the accuracy per reasoning depth. Note that the accuracy score here denotes whether the answer (e.g., C=6) is correct. We observed the following: (i) **generating intermediate reasoning steps enhances the performance**, and (ii) among the output strategies, **step-by-step works the best**, and **all-at-once works the worst**.
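As referenced above, the following minimal sketch re-implements the exhaustive and backward chaining procedures on parsed instances. It is an illustrative reconstruction from the descriptions in §2.2, not the authors' released code; the parsing helper is an assumption.

```python
def parse(question):
    eqs = [e.strip() for e in question.split(",")]
    target = eqs.pop()[:-1].strip()            # drop the trailing '?'
    return [tuple(e.split("=")) for e in eqs], target

def exhaustive(eqs, target):
    """Greedily solve the left-most solvable equation until the target is known."""
    env, steps = {}, []
    while target not in env:
        for var, rhs in eqs:
            if var in env:
                continue
            terms = rhs.split("+")
            if all(t.isdigit() or t in env for t in terms):
                env[var] = sum(int(t) if t.isdigit() else env[t] for t in terms) % 100
                steps.append(f"{var}={env[var]}")
                break
    return steps

def backward(eqs, target):
    """Backtrack from the target over dependencies, then solve in order."""
    table = dict(eqs)
    order, var = [], target
    while True:
        order.append(var)
        deps = [t for t in table[var].split("+") if not t.isdigit()]
        if not deps:                           # reached a known value
            break
        var = deps[0]                          # one dependency per equation
    env, steps = {}, []
    for v in reversed(order):
        terms = table[v].split("+")
        env[v] = sum(int(t) if t.isdigit() else env[t] for t in terms) % 100
        steps.append(f"{v}={env[v]}")
    return steps

eqs, target = parse("D=A+2, A=1, B=A+1, C=3+B, C?")
print(exhaustive(eqs, target))   # ['A=1', 'D=3', 'B=2', 'C=5']
print(backward(eqs, target))     # ['A=1', 'B=2', 'C=5']
```

Note how the exhaustive chain also resolves the distractor D, whereas backward chaining touches only the equations on the dependency path.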
Figure 3: Distributions of the total reasoning chain length (num. characters) generated by the all-at-once and step-by-step strategies at depth 12.

The format of the dataset in this study is simple. Therefore, this result indicates the low symbolic reasoning ability of neural models and the necessity of choosing an appropriate reasoning strategy. We hypothesized that the source of all-at-once's inferiority was that the decoder overfitted to output a similar length of reasoning steps as those in the (shallower) training data. In fact, the models generated relatively shorter reasoning steps in the out-of-domain (e.g., depth of 12) setting when using the all-at-once strategy (Figure 3); this supports our hypothesis. The advantage of step-by-step over token-by-token suggests the advantage of breaking the problem into meaningful units (reasoning steps) and modeling each step in a single call of the encoder-decoder. ### Chaining strategies Figure 4(b) and Table 1 show the results at each depth with a fixed step-by-step output strategy. Note that the accuracy of the chain (left side of the scores) was measured not by exact match but mathematically; for example, a chain is considered correct even if the order of the generated equations differs. The results with a fixed token-by-token output strategy are in Appendix B. While the performance dropped in the shortest-path setting as the reasoning depth increased, with either the exhaustive or backward chaining, models successfully solved the task even when extrapolating to depths 6-12. The models correctly generated the intermediate steps (nearly perfectly) as well as the final answer in the exhaustive and backward chaining settings (Table 1). Note that these strategies were ineffective with all-at-once outputting. Gontier et al. (2020) compared chaining strategies and concluded that models that _didn't_ generate reasoning steps had better generalization performance than models that did when the reasoning chains were long. However, our results suggest that the choice of an appropriate output strategy improves the reasoning ability of the model. We considered that the source of shortest-path's inferiority was the rough granularity of the given reasoning steps. The models don't know the shortest path before outputting the reasoning steps. Therefore, both the exhaustive and shortest-path chaining approaches must search for variables other than those on the shortest path. As shown in Figure 2(b), the exhaustive chaining approach is taught this process explicitly. On the other hand, the shortest-path chaining approach must learn it implicitly from training data that don't include this process. \begin{table} \begin{tabular}{c c c c} \hline \hline Depth & Shortest & Backward & Exhaustive \\ \hline 6 & 99.3/99.3 & 100/ 100 & 99.7/99.7 \\ 8 & 95.5/95.7 & 100/ 100 & 99.8/99.8 \\ 12 & 76.7/77.7 & 99.5/99.5 & 98.2/98.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of the T5-base model with the step-by-step output strategy at each depth (chain/answer). \begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Error types}} & Gold & Prediction \\ \hline \multirow{2}{*}{Copying error} & B=2+A, & B=6+A, \\ & B=2+1, & B=3 & B=6+1, \\ \hline \multirow{2}{*}{Hasty assignment} & B=2+A, & (skip step) \\ & B=2+1, & B=2+2, \\ \multirow{2}{*}{B=3} & B=4 & B=4 \\ \hline \hline \end{tabular} \end{table} Table 2: Illustrative examples of the errors under the step-by-step, shortest-path chaining settings.
(skip step) denotes that a reasoning step was accidentally skipped.

Figure 4: Accuracy changes of the models against reasoning depth. The gray range represents the training data domain (1-5 depth). Figure 4(a) shows the performance degradation with the increase of reasoning steps when using the all-at-once strategy. Figure 4(b) shows that the combination of step-by-step output and backward/exhaustive chaining leads to successful generalization.

We thought this difference affected the accuracy and concluded that **the accuracy is higher when the granularity of the given intermediate steps is finer**, even though the chains are longer. ### Error analysis We also analyzed the errors of the depth-12 instances under the shortest-path strategy. 4 We observed two types of errors: (i) copying errors and (ii) hasty assignments. Table 2 shows an illustrative example of each error type and the percentage of these errors. The most frequent one (53%) was a simple copying error, where the model failed to accurately copy an original equation into the reasoning chain. This erroneous copying behavior is consistent with Xu et al. (2020) and supports the advantage of introducing a copy mechanism to the model (Ontanon et al., 2022). Second, in a hasty assignment, the model skips the step of copying the equation from the context and instead assigns a random value. Note that these errors were almost entirely absent in the other strategies; this could stem from the difficulty of implicitly calculating the shortest path. Footnote 4: In total, 32 instances were analyzed. That is the total number of incorrect answers on one seed. ### Models' scalability To investigate scalability, we compared T5-large with T5-base. Figure 5 shows the result. T5-large had a similar trend but slightly lower accuracy with all-at-once and step-by-step compared to T5-base. The reason may be that T5-large needs more data for updating the weights of the entire model. On the other hand, the accuracy of T5-large is higher than that of T5-base with token-by-token. This is because the effective training data size for token-by-token is that of all-at-once multiplied by the token length of the output sequence, as shown in Figure 2(a). This result indicates that the parameter size of the model needs to be larger to output token-by-token. ## 4 Conclusions We investigated and factorized the reasoning strategy in symbolic numerical reasoning with neural seq2seq models. We found that the combination of step-by-step output and finely granular reasoning leads to successfully performing symbolic reasoning. Our results support the potential of neural models for symbolic reasoning. ### Limitations We found that even simple symbolic reasoning requires the appropriate selection of a reasoning strategy. It is unclear whether our findings generalize to more complex symbolic reasoning and/or problems written in natural language. If our findings do not generalize in these different settings, we must address the gap in future work. For example, one could start with one of the simplest tasks and identify where models fail as complexity is added one step at a time. From the engineering perspective, the iterative strategies are limited by the input length of the model.
For example, in our experiments, when reasoning depths exceeded 13, the input length of step-by-step and token-by-token became longer than the input length limit of T5 (i.e., 512 tokens). In addition, gigantic language models (e.g., GPT-3) have recently been used; including these models in our study is left for future work. ## Acknowledgements We thank the four anonymous reviewers who provided valuable feedback. We would also like to thank the members of the Tohoku NLP Group for their cooperation in conducting this research. This work was supported by JSPS KAKENHI Grant Numbers JP22H00524, 21K21343 and JST CREST Grant Number JPMJCR20D2, Japan.

Figure 5: Accuracy changes of T5-base and T5-large against reasoning depth. The gray range represents the training data domain (1-5 depth). This figure shows that the accuracy of T5-large with token-by-token is higher.
2310.11608
Classification of Safety Driver Attention During Autonomous Vehicle Operation
Despite the continual advances in Advanced Driver Assistance Systems (ADAS) and the development of high-level autonomous vehicles (AV), there is a general consensus that for the short to medium term, there is a requirement for a human supervisor to handle the edge cases that inevitably arise. Given this requirement, it is essential that the state of the vehicle operator is monitored to ensure they are contributing to the vehicle's safe operation. This paper introduces a dual-source approach integrating data from an infrared camera facing the vehicle operator and vehicle perception systems to produce a metric for driver alertness in order to promote and ensure safe operator behaviour. The infrared camera detects the driver's head, enabling the calculation of head orientation, which is relevant as the head typically moves according to the individual's focus of attention. By incorporating environmental data from the perception system, it becomes possible to determine whether the vehicle operator observes objects in the surroundings. Experiments were conducted using data collected in Sydney, Australia, simulating AV operations in an urban environment. Our results demonstrate that the proposed system effectively determines a metric for the attention levels of the vehicle operator, enabling interventions such as warnings or reducing autonomous functionality as appropriate. This comprehensive solution shows promise in contributing to ADAS and AVs' overall safety and efficiency in a real-world setting.
Santiago Gerling Konrad, Julie Stephany Berrio, Mao Shan, Favio Masson, Stewart Worrall
2023-10-17T22:04:42Z
http://arxiv.org/abs/2310.11608v1
# Classification of Safety Driver Attention During Autonomous Vehicle Operation ###### Abstract Despite the continual advances in Advanced Driver Assistance Systems (ADAS) and the development of high-level autonomous vehicles (AV), there is a general consensus that for the short to medium term, there is a requirement for a human supervisor to handle the edge cases that inevitably arise. Given this requirement, it is essential that the state of the vehicle operator is monitored to ensure they are contributing to the safe operation of the vehicle. This paper introduces a dual-source approach integrating data from an infrared camera facing the vehicle operator and vehicle perception systems to produce a metric for driver alertness in order to promote and ensure safe operator behavior. The infrared camera detects the driver's head, enabling the calculation of head orientation which is relevant as the head typically moves according to the individual's focus of attention. By incorporating environmental data from the perception system, it becomes possible to determine whether the vehicle operator is observing objects in the surroundings. Experiments were conducted using data collected in Sydney, Australia, simulating AV operations in an urban environment. Our results demonstrate that the proposed system effectively determines a metric for the attention levels of the vehicle operator, enabling interventions such as warnings or reducing autonomous functionality as appropriate. This comprehensive solution shows promise in contributing to ADAS and AVs' overall safety and efficiency in a real-world setting. ## I Introduction As the prevalence of Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles (AVs) increases, monitoring the vehicle operator's state becomes essential to ensure safety. AVs of level 3 or below require human intervention when necessary, and the safety driver must remain alert to monitor the vehicle's surroundings. In a driving environment, the surrounding elements, including pedestrians, vehicles, and infrastructure, are perceived by the vehicle sensors, and play a crucial role in driving planning and navigation. The safety driver monitoring the driving process must be trained to pay attention to these agents to ensure overall safety. The attention level is a function of the topology of the road, the type and number of traffic participants, and the dynamic and unpredictable nature of the environment, which makes each situation unique. Maintaining constant attention to the environment is a great challenge for the safety driver, and sometimes may not even be possible due to the number of simultaneous events that need to be monitored. Therefore, in similar traffic situations, driver attention levels may vary. Tracking the driver state can help ensure that the driver is aware of their environment and capable of taking control of the vehicle when required. Driver alertness is critical in maintaining the safety and efficiency of AVs. While traditional Driver Monitoring Systems (DMS) focus primarily on driver information, an advanced system incorporating vehicle and surrounding data can provide a more comprehensive understanding of driver behaviour. This paper addresses the challenge of identifying inattentive human drivers in the context of ADAS and AVs by introducing a dual-source approach that combines both driver and vehicle data. We consider the objects detected by the vehicle's perception system and the observations made by the driver. 
This approach provides a metric to estimate driver alertness, enabling future warnings and interventions that would promote safe driving behaviours, contributing to these emerging technologies' overall safety and efficiency in a single, comprehensive solution. To validate our approach, we conducted data collection during real driving scenarios. We utilized an IR camera to capture images of the safety driver, allowing for the calculation of head orientation. Additionally, we logged information about the surrounding environment captured by the vehicle sensors, including objects such as pedestrians and vehicles. The fusion of data from both sources enabled the classification of driver attention into two levels: regular and low. Based on our experiments, we have observed that the driver's attention gradually decreases over time as they become more accustomed and confident with the technology. This phenomenon is often referred to as "automation complacency" or "attentional disengagement". Research in the field has shown that as individuals gain trust in automated systems, they may become less vigilant and attentive to their surroundings. These findings indicate that our approach is capable of measuring the driver's level of attention towards their surrounding environment. ## II Related Work A comprehensive safety case for ADAS and AVs should incorporate a trained human supervisor capable of responding promptly to any autonomy failure [1, 2, 3, 4]. Additionally, the safety case must include an autonomy failure profile compatible with adequate human supervision [5], and the human supervisor must be able to manage any autonomy failures. Drivers are typically unable to predict when they may be required to assume control of an AV. A lack of alertness could impede their ability to respond effectively in emergencies. To enhance safety in AVs, several studies have been conducted to examine the underlying factors that contribute to poor driver alertness. In [3, 6, 7], the authors investigate some primary factors contributing to driver negligence. The main factors are passive fatigue, distraction, over-reliance on automation, and prolonged driving time. A Human Supervisor Monitoring System (HSMS) can improve road safety by constantly monitoring the presence and state of the human supervisor, providing real-time alerts to intervene if necessary, and ensuring that supervisors are always prepared to take control of the vehicle when needed. In recent studies [8, 9], three main approaches to detecting driver's state have been identified and categorized as physiological, behavioural, and vehicle-based. The physiological approach involves using sensors attached to the human body to collect signals such as ECG, PPG, EEG, EOG, skin temperature, GSR, and EMG. While effective, this method can be expensive and intrusive, and signals can vary between individuals. The behavioural approach involves monitoring a driver's behaviour for signs of fatigue or distraction, such as head pose, blink frequency, PERCLOS, and gaze region. While non-intrusive, this approach faces challenges such as facial occlusion and fast head movements. The vehicle-based approach is a reliable and non-intrusive method that utilizes sensors in various vehicle components to collect data on metrics such as steering wheel movement, acceleration, braking, geo-position, and object detection. 
This approach can be limited in detecting certain types of driver states or behaviours, such as fatigue or distraction that may not manifest as changes in vehicle movement. In [10], a combination of the behaviour and vehicle-based approaches was utilized. However, determining the overall rate of observed failures involves the product of the autonomy failure rate and the rate of unsuccessful failure mitigation by the supervisor, as noted in [5]. One of the challenges is that human ability varies in a non-linear manner with autonomy failure rates, making it more challenging for a supervisor to ensure safety as autonomy maturity improves. Therefore, safety cases for road testing must consider both the anticipated failures during testing and the practical efficacy of human supervisors, given the failure profile. Moreover, research studies have shown that as drivers gain trust and confidence in the capabilities of autonomous vehicles, their attention to the driving task may diminish over time [11]. Factors contributing to automation complacency include the perception of increased system reliability, a sense of reduced workload, and a perception that the system can handle challenging driving situations effectively [12]. This paper introduces a non-invasive driver monitoring system that combines sensor information from the driver and vehicle to detect inattention patterns and generate a metric of driver attention that can be used for warnings and possible interventions. The system primarily focuses on objects in the driver's immediate vicinity, which are critical to safe driving. Through experiments using real-world data, we have verified that our proposed metric aligns with well-researched studies on automation complacency. The system does not account for fatigue or drowsiness and aims to address only a lack of attention to the driving environment. ## III Methodology The proposed system assesses driver alertness using a combination of driver, vehicle, and environmental information. In driving scenarios, the vehicle environment constantly changes, and the vehicle must handle complex situations such as approaching an intersection, navigating traffic lights, or interacting with pedestrians and other vehicles. The human supervisor must monitor these situations; head movement variations do not necessarily indicate inattentiveness. ### _Dataset Collection_ This study gathered data using our electric vehicle platforms, as described in [13]. The vehicles were equipped with advanced autonomous vehicle sensors, including multiple cameras perceiving the surroundings, an interior IR camera focused on the safety driver, lidar, GPS, and odometry. These sensors were synchronized and calibrated to ensure accurate data collection. The participants in the study were simulating the role of safety drivers for an AV, providing a realistic context for data collection. The specific trajectory followed by the vehicle is illustrated in Fig 1. A total of 25 laps were completed as the seven participants took turns acting as the supervisor of the vehicle along the designated path during multiple laps. The starting and endpoints of each lap are indicated by the red square depicted in Fig 1. ### _Head Pose Estimation_ The safety driver is monitored by an IR camera installed inside the vehicle. Using an IR camera proves advantageous in capturing clear images, particularly in scenarios where partial face illumination might hinder accurate detection. 
Moreover, IR cameras facilitate the identification of eye points even when individuals wear sunglasses, expanding their applicability to a broader range of cases. This study employed a pre-trained YOLOv7-tiny model [14] to detect the safety driver's head using 128x128 px IR images.

Fig. 1: Path driven in UTM coordinates.

Geometric models are a widely used method for image-based head pose estimation. These models utilize a static template and facial landmarks to determine the corresponding head pose through an analytical process. The main challenge of these models lies in accurately detecting the facial landmarks. Face landmarks were estimated using MediaPipe [15, 16], a machine learning-based method that infers the 3D facial surface from a single camera input in real-time without the need for a dedicated depth sensor. The method produces ten 3D face landmarks, which help determine the head pose. Fig. 2 depicts the detected face landmarks/points. The head pose is estimated using the Perspective-n-Point (PnP) algorithm with Infinitesimal Plane-based Pose Estimation (IPPE) [17]. The algorithm requires that the object points be co-planar for successful implementation. For this reason, we utilized four landmarks corresponding to the outer eye canthi and jaw angles (points A, B, C, and D), as indicated in Fig. 2. The pitch, roll, and yaw angles can define the head orientation in 3D space; however, in our approach, only the yaw angle is used. ### _Vehicle Perception_ The vehicle perception system is based on sensor fusion between camera and lidar information. By utilizing the extrinsic calibration between the cameras and lidar, as well as the intrinsic parameters of the cameras, the vehicle can accurately detect surrounding objects. A YOLOv4-tiny [18] detector identifies pedestrians and vehicles within image frames from the three front-facing cameras. Each detection is then translated into the point cloud domain using rigid transformation matrices between sensors. Subsequently, the corresponding point cloud is clustered to extract the centroid. To track and smooth the paths of multiple objects in a 2D top-down view, a Gaussian Mixture Probability Hypothesis Density (GMPHD) filter [19] is employed. ### _Information Overlay_ The data underwent post-processing to analyse the correlation between the detected safety driver's face direction and the output of the vehicle's perception system. This analysis allowed for a deeper understanding of the driver's behavior and the vehicle's surrounding environment during the drive. For each timestamp, we convert the head orientation (head yaw) and perception results to the local coordinate system of the ego vehicle. This transformation is illustrated in Figure 3, where vehicles and pedestrians detected by the perception system are represented by the letters "V" and "P", respectively. The arrows displayed along the vehicle's pose indicate the orientation of the safety driver's head. This representation enabled clear visualization of the detected objects in relation to the ego vehicle's position and orientation over time. ### _Filtering_ At each timestamp, the objects surrounding the vehicle were filtered to ensure that the safety driver's field of view spanned \(90^{\circ}\), with \(45^{\circ}\) coverage on each side. The object detection range was established at 15 m in front of the vehicle and 10 m on either side.
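As a concrete illustration of the head-yaw recovery step described in Section III-B, the sketch below uses OpenCV's IPPE solver on four coplanar landmarks. The 3D model coordinates and camera intrinsics are placeholder assumptions for illustration, and the Euler-angle convention is one of several possible choices, not the authors' exact implementation.

```python
import cv2
import numpy as np

# Placeholder 3D model points (metres) for the two outer canthi and two jaw
# angles (A, B, C, D); coplanar (z = 0), as required by the IPPE solver.
MODEL_POINTS = np.array([[-0.045,  0.00, 0.0],   # A: left outer canthus
                         [ 0.045,  0.00, 0.0],   # B: right outer canthus
                         [-0.040, -0.09, 0.0],   # C: left jaw angle
                         [ 0.040, -0.09, 0.0]],  # D: right jaw angle
                        dtype=np.float64)

def head_yaw_deg(image_points, camera_matrix, dist_coeffs):
    """Estimate head yaw (degrees) from 4 detected 2D landmarks."""
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix,
                                  dist_coeffs, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Rotation about the camera's vertical axis (ZYX Euler convention assumed)
    return np.degrees(np.arctan2(-R[2, 0], np.sqrt(R[2, 1]**2 + R[2, 2]**2)))

# Synthetic check: rotate the model 20 deg about the vertical axis, project it
# with an assumed camera, and recover the yaw from the projected points.
K = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
pts, _ = cv2.projectPoints(MODEL_POINTS, np.array([0.0, np.radians(20.0), 0.0]),
                           np.array([0.0, 0.0, 0.6]), K, None)
print(head_yaw_deg(pts.reshape(-1, 2), K, None))   # ~ 20.0
```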
This detection-range configuration allowed for an optimal balance between visibility and focus for the safety driver. The head orientation estimation algorithm has limitations, such as losing head detection due to extreme head rotations or lighting conditions. It can also be affected by occlusion or partial head visibility. These factors can cause the algorithm to fail to detect the head or to provide an inaccurate estimate of its orientation. A filtering process is applied to the collected data to address these challenges. It identifies and eliminates noise and outliers, which are data points significantly different from most of the data. By removing these outliers, the filtered data provides a more accurate representation of the safety driver's head orientation. Fig. 4 demonstrates the effectiveness of the filtering process by comparing the unfiltered data with the filtered data. The figure clearly shows how the filtering process improves the accuracy of the head orientation estimation by removing outliers and reducing the impact of noise in the data.

Fig. 2: Landmarks used for head pose estimation.

Fig. 3: Head orientation and vehicle perception output in relation to the vehicle's position. "V" and "P" denote the positions of vehicles and pedestrians. The temporal scale is represented by a color gradient, with blue indicating the earliest data, transitioning through green and finally to yellow for the most recent data.

To mitigate the algorithm's limitations stemming from the absence of eye movement tracking, we defined two gaze regions for the driver. Focus Vision (FV) (\(10^{\circ}\) span) is the area where the driver focuses their vision to capture information from what they are observing. Peripheral Vision (PV) (\(5^{\circ}\) span) is the region in which the driver can unconsciously perceive environmental changes. At a cognitive level, the information obtained in this region can alert the driver and change their focus of attention. These areas are depicted in Figure 5. These values are taken from [20], where the authors explore the registration of drivers' eye movements and vehicle driving parameters while navigating left- and right-hand curves. The study findings indicated that drivers' gaze direction varies around the reference axis, with fixation points in the region surrounding the horizontal gaze. A driver must turn their head when the eye-yaw angle exceeds \(15^{\circ}\) to perceive target size and position. ### _Data Split_ Throughout the entire trajectory, our study specifically targeted a few locations for analyzing the attention of the safety driver. These locations were carefully selected to encompass scenarios where the AV needs to execute intricate maneuvers. The main location evaluated corresponds to the intersection between Smithers St and Myrtle St, a T-junction, as depicted in Fig. 6. This intersection was of special interest because the vehicle drives along a street with some traffic and then turns onto a perpendicular street with heavy vehicle and pedestrian traffic. In this way, the driver is forced to observe pedestrians and vehicles on both sides. By examining these challenging locations, we aimed to gain valuable insights into the safety driver's attention during turning maneuvers. ### _Attention Classification_ The objective of this process is to evaluate the level of attention exhibited by the safety driver towards the surrounding objects.
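To illustrate how these gaze regions could be applied, the sketch below classifies a detected object's bearing relative to the (filtered) head yaw. Interpreting the \(10^{\circ}\) FV and \(5^{\circ}\) PV spans as total widths centred on the gaze direction is our assumption; the paper does not spell out the exact geometry.

```python
import numpy as np

FV_HALF = 5.0             # half of the 10 deg Focus Vision span (assumed centred)
PV_HALF = FV_HALF + 2.5   # PV assumed to add a 5 deg band (2.5 deg each side)

def angle_diff(a, b):
    """Smallest signed difference between two bearings, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def gaze_region(head_yaw, obj_x, obj_y):
    """Classify an object (ego frame: x forward, y left) as 'FV', 'PV' or None."""
    bearing = np.degrees(np.arctan2(obj_y, obj_x))
    off = abs(angle_diff(bearing, head_yaw))
    if off <= FV_HALF:
        return "FV"
    if off <= PV_HALF:
        return "PV"
    return None

print(gaze_region(head_yaw=20.0, obj_x=10.0, obj_y=4.0))  # bearing ~21.8 deg -> 'FV'
```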
To evaluate this attention, we compute the intersection between the head orientation of the driver and the objects detected by the vehicle. Specifically, we use the driver's head yaw angle in conjunction with the angles of the objects calculated from their positions. By intersecting these angles, we can determine whether the driver directed their attention towards the location of each object. This resulting intersection is subsequently compared to the FV and PV regions, enabling the derivation of a metric for classifying the driver's attention. Objects within the FV region are considered the most relevant for analysis, while objects in the PV region are also taken into account with a 50% weight compared to the former. This weight was determined empirically. This is supported by [21], where the authors assess visual attention within a limited portion of the visual field, specifically examining drivers' ability to attend to objects in the centre while being aware of those in the periphery. They suggest that, with the rise of automated vehicles, understanding the capabilities of the peripheral visual field becomes crucial for maintaining situational awareness. Thus, peripheral information may play a significant role in a driver's overall awareness and should be considered in the design of interfaces aiming to enhance this awareness. The chosen driving area generally has a higher number of vehicles compared to pedestrians. However, since there is no vehicle mapping information available, it is difficult to determine the relevance of specific vehicles or pedestrians to the driver. Therefore, it is necessary to consider both stationary and moving vehicles, as well as pedestrians on sidewalks or crossing the street.

Fig. 4: Head orientation filtering. The yaw angle of the driver's head is shown in blue, while the resulting signal of the filtering process is depicted in orange.

Fig. 5: Driver's field of vision. The shaded area represents the FV region, while the area that extends up to the dotted line represents the driver's PV region.

Fig. 6: Intersection with a sample vehicle path in blue.

We use a percentage metric for evaluating observed objects at this intersection, as it mitigates potential biases: by calculating the percentage based on the total number of detected objects, a more balanced assessment can be achieved. In this way, we calculated which objects the driver observed and in which region of the driver's vision they were located. This information is used to classify the objects based on their relevance to the driver. In order to identify regular and low levels of attention, we use two K-Means classifiers to measure the driver's attention to vehicles and to pedestrians. A third classifier evaluates the relationship between the observed vehicles and pedestrians in the scene, enabling the classification of situations into two groups: those where vehicles predominate, and those where the proportions of both are similar. We call them Scenario I and Scenario II, respectively. Cases within each scenario are shown in Fig. 7, where the purple background denotes Scenario I and the yellow background Scenario II. A detailed description of both scenarios is given in Section IV. This facilitates the identification of potential areas where attention may be lacking. Once the K-Means classifiers have evaluated the collected data, the driver's attention level is divided into regular or low.
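A minimal sketch of this cascaded clustering, using scikit-learn, is given below. The feature construction, the illustrative observation rates, and the mapping from cluster index to attention label are assumptions made for demonstration; they are not the authors' data or code.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_level_labels(features):
    """Cluster a 1-D feature into two groups; label the higher-mean cluster 1."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    hi = np.argmax([features[km.labels_ == c].mean() for c in (0, 1)])
    return (km.labels_ == hi).astype(int)

# Per-case observation rates (illustrative numbers, not the paper's data)
veh_obs = np.array([[0.05], [0.40], [0.10], [0.55], [0.30]])  # vehicles observed
ped_obs = np.array([[0.00], [0.20], [0.05], [0.60], [0.10]])  # pedestrians observed

veh_level = two_level_labels(veh_obs)        # first classifier
ped_level = two_level_labels(ped_obs)        # second classifier

# Final classifier combines the two intermediate outputs for each case
combined = np.stack([veh_level, ped_level], axis=1).astype(float)
final = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(combined)
# Assumed mapping: the cluster containing the lowest-rate case is "low"
print(["low" if f == final[0] else "regular" for f in final])
```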
To combine these signals, the outputs from the first two classifiers are fed as inputs into an additional K-Means classifier that merges the information to determine the driver's level of attention to their surroundings. This approach enables a comprehensive assessment of the driver's attention and more accurately determines their overall attention level. ## IV Results ### _Quantitative Analysis_ A total of 25 cases were evaluated, with 10 falling into the low-attention category and 15 in the regular-attention category. Fig. 7 illustrates the distinctive characteristics of each case, featuring three informative bars. For easier interpretation, the cases have been ordered by scenario and driver attention. The first bar shows the share of the detected objects within the scene: the dark grey segment represents the percentage of vehicles, while the light grey segment represents pedestrians. Together, these two segments account for 100% of the detected objects. The information collected allows for the identification of two distinct driving scenarios. Scenario I corresponds to situations where vehicles represent between 95% and 100% of the detected objects, with the remaining percentage, if any, attributed to pedestrians. As a result, a low pedestrian observation rate is expected in this scenario. Out of the cases evaluated, 15 fall into this category. Scenario II, on the other hand, is characterized by a higher proportion of pedestrians, ranging from 5% to 45% of the detected objects. This scenario is represented by the remaining 10 cases. The second bar provides insights into the percentage of vehicles observed by the driver: the blue segment indicates the proportion of vehicles observed within the FV region, while the orange segment represents the percentage observed within the PV region. The third bar displays the percentage of observed pedestrians: the green segment corresponds to pedestrians observed within the FV region, while the red segment represents those within the PV region. This information served as input for the initial set of classifiers, whose outputs were subsequently used as inputs for the subsequent classifier, creating a cascaded approach. This sequential process allowed the final classification of each case to be successfully conducted.

Fig. 7: A breakdown of the 25 cases in the analysis. Grey bars illustrate the ratio between the number of detected vehicles (depicted in dark grey) and pedestrians (in light grey) present in the scene. The blue-orange bars display the percentages of vehicles within the Focus Vision (FV) and Peripheral Vision (PV) regions, respectively. Likewise, the green-red bars represent the percentages of pedestrians situated in the FV and PV regions. The purple background corresponds to the Scenario I cases and the yellow background to Scenario II. Cases have been ordered by scenario and attention level for easier understanding. The bottom axis shows the case reference number (in parentheses), the second row shows the number of the lap (1-7) driven by each supervisor (A-G), and the third row is the attention level.

The cases categorized as low attention are marked by a low percentage of observed vehicles and pedestrians overall. In the first scenario, the percentage of observed vehicles does not exceed 20% in any of the cases, while the observation rate of pedestrians is nearly non-existent, as expected.
In the second scenario, low attention is characterized by a higher observation rate of pedestrians compared to vehicles. In the best such case (column 25), the observed pedestrians represent less than 25% of the total observations, with the ratio between detected vehicles and pedestrians being close to 50%. Of this percentage, half were observed in the FV region and the other half in the PV region. Regular-attention cases are marked by an observation rate of vehicles ranging from 30% to 60% of those detected and pedestrian observation rates between 0% and 25%. In Scenario I, the scarcity of pedestrians means that observations are typically very low or non-existent. However, two cases in this scenario deviate from this pattern, namely columns 4 and 8, where pedestrians were observed at high percentages of 50% and 100%, respectively. Upon closer examination, it was found that in the column-8 case, the vehicle made only one detection corresponding to a pedestrian, which suggests a potential failure in the detection of the object. Typically, multiple detections of the same object are observed over time when the object is successfully detected, as shown in Figure 3. In this case, the absence of such multiple detections means that the pedestrian's actual presence cannot be guaranteed; if the pedestrian were truly present, there would have been several nearby detections corresponding to the same object. Similarly, in the column-4 case, only one of the two pedestrian detections in the scene accounted for 50% of the driver's observations. These detections were separated by 71.56 s, making it unlikely for them to be the same pedestrian, since the vehicle was not stationary during that time. Despite these potential detection failures, these cases were still correctly classified. Some samples of Scenario II and the corresponding attention classification are shown in Fig. 8. For a better understanding of the examples, Table I shows the metrics for each case. For the regular-attention sample, the safety driver observed 16% of the vehicle detections within the FV region and 13% within the PV region, accounting for a total of 29% of the vehicle detections present in the scene. Furthermore, the driver observed 60% and 15% of the pedestrian detections within the FV and PV regions, respectively, corresponding to 75% of the total pedestrian detections. Relative to the total number of objects detected, the percentage of vehicles observed is only 22.22% and that of pedestrians 16.67%. For the low-attention sample, the driver exhibited a lower observation percentage for vehicle detections, with 0% and 6% observed within the FV and PV regions, respectively. They observed 29% of the pedestrians within the FV region and 29% within the PV region. Relative to the total number of detected objects, these percentages amount to 3.64% for vehicles and 25.45% for pedestrians. A significant finding arises from the analysis of drivers' attention during each lap, regardless of which scenario it occurred in. As shown in Table II, all drivers were initially classified as having regular attention in the first lap. However, as the subsequent laps unfolded, the patterns became more diverse. Out of the seven drivers, two (29%) consistently maintained regular attention throughout all laps. Meanwhile, four drivers (57%) experienced fluctuations between regular and low attention.
In one particular case (14%), attention decreased and remained consistently low throughout the rest of the laps. This trend can be attributed to drivers gaining confidence in the vehicle, which may contribute to an overall decrease in attention levels. This behaviour can be observed in Fig. 7, which shows the drivers and their respective laps at the bottom of the bars: the top row represents the case reference number, followed by the corresponding lap of each driver and the driver identifiers A-G in the second row. The attention level for each case is indicated in the bottom row.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Attention & Veh. in FV & Veh. in PV & Ped. in FV & Ped. in PV \\ \hline Low & 0\% & 6\% & 29\% & 29\% \\ Regular & 16\% & 13\% & 60\% & 15\% \\ \hline \end{tabular} \end{table} TABLE I: Veh. in FV: Vehicles observed by the driver in the FV region; Veh. in PV: Vehicles observed in the PV region; Ped. in FV: Pedestrians observed by the driver in the FV region; Ped. in PV: Pedestrians observed in the PV region.

Fig. 8: Samples for each attention classification. In the low-attention case, there is limited awareness of the objects present in the environment, which stands in stark contrast to the heightened observation associated with the regular-attention case.

### _Case Study_ The following case demonstrates how information from the environment works together with the orientation of the driver's head, and how the angles of surrounding objects are intersected with it to understand where the driver was looking. This regular-attention case is shown in Fig. 9, which illustrates two moments from the sequence. At moment T1 (Fig. 9(a)), the vehicle detects a pedestrian that the driver is not observing. However, 1.67 seconds later (T2), the driver turns their head and looks at the pedestrian (Fig. 9(b)). Figs. 9(a) and 9(b) exhibit the information captured by the vehicle at both times. The top-left image shows the vehicle's frontal camera view, enabling vehicle and pedestrian recognition; this is one of the three front cameras that the vehicle has and that were used to detect objects. The bottom-left picture displays the IR camera view of the safety driver, featuring the landmarks mentioned in Section III-B (red points) and orientation angles indicated by three coloured lines close to the driver's nose; some additional information provided by the algorithm is shown at the top left. The right image shows the lidar information, with a red arrow indicating the driver's head orientation, a green path showing the vehicle's path, and a yellow-purple marker indicating the detected pedestrian. Here, one can also observe a vehicle detected in Fig. 9(a) that is no longer detected at T2. This can be observed in the plot presented in Fig. 9(c), which shows the overlap between the vehicle's position and the driver's head orientation. The "V" and "P" markers in Fig. 9(c) represent the vehicles and pedestrians detected at the scene, respectively. Each group of vehicle detections corresponds to a different vehicle, while the group of pedestrian detections corresponds to the pedestrian observed in the camera. The vehicle that was detected in Fig. 9(a) but not in Fig. 9(b) is not shown in this plot, as it was removed by the filtering described in Section III-E. The head orientation at both times is indicated by additional red arrows, corresponding to the blue dots in Fig. 9(d). The detected pedestrian from the T2 observation is coloured in red.
In Fig. 9(d), the intersection between the head orientation and the pedestrian's angles with respect to the vehicle is shown. Although both situations presented correspond to the times of the blue points, in this figure it is easy to see that the driver observed the parked vehicles earlier, and even the pedestrian.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Driver & Lap 1 & Lap 2 & Lap 3 & Lap 4-7 \\ \hline A & Regular & Low & Low & - \\ B & Regular & Low & Regular & - \\ C & Regular & Regular & Low & - \\ D & Regular & Low & Regular & - \\ E & Regular & Regular & Regular & - \\ F & Regular & Regular & Regular & - \\ G & Regular & Low & Low & Low/Regular \\ \hline \end{tabular} \end{table} TABLE II: Attention levels given by drivers on each lap driven.

Fig. 9: Analysis of two moments of a sequence, T1 and T2. (a) and (c) show the pictures of the vehicle's frontal camera (top left) showing the road, and the internal IR camera (bottom left) showing the driver; the right image is the data captured by the vehicle's systems. (b) represents both times by a pair of red arrows; the pedestrian detection corresponding to T2 is also coloured red. (d) shows T1 and T2 as blue dots in the angles plot.

In the general description of the situation, the driver paid attention to most of the detected objects in the scene and also followed the pedestrian for some time after T1. Most objects were observed within the FV, indicating a better attention focus, which is why this case has been classified as a regular-attention case. ## V Conclusions and Future Work This research paper presents the findings of a classification study conducted on various real-life driving scenarios, focusing on distinguishing between regular and low driver attention. The attention metric we developed was validated using a dataset gathered from individuals acting as safety drivers in an AV. Through this validation process, we observed a correlation between our proposed metric and the phenomenon of "automation complacency". Specifically, we found that individuals exhibited higher levels of attention during the first lap than in subsequent laps. The methodology employed in this study involved leveraging information from the surrounding environment of the vehicle in conjunction with the driver's head movements to identify observed objects within the environment. This allowed the driver's attention to be inferred based on whether specific objects were observed during driving. By considering vehicle and environmental data alongside driver information, the proposed system is well-positioned to offer comprehensive metrics. These metrics play a crucial role in facilitating appropriate warnings and promoting safe driving behaviors. Our future plans involve expanding our dataset and refining the processing techniques to obtain more pertinent data from the vehicle's surroundings. To achieve this goal, we are working towards incorporating map information into our system. This additional data will provide us with a more comprehensive understanding of the environment. With the inclusion of map information, we will be able to differentiate between pedestrians on the sidewalk and those crossing the street or about to do so. Additionally, we can identify rights-of-way, determine the direction of vehicle travel, and consider other relevant factors.
By implementing this system, we have the potential to significantly improve overall driving safety by providing metrics that generate alerts to drivers when it is determined that their attention to their surroundings is inadequate.
2307.12916
Improving Approximation Guarantees for Maximin Share
We consider fair division of a set of indivisible goods among $n$ agents with additive valuations using the fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her ($1$-out-of-$n$) MMS value. An allocation is called MMS if all agents receive their MMS values. However, since MMS allocations do not always exist, the focus shifted to investigating its ordinal and multiplicative approximations. In the ordinal approximation, the goal is to show the existence of $1$-out-of-$d$ MMS allocations (for the smallest possible $d>n$). A series of works led to the state-of-the-art factor of $d=\lfloor3n/2\rfloor$ [Hosseini et al.'21]. We show that $1$-out-of-$\lceil 4n/3\rceil$ MMS allocations always exist, thereby improving the state-of-the-art of ordinal approximation. In the multiplicative approximation, the goal is to show the existence of $\alpha$-MMS allocations (for the largest possible $\alpha < 1$), which guarantees each agent at least $\alpha$ times her MMS value. We introduce a general framework of "approximate MMS with agent priority ranking". An allocation is said to be $T$-MMS, for a non-increasing sequence $T = (\tau_1, \ldots, \tau_n)$ of numbers, if the agent at rank $i$ in the order gets a bundle of value at least $\tau_i$ times her MMS value. This framework captures both ordinal approximation and multiplicative approximation as special cases. We show the existence of $T$-MMS allocations where $\tau_i \ge \max(\frac{3}{4} + \frac{1}{12n}, \frac{2n}{2n+i-1})$ for all $i$. Furthermore, we can get allocations that are $(\frac{3}{4} + \frac{1}{12n})$-MMS ex-post and $(0.8253 + \frac{1}{36n})$-MMS ex-ante. We also prove that our algorithm does not give better than $(0.8631 + \frac{1}{2n})$-MMS ex-ante.
Hannaneh Akrami, Jugal Garg, Eklavya Sharma, Setareh Taki
2023-07-24T16:17:45Z
http://arxiv.org/abs/2307.12916v2
# Improving Approximation Guarantees for Maximin Share ###### Abstract We consider fair division of a set of indivisible goods among \(n\) agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive their MMS values. However, since MMS allocations do not always exist [17], the focus shifted to investigating its ordinal and multiplicative approximations. In the ordinal approximation, the goal is to show the existence of \(1\)-out-of-\(d\) MMS allocations (for the smallest possible \(d>n\)). A series of works led to the state-of-the-art factor of \(d=\lfloor 3n/2\rfloor\)[13]. We show that \(1\)-out-of-\(\lceil 4n/3\rceil\) MMS allocations always exist, thereby improving the state-of-the-art of ordinal approximation. In the multiplicative approximation, the goal is to show the existence of \(\alpha\)-MMS allocations (for the largest possible \(\alpha<1\)) which guarantees each agent at least \(\alpha\) times her MMS value. A series of works in the last decade led to the state-of-the-art factor of \(\alpha=\frac{3}{4}+\frac{3}{3836}\)[1]. We introduce a general framework of \((\alpha,\beta,\gamma)\)-MMS that guarantees \(\alpha\) fraction of agents \(\beta\) times their MMS values and the remaining \((1-\alpha)\) fraction of agents \(\gamma\) times their MMS values. The \((\alpha,\beta,\gamma)\)-MMS captures both ordinal and multiplicative approximations as its special cases. We show that \((2(1-\beta)/\beta,\beta,3/4)\)-MMS allocations always exist. Furthermore, since we can choose the \(2(1-\beta)/\beta\) fraction of agents arbitrarily in our algorithm, this implies (using \(\beta=\sqrt{3}/2\)) the existence of a randomized allocation that gives each agent at least \(3/4\) times her MMS value (ex-post) and at least \((17\sqrt{3}-24)/4\sqrt{3}>0.785\) times her MMS value in expectation (ex-ante). ## 1 Introduction Fair allocation of resources (goods) is a fundamental problem in multiple disciplines, including computer science, economics, and social choice theory, where the goal is to divide goods among agents in a _fair_ manner. This field has received significant attention since the seminal work of Steinhaus in the 1940s [14]. When the goods are divisible, the two standard fairness notions are _envy-freeness_ and _proportionality_, based on envy and share, respectively. In an envy-free allocation, no agent prefers another agent's allocation, and in a proportional allocation, each agent receives her proportionate share, i.e., at least a \(1/n\) fraction of her value of all the goods. In the case of divisible goods, an envy-free and proportional allocation exists; see [11, 12, 13]. We study the discrete setting, in which each good can be given to exactly one agent. Formally, the input consists of a set \(N\) of \(n\) agents, a set \(M\) of \(m\) indivisible goods and a valuation profile \(\mathcal{V}=(v_{1},\ldots,v_{n})\) where \(v_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) is agent \(i\)'s valuation function over the subsets of goods. Simple examples show that in the discrete case, neither envy-freeness nor proportionality can be guaranteed.* This necessitates the refinement of these notions. Footnote *: Consider an instance with two agents and one good with positive value to both agents. 
In this paper, we consider the natural and most popular discrete analog of proportionality called _maximin share_ (MMS) introduced in [1]. The MMS value of an agent is the maximum value she can guarantee if she divides the goods into bundles (one for each agent) and then receives a bundle with the minimum value. Formally, for a set \(S\) of goods and an integer \(d\), let \(\Pi_{d}(S)\) denote the set of all partitions of \(S\) into \(d\) bundles. Then, \[\operatorname{MMS}_{i}^{d}(S):=\max_{(P_{1},\ldots,P_{d})\in\Pi_{d}(S)}\min_{j }v_{i}(P_{j}).\] The MMS value of each agent \(i\) is denoted by \(\operatorname{MMS}_{i}:=\operatorname{MMS}_{i}^{n}(M)\). An allocation is said to be MMS if each agent receives at least their MMS value. However, MMS is an unfeasible share guarantee that cannot always be satisfied when there are more than two agents with additive valuations [12, 13, 14]. Therefore, the MMS share guarantee needs to be relaxed, and the two natural ways are its multiplicative and ordinal approximations. \(\alpha\)**-MMS.** Since we need to lower the share threshold, a traditional way is to consider \(\alpha<1\) times the MMS value. Formally, an allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is \(\alpha\)-MMS if for each agent \(i\), \(v_{i}(X_{i})\geq\alpha\cdot\operatorname{MMS}_{i}\). Earlier works showed the existence of \(2/3\)-MMS allocations using several different approaches [12, 1, 1, 13]. Later, in a groundbreaking work [15], the existence of \(3/4\)-MMS allocations was obtained through more sophisticated techniques and involved analysis. This factor was slightly improved to \(3/4+1/(12n)\) in [14], then more recently to \(\frac{3}{4}+\min(\frac{1}{36},\frac{3}{16n-4})\)[1], and finally to \(3/4+3/3836\)[1]. On the other hand, \(\alpha\)-MMS allocations need not exist for \(\alpha>1-1/n^{4}\)[14]. **1-out-of-\(d\) MMS.** Another way of relaxing MMS is to consider the share value of \(\operatorname{MMS}_{i}^{d}(M)\) for \(d>n\) for each agent \(i\), which is the maximum value that \(i\) can guarantee if she divides the goods into \(d\) bundles and then takes a bundle with the minimum value. This notion was introduced together with the MMS notion in [1], which also shows the existence of \(1\)-out-of-\((n+1)\) MMS after _adding excess goods_. Unlike \(\alpha\)-MMS, this notion is robust to small perturbations in the values of goods because it only depends on the bundles' ordinal ranking and is not affected by small perturbations as long as the ordinal ranking of the bundles does not change.+ Footnote †: As mentioned in [1], the \(\alpha\)-MMS is very sensitive to agents’ precise cardinal valuations: Consider the example mentioned in [1]. Assume \(n=3\) and there are four goods \(g_{1}\), \(g_{2}\), \(g_{3}\) and \(g_{4}\) with values \(30\), \(39\), \(40\) and \(41\) respectively for agent \(1\). Assume the goal is to guarantee the \(3/4\)-MMS value of each agent. We have \(\operatorname{MMS}_{1}=40\), and therefore any non-empty bundle satisfies \(3/4\)-MMS for agent \(1\). However, if the value of \(g_{3}\) gets slightly perturbed and becomes \(40+\epsilon\) for any \(\epsilon>0\), then \(\operatorname{MMS}_{1}>40\) and then \(3/4\cdot\operatorname{MMS}_{1}>30\) and the bundle \(\{g_{1}\}\) does not satisfy agent \(1\). Thus, the acceptability of a bundle (in this example, \(\{g_{1}\}\)) might be affected by an arbitrarily small perturbation in the value of an irrelevant good (i.e., \(g_{3}\)). 
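This sensitivity is easy to verify by brute force: for an instance this small, MMS values can be computed by enumerating all partitions. The Python sketch below is illustrative only; it runs in exponential time (as noted later in the paper, computing MMS values exactly is NP-hard, although a PTAS exists).

```python
from itertools import product

def mms(values, d):
    """1-out-of-d MMS value for additive `values`, by brute force.

    Tries every assignment of goods to d bundles and returns the best
    achievable minimum bundle value.  Exponential time: suitable only
    for tiny instances such as the footnote's example.
    """
    best = 0
    for assignment in product(range(d), repeat=len(values)):
        bundles = [0] * d
        for value, b in zip(values, assignment):
            bundles[b] += value
        best = max(best, min(bundles))
    return best

print(mms([30, 39, 40, 41], 3))  # 40, via {30, 39}, {40}, {41}
print(mms([30, 39, 41, 41], 3))  # 41 once g3 is perturbed from 40 to 41
print(mms([30, 39, 40, 41], 4))  # 30: agent 1's 1-out-of-4 MMS value
```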
Observe that in this example, whether the value of \(g_{3}\) is \(40\) or \(40+\epsilon\) for any \(\epsilon\in\mathbb{R}\), \(\{g_{1}\}\) is an acceptable \(1\)-out-of-\(4\) MMS bundle for agent \(1\). In the standard setting (i.e., without excess goods), the first non-trivial ordinal approximation was the existence of \(1\)-out-of-\((2n-2)\) MMS allocations [1], which was later improved to \(1\)-out-of-\(\lceil 3n/2\rceil\)[15], and then to the current state-of-the-art \(1\)-out-of-\(\lfloor 3n/2\rfloor\)[15]. On the other hand, the existence of \(1\)-out-of-\((n+1)\) MMS allocations is open to date. In this paper, we show that \(1\)-out-of-\(\lceil 4n/3\rceil\) MMS allocations always exist, thereby improving the state-of-the-art of \(1\)-out-of-\(d\) MMS. Another way to interpret \(1\)-out-of-\(d\) MMS allocations is giving \(n/d\) fraction of agents their MMS value and nothing to the remaining agents. We prove this equivalence in Section 2. Both ordinal and multiplicative approximations focus on extremes. In 1-out-of-\(d\) MMS, some agents get nothing, and others are guaranteed their _full_ MMS value. In \(\alpha\)-MMS, each agent receives (the same factor) \(\alpha<1\) fraction of their MMS value. As a middle ground between these two notions, we introduce a general framework of \((\alpha,\beta,\gamma)\)-MMS that guarantees \(\alpha\) fraction of agents \(\beta\) times their MMS values and the remaining \((1-\alpha)\) fraction of agents \(\gamma\) times their MMS values. The \((\alpha,\beta,\gamma)\)-MMS captures both ordinal and multiplicative approximations as special cases. Namely, an \(\alpha\)-MMS allocation can also be denoted by \((1,\alpha,\gamma)\)-MMS for any arbitrary \(\gamma\) and 1-out-of-\(d\) allocations are in correspondence with \((n/d,1,0)\)-MMS allocations. Furthermore, the \((\alpha,\beta)\)-framework introduced in [10], where \(\alpha\) fraction of agents receive \(\beta\)-MMS, is another special case of \((\alpha,\beta,\gamma)\)-MMS with \(\gamma=0\). We show that \((2(1-\beta)/\beta,\beta,3/4)\)-MMS allocations always exist. Moreover, our algorithm can choose the \(2(1-\beta)/\beta\) fraction of agents getting a \(\beta>3/4\) fraction of their MMS value. Therefore, by choosing these agents randomly and using \(\beta=\sqrt{3}/2\), we can guarantee each agent \(3/4\) of her MMS value (ex-post) and \((17\sqrt{3}-24)/4\sqrt{3}>0.785\) of her MMS value on average (ex-ante). ### Further Related Work Since the MMS notion and its variants have been intensively studied, we mainly focus here on the closely related work. Computing the MMS value of an agent is NP-hard, but a PTAS exists [21]. For \(n=2\), MMS allocations always exist [1]. For \(n=3\), a series of work has improved the MMS approximation from \(3/4\)[23] to \(7/8\)[1] to \(8/9\)[1], and then to \(11/12\)[22]. For \(n=4\), \((4/5)\)-MMS allocations exist [1, 1]. Babaioff _et al._[1] considered \(\ell\)-out-of-\(d\) MMS, in which the MMS value of an agent is the maximum value that can be guaranteed by partitioning goods into \(d\) bundles and selecting the \(\ell\) least-valuable ones. This was further studied by Segal-Halevi [1, 1]. Currently, the best result is the existence of \(\ell\)-out-of-\(\lfloor(\ell+\frac{1}{2})n\rfloor\) MMS [1]. The MMS and its ordinal approximations have also been applied in the context of cake-cutting problems [1, 1, 1, 2]. A series of works have been done on randomly generated instances. 
Bouveret and Lemaitre [1] showed that MMS allocations usually exist (for data generated randomly using uniform or Gaussian valuations). MMS allocations exist with high probability when the valuation of each good is drawn independently and randomly from the uniform distribution on \([0,1]\)[1] or for arbitrary distributions of sufficiently large variance [13]. MMS can be analogously defined for fair division of chores where items provide negative value. Like the case of the goods, MMS allocations do not always exist for chores [1]. Many papers studied approximate MMS for chores [1, 1, 1], with the current best approximation ratio being \(13/11\)[1]. For three agents, \(19/18\)-MMS allocations exist [22]. Also, ordinal MMS approximation for chores has been studied in [1]. MMS has also been studied for non-additive valuations [1, 1, 2]. Generalizations have been studied where restrictions are imposed on the set of allowed allocations, like matroid constraints [1], cardinality constraints [1], and graph connectivity constraints [1, 2]. Strategyproof versions of fair division have also been studied [1, 1, 1, 1]. MMS has also inspired other notions of fairness, like weighted MMS [13], AnyPrice Share (APS) [1], Groupwise MMS [1, 1], and self-maximizing shares [1]. Babaioff et al. [1] studied fairness mechanisms to give all agents both ex-ante and ex-post guarantees. Namely, they give a deterministic polynomial time algorithm that computes a distribution over ex-ante proportional allocations, and ex-post, every allocation gives every agent at least \(1/2\)-MMS. ## 2 Preliminaries For all \(t\in\mathbb{N}\), we denote the set \(\{1,2,\ldots,t\}\) by \([t]\). A discrete fair division instance is denoted by \((N,M,\mathcal{V})\), where \(N=[n]\) is the set of \(n\) agents, \(M=[m]\) is the set of \(m\) indivisible goods and \(\mathcal{V}=(v_{1},\ldots,v_{n})\) is the vector of the agents' valuation functions. For all \(i\in[n]\), \(v_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) is the valuation function of agent \(i\) over all subsets of the goods. In this paper, we assume \(v_{i}(\cdot)\) is additive for all \(i\in[n]\), i.e., \(v_{i}(S)=\sum_{g\in S}v_{i}(\{g\})\) for all \(S\subseteq M\). For ease of notation, we also use \(v_{i}(g)\) and \(v_{i,g}\) instead of \(v_{i}(\{g\})\). For a set \(S\) of goods and any positive integer \(d\), let \(\Pi_{d}(S)\) denote the set of all partitions of \(S\) into \(d\) bundles. Then, \[\text{MMS}_{i}^{d}(S):=\max_{P\in\Pi_{d}(S)}\min_{j=1}^{d}v_{i}(P_{j}). \tag{1}\] Setting \(d=n\), we obtain the standard MMS notion. Formally, \(\text{MMS}_{i}=\text{MMS}_{i}^{n}(M)\). An allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is a partition of the goods into \(n\) bundles such that for all \(i\in[n]\), agent \(i\) receives \(X_{i}\). We call an allocation \(X\)_\(1\)-out-of-\(d\) MMS_, if for all agents \(i\), \(v_{i}(X_{i})\geq\text{MMS}_{i}^{d}(M)\). For each agent \(i\), a \(d\)-MMS partition of \(i\) is a partition \(P=(P_{1},\ldots,P_{d})\) of \(M\) into \(d\) bundles such that \(\min_{j=1}^{d}v_{i}(P_{j})\) is maximized. Basically, for a \(d\)-MMS partition \(P\) of agent \(i\), \(\text{MMS}_{i}^{d}(M)=\min_{j=1}^{d}v_{i}(P_{j})\). Whenever \(d\) is not an integer, by \(1\)-out-of-\(d\) MMS, we mean \(1\)-out-of-\(\lceil d\rceil\) MMS.++ In the rest of the paper, for each agent \(i\), we denote a \(d\)-MMS partition of \(i\) by \(P^{i}=(P^{i}_{1},\ldots,P^{i}_{d})\). 
Footnote ‡: This is without loss of generality because we prove the existence of \(1\)-out-of-\(4n/3\) MMS assuming that \(4n/3\) is an integer. If \(4n/3\) is not an integer, we can copy one of the agents \(1\) or \(2\) times (depending on \(4n\) mod \(3\)) so that the new instance has \(n^{\prime}\) agents and \(4n^{\prime}/3\) is an integer. Note that \(\lceil 4n/3\rceil=4n^{\prime}/3\). Since we prove the existence of \(1\)-out-of-\(4n^{\prime}/3\) MMS for the new instance, we prove the existence of an allocation that gives all the agents \(i\) in the original instance their \(\text{MMS}_{i}^{4n^{\prime}/3}(M)=\text{MMS}_{i}^{\lceil 4n/3\rceil}(M)\) value. Hence, the existence of \(1\)-out-of-\(\lceil 4n/3\rceil\) MMS allocations follows. An allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is a partition of the goods into \(n\) bundles such that for all \(i\in[n]\), agent \(i\) receives bundle \(X_{i}\). **Definition 1**.: _Given an instance \(\mathcal{I}=(N,M,\mathcal{V})\), for \(0<\alpha\leq 1\), \(\beta>\gamma\geq 0\), an allocation \(X\) is \((\alpha,\beta,\gamma)\)-MMS if for any partition of the agents into two parts of \(N_{1}\) and \(N_{2}\) such that \(|N_{1}|\leq\alpha n\) and \(N_{2}=N\setminus N_{1}\), for all \(i\in N_{1}\), \(v_{i}(X_{i})\geq\beta\cdot\text{MMS}_{i}\) and for all \(j\in N_{2}\), \(v_{j}(X_{j})\geq\gamma\cdot\text{MMS}_{j}\)._ **Lemma 1**.: _For integers \(d\geq n\), \(1\)-out-of-\(d\) MMS allocations exist for all instances with \(n\) agents if and only if \((n/d,1,0)\)-MMS allocations exist for all instances with \(d\) agents._ Proof.: First, assume \(1\)-out-of-\(d\) MMS allocations exist for all instances with \(n\) agents. Consider any instance \(\mathcal{I}=(N,M,\mathcal{V})\) with \(d\) agents. Take any arbitrary partition of the agents into \(N_{1}\) and \(N_{2}\) such that \(|N_{1}|=n\). Consider the instance \(\mathcal{I}^{\prime}=(N_{1},M,\mathcal{V}_{1})\) where \(\mathcal{V}_{1}\) is the valuation profile of agents in \(N_{1}\). Since \(1\)-out-of-\(d\) MMS allocations exist for \(\mathcal{I}^{\prime}\), there exists an allocation that satisfies any \(n/d\) fraction of the agents in \(\mathcal{I}\) with their MMS value. Now assume \((n/d,1,0)\)-MMS allocations exist for all instances with \(d\) agents. Consider any instance \(\mathcal{I}=(N,M,\mathcal{V})\) with \(n\) agents. Add \(d-n\) dummy agents with arbitrary additive valuations. Let \(\mathcal{I}^{\prime}=(N^{\prime},M,\mathcal{V}^{\prime})\) be the resulting instance. Since \((n/d,1,0)\)-MMS allocations exist for \(\mathcal{I}^{\prime}\), for any \(n/d\) fraction of the agents in \(N^{\prime}\) and in particular for the agents in \(N\), there exists an allocation that gives them their MMS value in \(\mathcal{I}^{\prime}\) which is equal to their \(1\)-out-of-\(d\) MMS value in \(\mathcal{I}\). **Definition 2**.: _An instance \(\mathcal{I}=(N,M,\mathcal{V})\) is ordered if there exists an ordering \([g_{1},\ldots,g_{m}]\) of the goods such that for all agents \(i\), \(v_{i}(g_{1})\geq\ldots\geq v_{i}(g_{m})\)._ **Definition 3**.: _An instance \(\mathcal{I}=(N,M,\mathcal{V})\) is \(d\)-normalized if for all agents \(i\), there exists a partition \(P=(P_{1},\ldots,P_{d})\) of \(M\) into \(d\) bundles such that \(v_{i}(P_{j})=1\) for all \(j\in[d]\)._ Barman and Krishnamurthy [1] proved that when the goal is to guarantee a minimum threshold of \(\alpha_{i}\) for each agent \(i\), it is without loss of generality to assume the instance is ordered. Akrami et al. 
[1] proved that when the goal is to find an approximate MMS allocation, it is without loss of generality to assume the instance is \(n\)-normalized and ordered. Their proof does not rely on the number of agents. Formally, for any \(d\in\mathbb{N}\), when the goal is to find a \(1\)-out-of-\(d\) MMS allocation, it is without loss of generality to assume the instance is ordered and \(d\)-normalized, as shown in the following lemma (proof is in Appendix A). **Lemma 2**.: _For any \(d\in\mathbb{N}\), if \(1\)-out-of-\(d\) MMS allocations exist for \(d\)-normalized ordered instances, then \(1\)-out-of-\(d\) MMS allocations exist for all instances._ From now on, even if not mentioned, we assume the instance is ordered and \(d\)-normalized. Without loss of generality, for all \(i\in[n]\), we assume \(v_{i}(1)\geq v_{i}(2)\geq\ldots\geq v_{i}(m)\). In Section 2.1, we prove some properties of ordered \(d\)-normalized instances for arbitrary \(d\). In Section 4, we set \(d=4n/3\) and prove \(1\)-out-of-\((4n/3)\) MMS allocations always exist. ### 1-out-of-d MMS Recall that for a given instance \(\mathcal{I}\) and integer \(d\), for each agent \(i\), \(P^{i}=(P^{i}_{1},\ldots,P^{i}_{d})\) is a \(d\)-MMS partition of agent \(i\). **Proposition 1**.: _Given a \(d\)-normalized instance for all \(i\in N\) and \(k\in[d]\), we have_ 1. \(v_{i}(P^{i}_{k})=1\)_, and_ 2. \(v_{i}(M)=d\)_._ We note that it is without loss of generality to assume \(m\geq 2d\). Otherwise, we can add \(2d-m\) dummy goods with a value of \(0\) for all the agents. The normalized and ordered properties of the instance would be preserved. Consider the bag setting with \(d\) bags as follow. \[C_{k}:=\{k,2d-k+1\}\text{ for }k\in[d] \tag{2}\] See Figure 1 for more intuition. Next, we show some important properties of the values of the goods in \(C_{k}\)'s. **Proposition 2**.: _For all agents \(i\in N\), we have_ 1. \(v_{i}(1)\leq 1\)_,_ 2. \(v_{i}(C_{d})\leq 1\)_, and_ 3. \(v_{i}(d+1)\leq\frac{1}{2}\)_._ Proof.: For the first part, fix an agent \(i\). Let \(1\in P_{1}^{i}\). By Proposition 1, \(v_{i}(1)\leq v_{i}(P_{1}^{i})=1\). For the second part, by the pigeonhole principle, there exists a bundle \(P_{k}^{i}\) and two goods \(j,j^{\prime}\in\{1,2,\ldots,d+1\}\) such that \(\{j,j^{\prime}\}\subseteq P_{k}^{i}\). Without loss of generality, assume \(j<j^{\prime}\). We have \[v_{i}(C_{d}) =v_{i}(d)+v_{i}(d+1) (C_{d}=\{d,d+1\})\] \[\leq v_{i}(j)+v_{i}(j^{\prime}) (j\leq d\text{ and }j^{\prime}\leq d+1)\] \[\leq v_{i}(P_{k}^{i})=1. (\{j,j^{\prime}\}\in P_{k}^{i})\] For the third part, we have \[1\geq v_{i}(C_{d})=v_{i}(d)+v_{i}(d+1)\geq 2v_{i}(d+1).\] Thus, \(v_{i}(d+1)\leq\frac{1}{2}\). **Lemma 3**.: _For all \(i\in N\) and \(k\in[d]\), \(\sum_{j=k}^{d}v_{i}(C_{j})\leq d-k+1\)._ Proof.: For the sake of contradiction, assume the claim does not hold for some agent \(i\) and let \(\ell\geq 1\) be the largest index for which we have \(\sum_{j=\ell}^{d}v_{i}(C_{j})>d-\ell+1\). Proposition 2(2) implies that \(\ell<d\). We have \[v_{i}(\ell)+v_{i}(2d-\ell+1) =v_{i}(C_{\ell})\] \[=\sum_{j=\ell}^{d}v_{i}(C_{j})-\sum_{j=\ell+1}^{d}v_{i}(C_{j})\] \[>(d-\ell+1)-(d-(\ell+1)+1) (\sum_{j=k}^{d}v_{i}(C_{j})\leq d-k+1\text{ for }k>\ell)\] \[=1.\] For all \(j,j^{\prime}<\ell\), \(v_{i}(j)+v_{i}(j^{\prime})\geq v_{i}(\ell)+v_{i}(2d-\ell+1)>1\). Therefore, \(j\) and \(j^{\prime}\) cannot be in the same bundle in any \(d\)-MMS partition of \(i\). For \(j<\ell\), let \(j\in P_{j}^{i}\). 
For all \(j<\ell\) and \(\ell\leq j^{\prime}\leq 2d-\ell+1\), \[v_{i}(j)+v_{i}(j^{\prime}) \geq v_{i}(\ell)+v_{i}(2d-\ell+1)\] \[=v_{i}(C_{\ell})>1.\] Therefore, \(j^{\prime}\notin P_{j}^{i}\). Also, since \(\sum_{j=\ell}^{d}v_{i}(C_{j})>d-\ell+1\), there are at least \(t\geq d-\ell+2\) different bundles \(Q_{1},\ldots,Q_{t}\) in \(P^{i}\) such that \(Q_{j}\cap\{\ell,\ldots,2d-\ell+1\}\neq\emptyset\). This is a contradiction since these \(t\geq d-\ell+2\) bundles must be different from \(P_{1}^{i},\ldots,P_{\ell-1}^{i}\). ## 3 Technical Overview For the \(\alpha\)-MMS problem, the algorithms for \(\alpha\geq 3/4\)[1, 1, 1] utilize the two-phase approach: _valid reductions_ and _bag filling_. In a valid reduction, the instance is reduced by removing an agent \(a\) and a subset of goods \(S\) such that \(v_{a}(S)\geq\alpha\), and the MMS values of the remaining agents do not decrease (see Section 5 for more details). The valid reduction phase is crucial for the bag filling to work in the analysis of these algorithms. However, it is not clear how to define valid reductions in the case of 1-out-of-\(d\) MMS because \(d\) is not the same as the number of agents \(n\). Therefore, we only use bag filling in our algorithm, which makes its analysis quite involved and entirely different from that of the \(\alpha\)-MMS algorithms. The algorithm is described in Algorithm 1. Given an ordered \(d\)-normalized instance, we initialize \(n\) bags (one for each agent) with the first \([2n]\) goods (highest valued) as follows. \[B_{k}:=\{k,2n-k+1\}\text{ for }k\in[n]. \tag{3}\] See Figure 2 for a better intuition. Then, we do bag-filling. That is, at each round \(j\), we keep adding goods in decreasing order of value to the bag \(B_{j}\) until some agent with no assigned bag values it at least \(1\) (recall that the \(1\)-out-of-\(d\) MMS value of each agent is \(1\) in a \(d\)-normalized instance). Then, we allocate it to an arbitrary such agent. We note that in contrast to [1, 1], in the bag-filling phase we do not add arbitrary goods to the bags but add the goods in the decreasing order of their values. To prove that the output of Algorithm 1 is \(1\)-out-of-\(d\) MMS, it is sufficient to prove that we never run out of goods in any round or, equivalently, that each agent receives a bag in some round. Towards contradiction, assume that agent \(i\) does not receive a bag and the algorithm terminates. It can be easily argued that agent \(i\)'s value for at least one of the initial bags \(\{B_{1},\ldots,B_{n}\}\) must be strictly less than \(1\). Let \(\ell^{*}\) be the smallest index such that \(v_{i}(B_{\ell^{*}+1})<1\). We consider two cases based on the value of \(v_{i}(2n-\ell^{*})\). In Section 4.1, we reach a contradiction assuming \(v_{i}(2n-\ell^{*})\geq 1/3\) and in Section 4.2, we reach a contradiction assuming \(v_{i}(2n-\ell^{*})<1/3\). Let \(\hat{B}_{j}\) denote the \(j\)-th bag at the end of the algorithm. The overall idea is to categorize the bags into different groups and prove an upper bound on the value of each bag (\(\hat{B}_{j}\)) for agent \(i\) depending on which group it belongs to. Since \(v_{i}(M)=d\) due to the instance being \(d\)-normalized, we get upper and lower bounds on the size of the groups. For example, if we know that for all bags \(\hat{B}_{j}\) in a certain group \(v_{i}(\hat{B}_{j})<1\), we get the trivial upper bound of \(n-1\) on the size of this group: if all \(n\) bags had value less than \(1\), then \(d=v_{i}(M)=\sum_{j\in[n]}v_{i}(\hat{B}_{j})<n\), which is impossible since \(d\geq n\). 
However, for these cases, we have upper and lower bounds on the size of each group, and in general, we show several additional properties to make it work. For example, we obtain nontrivial upper bounds on the values of certain subsets of goods using the fact that all bundles in a \(d\)-MMS partition of agent \(i\) have value \(1\) (see Lemmas 6 and 14). ## 4 1-out-of-(4n/3) MMS Algorithm Our algorithm in Algorithm 1 consists of _initialization_ and _bag-filling_. First, we remark that assuming \(|M|\geq 2n\) is without loss of generality. This is because we can always add dummy goods to \(M\) with a value of \(0\) for all the agents. The resulting instance is ordered and \(d\)-normalized if the original instance has these properties. As mentioned in Section 3, the algorithm first initialize \(n\) bags as in (3) (see Figure 2). Then, in each round \(j\) of bag-filling, we keep adding goods in decreasing value to the bag \(B_{j}\) until some agent with no assigned bag values it at least \(1\). Then, we allocate it to an arbitrary such agent. In the rest of this section, we prove the following theorem, showing the correctness of the algorithm. **Theorem 1**.: _Given any ordered \((4n/3)\)-normalized instance, Algorithm 1 returns a \(1\)-out-of-\((4n/3)\) MMS allocation._ To do so, it suffices to prove that we never run out of goods in bag-filling. Towards contradiction, assume that the algorithm stops before all agents receive a bundle. Let \(i\) be an agent with no bundle. Let \(\hat{B}_{j}\) be the \(j\)-th bundle after bag-filling. **Observation 1**.: _For all \(j,k\) such that \(j\leq k\leq n\), \(v_{i}(\hat{B}_{j})\leq 1+v_{i}(2n-k+1)\)._ Proof.: Let \(g\) be the good with the largest index in \(\hat{B}_{j}\). If \(g=2n-j+1\), \(v_{i}(\hat{B}_{j}\setminus\{g\})=v_{i}(j)\leq 1\) by Proposition 2(1). If \(g>2n-j+1\), meaning that \(g\) was added to \(\hat{B}_{j}\) during bag-filling, then \(v_{i}(\hat{B}_{j}\setminus\{g\})<1\). Otherwise, \(g\) would not be added to \(\hat{B}_{j}\). Therefore, \[v_{i}(\hat{B}_{j}) =v_{i}(\hat{B}_{j}\setminus\{g\})+v_{i}(g)\] \[\leq 1+v_{i}(2n-k+1). (v_{i}(\hat{B}_{j}\setminus\{g\})\leq 1\text{ and }g\geq 2n-k+1)\] **Observation 2**.: _For all \(j,k\) such that \(k\leq j\leq n\), \(v_{i}(\hat{B}_{j})\leq\max(1+v_{i}(2n-k+1),2v_{i}(k))\)._ Figure 2: Bag initialization Proof.: First, assume \(\hat{B}_{j}\neq B_{j}\) and \(g\) be the last good added to \(\hat{B}_{j}\). We have \(v_{i}(\hat{B}_{j}\setminus\{g\})<1\). Otherwise, \(g\) would not be added to \(\hat{B}_{j}\). Therefore, \[v_{i}(\hat{B}_{j}) =v_{i}(\hat{B}_{j}\setminus\{g\})+v_{i}(g)\] \[<1+v_{i}(2n-k+1). (v_{i}(\hat{B}_{j}\setminus\{g\})<1\text{ and }g>2n-k+1)\] Now assume \(\hat{B}_{j}=B_{j}\). We have \[v_{i}(\hat{B}_{j}) =v_{i}(B_{j})\] \[=v_{i}(j)+v_{i}(2n-j+1)\] \[\leq 2v_{i}(k). (2n-j+1>j\geq k)\] Hence, \(v_{i}(\hat{B}_{j})\leq\max(1+v_{i}(2n-k+1),2v_{i}(k))\). **Observation 3**.: _There exists a bag \(B_{j}\), such that \(v_{i}(B_{j})<1\)._ Proof.: Otherwise, the algorithm would allocate the remaining bag with the smallest index to agent \(i\). Let \(\ell^{*}\) be the smallest such that \(v_{i}(B_{\ell^{*}+1})<1\). I.e., \(B_{\ell^{*}+1}\) is the leftmost bag in Figure 2 with a value less than \(1\) to agent \(i\). In Section 4.2, we reach a contradiction assuming \(v_{i}(2n-\ell^{*})<1/3\) and prove Theorem 2. 
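Before stating the two theorems, a compact sketch of Algorithm 1 may help fix ideas. The Python fragment below is a simplified illustration under the stated assumptions (an ordered, \(d\)-normalized instance with \(d=\lceil 4n/3\rceil\), so every agent's \(1\)-out-of-\(d\) MMS value equals \(1\)); it is not a verbatim transcription of the paper's pseudocode.

```python
def bag_filling(n, values):
    """Sketch of Algorithm 1 on an ordered, d-normalized instance.

    values[i][g] is agent i's value for good g (0-indexed), non-increasing
    in g and summing to d = ceil(4 * n / 3), so each agent's 1-out-of-d
    MMS value is 1.  Assumes m >= 2n (dummy zero-value goods otherwise).
    Returns {agent: bundle}, or None if goods run out; Theorem 1 shows
    the latter cannot happen.
    """
    m = len(values[0])
    bags = [{k, 2 * n - k - 1} for k in range(n)]  # B_{k+1} = {k+1, 2n-k}
    remaining = list(range(2 * n, m))              # goods for bag-filling
    unsatisfied = set(range(n))                    # agents without a bag
    allocation = {}
    for j in range(n):
        bag = bags[j]
        while True:
            happy = [i for i in unsatisfied
                     if sum(values[i][g] for g in bag) >= 1]
            if happy:
                break
            if not remaining:
                return None
            bag.add(remaining.pop(0))              # next highest-valued good
        agent = happy[0]                           # any satisfied agent
        unsatisfied.remove(agent)
        allocation[agent] = bag
    return allocation
```

Any goods left over after all \(n\) bags are assigned can be added to an arbitrary bundle, since valuations are non-negative and this can only increase bundle values.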
**Theorem 2**.: _If Algorithm 1 does not allocate a bag to some agent \(i\), then \(v_{i}(2n-\ell^{*})\geq 1/3\) where \(\ell^{*}\) is the smallest index such that \(v_{i}(B_{\ell^{*}+1})<1\)._ In Section 4.1, we reach a contradiction assuming \(v_{i}(2n-\ell^{*})\geq 1/3\) and prove Theorem 3. **Theorem 3**.: _If Algorithm 1 does not allocate a bag to some agent \(i\), then \(v_{i}(2n-\ell^{*})<1/3\) where \(\ell^{*}\) is smallest such that \(v_{i}(B_{\ell^{*}+1})<1\)._ By Theorems 2 and 3, agent \(i\) who receives no bundle by the end of Algorithm 1 does not exist, and Theorem 1 follows. ### \(\mathbf{v_{i}(2n-\ell^{*})\geq 1/3}\) In this section we assume \(v_{i}(2n-\ell^{*})=1/3+x\) for \(x\geq 0\). We define \(A^{+}:=\{B_{1},B_{2},\ldots,B_{\ell^{*}}\}\); see Figure 3. Figure 3: An illustration of which group each bag belongs to. **Observation 4**.: _For all \(B_{j}\in A^{+}\), \(\hat{B}_{j}=B_{j}\)._ Proof.: For all \(B_{j}\in A^{+}\), \(v_{i}(B_{j})\geq 1\). Since \(i\) did not receive any bundle, \(B_{j}\) must have been assigned to some other agent, and no good needed to be added to \(B_{j}\) in bag-filling since there is an agent (namely \(i\)) with no bag who values \(B_{j}\) at least \(1\). **Observation 5**.: _For all \(j\geq 2n-\ell^{*}\), \(v_{i}(j)<1/2\)._ Proof.: Since \(v_{i}(B_{\ell^{*}+1})=v_{i}(\ell^{*}+1)+v_{i}(2n-\ell^{*})<1\) and \(v_{i}(2n-\ell^{*})\leq v_{i}(\ell^{*}+1)\), \(v_{i}(2n-\ell^{*})<1/2\). Also for all \(j\geq 2n-\ell^{*}\), \(v_{i}(j)\leq v_{i}(2n-\ell^{*})<1/2\). **Corollary 1** (of Observation 5).: \(x<1/6\). Let \(s\) be the smallest such that either the algorithm stops at step \(s+1\) or \(B_{s+1}\) gets more than one good in bag-filling. **Observation 6**.: \(s\geq\ell^{*}\). Proof.: For all \(j<\ell^{*}\), \(v_{i}(B_{j+1})\geq 1\). Since \(i\) did not receive any bundle, \(B_{j+1}\), must have been assigned to another agent. Therefore, the algorithm does not stop at step \(j+1\). Also, by Observation 4, \(B_{j+1}\) gets no good in bag-filling. Let \(A^{1}\) be the set of bags in \(\{B_{\ell^{*}+1},\ldots,B_{s}\}\) which receive exactly one good in bag-filling. Formally, \(A^{1}=\{B_{j}|\ell^{*}<j\leq s\text{ and }|\hat{B}_{j}|=3\}\). Let \(A^{2}=\{B_{1},B_{2},\ldots,B_{n}\}\setminus(A^{+}\cup A^{1})\). **Lemma 4**.: _For all \(B_{j}\in A^{2}\), \(v_{i}(B_{j})<4/3-2x\)._ Proof.: We have \[1 >v_{i}(B_{\ell^{*}+1})\] \[=v_{i}(\ell^{*}+1)+v_{i}(2n-\ell^{*})\] \[=v_{i}(\ell^{*}+1)+\frac{1}{3}+x.\] Hence, \(v_{i}(\ell^{*}+1)<2/3-x\). Also, for \(B_{j}\in A^{2}\), we have \[v_{i}(B_{j}) =v_{i}(j)+v_{i}(2n-j+1)\] \[\leq 2v_{i}(\ell^{*}+1) (2n-j+1>j\geq\ell^{*}+1\text{ since }B_{j}\in A^{2})\] \[<\frac{4}{3}-2x.\] So if \(\hat{B}_{j}=B_{j}\), the inequality holds. Now assume \(\hat{B}_{j}\neq B_{j}\). This implies that \(j\geq s+1\) and the algorithm did not stop at step \(j\) before adding a good to \(B_{j}\). Therefore it did not stop at step \(s+1\) before adding a good to \(B_{s+1}\) either. Let \(g\) be the first good added to \(B_{s+1}\). Since \(B_{s+1}\) requires more than one good, \[1 >v_{i}(B_{s+1}\cup\{g\})\] \[=v_{i}(s+1)+v_{i}(n-s)+v_{i}(g)\] \[\geq 2v_{i}(2n-\ell^{*})+v_{i}(g) (s+1<n-s\leq 2n-\ell^{*})\] \[=\frac{2}{3}+2x+v_{i}(g).\] Therefore, \(v_{i}(g)<1/3-2x\). Now let \(h\) be the last good added to bag \(B_{j}\). 
We have \[v_{i}(\hat{B}_{j}) =v_{i}(\hat{B}_{j}\setminus\{h\})+v_{i}(h)\] \[<1+v_{i}(g) (v_{i}(\hat{B}_{j}\setminus\{h\})<1\text{ and }v_{i}(h)\leq v_{i}(g))\] \[<\frac{4}{3}-2x.\] **Lemma 5**.: _For all \(B_{j}\in A^{+}\cup A^{1}\), \(v_{i}(\hat{B}_{j})\leq 4/3+x\)._ Proof.: First assume \(B_{j}\in A^{+}\). We have \(j\leq\ell^{*}\). Also, \[v_{i}(\hat{B}_{j}) =v_{i}(B_{j}) (\hat{B}_{j}=B_{j})\] \[=v_{i}(j)+v_{i}(2n-j+1)\] \[\leq 1+v_{i}(2n-\ell^{*}) (v_{i}(j)\leq 1\text{ and }2n-j+1>2n-\ell^{*})\] \[=\frac{4}{3}+x.\] Now assume \(B_{j}\in A^{1}\). Let \(g\) be the good added to bag \(B_{j}\) in bag-filling. We have, \[v_{i}(\hat{B}_{j}) =v_{i}(B_{j})+v_{i}(g)\] \[<1+v_{i}(2n-\ell^{*}) (v_{i}(B_{j})<1\text{ and }v_{i}(g)\leq v_{i}(2n+1)\leq v_{i}(2n-\ell^{*}))\] \[=\frac{4}{3}+x.\] Let \(|A^{1}|=2n/3+\ell\). Then \(|A^{2}|=n-\ell^{*}-(2n/3+\ell)=n/3-(\ell+\ell^{*})\). If \(\ell+\ell^{*}\leq 0\), then \(|A^{2}|\geq n/3\) and hence there are at least \(n/3\) bags with value less than \(4/3-2x\) (by Lemma 4) and at most \(2n/3\) bags with value at most \(4/3+x\) (by Lemma 5). Hence, \[v_{i}(M)<\frac{n}{3}(\frac{4}{3}-2x)+\frac{2n}{3}(\frac{4}{3}+x)=\frac{4n}{3}\] which is a contradiction since \(v_{i}(M)=4n/3\). So assume \(\ell+\ell^{*}>0\). Limit the items in a \(1\)-out-of-\(4n/3\) MMS partition \(P^{i}=(P^{i}_{1},\ldots,P^{i}_{4n/3})\) of agent \(i\) to \(\{1,\ldots,8n/3+\ell\}\) and let \(Q\) be the set of bags in \(P^{i}\) containing goods \(\{1,\ldots,\ell^{*}\}\). Formally, \(Q=\{P^{i}_{j}\cap\{1,\ldots,8n/3+\ell\}:|P^{i}_{j}\cap\{1,\ldots,\ell^{*}\}| \geq 1\}\). Let \(t\) be the number of bags of size \(1\) in \(Q\). **Lemma 6**.: _Let \(t\) be the number of bags of size \(1\) in \(Q=\{P^{i}_{j}\cap\{1,\ldots 8n/3+\ell\}:|P^{i}_{j}\cap\{1,\ldots\ell^{*}\}| \geq 1\}\). Then,_ \[v_{i}( \{8n/3-2\ell-t-2\ell^{*}+1,\ldots,8n/3+\ell\}\] \[\cup\{t+1,\ldots,\ell^{*}\}\] \[\cup\{2n-\ell^{*}+1,\ldots,2n-t\})\leq 2\ell^{*}+\ell-t.\] The items considered in Lemma 6 are marked with blue in Figure 4. First, we prove that the goods mentioned in Lemma 6 are distinct. To that end, it suffices to prove that \(8n/3-2\ell-t-2\ell^{*}+1>2n-t\). It follows from the fact that \(2n/3+\ell+\ell^{*}\leq n\). Before proving Lemma 6, let us show how to obtain a contradiction assuming this lemma holds. Note that since there are bags with value less than \(4/3-2x\) (namely the bags in \(A^{2}\)), it suffices to prove that there exists \(3(\ell+\ell^{*})\) other bags with total value \(4(\ell+\ell^{*})\). Since the remaining \(2n/3-2\ell-2\ell^{*}\) bags are of value at most \(4/3+x\) (by Lemma 5), we get \[v_{i}(M)<(\frac{n}{3}-\ell-\ell^{*})(\frac{4}{3}-2x)+(\frac{2n}{3}-2\ell-2\ell ^{*})(\frac{4}{3}+x)+4(\ell+\ell^{*})=\frac{4n}{3} \tag{4}\] which is a contradiction since \(v_{i}(M)=4n/3\). Now consider \(B=\{\hat{B}_{1},\ldots,\hat{B}_{2\ell+t+2\ell^{*}-2n/3},\hat{B}_{t+1},\ldots, \hat{B}_{\ell^{*}}\}\cup\hat{A}^{1}\) where \(\hat{A}^{1}\) is the set of bags in \(A^{1}\) after bag-filling. \(B\) consists of \(3(\ell+\ell^{*})\) bags. Now we prove that \(v_{i}(\bigcup_{B_{j}\in B}B_{j})\leq 4(\ell+\ell^{*})\). We have \[v_{i}(\bigcup_{\hat{B}_{j}\in B}\hat{B}_{j}) \leq v_{i}(\bigcup_{B_{j}\in A^{1}}B_{j})\] \[+v_{i}(\{1,\ldots,2\ell+t+2\ell^{*}-2n/3\})\] \[+v_{i}(\{8n/3-2\ell-t-2\ell^{*}+1,\ldots,8n/3+\ell\}\] \[\qquad\cup\{t+1,\ldots,\ell^{*}\}\] \[\qquad\cup\{2n-\ell^{*}+1,\ldots,2n-t\}).\] We bound the value of the goods marked with different colors in different inequalities. 
**Observation 7**.: _For all \(B_{j}\in A^{1}\), \(v_{i}(B_{j})<1\)._ Since \(|A^{1}|=2n/3+\ell\), \[v_{i}(\bigcup_{B_{j}\in A^{1}}B_{j})<2n/3+\ell.\] Also, since all goods are of value at most \(1\) to agent \(i\), \[v_{i}(\{1,\ldots,2\ell+t+2\ell^{*}-2n/3\})\leq 2\ell+t+2\ell^{*}-2n/3.\] Figure 4: The first \(2\ell+t+2\ell^{*}-2n/3\) items are marked with red, and the items considered in Lemma 6 are marked with blue. By Lemma 6, \[v_{i}(\{8n/3-2\ell-t-2\ell^{*}+1,\ldots,8n/3+\ell\}\] \[\quad\cup\{t+1,\ldots,\ell^{*}\}\] \[\quad\cup\{2n-\ell^{*}+1,\ldots,2n-t\})\leq 2\ell^{*}+\ell-t.\] By adding all the inequalities, we get \[v_{i}(\bigcup_{\hat{B}_{j}\in B}\hat{B}_{j})\leq 4(\ell+\ell^{*}).\] Hence, Inequality (4) holds, which is a contradiction. So the case of \(v_{i}(2n-\ell^{*})\geq 1/3\) cannot arise. **Theorem 3**.: _If Algorithm 1 does not allocate a bag to some agent \(i\), then \(v_{i}(2n-\ell^{*})<1/3\) where \(\ell^{*}\) is smallest such that \(v_{i}(B_{\ell^{*}+1})<1\)._ In the rest of this section, we prove Lemma 6. #### 4.1.1 Proof of Lemma 6 To prove Lemma 6, we partition the goods considered in this lemma into two parts. These parts are colored red and blue in Figure 5. We bound the value of red goods in Lemma 9, i.e., \[\sum_{2n-\ell^{*}<j\leq 2n-t}v_{i}(j)+\sum_{8n/3-2\ell-t-2\ell^{*}<j\leq 8n/3-2 \ell-2t-\ell^{*}}v_{i}(j)<\ell^{*}-t,\] and the value of the blue goods in Lemma 10, i.e., \[\sum_{t<j\leq\ell^{*}}v_{i}(j)+\sum_{8n/3-2\ell-2t-\ell^{*}<j\leq 8n/3+\ell}v_{i} (j)\leq\ell^{*}+\ell.\] Thereafter, we have \[v_{i}(\{8n/3-2\ell-t-2\ell^{*}+1,\ldots,8n/3+\ell\}\] \[\quad\cup\{t+1,\ldots,\ell^{*}\}\] \[\quad\cup\{2n-\ell^{*}+1,\ldots,2n-t\})\] \[\quad=\sum_{2n-\ell^{*}<j\leq 2n-t}v_{i}(j)+\sum_{8n/3-2\ell-t-2 \ell^{*}<j\leq 8n/3-2\ell-2t-\ell^{*}}v_{i}(j)\] \[\quad+\sum_{t<j\leq\ell^{*}}v_{i}(j)+\sum_{8n/3-2\ell-2t-\ell^{* }<j\leq 8n/3+\ell}v_{i}(j)\] \[\quad<(\ell^{*}-t)+(\ell^{*}+\ell)\] (Lemma 9 and 10) \[\quad=2\ell^{*}+\ell-t,\] and Lemma 6 follows. It suffices to prove Lemmas 9 and 10. In the rest of this section, we prove these two lemmas. Limit the items in a \(1\)-out-of-\(4n/3\) MMS partition of agent \(i\) to \(\{1,\ldots,8n/3+\ell\}\) and let \(R\) be the set of the resulting bags. Formally, for all \(j\in[4n/3]\), \(R_{j}=P_{j}^{i}\cap\{1,\ldots,8n/3+\ell\}\) and \(R=\{R_{1},\ldots,R_{4n/3}\}\). Without loss of generality, assume \(|R_{1}|\geq|R_{2}|\geq\ldots\geq|R_{4n/3}|\). Let \(t\) be the number of bags of size \(1\) in \(R\). **Lemma 7**.: _If there exist \(t\) bags of size at most \(1\) in \(R\), then_ \[\sum_{1\leq j\leq t+\ell}|R_{j}|\geq 3(t+\ell).\] Proof.: Since \(R_{j}\)'s are sorted in decreasing order of their size, \[\sum_{1\leq j\leq t+\ell}|R_{j}|\geq(t+\ell)|R_{t+\ell}|.\] Hence, if \(|R_{t+\ell}|\geq 3\), then \(\sum_{1\leq j\leq t+\ell}|R_{j}|\geq 3(t+\ell).\) So assume \(|R_{t+\ell}|\leq 2\). \[\frac{8n}{3}+\ell =\sum_{1\leq j\leq 4n/3}|R_{j}|\] \[=\sum_{1\leq j\leq t+\ell}|R_{j}|+\sum_{t+\ell<j\leq 4n/3-t}|R_{j}|+ \sum_{4n/3-t<j\leq 4n/3}|R_{j}|\] \[\leq\sum_{1\leq j\leq t+\ell}|R_{j}|+(\frac{4n}{3}-2t-\ell)|R_{t+ \ell}|+t\] \[\leq\sum_{1\leq j\leq t+\ell}|R_{j}|+2(\frac{4n}{3}-2t-\ell)+t\] Therefore, \[\sum_{j\in[t+\ell]}|R_{j}|\geq 3(\ell+t).\] **Lemma 8**.: \(\ell+\ell^{*}+t\leq 4n/3.\)__ Proof.: We have \(\ell^{*}+2n/3+\ell\leq s\leq n\). See Figure 4 for intuition. Therefore, \(\ell^{*}+\ell\leq n/3\). Also, \(t\leq\ell^{*}\leq n\). Hence \(\ell+\ell^{*}+t\leq 4n/3\). 
Figure 5: The items considered in Lemma 9 are marked with red and the items in Lemma 10 are marked with blue. **Lemma 9**.: \[\sum_{2n-\ell^{*}<j\leq 2n-t}v_{i}(j)+\sum_{8n/3-2\ell-t-2\ell^{*}<j\leq 8n/3-2 \ell-2t-\ell^{*}}v_{i}(j)<\ell^{*}-t.\] Proof.: Let \(B^{\prime}=\{2n-\ell^{*}+1,\ldots,2n-t\}\cup\{8n/3-2\ell-t-2\ell^{*}+1,\ldots,8n /3-2\ell-2t-\ell^{*}\}\). Then \(|B^{\prime}|=2(\ell^{*}-t)\), and by Observation 5, for all goods \(g\in B^{\prime}\), \(v_{i}(g)<1/2\). Therefore, \(v_{i}(B^{\prime})<\ell^{*}-t\). **Lemma 10**.: \[\sum_{t<j\leq\ell^{*}}v_{i}(j)+\sum_{8n/3-2\ell-2t-\ell^{*}<j\leq 8n/3+\ell}v_{i} (j)\leq\ell^{*}+\ell.\] Proof.: Recall that \(\{R_{1},\ldots,R_{4n/3}\}\) is the set of bags in the \(1\)-out-of-\(4n/3\) MMS partition of agent \(i\) after removing items \(\{8n/3+\ell+1,\ldots,m\}\). Moreover, we know exactly \(t\) of these bags have size \(1\). If there is a bag \(R_{j}=\{g\}\) for \(g>t\), there must be a good \(g^{\prime}\in[t]\) such that \(g^{\prime}\in R_{j^{\prime}}\) and \(|R_{j^{\prime}}|>1\). Swap the goods \(g\) and \(g^{\prime}\) between \(R_{j}\) and \(R_{j^{\prime}}\) as long as such a good \(g\) exists. Note that \(v_{i}(R_{j^{\prime}})\) can only decrease and \(v_{i}(R_{j})=v_{i}(g^{\prime})\leq 1\). Therefore, at the end of this process, for all \(j\in[4n/3]\), \(v_{i}(R_{j})\leq 1\), and we can assume bags containing goods \(1,\ldots,t\) are of size \(1\) and bags containing goods \(t+1,\ldots,\ell^{*}\) are of size more than \(1\). Recall that \(|R_{1}|\geq\ldots\geq|R_{4n/3}|\). Let \(T_{j}\) be the bag that contains good \(j\). Consider the bags \(B=\{R_{1},\ldots,R_{t+\ell}\}\cup\{T_{t+1},\ldots,T_{\ell^{*}}\}\). If \(|B|<\ell^{*}+\ell\), keep adding a bag with the largest number of items to \(B\) until there are exactly \(\ell^{*}+\ell\) bags in \(B\). First we show that \(B\) contains at least \(3\ell+2\ell^{*}+t\) goods. Namely, \[\sum_{S\in B}|S|\geq 3\ell+2\ell^{*}+t.\] By Lemma 7, \(\sum_{1\leq j\leq t+\ell}|R_{j}|\geq 3(t+\ell)\). If all the remaining \(\ell^{*}-t\) bags in \(B\setminus\{R_{1},\ldots,R_{t+\ell}\}\) are of size \(2\), then \(\sum_{S\in B}|S|\geq 3(t+\ell)+2(\ell^{*}-t)=3\ell+2\ell^{*}+t\). Otherwise, there is a bag in \(B\) of size at most \(1\); hence, all bags outside \(B\) are also of size at most \(1\). So we have \[\frac{8n}{3}+\ell =\sum_{S\in B}|S|+\sum_{S\notin B}|S|\] \[\leq\sum_{S\in B}|S|+(\frac{4n}{3}-\ell^{*}-\ell).\] Therefore, \[\sum_{S\in B}|S| \geq 4n/3+2\ell+\ell^{*}\] \[\geq 3\ell+2\ell^{*}+t.\] (Lemma 8) Note that the goods \(\{t+1,\ldots,\ell^{*}\}\) are contained in \(B\) and moreover, \(B\) contains at least \(3\ell+2\ell^{*}+t-(\ell^{*}-t)=3\ell+2t+\ell^{*}\) other goods. Therefore, \[\ell^{*}+\ell \geq\sum_{S\in B}v_{i}(S)\] \[\geq\sum_{t<j\leq\ell^{*}}v_{i}(j)+\sum_{8n/3-2\ell-2t-\ell^{*}<j \leq 8n/3+\ell}v_{i}(j).\] The last inequality follows because we used the \(3\ell+2t+\ell^{*}\) lowest valued goods in \([8n/3+\ell]\). ### \(\mathbf{v_{i}(2n-\ell^{*})<1/3}\) Let \(r^{*}\) be the largest index such that \(v_{i}(B_{r^{*}})<1\). That is, \(B_{r^{*}}\) is the rightmost bag in Figure 2 with a value less than \(1\) to agent \(i\). **Lemma 11**.: _If \(v_{i}(2n-r^{*}+1)\leq 1/3\), then \(r^{*}<2n/3\)._ Proof.: Write \(v_{i}(2n-r^{*}+1)=1/3-x\) for some \(x\geq 0\). Since \(1>v_{i}(B_{r^{*}})=v_{i}(r^{*})+v_{i}(2n-r^{*}+1)\), we have \(v_{i}(r^{*})<2/3+x\). By Observation 1, for all \(j\leq r^{*}\), \(v_{i}(\hat{B}_{j})\leq 4/3-x\). Also, by Observation 2, for all \(j>r^{*}\), \(v_{i}(\hat{B}_{j})<4/3+2x\). 
Hence, we have \[\frac{4n}{3} =v_{i}(M)\] \[=\sum_{j\leq r^{*}}v_{i}(\hat{B}_{j})+\sum_{j>r^{*}}v_{i}(\hat{B} _{j})\] \[<r^{*}(\frac{4}{3}-x)+(n-r^{*})(\frac{4}{3}+2x)\] \[=\frac{4n}{3}+x(2n-3r^{*}).\] Therefore, \(r^{*}<2n/3\). **Lemma 12**.: \(v_{i}(2n-r^{*}+1)>1/3\)_._ Proof.: Towards contradiction, assume \(v_{i}(2n-r^{*}+1)=1/3-x\) for \(x\geq 0\). By Lemma 11, \(r^{*}<2n/3\). **Claim 1**.: \(\sum_{j>r^{*}}v_{i}(\hat{B}_{j})<\frac{10n}{9}-r^{*}+\frac{2nx}{3}\)_._ Proof.: Note that by the definition of \(r^{*}\), for all \(j>r^{*}\), \(\hat{B}_{j}=B_{j}\). By Lemma 3, \(v_{i}(\{2n/3+r^{*}+1,\ldots,2n-r^{*}\})\leq 2n/3-r^{*}\). Also since \(v_{i}(r^{*})<2/3+x\), \(v_{i}(\{r^{*}+1,\ldots,2n/3+r^{*}\})\leq\frac{2n}{3}(\frac{2}{3}+x)\). In total, we get \[\sum_{j>r^{*}}v_{i}(\hat{B}_{j}) =\sum_{j>r^{*}}v_{i}(B_{j})\] \[=v_{i}(\{r^{*}+1,\ldots,2n/3+r^{*}\})+v_{i}(\{2n/3+r^{*}+1,\ldots,2n-r^{*}\})\] \[<\frac{2n}{3}(\frac{2}{3}+x)+\frac{2n}{3}-r^{*}\] \[=\frac{10n}{9}-r^{*}+\frac{2nx}{3}.\] Therefore, Claim 1 holds. We have \[\frac{4n}{3} =v_{i}(M)\] \[=\sum_{j\leq r^{*}}v_{i}(\hat{B}_{j})+\sum_{j>r^{*}}v_{i}(\hat{B} _{j})\] \[<r^{*}(\frac{4}{3}-x)+\frac{10n}{9}-r^{*}+\frac{2nx}{3}\] (Observation 1 and Claim 1) \[=r^{*}(\frac{1}{3}-x)+\frac{10n}{9}+\frac{2nx}{3}.\] Thus, \[\frac{2n}{9} <r^{*}(\frac{1}{3}-x)+\frac{2n}{3}(x)\] \[\leq\frac{2n}{3}\cdot\frac{1}{3}, (r^{*}\leq 2n/3\text{ by Lemma 11})\] which is a contradiction. Hence, \(v_{i}(2n-r^{*}+1)>1/3\). Recall that \(\ell^{*}\) is the smallest index such that \(v_{i}(B_{\ell^{*}+1})<1\), i.e., \(B_{\ell^{*}+1}\) is the leftmost bag in Figure 2 with value less than \(1\) to agent \(i\). Let \(\ell\) be the largest index such that \(v_{i}(B_{\ell})<1\) and \(v_{i}(2n-\ell+1)\leq 1/3\). Since \(v_{i}(B_{\ell^{*}+1})<1\) and \(v_{i}(2n-\ell^{*})<1/3\), such an index exists and \(\ell\geq\ell^{*}+1\). Also, let \(r\) be the smallest index such that \(v_{i}(B_{r+1})<1\) and \(v_{i}(2n-r)\geq 1/3\). Again, since \(v_{i}(B_{r^{*}})<1\) and \(v_{i}(2n-r^{*}+1)>1/3\), such an index exists. We set \(x:=1/3-v_{i}(2n-\ell+1)\) and \(y:=v_{i}(2n-r)-1/3\). See Figure 6. **Observation 8**.: \(x<1/3\)_._ Proof.: Towards a contradiction, assume \(x=1/3\). Therefore, \(v_{i}(2n-\ell+1)=0\). Let \(k<2n-\ell+1\) be the number of goods with a value larger than \(0\) to agent \(i\). Consider \((P_{1}^{i}\cap[k],\ldots,P_{4n/3}^{i}\cap[k])\). There are at least \(\ell\) many indices \(j\) such that \(|P_{j}^{i}\cap[k]|=1\). Since \(\mathcal{I}\) is \(4n/3\)-normalized, \(v_{i}(1)=\ldots=v_{i}(\ell)=1\), which is a contradiction with \(v_{i}(B_{\ell^{*}+1})<1\). **Observation 9**.: \(y<1/6\)_._ Proof.: We have \(1/3+y=v_{i}(2n-r)\leq v_{i}(B_{r+1})/2<1/2\), since \(2n-r\) is the lower-valued good in \(B_{r+1}=\{r+1,2n-r\}\) and \(v_{i}(B_{r+1})<1\). Thus, \(y<1/6\). **Corollary 2** (of Observation 1).: _For all \(j\leq\ell\), \(v_{i}(\hat{B}_{j})\leq 4/3-x\)._ **Corollary 3** (of Observation 2).: _For all \(j>r\), \(v_{i}(\hat{B}_{j})\leq\max(4/3-x,4/3-2y)\)._ **Observation 10**.: _For all \(\ell<j\leq r\), \(1\leq v_{i}(\hat{B}_{j})<1+x+y\)._ Proof.: Note that by definition of \(\ell\) and \(r\), for all \(\ell<j\leq r\), \(v_{i}(B_{j})\geq 1\). Therefore, \(\hat{B}_{j}=B_{j}\). 
Also, \[v_{i}(B_{j}) =v_{i}(j)+v_{i}(2n-j+1)\] \[\leq v_{i}(\ell)+v_{i}(2n-r) (\ell<j\text{ and }2n-r<2n-j+1)\] \[<(\frac{2}{3}+x)+(\frac{1}{3}+y) (v_{i}(B_{\ell})<1\text{ and }v_{i}(B_{r+1})<1)\] \[=1+x+y.\] **Lemma 13**.: \(r-\ell>2n/3\)_._ Proof.: If \(x+y\leq 1/3\), then by Corollaries 2 and 3 and Observation 10, for all \(t\in[n]\) we have \(v_{i}(\hat{B}_{t})\leq 4/3\) and for at least one bag this value is less than \(1\) by Observation 3. Therefore, \(v_{i}(M)<4n/3\), which is a contradiction. Thus, \(x+y>1/3\). We have \[\frac{4n}{3} =v_{i}(M)\] \[=\sum_{j\leq\ell}v_{i}(\hat{B}_{j})+\sum_{\ell<j\leq r}v_{i}(\hat {B}_{j})+\sum_{j>r}v_{i}(\hat{B}_{j})\] \[\leq\ell(\frac{4}{3}-x)+(r-\ell)(1+x+y)+(n-r)\max(\frac{4}{3}-x, \frac{4}{3}-2y)\] \[\leq(r-\ell)(1+x+y)+(n-r+\ell)\max(\frac{4}{3}-x,\frac{4}{3}-2y)\] (Corollaries 2 and 3 and Observation 10) \[=\frac{4n}{3}+(r-\ell)(x+y-\frac{1}{3})-(n-r+\ell)\min(x,2y).\] Therefore, \((r-\ell)(x+y-1/3)\geq(n-r+\ell)\min(x,2y)\). By Observation 8, \(x<1/3\) and thus, we have \(x+y-1/3<y\). Also, since \(y<1/6\) (by Observation 9), we have \(x+y-1/3<x-1/6<x/2\). Thus, \(x+y-1/3<\min(x,2y)/2\). Hence, \(r-\ell>2(n-r+\ell)\) and therefore, \(r-\ell>2n/3\). Let \(r-\ell=2n/3+s\). Recall that \(P^{i}=(P_{1}^{i},\ldots,P_{4n/3}^{i})\) is an \((4n/3)\)-MMS partition of \(M\) for agent \(i\). Since \(i\) is fixed, we use \(P=(P_{1},\ldots,P_{4n/3})\) instead for ease of notation. For all \(j\in[4n/3]\), let \(g_{j}\) be good with the smallest index (and hence the largest value) in \(P_{j}\). Without loss of generality, assume \(g_{1}<g_{2}<\ldots<g_{4n/3}\). Observe that \(\{1,\ldots,r\}\subseteq\cup_{k\in[r]}P_{k}\). Let \(S^{\prime}\) be the set of goods in \(\{r+1,\ldots,2n-\ell\}\) that appear in the first \(r\) bags in \(P\). Formally, \(S^{\prime}=\{g\in\{r+1,\ldots,2n-\ell\}\mid g\in\cup_{j\in[r]}P_{j}\}\). Let \(s^{\prime}:=\min(|S^{\prime}|,s)\). **Lemma 14**.: \(v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-3s+2s^{\prime}+1,\ldots,2n-\ell \})\leq s\)_._ The items considered in Lemma 14 are marked with blue in Figure 7. Before proving Lemma 14, let us assume it holds and reach a contradiction. Since \(v_{i}(\ell)<1-v_{i}(2n-\ell+1)=2/3+x\), we have \[v_{i}(\{\ell+1,\ldots,r-s^{\prime}\})<(\frac{2n}{3}+s-s^{\prime})(\frac{2}{3} +x). \tag{5}\] Also, since \(v_{i}(2n-r+1)=1/3+y\), \[v_{i}(\{2n-r+1,\ldots,2n-\ell-3s+2s^{\prime}\})\leq(\frac{2n}{3}-2s+2s^{ \prime})(\frac{1}{3}+y). \tag{6}\] Figure 7: The items considered in Lemma 14 are marked with blue. 
Therefore, \[\sum_{\ell<j\leq r}v_{i}(\hat{B}_{j}) =\sum_{\ell<j\leq r}v_{i}(B_{j})\] \[=v_{i}(\{\ell+1,\ldots,r\}\cup\{2n-r+1,\ldots,2n-\ell\})\] \[=v_{i}(\{\ell+1,\ldots,r-s^{\prime}\})\] \[\qquad+v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-3s+2s^{ \prime}+1,\ldots,2n-\ell\})\] \[\qquad+v_{i}(\{2n-r+1,\ldots,2n-\ell-3s+2s^{\prime}\})\] \[<(\frac{2n}{3}+s-s^{\prime})(\frac{2}{3}+x)+s+(\frac{2n}{3}-2s+2 s^{\prime})(\frac{1}{3}+y)\] (Inequalities (5) and (6) and Lemma 14) \[=\frac{2n}{3}(1+x+y)+(s-s^{\prime})(x-2y)+s.\] Thus, \[\frac{4n}{3} =v_{i}(M)\] \[=\sum_{j\leq\ell}v_{i}(\hat{B}_{j})+\sum_{\ell<j\leq r}v_{i}( \hat{B}_{j})+\sum_{j>r}v_{i}(\hat{B}_{j})\] \[<(\ell+n-r)\max(\frac{4}{3}-x,\frac{4}{3}-2y)+\frac{2n}{3}(1+x+y )+(s-s^{\prime})(x-2y)+s\] (Corollaries 2 and 3) \[=(\frac{n}{3}-s)\max(\frac{4}{3}-x,\frac{4}{3}-2y)+\frac{2n}{3}(1+ x+y)+(s-s^{\prime})(x-2y)+s.\] If \(x\leq 2y\), then by replacing \(\max(4/3-x,4/3-2y)\) with \(4/3-x\) in the above inequality, we get \[\frac{4n}{3} <(\frac{n}{3}-s)(\frac{4}{3}-x)+\frac{2n}{3}(1+x+y)+(s-s^{\prime })(x-2y)+s\] \[\leq\frac{n}{3}(\frac{4}{3}-x)+\frac{2n}{3}(1+x+y)+(s-s^{\prime} )(x-2y) (4/3-x\geq 1)\] \[\leq\frac{n}{3}(\frac{10}{3}+x+2y) ((s-s^{\prime})(x-2y)\leq 0)\] \[<\frac{4n}{3}, (x\leq 1/3\text{ and }y<1/6)\] which is a contradiction. If \(2y<x\), by replacing \(\max(4/3-x,4/3-2y)\) with \(4/3-2y\), we get \[\frac{4n}{3} <(\frac{n}{3}-s)(\frac{4}{3}-2y)+\frac{2n}{3}(1+x+y)+(s-s^{ \prime})(x-2y)+s\] \[=\frac{n}{3}(\frac{10}{3}+2x)-s(\frac{1}{3}-x)-s^{\prime}(x-2y)\] \[\leq\frac{n}{3}(\frac{10}{3}+2x) (x\leq 1/3\text{ and }x>2y)\] \[\leq\frac{4n}{3}, (x\leq 1/3)\] which is again a contradiction. Therefore, it is not possible that \(v_{i}(2n-\ell^{*})<1/3\). Thus, Theorem 2 follows. **Theorem 2**.: _If Algorithm 1 does not allocate a bag to some agent \(i\), then \(v_{i}(2n-\ell^{*})\geq 1/3\) where \(\ell^{*}\) is the smallest index such that \(v_{i}(B_{\ell^{*}+1})<1\)._ It only remains to prove Lemma 14. The main idea is as follows. Recall that \(s^{\prime}=\min(|S^{\prime}|,s)\). We consider two cases for \(s^{\prime}\). If \(s^{\prime}=s\), then in order to prove Lemma 14, we must prove \[v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-s^{\prime}+1,\ldots,2n-\ell\}) \leq s^{\prime},\] which is what we do in Claim 3. In case \(s^{\prime}=|S^{\prime}|\), we prove \[v_{i}(\{r-s^{\prime}+1,\ldots,r\})+v_{i}(S^{\prime})\leq s^{\prime}\] in Claim 4 and \[v_{i}(\{2n-\ell-3s+2s^{\prime}+1,\ldots,2n-\ell\})-v_{i}(S^{\prime})\leq s-s^ {\prime}\] in Claim 5. Adding the two sides of the inequalities implies Lemma 14. We prove this lemma in Section 4.2.1. #### 4.2.1 Proof of Lemma 14 **Lemma 14**.: \(v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-3s+2s^{\prime}+1,\ldots,2n-\ell \})\leq s\)_._ Note that \(\{1,\ldots,r\}\cup S^{\prime}\subseteq P_{1}\cup\ldots\cup P_{r}\). For \(j\in[r]\), let \(Q_{j}=P_{j}\cap(\{1,\ldots,r\}\cup S^{\prime})\). We begin with proving the following claim. **Claim 2**.: _There are \(s^{\prime}\) many sets like \(Q_{j_{1}},\ldots,Q_{j_{s^{\prime}}}\) such that \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}|\geq 2s^{\prime}\) and \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq s^{\prime}\)._ Proof.: If \(s^{\prime}=0\), the claim trivially holds. Thus, assume \(s^{\prime}\geq 1\). By induction, we prove that for any \(t\leq s^{\prime}\), there are \(t\) many sets like \(Q_{j_{1}},\ldots,Q_{j_{t}}\) such that \(|\cup_{k\in[t]}Q_{j_{k}}|\geq 2t\) and \(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq t\). 
Induction basis: \(t=1\).If there exists \(Q_{k}\) such that \(|Q_{k}\cap\{1,\ldots,r\}|\geq 2\), let \(j_{1}=k\). Otherwise, for all \(k\in[r]\), we have \(|Q_{k}\cap\{1,\ldots,r\}|=1\). Since \(s^{\prime}\geq 1\), there must be an index \(k\) such that \(|Q_{k}\cap S^{\prime}|\geq 1\). Let \(j_{1}=k\). Induction assumption:There are \(t\) many sets like \(Q_{j_{1}},\ldots,Q_{j_{t}}\) such that \(|\cup_{k\in[t]}Q_{j_{k}}|\geq 2t\) and \(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq t\). Now for \(t+1\leq s^{\prime}\), we prove that there are \(t+1\) many sets like \(Q_{j_{1}},\ldots,Q_{j_{t+1}}\) such that \(|\cup_{k\in[t+1]}Q_{j_{k}}|\geq 2t+2\) and \(|\cup_{k\in[t+1]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq t+1\). Case 1: \(|\cup_{k\in[t]}Q_{j_{k}}|\geq 2t+2\):If \(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq t+1\), set \(j_{t+1}=k\) for an arbitrary \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\). Otherwise, set \(j_{t+1}=k\) for an index \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\) such that \(|Q_{k}\cap\{1,\ldots,r\}|\geq 1\). Case 2: \(|\cup_{k\in[t]}Q_{j_{k}}|=2t+1\):If there exists \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\), such that \(|Q_{k}\cap[r]|\geq 1\), set \(j_{t+1}=k\). Otherwise, set \(j_{t+1}=k\) for any \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\) such that \(|Q_{k}|\geq 1\). Since \(|\cup_{j\in[r]}Q_{j}|\geq r+s^{\prime}>2t+1\), such \(k\) exists. Case 3. \(|\cup_{k\in[t]}Q_{j_{k}}|=2t\) and \(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq t+1\):\(|\cup_{k\in[r]\setminus\{j_{1},\ldots,j_{t}\}}Q_{j_{k}}|\geq r+s^{\prime}-2t>r-t\). Therefore, by pigeonhole principle, there exists an index \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\) such that \(|Q_{k}|\geq 2\). Set \(j_{t+1}=k\). **Case 4**.: \(|\cup_{k\in[t]}Q_{j_{k}}|=2t\) **and**\(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|=t\):If there exists \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\), such that \(|Q_{k}\cap[r]|\geq 2\), set \(j_{t+1}=k\). Otherwise, for all \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\), \(|Q_{k}\cap[r]|=1\) since \(|\cup_{k\in[t]}Q_{j_{k}}\cap\{1,\ldots,r\}|=t\) and \(|\cup_{k\in[r]}Q_{j_{k}}\cap\{1,\ldots,r\}|=r\). Set \(j_{t+1}=k\) for any \(k\in[r]\setminus\{j_{1},\ldots,j_{t}\}\), such that \(|Q_{k}\cap S^{\prime}|\geq 1\). Since \(|\cup_{j\in[r]}Q_{j}\cap S^{\prime}|\geq s^{\prime}>t\), such \(k\) exists. Now we prove Claim 3. **Claim 3**.: \(v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-s^{\prime}+1,\ldots,2n-\ell\}) \leq s^{\prime}\)_._ Proof.: Let \(Q^{1}\) be the set of \(s^{\prime}\) most valuable goods in \(\cup_{k\in[s^{\prime}]}Q_{j_{k}}\) and let \(Q^{2}\) be the set of \(s^{\prime}\) least valuable goods in \(\cup_{k\in[s^{\prime}]}Q_{j_{k}}\). Since \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}|\geq 2s^{\prime}\), \(Q^{1}\cap Q^{2}=\emptyset\). Also, \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq s^{\prime}\). Thus, \(v_{i}(Q^{1})\geq v_{i}(\{r-s^{\prime}+1,\ldots,r\})\). Moreover, \(v_{i}(Q^{2})\geq v_{i}(\{2n-\ell-s^{\prime}+1,\ldots,2n-\ell\})\). Hence, \[s^{\prime} =\sum_{k\in[s^{\prime}]}v_{i}(P_{j_{k}})\] \[\geq\sum_{k\in[s^{\prime}]}v_{i}(Q_{j_{k}})\] \[\geq v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup\{2n-\ell-s^{\prime}+1,\ldots,2n-\ell\}).\] Note that in case \(s^{\prime}=s\), Claim 3 implies Lemma 14. Therefore, from now on, we assume \(s^{\prime}=|S^{\prime}|<s\). **Claim 4**.: \(v_{i}(\{r-s^{\prime}+1,\ldots,r\})+v_{i}(S^{\prime})\leq s^{\prime}\)_._ Proof.: The proof is similar to the proof of Claim 3. 
Let \(Q^{1}\) be the set of \(s^{\prime}\) most valuable goods in \(\cup_{k\in[s^{\prime}]}Q_{j_{k}}\) and let \(Q^{2}\) be the set of \(s^{\prime}\) least valuable goods in \(\cup_{k\in[s^{\prime}]}Q_{j_{k}}\). Since \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}|\geq 2s^{\prime}\), \(Q^{1}\cap Q^{2}=\emptyset\). Also, \(|\cup_{k\in[s^{\prime}]}Q_{j_{k}}\cap\{1,\ldots,r\}|\geq s^{\prime}\). Thus, \(v_{i}(Q^{1})\geq v_{i}(\{r-s^{\prime}+1,\ldots,r\})\). Moreover, \(v_{i}(Q^{2})\geq v_{i}(S^{\prime})\) since \(s^{\prime}=|S^{\prime}|\). Hence, \[s^{\prime} =\sum_{k\in[s^{\prime}]}v_{i}(P_{j_{k}})\] \[\geq\sum_{k\in[s^{\prime}]}v_{i}(Q_{j_{k}})\] \[\geq v_{i}(\{r-s^{\prime}+1,\ldots,r\}\cup S^{\prime})\] \[=v_{i}(\{r-s^{\prime}+1,\ldots,r\})+v_{i}(S^{\prime}).\] **Claim 5**.: \(v_{i}(\{2n-\ell-3s+2s^{\prime}+1,\ldots,2n-\ell\})-v_{i}(S^{\prime})\leq s-s^{\prime}\)_._ Proof.: Note that by definition of \(S^{\prime}\), the \(2n-\ell-r-s^{\prime}=8n/3-2r+s-s^{\prime}\) goods in \(\{r+1,\ldots,2n-\ell\}\setminus S^{\prime}\) are in \(P_{r+1}\cup\ldots\cup P_{4n/3}\). Now for \(j\in[4n/3-r]\), let \(R_{j}=P_{j+r}\cap\{r+1,\ldots,2n-\ell\}\setminus S^{\prime}\). Assume \(|R_{j_{1}}|\geq\ldots\geq|R_{j_{4n/3-r}}|\). We prove \[\sum_{k\leq s-s^{\prime}}|R_{j_{k}}|\geq 3(s-s^{\prime}). \tag{7}\] If \(|R_{j_{s-s^{\prime}+1}}|\geq 3\), Inequality (7) holds. Otherwise, we have \[\frac{8n}{3}-2r+s-s^{\prime} =\sum_{k\in[4n/3-r]}|R_{j_{k}}|\] \[=\sum_{k\leq s-s^{\prime}}|R_{j_{k}}|+\sum_{s-s^{\prime}<k\leq 4n/3- r}|R_{j_{k}}|\] \[\leq\sum_{k\leq s-s^{\prime}}|R_{j_{k}}|+2(\frac{4n}{3}-r-s+s^{ \prime}).\qquad\qquad(|R_{j_{k}}|\leq 2\text{ for }k>s-s^{\prime})\] Thus, \(\sum_{k\in[s-s^{\prime}]}|R_{j_{k}}|\geq 3(s-s^{\prime})\). We have \[s-s^{\prime} =\sum_{k\in[s-s^{\prime}]}v_{i}(P_{j_{k}+r})\] \[\geq\sum_{k\in[s-s^{\prime}]}v_{i}(R_{j_{k}})\] \[\geq v_{i}(\{2n-\ell-3s+2s^{\prime}+1,\ldots,2n-\ell\})-v_{i}(S^ {\prime}).\ \ (|\cup_{k\in[s-s^{\prime}]}R_{j_{k}}|\geq 3(s-s^{\prime})\text{ and }|S^{ \prime}|=s^{\prime})\] Claims 4 and 5 imply Lemma 14. Recap:To show that a 1-out-of-\((4n/3)\) MMS allocation exists, it suffices to prove that we never run out of goods for bag-filling in Algorithm 1. Towards contradiction, we assumed that the algorithm stops before agent \(i\) receives a bundle. By Observation 3, a bag with a value less than 1 for agent \(i\) exists. Let \(\ell^{*}\) be the smallest such that \(v_{i}(B_{\ell^{*}+1})<1\). In Section 4.2, we reached a contradiction assuming \(v_{i}(2n-\ell^{*})<1/3\) and proved Theorem 2. In Section 4.1, we reached a contradiction assuming \(v_{i}(2n-\ell^{*})\geq 1/3\) and proved Theorem 3. Therefore, no such agent \(i\) exists, and all agents receive a bag by the end of Algorithm 1. Theorem 1 follows. ## 5 \((\alpha,\beta,\gamma)\)-MMS Allocation In this section, we show the existence of \((2(1-\beta)/\beta,\beta,3/4)\)-MMS allocation for any \(3/4<\beta<1\). Using \(\beta=\sqrt{3}/2\), this implies the existence of a randomized allocation that gives each agent at least \(3/4\) times her MMS value (ex-post) and at least \((34\sqrt{3}-48)/8\sqrt{3}>0.785\) times her MMS value in expectation (ex-ante). Given an instance \(\mathcal{I}=(N,M,V)\), without loss of generality, we assume that \(\mathcal{I}\) is \(n\)-normalized and ordered, which implies that \(\text{MMS}_{i}=1,\forall i\in N\). Since our approach is an extension of the Garg-Taki (GT) algorithm [12] for the existence of \(3/4\)-MMS allocation, we first summarize their algorithm. 
The GT algorithm has two phases: _valid reductions_ and _bag filling_. In a valid reduction, the instance is reduced by removing an agent \(a\) and a subset \(S\) of goods such that \(a\)'s value for \(S\) is at least \(3/4\) and the MMS values of the remaining agents do not decrease, i.e., \(v_{a}(S)\geq 3/4\) and \(\text{MMS}_{i}\geq 1\) for each remaining agent in the reduced instance \((N\setminus\{a\},M\setminus S,V\setminus\{v_{a}\})\). The GT algorithm utilizes the simple valid reductions with the sets of goods \(S_{1}=\{1\}\) (i.e., the highest valued good), \(S_{2}=\{n,n+1\}\), \(S_{3}=\{2n-1,2n,2n+1\}\), and \(S_{4}=\{1,2n+1\}\), in the priority order of \(S_{1}\), \(S_{2}\), \(S_{3}\), and \(S_{4}\), i.e., \(S_{k}\) is performed only when, for all \(j<k\), \(S_{j}\) is not feasible. The following lemma (proof in Appendix A) shows that for all \(k\in[4]\), \(S_{k}\) is a valid reduction if performed in the priority order.

**Lemma 15**.: _[GT21] Let \(S\in\{S_{1},S_{2},S_{3},S_{4}\}\) be the lowest index bundle for which \(\{i\in N:v_{i}(S)\geq 3/4\}\) is non-empty. Then, removing \(S\) and an agent \(a\) with \(v_{a}(S)\geq 3/4\) is a valid reduction._

Let \(\mathcal{I}^{\prime}=([n^{\prime}],[m^{\prime}],V^{\prime})\) be the instance after all the valid reductions are performed, i.e., no more valid reductions are feasible for \(\mathcal{I}^{\prime}\). This gives some information about the values of goods, shown in the following corollary.

**Corollary 4**.: _If no valid reductions are feasible for \(\mathcal{I}^{\prime}=([n^{\prime}],[m^{\prime}],V^{\prime})\), then for any agent \(i\in[n^{\prime}]\), \(v_{i}(1)<3/4\), \(v_{i}(n^{\prime}+1)<3/8\), and \(v_{i}(2n^{\prime}+1)<1/4\)._

In the bag filling phase, \(n^{\prime}\) bags are initialized using the first \(2n^{\prime}\) goods as in (3), and each bag is filled with goods in \([m^{\prime}]\setminus[2n^{\prime}]\) until some agent has a value of at least \(3/4\) for the bag. Although the GT algorithm is quite simple, the main challenge is showing that there are enough goods in \([m^{\prime}]\setminus[2n^{\prime}]\) to satisfy each agent with a value of at least \(3/4\).

Our algorithm is described in Algorithm 2. We start with an arbitrary set \(N_{1}\) of agents such that \(|N_{1}|\leq 2n(1-\beta)/\beta\), and our goal is to satisfy each of them with a value of at least \(\beta\). \(N_{2}\) is the set of remaining agents, and our goal is to satisfy each of them with a value of at least \(3/4\). Like the GT algorithm, Algorithm 2 also has two phases, valid reductions and bag filling, albeit with some crucial differences. In the valid reduction phase, we use different targets for \(N_{1}\) and \(N_{2}\), but we prioritize agents in \(N_{1}\). We first check if a valid reduction is feasible for an agent in \(N_{1}\cup N_{2}\) with some \(S_{k},k\in[4]\). If yes, we pick the smallest feasible index, say \(\ell\), and find an agent, say \(a\), in \(N_{1}\) with the highest value for \(S_{\ell}\). If this value is at least \(\beta\), we assign \(S_{\ell}\) to agent \(a\). Otherwise, we assign \(S_{\ell}\) to any agent in \(N_{2}\) with a value of at least \(3/4\). If no valid reductions are feasible, we run the bag-filling phase on the reduced instance. This phase is similar to that of the GT algorithm, except that we again use different targets for agents in \(N_{1}\) and \(N_{2}\) and prioritize agents in \(N_{1}\).
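To make the two-phase structure concrete, the following is a minimal Python sketch of the bag-filling phase with the priorities described above. It assumes an ordered instance with additive valuations, and all names, the data layout, and the tie-breaking are our own illustrative choices rather than the paper's implementation; the valid-reduction phase is assumed to have already produced the reduced instance.

```python
from typing import Dict, List, Set

def bag_filling(n_prime: int, goods: List[int],
                values: Dict[int, Dict[int, float]],
                N1: Set[int], N2: Set[int], beta: float) -> Dict[int, List[int]]:
    """Initialize n' bags with the pairs {i, 2n'-i+1} (goods sorted by
    decreasing value), then top each bag up with goods outside the first 2n'
    until some remaining agent meets her target: beta for N1 (checked first),
    3/4 for N2."""
    bags = [[goods[i], goods[2 * n_prime - i - 1]] for i in range(n_prime)]
    pool = list(goods[2 * n_prime:])            # goods available for filling
    allocation: Dict[int, List[int]] = {}
    remaining = set(N1) | set(N2)

    def value(agent: int, bundle: List[int]) -> float:
        return sum(values[agent][g] for g in bundle)

    for bag in bags:
        while True:
            taker = next((a for a in remaining
                          if a in N1 and value(a, bag) >= beta), None)
            if taker is None:
                taker = next((a for a in remaining
                              if a in N2 and value(a, bag) >= 0.75), None)
            if taker is not None:
                allocation[taker] = bag
                remaining.discard(taker)
                break
            if not pool:                        # the analysis in the text
                return allocation               # shows this cannot happen
            bag.append(pool.pop())              # add one more good to the bag
    return allocation
```

The correctness argument in Lemmas 16 and 17 is precisely about the `if not pool` branch never being reached before every agent takes a bag.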
For all \(i\in[n^{\prime}]\), let \(B_{i}\) be the \(i^{\text{th}}\) initial bag (i.e., \(B_{i}=\{i,2n^{\prime}-i+1\}\)) and \(\hat{B}_{i}\) be the bag at the end of the algorithm (i.e., after the bag-filling phase). Although Algorithm 2 is a simple extension of the GT algorithm, the analysis is not straightforward because \(S_{4}\) is not a valid reduction for \(N_{1}\). We first show that each agent in \(N_{2}\) will receive a bag valued at least \(3/4\) at the end of the algorithm.

**Lemma 16**.: _Each agent in \(N_{2}\) receives a bag valued at least \(3/4\)._

Proof.: This follows easily from the analysis of the GT algorithm, because valid reductions do not decrease the MMS value of any remaining agent in \(N_{2}\), and the bag filling does not add goods from \([m^{\prime}]\setminus[2n^{\prime}]\) to a bag once some agent in \(N_{2}\) already has a value of at least \(3/4\) for it.

In the rest of the section, we show the following claim, proving the algorithm's correctness.

**Lemma 17**.: _Each agent in \(N_{1}\) receives a bag valued at least \(\beta\)._

For a contradiction, suppose the bag filling phase stopped at iteration \(k\leq n^{\prime}\) because \([m^{\prime}]\setminus[2n^{\prime}]\) is empty and an agent \(a\in N_{1}\) has not received a bag. Since no more valid reductions are feasible, as in Corollary 4, we must have

\[v_{a}(j)<\begin{cases}\beta&\forall j\leq n^{\prime}\\ \beta/2&\forall n^{\prime}<j\leq 2n^{\prime}\\ \beta/3&\forall j>2n^{\prime}\end{cases} \tag{8}\]

This implies that at the beginning of the bag filling phase,

\[v_{a}(B_{k})<3\beta/2,\forall k\in[n^{\prime}]. \tag{9}\]

Further, if a bag \(\hat{B}\) assigned to another agent in \(N_{1}\) before iteration \(k\) differs from its initial bag, then \(v_{a}(\hat{B})<\beta+\beta/3=4\beta/3\), because \(v_{a}(\hat{B}\setminus g)<\beta\), where \(g\) is the last good added to \(\hat{B}\), and \(v_{a}(g)\leq v_{a}(2n^{\prime}+1)<\beta/3\). Also, \(v_{a}(\hat{B})<\beta\) for all bags assigned to agents in \(N_{2}\) because we prioritize agents in \(N_{1}\). Therefore, in the bag-filling phase, we have

\[v_{a}(\hat{B})<\begin{cases}\beta&\text{ if }\hat{B}\text{ is assigned to an agent in }N_{2}\\ 3\beta/2&\text{ if }\hat{B}\text{ is assigned to an agent in }N_{1}\end{cases}. \tag{10}\]

During valid reductions, since we prioritize agents in \(N_{1}\), we have

\[v_{a}(S_{\ell})<\beta\quad\text{if }S_{\ell}\text{ is assigned to an agent in }N_{2}. \tag{11}\]

We next bound the value of \(v_{a}(S_{\ell})\) when it is assigned to an agent in \(N_{1}\). We have

\[v_{a}(S_{1})\leq 1\text{ for any reduction using }S_{1}\text{ since the instance is }n\text{-normalized}\] \[v_{a}(S_{3})<3\beta/2\text{ for any reduction using }S_{3}\text{ since }S_{2}\text{ is not feasible} \tag{12}\] \[v_{a}(S_{4})<\beta+\beta/3=4\beta/3\text{ for any reduction using }S_{4}\text{ since }S_{1}\text{ and }S_{3}\text{ are not feasible}\]

The only case left is reduction using \(S_{2}\), for which we break the analysis into multiple cases. Let \(S_{\ell_{1}}S_{\ell_{2}}\cdots\) with \(\ell_{i}\in[4]\) be the sequence of reductions. Now, consider the transitions to \(S_{2}\), i.e., \(\cdots S_{\ell}[S_{2}]^{t}S_{\ell^{\prime}}\cdots\), where \(\ell,\ell^{\prime}\neq 2\) and \(t\geq 1\) denotes the number of \(S_{2}\)'s between \(S_{\ell}\) and \(S_{\ell^{\prime}}\). Let \(S_{2}^{t^{\prime}}\) denote the \(t^{\prime}\)-th \(S_{2}\) for \(t^{\prime}\in[t]\). There are three cases:

**Case 1** \([S_{1}]^{s}[S_{2}]^{t}\cdots\): Here, \(S_{1}\) occurs exactly \(s\geq 0\) times.
This case can happen at most once, since \(S_{1}\)'s have the highest priority, and once \(S_{1}\) is not feasible, it remains infeasible.

**Lemma 18** (Case 1).: \(v_{a}(S_{2}^{t^{\prime}})<3\beta/2,\forall t^{\prime}\in[t]\)_._

Proof.: Note that \(S_{2}^{t^{\prime}}:=\{n-t^{\prime}+1,n+t^{\prime}\},\forall t^{\prime}\in[t]\), where \(n\) is the number of agents in the original instance. By the pigeonhole principle, a bundle in \(a\)'s MMS partition \(P^{a}\) must contain two goods from \([n+1]\). This, together with the instance being \(n\)-normalized, implies that \(v_{a}(S_{2}^{1})\leq 1\). For \(t^{\prime}\geq 2\), if \(v_{a}(S_{2}^{t^{\prime}})>1\), then the goods in \(\{n-t^{\prime}+2,\ldots,n+t^{\prime}\}\) must be in \(t^{\prime}-1\) different bundles in \(P^{a}\), which implies that there must be a bundle in \(P^{a}\) that contains at least three goods from \(\{n-t^{\prime}+2,\ldots,n+t^{\prime}\}\) by the pigeonhole principle. This further implies that \(v_{a}(n+t^{\prime})\leq 1/3\) because the instance is \(n\)-normalized. Finally, since \(S_{1}\) is not feasible when the algorithm performs \(S_{2}\), we have \(v_{a}(n-t^{\prime}+1)<\beta,\forall t^{\prime}\in[t]\), and then we have \(v_{a}(S_{2}^{t^{\prime}})=v_{a}(n-t^{\prime}+1)+v_{a}(n+t^{\prime})<\beta+1/3<3\beta/2,\forall t^{\prime}\geq 2\), using \(\beta>3/4\).

**Case 2** \(\cdots S_{4}[S_{2}]^{t}\cdots\): This case cannot happen, because \(S_{2}\) was not feasible when \(S_{4}\) was performed and the set of items in \(S_{2}\) doesn't change after an \(S_{4}\).

**Case 3** \(\cdots S_{3}[S_{2}]^{t}\cdots\): Let \(s\) be the number of agents just before the instance is reduced using \(S_{3}\), which implies that \(S_{3}=\{2s+1,2s,2s-1\}\) and \(S_{2}=\{s-1,s\}\). Let \(x:=v_{a}(S_{3})\), which implies that \(v_{a}(2s-1)\geq x/3\). Since \(S_{2}\) is not feasible when we used \(S_{3}\), we have \(v_{a}(\{s,s+1\})<\beta\), which further implies that \(x<3\beta/2\). Furthermore, we have \(v_{a}(s+1)\geq v_{a}(2s-1)\geq x/3\) and \(v_{a}(s)<\beta-x/3\). Next, we break the analysis into two subcases depending on whether there are more \(S_{3}\) reductions later.

**Case 3a:** If this is the last reduction with \(S_{3}\), then we have \(v_{a}(\{s-1,s\})<\beta+\beta-x/3<2\beta\). Since all later \(S_{2}=\{j,j^{\prime}\}\)'s will have \(j<s-1\) and \(j^{\prime}>s+1\), we have \(v_{a}(\{j,j^{\prime}\})<\beta+\beta/2=3\beta/2\) for each of them. Note that this case can occur at most once.

**Case 3b:** For the other case, if this is not the last \(S_{3}\), then we must have \(v_{a}(\{s-2,2s-2\})<\beta\); otherwise, \(S_{2}\) would always be feasible, contradicting the fact that this is not the last \(S_{3}\). This implies that \(v_{a}(s-1)\leq v_{a}(s-2)<\beta-v_{a}(2s-2)<\beta-x/3\). Then, we have

\[v_{a}(S_{2}^{1}\cup S_{3})=v_{a}(\{s-1,s\})+v_{a}(S_{3})<2\beta-2x/3+x<2\beta+x/3<5\beta/2.\]

Furthermore, for each of the remaining \(t-1\) \(S_{2}=\{j,j^{\prime}\}\)'s, we have \(j<s,j^{\prime}>s\), implying \(v_{a}(\{j,j^{\prime}\})<\beta+\beta/2=3\beta/2\). The above analysis implies the following corollary.

**Corollary 5**.: _In Case 3, either \(v_{a}(S_{2}^{1}\cup S_{3})<5\beta/2\) or \(v_{a}(S_{2}^{1})<2\beta\). For the remaining \(t-1\) \(S_{2}\)'s, \(v_{a}(S_{2}^{t^{\prime}})<3\beta/2,\forall t^{\prime}\geq 2\)._

We are now ready to prove Lemma 17.
Proof of Lemma 17.: Recall that we assumed for a contradiction that the bag filling phase stopped at iteration \(k\leq n^{\prime}\) because \([m^{\prime}]\setminus[2n^{\prime}]\) is empty and an agent \(a\in N_{1}\) has not received a bag. Lemma 16 implies that all agents in \(N_{2}\) must have received a bag valued at least \(3/4\) before this iteration. Since we prioritize agents in \(N_{1}\) in both valid reductions and bag filling, we have \(v_{a}(S)<\beta\) whenever \(S\) is given to an agent in \(N_{2}\). Further, (9) and (10) imply that both \(v_{a}(B_{k})\) and \(v_{a}(\hat{B})\) are strictly less than \(3\beta/2\) at the beginning and when assigned to other agents in the bag filling phase. In valid reductions, except for Case 3 of \(S_{2}\), (12) and Lemma 18 imply that \(v_{a}(S_{\ell})<3\beta/2\). Case 3a of \(S_{2}\) occurs at most once, which implies that for all \(S_{2}\)'s in this case except for one, say \(S_{2}^{*}\), we have \(v_{a}(S_{2})<3\beta/2\) and \(v_{a}(S_{2}^{*})<2\beta\). Case 3b of \(S_{2}\) implies that \(v_{a}(S_{3}\cup S_{2}^{1})<5\beta/2\), and for all other \(S_{2}\)'s we have \(v_{a}(S_{2})<3\beta/2\). Therefore, we have

\[n=v_{a}(M) =\sum_{S\text{ assigned to }i\in N_{2}}v_{a}(S)+\sum_{S\text{ assigned to }i\in N_{1}}v_{a}(S)+v_{a}(B_{k})+\sum_{j=k+1}^{n^{\prime}}v_{a}(B_{j})\] \[<\beta\cdot(3\beta-2)n/\beta+3\beta/2\cdot(2(1-\beta)n/\beta-(n^{\prime}-k+2))+2\beta+v_{a}(B_{k})+3\beta/2\cdot(n^{\prime}-k)\] \[=n-\beta+v_{a}(B_{k}),\]

which implies that \(v_{a}(B_{k})>\beta\), a contradiction.

## Appendix A Missing Proofs

**Lemma 2**.: _For any \(d\in\mathbb{N}\), if \(1\)-out-of-\(d\) MMS allocations exist for \(d\)-normalized ordered instances, then \(1\)-out-of-\(d\) MMS allocations exist for all instances._

Proof.: Let \(\mathcal{I}=(N,M,V)\) be an arbitrary instance. We create a \(d\)-normalized ordered instance \(\mathcal{I}^{\prime\prime}=(N,M,V^{\prime\prime})\) such that from any \(1\)-out-of-\(d\) MMS allocation for \(\mathcal{I}^{\prime\prime}\), one can obtain a \(1\)-out-of-\(d\) MMS allocation for the original instance \(\mathcal{I}\). First of all, we can ignore all agents \(i\) with \(\text{MMS}_{i}^{d}=0\), since no good needs to be allocated to them. Recall that for all \(i\in N\), \(P^{i}=(P^{i}_{1},\ldots,P^{i}_{d})\) is a \(d\)-MMS partition of agent \(i\). For all \(i\in N\) and \(g\in M\), we define \(v^{\prime}_{i,g}=v_{i}(g)/v_{i}(P^{i}_{j})\), where \(j\) is such that \(g\in P^{i}_{j}\). Now, for all \(i\in N\), let \(v^{\prime}_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) be defined as an additive function such that \(v^{\prime}_{i}(S)=\sum_{g\in S}v^{\prime}_{i,g}\). Note that \(v^{\prime}_{i,g}\leq v_{i}(g)/\text{MMS}_{i}^{d}(M)\) for all \(g\in M\) and thus,

\[v_{i}(S)\geq v^{\prime}_{i}(S)\cdot\text{MMS}_{i}^{d}(M). \tag{13}\]

Since \(v^{\prime}_{i}(P^{i}_{j})=1\) for all \(i\in N\) and \(j\in[d]\), \(\mathcal{I}^{\prime}=(N,M,V^{\prime})\) is a \(d\)-normalized instance. If a \(1\)-out-of-\(d\) MMS allocation exists for \(\mathcal{I}^{\prime}\), let \(X\) be one such allocation. By Inequality (13), \(v_{i}(X_{i})\geq v^{\prime}_{i}(X_{i})\cdot\text{MMS}_{i}^{d}(M)\geq\text{MMS}_{i}^{d}(M)\). Thus, every allocation that is \(1\)-out-of-\(d\) MMS for \(\mathcal{I}^{\prime}\) is \(1\)-out-of-\(d\) MMS for \(\mathcal{I}\) as well. For all agents \(i\) and \(g\in[m]\), let \(v^{\prime\prime}_{i,g}\) be the \(g\)-th largest number in the multiset \(\{v^{\prime}_{i}(1),\ldots,v^{\prime}_{i}(m)\}\).
Let \(v^{\prime\prime}_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) be defined as an additive function such that \(v^{\prime\prime}_{i}(S)=\sum_{g\in S}v^{\prime\prime}_{i,g}\). Let \(\mathcal{I}^{\prime\prime}=\langle N,M,V^{\prime\prime}\rangle\). Note that \(\mathcal{I}^{\prime\prime}\) is ordered and \(d\)-normalized. Barman and Krishnamurthy [1] proved that for any allocation \(X\) in \(\mathcal{I}^{\prime\prime}\), there exists an allocation \(Y\) in \(\mathcal{I}^{\prime}\) such that \(v^{\prime}_{i}(Y_{i})\geq v^{\prime\prime}_{i}(X_{i})\). Therefore, from any \(1\)-out-of-\(d\) MMS allocation in \(\mathcal{I}^{\prime\prime}\), one can obtain a \(1\)-out-of-\(d\) MMS allocation in \(\mathcal{I}^{\prime}\), and as already shown before, it gives a \(1\)-out-of-\(d\) MMS allocation for \(\mathcal{I}\).

**Lemma 15**.: _[GT21] Let \(S\in\{S_{1},S_{2},S_{3},S_{4}\}\) be the lowest index bundle for which \(\{i\in N:v_{i}(S)\geq 3/4\}\) is non-empty. Then, removing \(S\) and an agent \(a\) with \(v_{a}(S)\geq 3/4\) is a valid reduction._

Proof.: Clearly, \(v_{a}(S)\geq 3/4\). Next, we show that the MMS values of all other agents do not decrease, separately for each case of \(S\in\{S_{1},S_{2},S_{3},S_{4}\}\). Fix an agent \(b\in N\setminus\{a\}\) and an MMS partition \(P^{b}=(P^{b}_{1},\ldots,P^{b}_{n})\). After removing \(S\), we show that a partition of \(M\setminus S\) into \((n-1)\) bundles exists such that the value of each bundle is at least 1.

* \(S=S_{1}\). Removal of one item from \(P^{b}\) affects exactly one bundle, and each of the remaining \((n-1)\) bundles has value at least 1. Therefore, the MMS value of \(b\) doesn't decrease.
* \(S=S_{2}\). In \(P^{b}\), there exists a bundle with two items from \(\{1,\ldots,n+1\}\) (pigeonhole principle). Let \(T\) be a bundle in \(P^{b}\) that has two items from \(\{1,\ldots,n+1\}\). Let us exchange these items with items \(n\) and \(n+1\) in other bundles and arbitrarily distribute any remaining items in \(T\) among other bundles. Clearly, the value of every bundle other than \(T\) does not decrease, and hence the MMS value of \(b\) in the reduced instance doesn't decrease.
* \(S=S_{3}\). Similar to the proof of Case '\(S=S_{2}\)'.
* \(S=S_{4}\). In each iteration, the lowest index bundle from \(\{S_{1},S_{2},S_{3},S_{4}\}\) is picked. Therefore, \(S_{4}\) is only picked when \(v_{i}(S_{1}),v_{i}(S_{3})<3/4\) for all \(i\in N\), which implies that \(v_{i}(1)<3/4\) and \(v_{i}(2n+1)<1/4\), and hence \(v_{i}(S_{4})<1\) for all \(i\in N\). In \(P^{b}\), if items 1 and \(2n+1\) are in the same bundle, removing \(S_{4}\) and agent \(a\) is a valid reduction. For the other case, if 1 and \(2n+1\) are in two different bundles, we can make two new bundles, one with \(\{1,2n+1\}\) and another with all the remaining items of the two bundles. The value of the bundle without \(\{1,2n+1\}\) is at least 1 because \(v_{i}(S_{4})<1\) for all \(i\in N\) and \(\text{MMS}_{i}\geq 1\). Hence, this is a valid reduction.
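The two-step reduction behind Lemma 2 can be written out in a few lines. Below is a minimal Python sketch, assuming values are stored as lists indexed by good and each agent's \(d\)-MMS partition is given as lists of good indices; the function and variable names are ours, and agents with zero MMS value are assumed to have been removed beforehand, as in the proof.

```python
from typing import Dict, List, Tuple

def normalize_and_order(values: Dict[int, List[float]],
                        partitions: Dict[int, List[List[int]]]
                        ) -> Tuple[Dict[int, List[float]], Dict[int, List[float]]]:
    """Step 1 (d-normalize): divide v_i(g) by the value of the d-MMS bundle
    P^i_j containing g, so every bundle of agent i's partition is worth 1.
    Step 2 (order): sort each agent's normalized value multiset in descending
    order, which Barman-Krishnamurthy show is without loss of generality."""
    normalized: Dict[int, List[float]] = {}
    for i, vals in values.items():
        v_prime = list(vals)
        for bundle in partitions[i]:
            bundle_value = sum(vals[g] for g in bundle)
            for g in bundle:
                v_prime[g] = vals[g] / bundle_value  # v'_{i,g} = v_i(g)/v_i(P^i_j)
        normalized[i] = v_prime
    ordered = {i: sorted(v, reverse=True) for i, v in normalized.items()}
    return normalized, ordered
```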
2310.06374
Rethinking Model Selection and Decoding for Keyphrase Generation with Pre-trained Sequence-to-Sequence Models
Keyphrase Generation (KPG) is a longstanding task in NLP with widespread applications. The advent of sequence-to-sequence (seq2seq) pre-trained language models (PLMs) has ushered in a transformative era for KPG, yielding promising performance improvements. However, many design decisions remain unexplored and are often made arbitrarily. This paper undertakes a systematic analysis of the influence of model selection and decoding strategies on PLM-based KPG. We begin by elucidating why seq2seq PLMs are apt for KPG, anchored by an attention-driven hypothesis. We then establish that conventional wisdom for selecting seq2seq PLMs lacks depth: (1) merely increasing model size or performing task-specific adaptation is not parameter-efficient; (2) although combining in-domain pre-training with task adaptation benefits KPG, it does partially hinder generalization. Regarding decoding, we demonstrate that while greedy search achieves strong F1 scores, it lags in recall compared with sampling-based methods. Based on these insights, we propose DeSel, a likelihood-based decode-select algorithm for seq2seq PLMs. DeSel improves greedy search by an average of 4.7% semantic F1 across five datasets. Our collective findings pave the way for deeper future investigations into PLM-based KPG.
Di Wu, Wasi Uddin Ahmad, Kai-Wei Chang
2023-10-10T07:34:45Z
http://arxiv.org/abs/2310.06374v2
# Rethinking Model Selection and Decoding for Keyphrase Generation with Pre-trained Sequence-to-Sequence Models

###### Abstract

Keyphrase Generation (KPG) is a longstanding task in NLP with widespread applications. The advent of sequence-to-sequence (seq2seq) pre-trained language models (PLMs) has ushered in a transformative era for KPG, yielding promising performance improvements. However, many design decisions remain unexplored and are often made arbitrarily. This paper undertakes a systematic analysis of the influence of model selection and decoding strategies on PLM-based KPG. We begin by elucidating why seq2seq PLMs are apt for KPG, anchored by an attention-driven hypothesis. We then establish that conventional wisdom for selecting seq2seq PLMs lacks depth: (1) merely increasing model size or performing task-specific adaptation is not parameter-efficient; (2) although combining in-domain pre-training with task adaptation benefits KPG, it does partially hinder generalization. Regarding decoding, we demonstrate that while greedy search achieves strong F1 scores, it lags in recall compared with sampling-based methods. Based on these insights, we propose DeSel, a likelihood-based decode-select algorithm for seq2seq PLMs. DeSel improves greedy search by an average of 4.7% semantic F1 across five datasets. Our collective findings pave the way for deeper future investigations into PLM-based KPG.

## 1 Introduction

Keyphrases encapsulate the core information of a document. Due to their high information density, they have been found valuable in areas such as information retrieval Wu and Bolivar (2008); Dave and Varma (2010); Kim et al. (2013); Boudin et al. (2020), document clustering Hammouda et al. (2005), summarization Zhang et al. (2004), and text classification Berend (2011). A keyphrase is termed a _present keyphrase_ if it is explicitly found within the document and an _absent keyphrase_ otherwise. The task of identifying present keyphrases is defined as _keyphrase extraction_ (KPE), whereas _keyphrase generation_ (KPG) involves predicting both types of keyphrases.

Recently, pre-trained language models (PLMs) have been widely incorporated in KPG Chowdhury et al. (2022); Zhao et al. (2022) via sequence-to-sequence (seq2seq) generation, with promising performance on zero-shot Kulkarni et al. (2022), multilingual Gao et al. (2022), and low-resource Wu et al. (2022) KPG. However, the existing literature typically focuses on a specific subset of important components in this pipeline, such as data construction and loss design, while making arbitrary choices for the others Zhao et al. (2022); Ray Chowdhury et al. (2022); Wu et al. (2022); Garg et al. (2022). As a result, KPG systems are often compared under different assumptions, and the effect of the arbitrary design choices remains unclear. To bridge this gap, this paper focuses on two crucial questions that have not been systematically explored:

1. _Which PLM leads to the best KPG performance when fine-tuned?_
2. _What is the best decoding strategy?_

In practice, sub-optimal choices for these factors could lead to optimizing an unnecessarily large model or to sub-optimal results decoded from a strong KPG model. To answer these two questions, we conduct in-depth analyses on KPG with (1) PLMs of diverse sizes and pre-training strategies and (2) a diverse set of decoding strategies. To begin with, we posit that _seq2seq PLMs are inherently suitable for KPG_ (§3).
By drawing correlations with a strong graph-based KPE algorithm, we show that these PLMs implicitly compute _phrase centrality_ Boudin (2013) in their decoder attention patterns. This knowledge is also directly translatable to a strong ranking function for KPE. On the other hand, encoder-only models fail to carry such centrality information.

Next, we search for the best _seq2seq PLM_ for KPG fine-tuning (§4). While common strategies for other NLP tasks might advocate for (1) scaling up the model size, (2) in-domain pre-training, or (3) task adaptation, do these approaches hold the same merit for KPG? Our findings reveal that a singular emphasis on scaling or task adaptation does not ensure efficient performance improvement. In contrast, in-domain pre-training consistently bolsters performance across both keyphrase types and can benefit from task adaptation. A robustness analysis reveals that a proper model choice and data-oriented training approaches are complementary: without the latter, stronger PLMs are more vulnerable to perturbed input, with over 14% recall drop under name variation substitutions and over 5% recall drop under input paraphrasing.

Decoding strategy is also an essential component in PLM-based KPG, but it is much under-explored in the current literature. In §5, we thoroughly compare six decoding strategies, including greedy search, beam search, and sampling-based methods. Results suggest that when only generating a single sequence consisting of concatenated keyphrases, greedy search achieves a strong F1 score. However, aggregating the predictions from multiple sampled sequences outperforms greedy search due to a much higher recall. Based on these findings, we introduce DeSel, a likelihood-based selection strategy that selects from sampled phrases to augment the greedy search predictions. DeSel utilizes the probability of phrases from greedy search's predictions as the baseline to filter out noisy predictions from a set of sampled keyphrase candidates. Experiments on five KPG datasets show that DeSel consistently improves greedy decoding by 7.9% \(F1@M\) for present keyphrases, 25% \(F1@M\) for absent keyphrases, and 4.7% Semantic F1 for all keyphrases, achieving state-of-the-art KPG performance and underscoring the importance of carefully examining the design choices of KPG. To summarize, our primary contributions are:

1. An in-depth exploration of the intrinsic suitability of seq2seq PLMs for KPG.
2. A comprehensive examination of effective strategies for model enhancement in KPG, spotlighting the merits of specific combinations and their implications for robustness.
3. We establish the trade-off between accuracy and concept coverage for different decoding algorithms. Then, we introduce a probability-based decode-select mechanism, DeSel, that consistently improves over greedy search.
4. Our research illuminates the profound impact of under-emphasized factors on KPG performance. To facilitate future research on KPG, we release our code and models at [https://github.com/uclanlp/DeepKPG](https://github.com/uclanlp/DeepKPG).

## 2 Preliminaries

### Keyphrase Generation

**Problem Definition.** We represent an example for KPG as a tuple \((\mathcal{X},\mathcal{Y})\), corresponding to the input document \(\mathcal{X}=(x_{1},x_{2},...,x_{d})\) and the set of human-written reference keyphrases \(\mathcal{Y}=\{y_{1},y_{2},...,y_{n}\}\). Following Meng et al. (2017), \(y_{i}\) is classified as a _present keyphrase_ if it is a substring of \(\mathcal{X}\) or an _absent keyphrase_ otherwise.
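A tiny sketch of this present/absent split is given below, assuming the common convention of substring matching after Porter stemming (the evaluation described next stems phrases the same way); the trivial whitespace tokenizer and helper names are our own simplifications, and the paper's exact preprocessing may differ.

```python
import re
from typing import List, Tuple
from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

def stem_text(text: str) -> str:
    # Lowercase, split into word tokens, and stem each token.
    return " ".join(stemmer.stem(w) for w in re.findall(r"\w+", text.lower()))

def split_present_absent(document: str,
                         keyphrases: List[str]) -> Tuple[List[str], List[str]]:
    doc = stem_text(document)
    present = [k for k in keyphrases if stem_text(k) in doc]
    absent = [k for k in keyphrases if stem_text(k) not in doc]
    return present, absent

# Example: "keyphrase generation" is present; "document summarization" is absent.
print(split_present_absent(
    "We study keyphrase generation with pre-trained models.",
    ["keyphrase generation", "document summarization"]))
```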
The KPG task requires predicting \(\mathcal{Y}\) in any order, and the KPE task only requires predicting present keyphrases Turney (2000).

**Evaluation.** We adopt lexical-based and semantic-based evaluation to evaluate a model's predictions \(\mathcal{P}=\{p_{1},p_{2},...,p_{m}\}\) against \(\mathcal{Y}\). For lexical evaluation, we follow Chan et al. (2019) and use the \(P@M\), \(R@M\), and \(F1@M\) scores. \(\mathcal{P}\) and \(\mathcal{Y}\) are stemmed with the Porter Stemmer Porter (1980), and duplicates are removed before the score calculation. For semantic evaluation, we follow Wu et al. (2023) and report \(SemP\), \(SemR\), and \(SemF1\). Note that the lexical metrics are calculated separately for present and absent keyphrases, while the semantic metrics are calculated with all the phrases. We repeat all the experiments with three random seeds and report the averaged performance.

**Benchmark.** Meng et al. (2017) introduce KP20k, which contains 500k Computer Science papers. Following their work, we train on KP20k and evaluate on the titles and abstracts from the KP20k test set as well as four out-of-distribution testing datasets: Inspec Hulth (2003), Krapivin Krapivin et al. (2009), NUS Nguyen and Kan (2007), and SemEval Kim et al. (2010). Table 5 summarizes the statistics of all testing datasets.

**Baselines.** We consider two strong supervised encoder-decoder models from Ye et al. (2021): 1. **CopyTrans**: a Transformer Vaswani et al. (2017) with copy mechanism See et al. (2017). 2. The **SetTrans** model, which performs order-agnostic KPG. The model uses control codes trained via a k-step target assignment algorithm to generate keyphrases in parallel. As the goal of this work is a thorough study of PLM-based methods, we only provide the results of the strongest baselines as a point of reference. We also include other baselines in appendix G. In our analysis, we also use MultipartiteRank (MPRank, Boudin (2018)), a performant graph-based unsupervised KPE algorithm. More details about MPRank are discussed in §3.1.

### Sequence-to-Sequence PLMs

In this work, we focus on fine-tuning Transformer-based sequence-to-sequence PLMs **BART** (Lewis et al., 2020) and **T5** (Raffel et al., 2020) for KPG with the "One2Seq" formulation. Concretely, following Ye and Wang (2018) and Yuan et al. (2020), we use a separator token ";" to join all the target keyphrases into the target sequence \(\mathcal{Y}=(y_{1}\) ; ... ; \(y_{n})\). The models are trained with the cross-entropy loss for generating \(\mathcal{Y}\) based on \(\mathcal{X}\). At test time, greedy decoding is used, followed by a post-processing stage that segments the output sequence into individual phrases. We provide implementation details and hyperparameters in appendix D.

## 3 Do PLMs inherently carry significant knowledge of keyphrases?

Existing studies have justified their use of seq2seq PLMs by drawing a close relationship between the pre-training tasks of BART (denoising language modeling) or T5 (unified text-to-text transfer) and the formulation of KPG (Gao et al., 2022; Zhao et al., 2022; Wu et al., 2022) or KPE (Kong et al., 2023). However, there is a lack of an in-depth understanding of _why_ seq2seq PLMs should be chosen for keyphrase-related tasks. In this section, we reason based on _phrase centrality_ (Litvak and Last, 2008; Boudin, 2013) and show that PLMs with autoregressive decoders, including seq2seq PLMs, carry attention heads that approximately function as centrality assigners and, naturally, as potent keyphrase rankers.
### Centrality of Phrases

The concept of phrase centrality originated from graph-based KPE, where keyphrase candidates are represented as nodes. Various graph centrality measures are used to determine a phrase's importance in the document. We use MPRank in our analysis, which encodes closeness-based and eigenvector-based centrality (Boudin, 2013). MPRank first uses rules to obtain \(C\) noun phrase candidates and then performs lexical clustering to group the candidates into topic clusters. Next, each candidate is represented as a graph node and connected with the candidates from other topic clusters. TextRank (Mihalcea and Tarau, 2004) is used to obtain a centrality score \(c_{i}\) for each of the nodes \(n_{i}\). We refer the readers to Boudin (2018) for further details.

### Attention intensities in BART and T5 decoders encode phrase centrality

Using MPRank as a lens, we first investigate the extent to which PLMs implicitly represent centrality information. We use the paper titles and abstracts from the KP20k test set as the probing set. Each probing instance is fed into a PLM, and the attention weights from the self-attention layers are collected. For the \(h^{th}\) attention head at layer \(l\), we denote the attention from token \(i\) to token \(j\) as \(\alpha_{i\to j}^{l,h}\). For the \(j^{th}\) token in the noun phrase candidate \(n_{i}\), the global attention weight on it is

\[a_{ij}^{l,h}=\sum_{k=1,\dots,L}\alpha_{k\to j}^{l,h}, \tag{1}\]

where \(L\) is the length of the text after tokenization. Then, the attention weight of \(n_{i}\) is calculated as

\[a_{i}^{l,h}=|n_{i}|\sum_{j}a_{ij}^{l,h}, \tag{2}\]

where \(|n_{i}|\) denotes the number of tokens in \(n_{i}\). We study four families of models: BART, T5, BERT, and GPT-2 (Radford et al., 2019). For BART and T5, we use their decoder attentions. We correlate \(a_{i}^{l,h}\) with \(c_{i}\) using Spearman correlation \(\rho\) and Kendall's Tau \(\tau\) and present the best correlation for each model in Table 1.

\begin{table} \begin{tabular}{l|c|c c|c c} \hline Model & Size & Head & \(\rho\) & Head & \(\tau\) \\ \hline \multicolumn{6}{l}{_Encoder-only PLMs_} \\ BERT-base & 110M & 3-0 & 0.300 & 3-0 & 0.206 \\ BERT-large & 340M & 0-5 & 0.351 & 0-6 & 0.246 \\ \hline \multicolumn{6}{l}{_Decoder-only PLMs_} \\ gpt2 & 117M & 0-11 & 0.626 & 0-11 & 0.479 \\ gpt2-medium & 345M & 1-6 & 0.630 & 1-6 & 0.480 \\ gpt2-large & 774M & 0-13 & 0.627 & 0-13 & 0.478 \\ gpt2-xl & 1.5B & 0-6 & 0.626 & 0-6 & 0.476 \\ \hline \multicolumn{6}{l}{_Seq2seq PLMs_} \\ BART-base & 140M & 0-6 & 0.608 & 0-6 & 0.459 \\ BART-large & 406M & 0-9 & 0.585 & 0-9 & 0.438 \\ T5-small & 60M & 4-4 & 0.624 & 4-4 & 0.471 \\ T5-base & 223M & 8-4 & 0.621 & 8-4 & 0.466 \\ T5-large & 770M & 0-2 & 0.628 & 0-2 & 0.471 \\ T5-3B & 3B & 0-8 & **0.648** & 0-8 & **0.494** \\ \hline \end{tabular} \end{table} Table 1: Correlation between keyphrase candidates’ attention weights and centrality scores. \(l\)-\(h\) denotes attention head \(h\) in layer \(l\). Both \(h\) and \(l\) start from index 0. We report the attention head that achieves the best scores. The highest score is boldfaced.

Surprisingly, BART and T5 decoders contain _attention heads that encode phrase centrality similarly to MPRank_. The head with the best correlation generally appears in the lower layers, indicating that centrality understanding may be more related to low-level features. Also, the upper bound of correlation strength grows with model size for T5, while it does not for BART.
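As a concrete illustration of Eqs. (1)-(2), the sketch below extracts decoder self-attention from BART-base with Hugging Face transformers and scores one candidate phrase. Feeding the document itself as the decoder input, the hard-coded layer/head pair (0-6, the best BART-base head in Table 1), and the placeholder token span are our own simplifying assumptions; the paper's exact probing setup may differ, and noun-phrase candidate extraction (as in MPRank) is assumed to happen elsewhere.

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-base", output_attentions=True)

text = "graph-based keyphrase extraction with centrality measures"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, decoder_input_ids=inputs["input_ids"])

layer, head = 0, 6                              # best BART-base head in Table 1
attn = out.decoder_attentions[layer][0, head]   # (L, L): attn[k, j] = alpha_{k->j}

# Eq. (1): global attention on token j, summed over all positions k.
global_attn = attn.sum(dim=0)                   # shape (L,)

# Eq. (2): phrase score = |n_i| * sum of global attention over its tokens.
phrase_token_idx = [1, 2, 3]                    # hypothetical span of a candidate
a_i = len(phrase_token_idx) * global_attn[phrase_token_idx].sum()
print(float(a_i))
```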
Beyond _centrality assigners_, these attention heads are also potent _keyphrase extractors_: simply ranking the noun phrase candidates by \(a_{i}^{l,h}\) achieves present-keyphrase \(F1@5\) and \(SemF1@5\) scores similar to MPRank's (appendix B). Evaluating other types of PLMs, we find that BERT's attention heads show only weak centrality knowledge, with a best Kendall's Tau with MPRank of only 0.246. On the other hand, GPT-2 exhibits a similar pattern to the decoders from seq2seq PLMs, indicating that the observed pattern is strongly associated with _autoregressive decoders_. As centrality is generally correlated with global importance, our result aligns with the observations that masked language modeling tends to exploit local dependency while causal language modeling can learn long-range dependencies (Clark et al., 2019; Vig and Belinkov, 2019).

In summary, through attention-based analyses, we reveal novel insights into the underlying keyphrase knowledge of PLMs with autoregressive decoders. Such knowledge can be employed explicitly (via ranking) or implicitly (via fine-tuning and prompting) to facilitate KPG. In the rest of the paper, we focus on rethinking two basic designs for KPG with seq2seq PLMs.

## 4 Influence of PLM Choice for KPG

Three crucial design options exist for using seq2seq PLMs for KPG: _the choice of PLM_ to fine-tune, _the fine-tuning data and objective_, and _the decoding strategy_. Previous work focuses on the fine-tuning objective and data construction (Meng et al., 2021; Ray Chowdhury et al., 2022; Garg et al., 2023) while often making the other two choices in an _ad hoc_ way, making it difficult to compare among approaches. This section dives into the first question by evaluating three pieces of "conventional wisdom":

1. Using PLMs with _more parameters_ (§4.1).
2. Using _in-domain_ PLMs (§4.2).
3. Using _task-adapted_ PLMs (§4.3).

### The scaling law for keyphrase generation

Although the effect of model size has been explored for a range of tasks, it is poorly understood in the KPG literature, where most recent works employ a single PLM with 100M to 500M parameters (Kulkarni et al., 2022; Wu et al., 2022; Zhao et al., 2022). To establish a common ground, we measure the performance of fine-tuning BART-base/large (purple line) and T5-small/base/large/3B (green line) and report the results on KP20k in Figure 1. Surprisingly, fine-tuning BART or T5 is extremely _parameter-inefficient_ compared to task-specific architectures trained from scratch1. For instance, although T5's performance consistently increases with the model size, around 8x more parameters are required to achieve the same present \(F1@M\) on KP20k as SetTrans, and 30x more parameters are required to achieve a better \(SemF1\). Closer inspection shows that SetTrans excels in _recall_ via its parallel control codes and the set loss. In comparison, limited by the learning formulation and decoding strategy, fine-tuned seq2seq PLMs fall behind in their recall of important keyphrases. In §5, we will show that this problem can be alleviated with a simple decode-then-select strategy.

Footnote 1: We note that this claim is orthogonal to the observations that PLMs are _data-efficient_ (Wu et al., 2022).

**BART vs. T5.** BART and T5 display similar scaling for \(F1@M\) and \(SemF1\). However, compared to T5, BART's recall scores increase more readily than its precision scores. At the same number of parameters, BART also performs better on absent keyphrases.
One possible reason is that BART's text infilling objective is more advantageous for learning to construct spans absent from the text (Wu et al., 2022).

**Which score is more sensitive to scaling?** Compared to recall, _precision_ is more sensitive to model size. For example, T5-small achieves 98% of the \(SemR\) of the 50x larger T5-3B. In addition, _absent keyphrase_ scores are more sensitive. Overall, this suggests that small models are able to extract relevant keyphrases, but models learn to _selectively omit_ unimportant keyphrases and _create more absent keyphrases_ as the model size grows. Indeed, the average number of predicted keyphrases decreases from T5-small (6.75), T5-base (5.74), and T5-large (5.66), to T5-3B (5.48), while the number of absent keyphrases increases from T5-small (0.91), T5-base (0.99), and T5-large (1.01), to T5-3B (1.05).

### Domain knowledge is crucial to accurate keyphrase generation

In-domain pre-training has been shown effective in a wide range of tasks requiring extensive domain knowledge (Beltagy et al., 2019; Lee et al., 2019). As keyphrases often contain domain-specific terminologies, we hypothesize that the domain of a PLM greatly affects its keyphrase generation ability. To test this hypothesis, we pre-train the in-domain BART models SciBART-base and SciBART-large from scratch using the paper titles and abstracts from the S2ORC dataset (Lo et al., 2020). The processed dataset contains 171.7M documents, or 15.4B tokens in total. The models are pre-trained on text infilling for 250k steps with batch size 2048, learning rate 3e-4, 10k warm-up steps, and polynomial learning rate decay. We present data processing and model training details in appendix C.

The results of fine-tuning SciBART are presented with "+ID" (for "In-Domain") in Figure 1. As expected, SciBART significantly improves over BART on all three F1 metrics, outperforming the much larger T5-3B. Notably, SciBART also has _better parameter efficiency_ compared to general-domain models: scaling from SciBART-base to SciBART-large provides a much larger growth in \(SemF1\) compared to scaling up BART and T5.

Figure 1: KP20k test performance of models of various sizes and pre-training strategies. +ID = using in-domain SciBART. +TAPT = second-stage training on OAGKX. The results are averaged over 3 random seeds.

### Task-adaptive pre-training is more effective with in-domain models

Task-adaptive pre-training (TAPT) is another common technique for adding task-specific supervision signals Gururangan et al. (2020). In this section, we analyze the effect on KPG performance of adding two types of TAPT stages to seq2seq PLMs: _keyphrase generation_ and _instruction following_.

**Keyphrase Pre-training.** We directly use KeyBART Kulkarni et al. (2022) (denoted as "+TAPT" in Figure 1), which is trained using the OAGKX dataset Cano and Bojar (2020) on KPG with keyphrases corrupted from the input. To investigate the effects of TAPT on in-domain PLMs, we also fine-tune SciBART on OAGKX with batch size 256, learning rate 3e-5, and 250k steps. We denote this model as "+ID+TAPT" in Figure 1.

**Instruction Pre-training.** Recently, instruction tuning has been introduced to improve the generalization ability of PLMs Mishra et al. (2022); Ouyang et al. (2022). As KPG is relevant to classic NLP tasks such as information extraction and summarization, we hypothesize that training with instruction data also serves as TAPT for KPG2. To confirm, we benchmark FLAN-T5 Chung et al.
(2022), a family of T5 models fine-tuned on instruction following datasets (yellow line in Figure 1).

Footnote 2: In fact, some variants of the keyphrase extraction task are included in popular instruction datasets such as NIv2 Wang et al. (2022) and Alpaca Taori et al. (2023).

**TAPT struggles to improve absent KPG but is more effective with in-domain models.** Figure 1 suggests that both TAPT strategies lead to a similar amount of improvement in present keyphrase \(F1@M\) and \(SemF1\). Surprisingly, the absolute gain is small, and TAPT hardly improves absent keyphrase performance. For KeyBART, although its pre-training data (OAGKX) has a similar percentage of absent keyphrases as KP20k (32% vs. 37%), its objective (recovering present keyphrases from corrupted input) might still be different from absent keyphrase generation. For FLAN-T5, we find that the KPG-related tasks in its pre-training data often contain very short input text, representing a significant distribution mismatch with KP20k. However, when applied to the in-domain SciBART, TAPT can greatly improve the performance on KP20k. Combined with §4.2, we conclude that _in-domain pre-training is more important for KPG, and TAPT serves a complementary secondary role_.

### Analysis: are strong KPG models sensitive to input perturbations?

As in-domain and task-adapted PLMs already greatly benefit KPG, are data augmentation techniques no longer necessary? In this section, we reveal that these designs increase the model's sensitivity to input perturbations, and data augmentation is still desired for better generalization.

#### 4.4.1 Method

We design two input perturbations on KP20k to check the behaviors of BART-based KPG models.

**Name variation substitution.** We construct 8905 perturbed inputs by replacing present keyphrases with their name variations linked by Chan et al. (2019). Ideally, a robust KPG model would have a similar recall for the original phrases and the name variations, as they appear in the same context. In addition, domain-specific or task-adapted models exhibit a larger performance drop compared to BART-large, suggesting a trade-off between domain/task specificity and generalization. Pre-trained on large-scale keyphrase data, KeyBART may rely more on syntax and position information in the data and thus be less sensitive to synonym change. On the other hand, pre-trained on a large-scale scientific corpus, SciBART is more robust than KeyBART to different scientific writing styles beyond the ones available in KP20k.

### Discussion

We summarize the main conclusions derived from the empirical results presented in this section:

* Naively scaling up BART and T5 is parameter-inefficient on KP20k compared to SetTrans.
* Domain knowledge is crucial for KPG performance and improves parameter efficiency.
* Task-adaptive training with keyphrase or instruction tuning data significantly improves KPG only with in-domain models.
* In-domain pre-training and TAPT harm generalization in different ways, and data augmentation during fine-tuning is desired.

## 5 Decoding Strategy for KPG

While it is well known that decoding strategies can strongly affect text generation quality Fan et al. (2018); Holtzman et al. (2020), there has surprisingly been little study of decoding strategies for PLM-based KPG models. Previous studies often directly use greedy search or variants of beam search Gao et al. (2022); Zhao et al. (2022); Wu et al. (2022), limiting the understanding of PLMs fine-tuned for KPG.
To bridge this knowledge gap, we first carefully evaluate six decoding strategies on the strongest PLM-based KPG model. We then propose a simple yet effective _decode-select_ strategy to mitigate the observed deficiencies of greedy search.

### Multi-sequence decoding: the trade-off between coverage and quality

We focus on decoding the SciBART-large+TAPT model fine-tuned on KP20k, with the budget varying from 1 to 20 samples. The following six decoding algorithms are compared; for each algorithm, its hyperparameters are chosen based on the KP20k validation set.

1. _Greedy search_.
2. _Beam search_. We set the beam size to the number of desired samples.
3. _Diverse beam search_ Vijayakumar et al. (2018). We set the number of groups to the number of desired samples and the weight of the dissimilarity term to \(\lambda_{g}=0.1\).
4. _Vanilla sampling_. We further apply temperature scaling with \(\tau=0.7\).
5. _Top-k sampling_ Fan et al. (2018). We use temperature \(\tau=0.7\) and \(k=2\), as we find a large \(k\) harms the generation quality.
6. _Nucleus sampling_ Holtzman et al. (2020). We set \(p=0.95\) and temperature \(\tau=0.5\).

Figure 2 presents the semantic-based evaluation results as a function of sample size. In the single-sample setting, greedy search achieves a strong \(SemF1\), only slightly outperformed by diverse beam search. For the other methods, we observe a clear trade-off between their information coverage (\(SemR\)) and the noise in the final output (\(SemP\)) as the number of samples grows. Nevertheless, all these methods are able to outperform greedy search at a certain sample size, indicating that single-sequence decoding is sub-optimal.

### A simple decode-select strategy boosts the performance of greedy decoding

Greedy search captures the correlations in human-written labels but suffers from _local decisions_ and _path dependency_: high-quality keyphrases can be missed because of improbable first tokens. However, naively outputting the union of multiple sampled sequences brings excessive noise. To achieve a balance between the two, we introduce DeSel, a simple and effective three-stage decoding strategy:

1. **De**code one sequence \(G\) via greedy search.
2. Sample \(n\) sequences \(\{S_{1},...,S_{n}\}\) to collect a set of candidate keyphrases \(S\).
3. **Sel**ect high-quality phrases \(\{s_{1},...,s_{m}\}\subset S\) and output the sequence \((G\;;\;s_{1}\;;\;...\;;\;s_{m})\).

For step 3, we estimate \(\Pr(s_{i}|\mathcal{X})\) for every phrase \(s_{i}\) in the \(n\) samples and \(\Pr(g_{j}|\mathcal{X})\) for every phrase \(g_{j}\in G\). Then, we use \(G\) as a baseline to select at most \(m\) of the most probable \(s_{i}\) that satisfy

\[\Pr(s_{i}|\mathcal{X})\geq\frac{\alpha}{|G|}\sum_{g_{j}\in G}\Pr(g_{j}|\mathcal{X}), \tag{3}\]

where \(\alpha\) is a hyperparameter controlling the trade-off between precision and recall. The probability estimation is obtained with either the original model or a newly trained "one2one" model3 that learns to generate a single keyphrase based on \(\mathcal{X}\). We use nucleus sampling with \(p\) = 0.95 and \(\tau\) = 0.5 for step 2, and set \(m\) = 10, \(n\) = 10, and \(\alpha\) = 0.78.

Footnote 3: Starting from KeyBART, the one2one model can be efficiently trained. We provide more details in appendix F.

Table 3 presents the test results of important models in this paper. DeSel consistently improves the performance over the base model by a large margin.
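A minimal sketch of DeSel's selection step (Eq. 3) is shown below. Here `phrase_prob` stands in for the likelihood estimates \(\Pr(\cdot|\mathcal{X})\) from the KPG model or the one2one scorer, whose computation is assumed to happen elsewhere; the deduplication against the greedy phrases and all names are our own illustrative choices.

```python
from typing import Dict, List, Set

def desel_select(greedy_phrases: List[str],
                 sampled_phrases: Set[str],
                 phrase_prob: Dict[str, float],
                 m: int = 10,
                 alpha: float = 0.78) -> List[str]:
    """Keep at most m sampled phrases whose likelihood beats alpha times the
    mean likelihood of the greedy phrases, and append them to the greedy output."""
    greedy_set = set(greedy_phrases)
    # Baseline from Eq. (3): (alpha / |G|) * sum of Pr(g_j | X).
    baseline = alpha * sum(phrase_prob[g] for g in greedy_phrases) / len(greedy_phrases)
    # Candidates not already predicted by greedy search, most probable first.
    candidates = sorted((s for s in sampled_phrases if s not in greedy_set),
                        key=lambda s: phrase_prob[s], reverse=True)
    selected = [s for s in candidates if phrase_prob[s] >= baseline][:m]
    return greedy_phrases + selected
```

Raising `alpha` makes the baseline stricter and trades recall for precision, which matches the role of \(\alpha\) described above.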
In Table 4, we compare against other selection strategies, including random selection, input overlap using a sentence transformer model, and FreqFS proposed in Zhao et al. (2022). DeSel is the only method that consistently outperforms both greedy search and nucleus sampling.

**Discussion.** Compared to the single-sequence decoding baselines, DeSel wins by bringing in diversity. Compared to the baseline ranking methods, DeSel wins by capturing the correlations between labels (encoded in the greedy search outputs) and by using its likelihood-based criterion to pick out high-quality phrases from the diverse candidates.

**Efficiency.** DeSel increases inference latency, as it generates multiple sequences. To improve the efficiency, one optimization is to reuse the encoder's outputs for all the decoding and scoring operations. We implemented this strategy and benchmarked it with the BART (base) model: DeSel with n = 10 (1 greedy search and 10 sampling sequences decoded) takes 3.8x the time of greedy decoding.

\begin{table} \begin{tabular}{l|c c c} \hline Method & \(P\) & \(A\) & \(Sem\) \\ \hline Greedy search (\(G\)) & 0.426 & 0.063 & 0.597 \\ Nucleus sampling (\(S\)) & 0.385 & 0.074 & 0.599 \\ Random Selection & 0.385 & 0.072 & 0.596 \\ Input Overlap & 0.402 & 0.064 & 0.611 \\ FreqFS (Zhao et al., 2022) & 0.426 & 0.072 & 0.610 \\ DeSel (self) & 0.426 & 0.070 & 0.608 \\ DeSel (one2one) & **0.431** & **0.076** & **0.612** \\ \hline \end{tabular} \end{table} Table 4: A comparison across different decoding strategies. Methods below the dotted line merge \(G\) with \(S\).

\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**KP20k**} & \multicolumn{3}{c|}{**Inspec**} & \multicolumn{3}{c|}{**Krapivin**} & \multicolumn{3}{c|}{**NUS**} & \multicolumn{3}{c}{**SemEval**} \\ & \(P\) & \(A\) & \(Sem\) & \(P\) & \(A\) & \(Sem\) & \(P\) & \(A\) & \(Sem\) & \(P\) & \(A\) & \(Sem\) & \(P\) & \(A\) & \(Sem\) \\ \hline CopyTrans & 0.376 & 0.046 & 0.562 & 0.333 & 0.023 & 0.569 & 0.365 & 0.063 & 0.547 & 0.429 & 0.044 & 0.579 & 0.321 & 0.022 & 0.377 \\ SetTrans & 0.391 & 0.058 & 0.585 & 0.328 & 0.030 & 0.573 & **0.375** & 0.072 & **0.560** & 0.446 & 0.055 & 0.597 & 0.342 & 0.029 & 0.396 \\ \hline CorrKG\({}^{\dagger}\) & 0.404 & 0.071 & N/A & 0.365 & **0.045** & N/A & N/A & N/A & N/A & **0.449** & **0.079** & N/A & **0.359** & **0.044** & N/A \\ \hline BART-large & 0.392 & 0.047 & 0.575 & 0.333 & 0.024 & 0.565 & 0.347 & 0.051 & 0.517 & 0.435 & 0.048 & 0.586 & 0.311 & 0.024 & 0.381 \\ SciBART-large & 0.396 & 0.057 & 0.587 & 0.328 & 0.026 & 0.557 & 0.329 & 0.056 & 0.503 & 0.421 & 0.050 & 0.567 & 0.304 & 0.033 & 0.382 \\ + TAPT & 0.426 & 0.063 & 0.597 & 0.330 & 0.030 & 0.569 & 0.347 & 0.064 & 0.519 & 0.442 & 0.055 & 0.585 & 0.333 & 0.031 & 0.386 \\ + TAPT + DeSel & **0.431** & **0.076** & **0.612** & **0.402** & 0.036 & **0.611** & 0.352 & **0.086** & 0.546 & **0.449** & 0.068 & **0.610** & 0.341 & 0.040 & **0.402** \\ \hline \hline \end{tabular} \end{table} Table 3: Testing results on all datasets. \(P\) and \(A\) stand for \(F1@M\) for present and absent keyphrases. \(Sem\) stands for \(SemF1\). The best performance is boldfaced. \({}^{\dagger}\) copied from Zhao et al. (2022). The best entries in each column are statistically significantly higher than the second best (p < 0.05) via a paired bootstrap test. Full results in Table 8.

Figure 2: A comparison of six strategies for decoding from the SciBART+TAPT model.
Greedy search achieves strong performance while performing worse than beam search and sampling with multiple samples.

## 6 Related Work

**Keyphrase Generation.** Meng et al. (2017) propose the task of Deep Keyphrase Generation and a strong baseline model, CopyRNN. Later works improve the architecture by adding correlation constraints (Chen et al., 2018) and linguistic constraints (Zhao and Zhang, 2019), exploiting learning signals from titles (Ye and Wang, 2018; Chen et al., 2019), and hierarchically modeling the phrases and words (Chen et al., 2020). Ye and Wang (2018) reformulate the problem as generating a sequence of keyphrases, while Ye et al. (2021) further use a set generation formulation to remove the influence of different target phrase orderings. Other works include incorporating reinforcement learning (Chan et al., 2019; Luo et al., 2021), GANs (Swaminathan et al., 2020), and unifying KPE with KPG (Chen et al., 2019; Ahmad et al., 2021). Meng et al. (2021) conduct an empirical study on architecture, generalizability, phrase order, and decoding strategies, with the main focus on models trained from scratch instead of PLMs.

**PLMs for KPG.** More recently, Wu et al. (2021), Chowdhury et al. (2022), Wu et al. (2022), Gao et al. (2022), and Wu et al. (2022) consider fine-tuning prefix-LMs or seq2seq PLMs for KPG. Kulkarni et al. (2022) use KPG as a pre-training task to learn strong BART-based representations. Zhao et al. (2022) adopt optimal transport for loss design and propose frequency-based filtering for decoding to improve BART-based KPG.

## 7 Conclusion

This paper systematically investigated model selection and decoding for building KPG models with seq2seq PLMs. Our analyses suggested much more nuanced patterns beyond the "conventional wisdom" assumed by the majority of the current literature. Our novel decoding strategy, DeSel, significantly improved the performance of greedy search across multiple datasets. More broadly, this study underscores the distinct nature of the KPG task: one should not blindly transpose conclusions or assumptions from other text generation tasks; instead, they warrant careful re-evaluation and empirical validation. Our work also opens up exciting directions for future work with deep groundings in the keyphrase literature, for instance, making KPG models more robust, interpreting KPG models, and designing better decoding algorithms for KPG.

### Limitations

While our study sheds light on important aspects of keyphrase generation (KPG) models, several limitations present opportunities for future research. First, our analysis focuses on model selection and decoding and thus uses the default cross-entropy loss and the original training set without data augmentations. Investigating how the discussed design choices interact with more recent data augmentation (Ray Chowdhury et al., 2022; Garg et al., 2022) or training strategies (Zhao et al., 2022) is an important future study. In addition, how best to combine the conclusions reached in this paper with long-input KPG (Garg et al., 2022) or KPG models trained with reinforcement learning (Chan et al., 2019; Luo et al., 2021) is worth future study. Second, while in-domain pre-training combined with task adaptation was found to enhance KPG performance, we did not fully investigate the underlying mechanisms leading to these improvements. Further research could explore the interplay between these two aspects and uncover more granular insights into how they improve KPG.
Finally, although we revealed a compromise between performance optimization and model robustness, we did not delve into designing new methods for improving the robustness of these models against perturbed inputs. Future research could further explore techniques to mitigate this trade-off, developing models that maintain high performance while being resistant to input perturbations.

### Ethics Statement

S2ORC and OAGKX are released under the Creative Commons By 4.0 License. We perform text cleaning and email/URL filtering on S2ORC to remove sensitive information, and we keep OAGKX as-is. We use the keyphrase benchmarking datasets distributed by the original authors. No additional preprocessing is performed before fine-tuning except lower-casing and tokenization. We do not re-distribute any of the datasets used in this work.

Potential risks of SciBART include accidental leakage of (1) sensitive personal information and (2) inaccurate factual information. For (1), we carefully preprocess the data in the preprocessing stage to remove personal information, including emails and URLs. However, we had difficulties desensitizing names and phone numbers in the text because they overlapped with the informative content. For (2), since SciBART is pre-trained on scientific papers, it may generate scientific-style statements that include inaccurate information. We encourage potential users of SciBART not to rely fully on its outputs without verifying their correctness.

Pre-training SciBART and fine-tuning the large T5 models are computationally heavy, and we estimate the total CO\({}_{2}\) emission to be around 3000 kg using the calculation application provided by Lacoste et al. (2019). We will release the fine-tuned checkpoints, and we document the hyperparameters in appendix D to help the community reduce the energy spent optimizing PLMs for KPG and various other NLP applications.

## Acknowledgments

The research is supported in part by Taboola, NSF CCF-2200274, and an Amazon AWS credit award. We thank the Taboola team for the helpful discussion. We also thank anonymous reviewers, Da Yin, Tanmay Parekh, and other members of the UCLA-NLP group for their valuable feedback.
2301.10257
Hadronic versus leptonic origin of gamma-ray emission from supernova remnants
GeV and TeV emission from the forward shocks of supernova remnants (SNRs) indicates that they are capable particle accelerators, making them promising sources of Galactic cosmic rays (CRs). However, it remains uncertain whether this $\gamma$-ray emission arises primarily from the decay of neutral pions produced by very high energy hadrons, or from inverse-Compton and/or bremsstrahlung emission from relativistic leptons. By applying a semi-analytic approach to non-linear diffusive shock acceleration (NLDSA) and calculating the particle and photon spectra produced in different astrophysical environments, we parametrize the relative strength of hadronic and leptonic emission. We show that, even if CR acceleration is likely to occur in all SNRs, the observed photon spectra may instead primarily reflect the environment surrounding the SNR, specifically the ambient density and radiation field. We find that the most hadronic-appearing spectra are young and found in environments of high density but low radiation energy density. This study aims to guide the interpretation of current $\gamma$-ray observations and single out the best targets of future campaigns.
N. Corso, R. Diesing, D. Caprioli
2023-01-24T19:00:02Z
http://arxiv.org/abs/2301.10257v1
# Hadronic versus leptonic origin of gamma-ray emission from supernova remnants

###### Abstract

GeV and TeV emission from the forward shocks of supernova remnants (SNRs) indicates that they are capable particle accelerators, making them promising sources of Galactic cosmic rays (CRs). However, it remains uncertain whether this \(\gamma\)-ray emission arises primarily from the decay of neutral pions produced by very high energy hadrons, or from inverse-Compton and/or bremsstrahlung emission from relativistic leptons. By applying a semi-analytic approach to non-linear diffusive shock acceleration (NLDSA) and calculating the particle and photon spectra produced in different astrophysical environments, we parametrize the relative strength of hadronic and leptonic emission. We show that, even if CR acceleration is likely to occur in all SNRs, the observed photon spectra may instead primarily reflect the environment surrounding the SNR, specifically the ambient density and radiation field. We find that the most hadronic-appearing spectra are young and found in environments of high density but low radiation energy density. This study aims to guide the interpretation of current \(\gamma\)-ray observations and single out the best targets of future campaigns.

## 1 Introduction

The forward shocks of supernova remnants (SNRs) are promising candidates for the primary sources of Galactic cosmic rays (CRs), since they provide sufficient energetics and an efficient acceleration mechanism, diffusive shock acceleration, or DSA (O'C. Drury et al., 1994; Hillas, 2005; Berezhko and Volk, 2007; Ptuskin et al., 2010; Caprioli et al., 2010). However, direct evidence of efficient hadron acceleration by SNRs, particularly up to the so-called CR knee at energies \(\gtrsim 10^{15}\) eV, remains limited (Blasi, 2019). The best observational evidence for hadron acceleration is \(>100\) MeV \(\gamma\)-ray emission from the decay of neutral pions (\(\pi_{0}\)) produced by interactions between CR ions and the ambient medium (O'C. Drury et al., 1994). However, when leptons (primarily electrons) are accelerated, they, too, can produce strong \(\gamma\)-ray signatures via inverse Compton (IC) and relativistic bremsstrahlung radiation (Aharonian et al., 2006). Ideally, observational signatures would identify which of the two scenarios dominates, but in many cases the results are ambiguous.

One example of this ambiguity is RX J1713.7-3946, which was identified as a source of high-energy \(\gamma\)-ray emission when it was detected in the TeV band by the HESS collaboration (Aharonian et al., 2006). As demonstrated in Morlino et al. (2009), both hadronic and leptonic scenarios could explain the observed HESS data. Later data from Fermi-LAT, however, favored leptonic models, apparently indicating that RX J1713.7-3946 and Vela Jr. are not efficient hadron accelerators (Ellison et al., 2010; Zirakashvili and Aharonian, 2010; Abdo et al., 2011; Lee et al., 2013). On the other hand, Morlino and Caprioli (2012) later identified Tycho's SNR as a strong candidate for hadronic \(\gamma\)-ray emission based on VERITAS and Fermi-LAT data; also, the detection of the characteristic "pion bump" in IC443 and W44 confirmed the hadronic nature of the \(\gamma\)-ray emission from these SNRs interacting with molecular clouds (Ackermann et al., 2013).
These findings raise questions about whether hadronic emission from SNRs is common enough for them to be the primary accelerators of Galactic CRs. Ideally, to assess the hadronic/leptonic nature of a source, multi-wavelength measurements would be made for all \(\gamma\)-ray bright SNRs under consideration, but this is time-consuming and often inconclusive due to the limited constraints on age and distance. In recent years, kinetic simulations of non-relativistic shocks have shown that the acceleration of protons and heavier nuclei is mostly controlled by the local inclination of the shock (the angle between the shock normal and the local magnetic field), rather than by the shock strength, parametrized by its sonic and Alfvenic Mach numbers (Caprioli and Spitkovsky, 2014a,b, 2015; Caprioli et al., 2017; Caprioli et al., 2018; Haggerty and Caprioli, 2020). Conversely, the efficiency of electron acceleration in such shocks is not yet fully understood (though see the works by Guo et al., 2014; Park et al., 2015; Xu et al., 2020; Shalaby et al., 2022). Since most SNRs are likely probing a variety of shock inclinations (Pais and Pfrommer, 2020; Winner et al., 2020), we do not expect the acceleration efficiency to vary greatly among the SNRs that exhibit strong shocks. Therefore, we propose that environmental factors are the key determinants of the dominant emission mechanism. That is, we consider how the age and the characteristics of the medium in which SNRs expand (density profile and normalization, energy density of background radiation) impact their \(\gamma\)-ray production and thus its inferred hadronic/leptonic nature. In this study, we apply a semi-analytic formalism for non-linear diffusive shock acceleration to calculate the particle and photon spectra at various stages in the evolution of simulated SNRs. Using these spectra, we parameterize the relative strength of hadronic and leptonic emission in order to assess when the former dominates over the latter. Such an analysis, which leverages the relative normalization and spectral slopes of different kinds of \(\gamma\)-ray emission, provides information about when/where SNRs are most likely to exhibit a hadronic signature. We construct effective "look-up tables" that may guide the interpretation of the spectra currently available and of those that will be provided by the incoming generation of \(\gamma\)-ray telescopes, such as LHAASO and especially CTA. The paper is organized as follows: in Section 2, we discuss the theoretical background of our computational model, along with the analytical tools we use to present our results; in Sections 3 and 4, we present and analyze our results, with the goal of providing a tool for quick estimation of a source's capacity to produce hadronic emission. ## 2 Method In general, the evolution of a SNR follows four principal stages (e.g., Bisnovatyi-Kogan and Silich, 1995; Ostriker and McKee, 1988; Diesing and Caprioli, 2018): in the _ejecta-dominated stage_, the ejected mass is much greater than the swept-up mass and the SNR expands effectively unimpeded; in the _Sedov stage_, the swept-up mass exceeds the ejected mass and the SNR expands adiabatically; in the _pressure-driven snowplow_, the SNR begins to lose energy to radiative cooling but continues to expand due to its internal pressure exceeding that of the ambient background; finally, in the _momentum-driven snowplow_, expansion is driven by the residual kinetic energy from the explosion. 
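As a rough orientation for the timescales separating these stages, the Sedov transition can be estimated by equating swept-up and ejected mass. The sketch below is a back-of-the-envelope estimate under stated assumptions (homogeneous medium, mean molecular weight \(\mu=1.4\)), not the evolution model used in the paper:

```python
import numpy as np

M_SUN = 1.989e33   # g
M_P = 1.673e-24    # g
PC = 3.086e18      # cm
YR = 3.154e7       # s

def sedov_transition(n0_cm3, M_ej_msun=1.0, E_erg=1e51, mu=1.4):
    """Rough Sedov-Taylor transition radius/time for a homogeneous medium.

    Defined by swept-up mass = ejected mass:
        (4*pi/3) * mu * m_p * n0 * R_ST**3 = M_ej,
    with t_ST ~ R_ST / v_ej and v_ej = sqrt(2 E / M_ej).
    """
    M_ej = M_ej_msun * M_SUN
    rho0 = mu * M_P * n0_cm3                       # ambient mass density [g cm^-3]
    R_ST = (3.0 * M_ej / (4.0 * np.pi * rho0)) ** (1.0 / 3.0)
    v_ej = np.sqrt(2.0 * E_erg / M_ej)             # characteristic ejecta speed
    return R_ST / PC, (R_ST / v_ej) / YR           # [pc], [yr]

for n0 in (5e-3, 1.0, 10.0):                       # densities spanning the paper's range
    R, t = sedov_transition(n0)
    print(f"n0 = {n0:6.3f} cm^-3 -> R_ST = {R:5.1f} pc, t_ST = {t:7.0f} yr")
```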
For this work, only the ejecta-dominated and Sedov stages are considered, since most SNRs seem to fade away in the radiative stage (e.g., Case and Bhattacharya, 1998; Bandiera and Petruk, 2010). Broadly speaking, we consider two types of environments surrounding our model SNRs. In the first, we assume a homogeneous interstellar medium (ISM) with a uniform matter density. In the second, we model an environment that may exist around a core-collapse supernova, in which the medium is dominated by stellar winds driven by the supernova progenitor and exhibits an inverse square matter density profile (\(n\propto R_{\rm sh}^{-2}\)). All simulated SNRs eject \(1M_{\odot}\) of mass with \(E=10^{51}\,\rm erg\) of kinetic energy. For the homogeneous profiles, we assume an ambient magnetic field strength of \(B_{0}=3\,\mu\rm G\), and we test ambient number density values spanning \(n_{0}\in\left[5\times 10^{-3},10^{1}\right]\rm cm^{-3}\). In the case of the wind profile, we represent the number density as \(n(r)=n_{0}\left(r/\,\rm pc\right)^{-2}\). Our choice of \(n_{0}\) is motivated by Weaver et al. (1977), who note that \(\rho(r)\propto\dot{M}/(V_{w}r^{2})\), where \(\dot{M}\) is the mass loss rate of the progenitor and \(V_{w}\) is the speed of its stellar wind. Taking \(\dot{M}_{-5,\odot}/V_{w,6}\) to vary around order unity, with \(\dot{M}=\dot{M}_{-5,\odot}10^{-5}M_{\odot}\,\rm yr^{-1}\) and \(V_{w}=V_{w,6}10^{6}\,\rm cm\,s^{-1}\), we obtain and sample the following range of values: \(n_{0}\in\left[3.5\times 10^{-2},1.1\times 10^{1}\right]\rm cm^{-3}\). We adopt an ambient magnetic field strength profile that goes as the square root of the number density, of the form \(B_{0}/\,\rm G\simeq 0.01\sqrt{n/\left(5000\,\rm cm^{-3}\right)}\) (Chevalier, 1998). We simulate CR acceleration using the semi-analytic formalism for non-linear diffusive shock acceleration (NLDSA) described by Caprioli et al. (2009), Caprioli et al. (2010), Caprioli (2012), and Diesing and Caprioli (2019), and references therein (Malkov, 1997; Malkov et al., 2000; Blasi, 2002, 2004; Amato and Blasi, 2005, 2006). This model self-consistently solves the diffusion-advection equation for the transport of non-thermal particles in a quasi-parallel, non-relativistic shock, including the dynamical backreaction of accelerated particles and of CR-generated magnetic turbulence. Magnetic field amplification due to CR-driven streaming instabilities is taken into account as described in Diesing and Caprioli (2021); more precisely, fast shocks are dominated by the non-resonant (Bell) instability, while later in the Sedov stage the resonant instability becomes important (e.g., Bell, 2004; Amato and Blasi, 2009). This formalism calculates the instantaneous proton spectrum at each timestep of SNR evolution. As in Diesing and Caprioli (2019), we calculate the instantaneous electron spectra using the analytical approximation provided in Zirakashvili & Aharonian (2007), \[f_{\rm e}(p)=K_{\rm ep}f_{\rm p}(p)\left[1+0.523\left(p/p_{\rm e,max}\right)^{9/4}\right]^{2}e^{-p^{2}/p_{\rm e,max}^{2}}, \tag{1}\] with \(p_{\rm e,max}\) the maximum electron momentum determined by equating the acceleration and synchrotron loss timescales and \(K_{\rm ep}\) the normalization of the electron spectrum relative to that of protons. 
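To illustrate how Eq. (1) is applied, the following sketch derives an electron spectrum from a given proton spectrum; the power-law proton distribution and the cutoff value below are placeholders rather than the NLDSA output actually used in the paper:

```python
import numpy as np

def electron_spectrum(p, f_p, K_ep=1.6e-3, p_e_max=1e4):
    """Electron spectrum from Eq. (1) (Zirakashvili & Aharonian 2007):
    f_e(p) = K_ep * f_p(p) * [1 + 0.523 (p/p_e_max)^(9/4)]^2 * exp(-p^2/p_e_max^2).

    p and p_e_max must be in the same (arbitrary) momentum units.
    """
    return K_ep * f_p * (1.0 + 0.523 * (p / p_e_max) ** (9.0 / 4.0)) ** 2 \
        * np.exp(-(p / p_e_max) ** 2)

# Placeholder proton spectrum: a steep power law f_p ~ p^-4.2 in momentum
# (steeper than the standard DSA p^-4, mimicking a "postcursor"-modified slope).
p = np.logspace(0, 6, 300)       # momentum grid, e.g. in units of m_p c
f_p = p ** -4.2
f_e = electron_spectrum(p, f_p)  # cooled/loss-weighted spectra are handled separately
```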
Our reference value is \(K_{\rm ep}=1.6\times 10^{-3}\), which corresponds to the value determined for Tycho's SNR in Morlino & Caprioli (2012), and we discuss how results change when varying this parameter over the range \(K_{\rm ep}=10^{-4}\)-\(10^{-2}\), which encompasses the range of values inferred in other SNRs as well (Berezhko & Volk, 2004; Berezhko et al., 2006; Berezhko & Volk, 2006; Lee et al., 2013). The instantaneous proton and electron spectra are then weighted to account for adiabatic and, in the case of electrons, synchrotron losses, before being summed to produce a cumulative spectrum (see Caprioli et al., 2010; Diesing & Caprioli, 2019, for more details). To generate the spectrum of nonthermal radiation from a modeled SNR, we use the Naima Python package (Zabalza, 2015) which, given arbitrary proton and electron momentum distributions, calculates the emission due to IC (Khangulyan et al., 2014), synchrotron (Aharonian et al., 2010), nonthermal bremsstrahlung (Baring et al., 1999), and pion decay (Kafexhiu et al., 2014). We consider different background radiation fields, meant to mimic different astrophysical environments, on top of the ubiquitous Cosmic Microwave Background radiation (CMB), with a temperature of \(T=2.72\,\mathrm{K}\) and an energy density \(u_{\rm rad}=0.261\,\mathrm{eV\,cm^{-3}}\). An effective "maximal" radiation field would correspond to that of a HII environment, as described in Section 12.7 of Draine (2011), with an energy density of \(u_{\rm rad}=3.9\times 10^{3}\,\mathrm{eV\,cm^{-3}}\). To span the range of photon energy densities between these two extremes, we also consider an environment consisting of the CMB field and starlight peaking in the mid-infrared (MIR) with a temperature of \(T=100\,\mathrm{K}\). We treat the energy density of this stellar radiation field as a free parameter. To interpret the dominant form of emission, we introduce a parameter \(H\), which we name the _hadronicity_ of the emitted radiation. This parameter is defined as: \[H\equiv\frac{2}{\pi}\arctan\left[\log_{10}\left(\frac{L_{\rm had}}{L_{\rm lep}}\right)\right] \tag{2}\] where \(L_{\rm had/lep}\) is the hadronic/leptonic luminosity integrated over a given energy band. Throughout this work, we consider the "GeV" band as \(100\,\mathrm{MeV}\)-\(100\,\mathrm{GeV}\) and the "TeV" band as \(100\,\mathrm{GeV}\)-\(1\,\mathrm{PeV}\). These two bands broadly reflect the regimes of energy spanned by GeV and TeV observatories (i.e., Fermi and Cherenkov telescopes). Using this definition, a value of \(0.75<H\leq 1\) is considered "extremely hadronic," while \(0<H\leq 0.5\) corresponds to "mildly hadronic." Conversely, similar absolute values of \(H\) but with negative sign would correspond to extremely and mildly leptonic cases. The hadronicity parameter \(H\) may be interpreted as the likelihood that the \(\gamma\)-ray emission of a SNR with given characteristics (age, density, magnetic field, photon background) is of hadronic origin. For practical purposes, \(H\) is meant to provide an informed guess about the hadronic/leptonic nature of the GeV or TeV emission from a given SNR without the need of performing a detailed time-dependent, multi-zone calculation of particle acceleration and its ensuing multi-wavelength emission. Figure 1: Cumulative proton spectra (solid lines) and electron spectra (dashed lines) at the transition between the ejecta-dominated and Sedov-Taylor stages of a modeled SNR in a sample of environments. Line colors denote the density normalization used in each model. In the left panel, the ambient medium is taken to be homogeneous; in the right, it follows a wind profile. 
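Equation (2) is straightforward to evaluate once the band luminosities are known; a minimal sketch (the luminosities here are dummy numbers, which in practice would come from integrating, e.g., Naima model outputs over the GeV or TeV band):

```python
import numpy as np

def hadronicity(L_had, L_lep):
    """Hadronicity of Eq. (2): H = (2/pi) * arctan(log10(L_had / L_lep)).
    Maps the hadronic-to-leptonic luminosity ratio onto (-1, 1)."""
    return (2.0 / np.pi) * np.arctan(np.log10(L_had / L_lep))

def classify(H):
    """Verbal classification following the paper's convention."""
    label = "hadronic" if H > 0 else "leptonic"
    strength = "extremely" if abs(H) > 0.75 else "mildly"
    return f"{strength} {label}"

# Example: hadronic luminosity ~160x the leptonic one in a given band
H = hadronicity(L_had=1.6e35, L_lep=1.0e33)  # erg/s, dummy values
print(H, classify(H))                         # H ~ 0.73 -> "mildly hadronic"
```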
## 3 Results The ion and electron spectra produced by a sample of SNRs expanding in different density profiles are shown in Figure 1. These spectra are the cumulative post-shock distributions calculated when the SNR transitions from the ejecta-dominated to the Sedov stage. This time, denoted \(T_{\rm ST}\), is evaluated as the moment when the accumulated mass from the surrounding medium exceeds the originally ejected mass. It is worth stressing that these spectra are steeper than the standard DSA prediction, \(dN/dE\propto E^{-2}\), due to the shock modification induced by nonthermal particles and by the amplified magnetic field they generate (in particular, we include the effects of a "postcursor," as described in Haggerty & Caprioli, 2020; Caprioli et al., 2020; Diesing & Caprioli, 2021). In addition, electron spectra are cooled by inverse-Compton and synchrotron losses, as pointed out by Diesing & Caprioli (2019) and discussed by Cristofari et al. (2021); Morlino & Celli (2021); since the maximum electron energy is controlled by synchrotron losses, it is strongly dependent on the amplified magnetic field, which correlates with the local density, too. In Figure 2, we present a wide sample of modeled \(\gamma\)-ray spectra from SNRs in different environments, spanning the full range of hadronicity. These spectra, too, are calculated at the beginning of the Sedov stage. This variation is exacerbated by the fact that an increase in the hadronic \(\gamma\)-ray luminosity typically accompanies a decrease in the leptonic luminosity and vice-versa. Namely, the luminosity due to \(\pi_{0}\)-decay scales with the ambient density, while IC emission tends to be inhibited in denser environments, where electrons tend to suffer strong synchrotron losses. Note also that relativistic bremsstrahlung is always subdominant with respect to IC. Figure 3, instead, depicts the time evolution of the GeV and TeV luminosities for a homogeneous profile and a wind one. In particular, for the homogeneous scenario, in the late Sedov stage the leptonic luminosity tends to grow faster than the hadronic luminosity, as a consequence of the shifting of the electron cut-off to higher energies when the amplified magnetic field decreases and synchrotron losses become less severe. Figure 2: Cumulative hadronic \(\gamma\)-ray spectra (blue lines) and leptonic \(\gamma\)-ray spectra (red lines) at the transition between the ejecta-dominated and Sedov-Taylor stages of a modeled SNR in a sample of environments. Each panel represents a different ambient medium, with \(n_{0}\) denoting the matter density normalization and \(u_{\rm rad}\) denoting the radiation energy density. As in Figure 1, we consider both homogeneous and wind profiles for the ambient density. Vertical black dashed lines mark \(100\,\rm GeV\), the dividing energy between our GeV and TeV bands. The top (bottom) panels represent hadronic (leptonic) cases with the extremity of the scenario (i.e., the absolute value of the hadronicity, \(H\)) increasing from left to right. 
Note that the nature of the underlying particle acceleration remains the same across panels; the strong variation in \(\gamma\)-ray emission shown here arises solely from environmental factors. As a consequence, over time, there is a general trend for the spectra to become more leptonic, perhaps even switching from being dominantly hadronic to leptonic. In the case of a wind profile, the leptonic emission monotonically increases against a monotonically decreasing hadronic curve, which results in a definitive progression toward a leptonically dominated scenario. To summarize the effect of the environment on an SNR's \(\gamma\)-ray emission, Figure 4 shows hadronicity as a function of ambient matter density and radiation energy density. At number densities less than order \(\sim\)0.1-1 cm\({}^{-3}\), where IC scattering tends to dominate leptonic emission, hadronicity increases linearly with increasing matter density and with decreasing energy density. Once the scenario becomes moderately or extremely hadronic, \(\pi^{0}\) decay and relativistic bremsstrahlung become the dominant processes, such that the energy density dependence disappears and the hadronicity increases solely with number density. It is also worth noting that hadronic signatures tend to be stronger in the TeV band, largely due to the fact that TeV energies often sample the IC cutoff. We summarize the effect of SNR evolution on hadronicity in Figure 5, which shows hadronicity as a function of SNR age. As stated previously, SNRs generally become more leptonic with time. Here we can see that this effect is more pronounced in wind profiles, where the ambient density decreases with radius. Homogeneous profiles also exhibit a modest decrease in hadronicity with time in the TeV band, due to the increasing IC cutoff energy; in this case, however, we never find situations in which a source transitions from being dominantly hadronic to leptonic. Finally, we examine the effect of changing the normalization of the electron spectrum relative to the ion spectrum, \(K_{\rm ep}\). Figure 6 shows the hadronicity of a SNR embedded in a low-density environment as a function of \(K_{\rm ep}\). We allow \(K_{\rm ep}\) to vary from \(10^{-2}\) to \(10^{-4}\), according to the observational values inferred by the analysis of individual SNRs (e.g., Berezhko and Volk, 2004; Volk et al., 2005; Morlino and Caprioli, 2012), of the radio emission from nearby galaxies (Sarbadhicary et al., 2017), and kinetic simulations (e.g., Park et al., 2015; Xu et al., 2020). Both GeV and TeV bands show hadronicity profiles of an arctangent shape, resulting from the direct scaling of the leptonic luminosity with the number of nonthermal electrons. The TeV curve, which tends to be more hadronic than its GeV counterpart, appears phase-shifted but otherwise follows the same form. A similar phase shift also occurs when environmental parameters change. Thus, when classifying an SNR as "hadronic" or "leptonic," marginal cases can be sensitive to uncertainties in \(K_{\rm ep}\). However, in most cases, varying \(K_{\rm ep}\) within reasonable values has little impact on the expected nature of the SNR \(\gamma\)-ray emission. Figure 3: Time evolution of hadronic and leptonic luminosities for a moderately hadronic homogeneous profile scenario (left) and a moderately leptonic wind profile scenario (right). Environmental parameters are the same as those used in the middle column of Figure 2. The top (bottom) row corresponds to luminosities calculated in the GeV (TeV) band. The vertical black dashed lines denote the onset of the Sedov-Taylor phase. 
## 4 Discussion Our analysis demonstrates that the apparent hadronicity of an SNR depends strongly on its age and the environment into which it expands. Notably, the best SNR candidates for strong hadronic emission are young and expanding into environments with high matter densities and/or low radiation energy densities. This conclusion derives from the fact that emission from \(\pi^{0}\) decay scales with matter density, while IC emission scales with radiation energy density. Many of the candidates for hadronic SNRs identified in Caprioli (2011) and Acero et al. (2015) are indeed young and/or associated with molecular clouds. Likewise, Funk (2015) collected the spectra of several bright \(\gamma\)-ray sources, and those that are identified as likely hadronic either are young or exist in high density environments. When considering GeV and TeV band spectra, one notable pattern, as mentioned above, is that TeV emission tends to exhibit higher hadronicity. This behavior may run counter to expectations since, at TeV energies, the hadronic spectrum is steeper than the leptonic one. However, when measuring hadronicity over a fixed energy band, the dominance of one process over another depends in large part on the position of the high-energy cutoff which, for IC emission, is set by the electron cutoff and is thus mediated by synchrotron losses. We can summarize the important role of a SNR's environment on its \(\gamma\)-ray emission in terms of a simple scaling relation for hadronicity, assuming IC dominates the leptonic emission. Assuming power law particle spectra of the form \(dN_{\rm p}/dE=K_{\rm p}E^{-q}\) and \(dN_{\rm e}/dE=K_{\rm e}E^{-q}\), we construct expressions for the emissivities of the two radiative processes motivated by the derivations in Longair (2011) and Ghisellini (2013): \(\epsilon_{\gamma,\pi^{0}}(E)\propto nK_{\rm p}E^{-q}\), \(\epsilon_{\gamma,\rm IC}(E)\propto u_{\rm rad}K_{\rm e}E^{-p}\), where \(n\) is the ambient proton number density, \(u_{\rm rad}\) is the ambient radiation field energy density, and \(p=(q-1)/2\). Note that these scalings are only valid at energies below the high-energy cutoff for each species; we expect these approximations to break down in the TeV band. If we take \(K_{\rm ep}=K_{\rm e}/K_{\rm p}\), then the ratio of these two expressions scales as \[\frac{\epsilon_{\gamma,\pi^{0}}}{\epsilon_{\gamma,\rm IC}}\propto\frac{n}{K_{\rm ep}u_{\rm rad}}E^{-(q+1)/2}. \tag{3}\] Using the results of our hadronicity calculations in a homogeneous medium to estimate the normalization of this expression, we obtain \[\frac{\epsilon_{\gamma,\pi^{0}}}{\epsilon_{\gamma,\rm IC}}\simeq 160\bigg{(}\frac{10^{-3}}{K_{\rm ep}}\bigg{)}\bigg{(}\frac{n}{\rm cm^{-3}}\bigg{)}\bigg{(}\frac{\rm eV\ cm^{-3}}{u_{\rm rad}}\bigg{)}\bigg{(}\frac{\rm GeV}{E}\bigg{)}^{\frac{q+1}{2}}. \tag{4}\] Figure 4: Hadronicity as a function of matter density normalization (\(n_{0}\)) and radiation energy density (\(u_{\rm rad}\)). Direct results from our model are presented as data points, while contours in the background are generated by 2D interpolation. The left (right) columns correspond to the GeV (TeV) bands, while the top (bottom) rows correspond to homogeneous (wind) density profiles. Broadly speaking, hadronic emission dominates in environments with high ambient density and low radiation energy density. 
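To make Eq. (4) concrete, a quick numerical evaluation (illustrative parameter values only; as cautioned above, the relation is only expected to hold below the high-energy cutoffs):

```python
import numpy as np

def pi0_to_ic_ratio(n_cm3, u_rad_eV_cm3, E_GeV, K_ep=1.6e-3, q=2.2):
    """Hadronic-to-leptonic emissivity ratio from Eq. (4):
    eps_pi0/eps_IC ~= 160 (1e-3/K_ep) (n/cm^-3) (eV cm^-3/u_rad) (GeV/E)^((q+1)/2).
    Valid only below the high-energy cutoffs of both species.
    """
    return 160.0 * (1e-3 / K_ep) * n_cm3 / u_rad_eV_cm3 \
        * (1.0 / E_GeV) ** ((q + 1.0) / 2.0)

# A dense environment with a weak radiation field at E = 1 GeV:
print(pi0_to_ic_ratio(n_cm3=10.0, u_rad_eV_cm3=0.26, E_GeV=1.0))   # ~3.8e3, strongly hadronic
# A tenuous environment with a strong radiation field:
print(pi0_to_ic_ratio(n_cm3=0.01, u_rad_eV_cm3=100.0, E_GeV=1.0))  # ~1e-2, strongly leptonic
```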
The approximation in Eq. (4) yields good agreement with the top left panel of Figure 4. The story is somewhat more complicated for expansion in a wind profile, for which this simple, single-zone model is not necessarily a good approximation. Also, it is possible that the IC emission in the TeV band is overestimated because synchrotron losses may steepen the electron spectrum with respect to the ions' (Diesing and Caprioli, 2019). However, in the absence of detailed information about a SNR's expansion history, this expression still holds as a rough estimate of the hadronicity. Generally speaking, the overall trends identified are consistent across both homogeneous and wind profiles, with the most notable difference being that in a decreasing density profile, SNRs may exhibit increasingly leptonic emission as they age. Physically speaking, any wind profile should terminate at some finite distance (Weaver et al., 1977; Ptuskin and Zirakashvili, 2005), beyond which the emission should be similar to that of a SNR expanding in the homogeneous ISM (Caprioli, 2011). Furthermore, beyond a power-law scaling with radius, none of our profiles include inhomogeneities (clumps, molecular clouds, ISM-scale gradients), which would certainly be present in realistic scenarios. Since the matter density of clouds would be greater than their ambient surroundings, our results suggest that they would increase the hadronicity of an observed source. ## 5 Conclusion In summary, we modeled time-dependent, multi-zone CR acceleration in an evolving SNR using a semi-analytical implementation of non-linear DSA; the goal is to understand the factors that influence whether a SNR's emission is dominated by hadronic or leptonic processes. Figure 5: Hadronicity as a function of SNR age and ambient density normalization (\(n_{0}\)). Direct results from our model are presented as data points, while contours in the background are generated by 2D interpolation. The left (right) columns correspond to the GeV (TeV) bands, while the top (bottom) rows correspond to homogeneous (wind) density profiles. The black lines represent the Sedov-Taylor times for each density profile. In general, young SNRs tend to be the most hadronic. We find that, for a fixed supernova explosion, the dominance of hadronic or leptonic emission is governed by environmental factors, rather than by the underlying nature of particle acceleration. Furthermore, we find that SNRs tend to appear more leptonic as they age, particularly in the TeV band, due to decreases in the amplified magnetic field and thus increases in the synchrotron-modulated maximum electron energy. This transition is even more pronounced in SNRs expanding into media with decreasing density (i.e., stellar winds). Thus, our findings suggest that the best candidates bearing signatures of hadron acceleration are young, core-collapse SNRs, as well as SNRs interacting with molecular clouds. More quantitatively, SNRs expanding into media with \([n/(\mathrm{cm}^{-3})]/[u_{\mathrm{rad}}/(\mathrm{eV\ cm}^{-3})]\gtrsim 3\) are likely to exhibit hadronic signatures even in the case of very efficient electron acceleration (\(K_{\mathrm{ep}}\gtrsim 10^{-2}\)). These findings may guide the missions of very-high energy \(\gamma\)-ray observatories such as H.E.S.S., MAGIC, VERITAS, LHAASO, and, in the near future, CTA. This research was partially supported by NASA grant 80NSSC20K1273 and the NSF grants AST-1909778, AST-2009326 and PHY-2010240.
2302.11419
Aligned Diffusion Schrödinger Bridges
Diffusion Schr\"odinger bridges (DSB) have recently emerged as a powerful framework for recovering stochastic dynamics via their marginal observations at different time points. Despite numerous successful applications, existing algorithms for solving DSBs have so far failed to utilize the structure of aligned data, which naturally arises in many biological phenomena. In this paper, we propose a novel algorithmic framework that, for the first time, solves DSBs while respecting the data alignment. Our approach hinges on a combination of two decades-old ideas: The classical Schr\"odinger bridge theory and Doob's $h$-transform. Compared to prior methods, our approach leads to a simpler training procedure with lower variance, which we further augment with principled regularization schemes. This ultimately leads to sizeable improvements across experiments on synthetic and real data, including the tasks of predicting conformational changes in proteins and temporal evolution of cellular differentiation processes.
Vignesh Ram Somnath, Matteo Pariset, Ya-Ping Hsieh, Maria Rodriguez Martinez, Andreas Krause, Charlotte Bunne
2023-02-22T14:55:57Z
http://arxiv.org/abs/2302.11419v3
# Aligned Diffusion Schrodinger Bridges ###### Abstract Diffusion Schrodinger bridges (DSB) have recently emerged as a powerful framework for recovering stochastic dynamics via their marginal observations at different time points. Despite numerous successful applications, existing algorithms for solving DSBs have so far failed to utilize the structure of _aligned_ data, which naturally arises in many biological phenomena. In this paper, we propose a novel algorithmic framework that, for the first time, solves DSBs while respecting the data alignment. Our approach hinges on a combination of two decades-old ideas: The classical Schrodinger bridge theory and Doob's _\(h\)-transform_. Compared to prior methods, our approach leads to a simpler training procedure with lower variance, which we further augment with principled regularization schemes. This ultimately leads to sizeable improvements across experiments on synthetic and real data, including the tasks of rigid protein docking and temporal evolution of cellular differentiation processes. ## 1 Introduction _Interpolation_, the task of transforming one given distribution into another, lies at the heart of many modern machine learning applications such as single-cell genomics (Tong et al., 2020; Schiebinger et al., 2019; Bunne et al., 2022), meteorology (Fisher et al., 2009), and robotics (Chen et al., 2021). To this end, diffusion Schrodinger bridges (De Bortoli et al., 2021; Chen et al., 2022; Vargas et al., 2021; Liu et al., 2022) have recently emerged as a powerful paradigm due to their ability to generalize prior deep diffusion-based models, notably score matching with Langevin dynamics (Song and Ermon, 2019; Song et al., 2021) and denoising diffusion probabilistic models (Ho et al., 2020), which have achieved the state-of-the-art on many generative modeling problems. Despite the wide success, a significant limitation of existing frameworks for solving DSBs is that they fail to capture the _alignment_ of data: If \(\hat{\mathbb{P}}_{0},\hat{\mathbb{P}}_{1}\) are two (empirical) distributions between which we wish to interpolate, then a tacit assumption in the literature is that the dependence of \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\) is unknown and somehow has to be recovered. Such an assumption, however, ignores important scenarios where the data is _aligned_, meaning that the samples from \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\) naturally come in pairs \((\mathbf{x}_{0}^{i},\mathbf{x}_{1}^{i})_{i=1}^{N}\), which is common in many biological phenomena. Proteins, for instance, undergo conformational changes upon interactions with other biomolecules (protein docking, see Fig. 1). The goal is to model conformational changes by recovering a (stochastic) trajectory \(\mathbf{x}_{t}\) based on the positions observed at two time points \((\mathbf{x}_{0},\mathbf{x}_{1})\). Failing to incorporate this alignment would mean that we completely ignore information on the correspondence between the initial and final points of the molecules, resulting in a much harder problem than necessary. Figure 1: Overview of SBalign: In biological tasks such as protein docking, one is naturally provided with _aligned_ data in the form of unbound and bound structures of participating proteins. Our goal is therefore to recover a stochastic trajectory from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{1}\). To achieve this, we connect the characterization of an SDE conditioned on \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1}\) (utilizing Doob's _\(h\)-transform_) with that of a Brownian bridge between \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1}\) (classical Schrodinger bridge theory). We show that this leads to a simpler training procedure with lower variance and strong empirical results. 
Beyond this, the recent use of SBs has been motivated by an important task in molecular biology: Cells change their molecular profile throughout developmental processes (Schiebinger et al., 2019; Bunne et al., 2022b) or in response to perturbations such as cancer drugs (Lotfollahi et al., 2019; Bunne et al., 2021). As most measurement technologies are destructive assays, i.e., the same cell cannot be observed twice nor fully profiled over time, these methods aim at reconstructing cell dynamics from _unpaired_ snapshots. Recent developments in molecular biology, however, aim at overcoming this technological limitation. For example, Chen et al. (2022b) propose a transcriptome profiling approach that preserves cell viability. Weinreb et al. (2020) capture cell differentiation processes by clonally connecting cells and their progenitors through barcodes (see illustrative Figure in Supplement). Motivated by these observations, the goal of this paper is to propose a novel algorithmic framework for solving DSBs with (partially) _aligned_ data. Our approach is in stark contrast to existing works which, due to the lack of data alignment, all rely on some variants of _iterative proportional fitting_ (IPF) (Fortet, 1940; Kullback, 1968) and are thus prone to numerical instability. On the other hand, via a combination of the original theory of Schrodinger bridges (Schrodinger, 1931; Leonard, 2013) and the key notion of Doob's _\(h\)-transform_ (Doob, 1984; Rogers and Williams, 2000), we design a novel loss function that completely bypasses the IPF procedure and can be trained with much lower variance. To summarize, we make the following contributions: * To the best of our knowledge, we consider, for the first time, the problem of interpolation with _aligned_ data. We rigorously formulate the problem in the DSB framework. * Based on the theory of Schrodinger bridges and \(h\)-transform, we derive a new loss function that, unlike prior work on DSBs, does not require an IPF-like procedure to train. We also propose principled regularization schemes to further stabilize training. * We describe how interpolating aligned data can provide better reference processes for use in classical DSBs, paving the way to hybrid aligned/non-aligned Schrodinger bridges (SBs). * We evaluate our proposed framework on both synthetic and real data. For experiments utilizing real data, we consider two tasks where such aligned data is naturally available. The first is the task of developmental processes in single-cell biology, and the second is (_rigid_) protein docking, where the goal is to predict the 3D structure of the bound complex formed by two proteins, given their unbound 3D structures. Our method demonstrates a considerable improvement over prior methods across various metrics, thereby substantiating the importance of taking the data alignment into account. Related work.Solving DSBs is a subject of significant interest in recent years and has given rise to a number of different algorithms (De Bortoli et al., 2021; Chen et al., 2022a; Vargas et al., 2021; Bunne et al., 2023; Liu et al., 2022a). 
However, all these previous approaches focus on _unaligned_ data, and therefore the methodologies all rely on IPF and are hence drastically different from ours. In the experiments, we will demonstrate the importance of taking the alignment of data into consideration by comparing our method to these baselines. An important ingredient in our theory is Doob's \(h\)-transform, which has recently also been utilized by Liu et al. (2023) to solve the problem of constrained diffusion. However, their fundamental motivation is different from ours. Liu et al. (2023) focus on learning the drift of the diffusion model and the \(h\)-transform _together_, whereas ours is to read off the drift _from_ the \(h\)-transform with the help of _aligned data_. Consequently, there is no overlap between the two algorithms and their intended applications. To the best of our knowledge, the concurrent work of Tong et al. (2023) is the only existing framework that can tackle aligned data, which, however, is not their original motivation. In the context of solving DSBs, their algorithm can be seen as learning a vector field that generates the correct _marginal_ probability; see (Tong et al., 2023, Proposition 4.3). Importantly, this is different from our aim of finding the _pathwise_ optimal solution of DSBs: If \((\mathbf{x}_{0,\text{test}}^{i})_{i=1}^{m}\) is a test data set whose destinations we wish to predict, then the framework of Tong et al. (2023) can only ensure that the marginal distribution of \((\mathbf{x}_{1,\text{test}}^{i})_{i=1}^{m}\) is correct, whereas ours is capable of predicting that \(\mathbf{x}_{1,\text{test}}^{i}\) is precisely the destination of \(\mathbf{x}_{0,\text{test}}^{i}\) for each \(i\). This latter property is highly desirable in tasks like ML-accelerated protein docking. ## 2 Background Problem formulation.Suppose that we are given access to i.i.d. _aligned_ data \((\mathbf{x}_{0}^{i},\mathbf{x}_{1}^{i})_{i=1}^{N}\), where the marginal distribution of \(\mathbf{x}_{0}^{i}\)'s is \(\hat{\mathbb{P}}_{0}\) and of \(\mathbf{x}_{1}^{i}\)'s is \(\hat{\mathbb{P}}_{1}\). Typically, we view \(\hat{\mathbb{P}}_{0}\) as the empirical marginal distribution of a stochastic process observed at time \(t=0\), and likewise \(\hat{\mathbb{P}}_{1}\) the empirical marginal observed at \(t=1\). The goal is to reconstruct the stochastic process \(\mathbb{P}_{t}\) based on \((\mathbf{x}_{0}^{i},\mathbf{x}_{1}^{i})_{i=1}^{N}\), i.e., to _interpolate_ between \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\). Such a task is ubiquitous in biological applications. For instance, understanding how proteins dock to other biomolecules is of significant interest in biology and has become a topic of intense study in recent years (Ganea et al., 2022; Tsaban et al., 2022; Corso et al., 2023). In the protein docking task, \(\mathbf{x}_{0}^{i}\) represents the 3D structures of the unbound proteins, while \(\mathbf{x}_{1}^{i}\) represents the 3D structure of the bound complex. Reconstructing a stochastic process that diffuses \(\mathbf{x}_{0}^{i}\)'s to \(\mathbf{x}_{1}^{i}\)'s is tantamount to recovering the energy landscape governing the docking process. Similarly, in molecular dynamics simulations, we have access to trajectories \(\left(\mathbf{x}^{i}_{t}\right)_{t\in[0,1]}\), where \(\mathbf{x}^{i}_{0}\) and \(\mathbf{x}^{i}_{1}\) represent the initial and final positions of the \(i\)-th molecule respectively. Any learning algorithm using these simulations should be able to respect the provided alignment. 
Diffusion Schrodinger bridges.To solve the interpolation problem, in Section 3, we will invoke the framework of DSBs, which are designed to solve interpolation problems with _unaligned_ data. More specifically, given two marginals \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\), the DSB framework proceeds by first choosing a reference process \(\mathbb{Q}_{t}\) using prior knowledge, for instance a simple Brownian motion, and then solving the entropy-minimization problem over all stochastic processes \(\mathbb{P}_{t}\): \[\min_{\mathbb{P}_{0}=\hat{\mathbb{P}}_{0},\;\mathbb{P}_{1}=\hat{\mathbb{P}}_{1}}D_{\mathrm{KL}}(\mathbb{P}_{t}\|\mathbb{Q}_{t}).\] (SB) Despite the fact that many methods exist for solving (SB) (De Bortoli et al., 2021; Chen et al., 2022; Vargas et al., 2021; Bunne et al., 2023), none of these approaches are capable of incorporating _alignment_ of the data. This can be seen by inspecting the objective (SB), in which the coupling information \((\mathbf{x}^{i}_{0},\mathbf{x}^{i}_{1})\) is completely lost as only its individual marginals \(\hat{\mathbb{P}}_{0},\hat{\mathbb{P}}_{1}\) play a role therein. Unfortunately, it is well-known that tackling the marginals separately necessitates a forward-backward learning process known as the _iterative proportional fitting_ (IPF) procedure (Fortet, 1940; Kullback, 1968), which constitutes the primary source of high-variance training, thereby confronting DSBs with numerical and scalability issues. Our major contribution, detailed in the next section, is therefore to devise the first algorithmic framework that solves the interpolation problem with aligned data _without_ resorting to IPF. ## 3 Aligned Diffusion Schrodinger Bridges In this section, we derive a novel loss function for DSBs with aligned data by combining two classical notions: The theory of Schrodinger bridges (Schrodinger, 1931; Leonard, 2013; Chen et al., 2021b) and Doob's \(h\)-transform (Doob, 1984; Rogers and Williams, 2000). We then describe how solutions to DSBs with aligned data can be leveraged in the context of classical DSBs. ### Learning aligned diffusion Schrodinger Bridges Static SB and aligned data.Our starting point is the simple and classical observation that (SB) is the continuous-time analogue of the _entropic optimal transport_, also known as the _static_ Schrodinger bridge problem (Leonard, 2013; Chen et al., 2021b; Peyre and Cuturi, 2019): \[\pi^{\star}:=\operatorname*{argmin}_{\mathbb{P}_{0}=\hat{\mathbb{P}}_{0},\;\mathbb{P}_{1}=\hat{\mathbb{P}}_{1}}D_{\mathrm{KL}}(\mathbb{P}_{0,1}\|\mathbb{Q}_{0,1}) \tag{1}\] where the minimization is over all _couplings_ of \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\), and \(\mathbb{Q}_{0,1}\) is simply the joint distribution of \(\mathbb{Q}_{t}\) at \(t=0,1\). In other words, if we denote by \(\mathbb{P}^{\star}_{t}\) the stochastic process that minimizes (SB), then the joint distribution \(\mathbb{P}^{\star}_{0,1}\) necessarily coincides with the \(\pi^{\star}\) in (1). Moreover, since in DSBs the data is always assumed to arise from \(\mathbb{P}^{\star}_{t}\), we see that: The _aligned_ data \((\mathbf{x}^{i}_{0},\mathbf{x}^{i}_{1})_{i=1}^{N}\) constitutes samples of \(\pi^{\star}\). This simple but crucial observation lies at the heart of all derivations to come. 
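For readers less familiar with the static problem (1), it can be solved between discrete samples with standard Sinkhorn iterations (Cuturi, 2013). The sketch below is a generic entropic-OT solver for illustration only; SBalign itself never needs to compute \(\pi^{\star}\), precisely because the aligned data already provides samples of it:

```python
import numpy as np

def sinkhorn_coupling(x0, x1, eps=0.5, n_iters=200):
    """Entropic OT coupling between empirical measures supported on x0 and x1.

    Solves min_pi <C, pi> + eps * KL(pi || a b^T) with squared-Euclidean cost,
    a discrete analogue of the static Schrodinger bridge problem (1).
    """
    n, m = len(x0), len(x1)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)        # uniform marginals
    C = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)   # pairwise cost matrix
    K = np.exp(-C / eps)                                    # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                                # alternating projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                      # coupling pi (n x m)

# pi = sinkhorn_coupling(np.random.randn(50, 2), np.random.randn(60, 2))
```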
Our central idea is to represent \(\mathbb{P}^{\star}_{t}\) via two different, but equivalent, characterizations, both of which involve \(\pi^{\star}\): That of a _mixture_ of reference processes with pinned end points, and that of conditional _stochastic differential equations_ (SDEs). \(\mathbb{P}^{\star}_{t}\) from \(\pi^{\star}\): \(\mathbb{Q}_{t}\) with pinned end points.For illustration purposes, from now on, we will assume that the reference process \(\mathbb{Q}_{t}\) is a Brownian motion with diffusion coefficient \(g_{t}\):* \[\mathrm{d}\mathbb{Q}_{t}=g_{t}\;\mathrm{d}\mathbb{W}_{t}. \tag{2}\] Footnote *: Extension to more involved reference processes is conceptually straightforward but notationally clumsy. Furthermore, reference processes of the form (2) are dominant in practical applications (Song et al., 2021; Bunne et al., 2023), so we omit the general case. In this case, it is well-known that \(\mathbb{Q}_{t}\) _conditioned_ to start at \(\mathbf{x}_{0}\) and end at \(\mathbf{x}_{1}\) can be written as another SDE (Mansuy and Yor, 2008; Liu et al., 2023): \[\mathrm{d}X_{t}=g_{t}^{2}\frac{\mathbf{x}_{1}-X_{t}}{\beta_{1}-\beta_{t}}\;\mathrm{d}t+g_{t}\;\mathrm{d}\mathbb{W}_{t} \tag{3}\] where \(X_{0}=\mathbf{x}_{0}\) and \[\beta_{t}:=\int_{0}^{t}g_{s}^{2}\;\mathrm{d}s. \tag{4}\] We call the processes in (3) the _scaled Brownian bridges_ as they generalize the classical Brownian bridge, which corresponds to the case of \(g_{t}\equiv 1\). The first characterization of \(\mathbb{P}^{\star}_{t}\) is then an immediate consequence of the following classical result in Schrodinger bridge theory: Draw a sample \((\mathbf{x}_{0},\mathbf{x}_{1})\sim\pi^{\star}\) and connect them via (3). The resulting path is a sample from \(\mathbb{P}^{\star}_{t}\) (Leonard, 2013; Chen et al., 2021b). In other words, \(\mathbb{P}^{\star}_{t}\) is a _mixture_ of scaled Brownian bridges, with the mixing weight given by \(\pi^{\star}\). 
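Sampling from this mixture is therefore simple: draw an aligned pair and integrate (3) with, e.g., Euler-Maruyama. A minimal sketch, where the constant schedule \(g_{t}\equiv 1\) and the step count are illustrative choices:

```python
import numpy as np

def sample_bridge(x0, x1, g=lambda t: 1.0, n_steps=100, rng=np.random):
    """Euler-Maruyama simulation of the scaled Brownian bridge (3):
        dX_t = g_t^2 (x1 - X_t) / (beta_1 - beta_t) dt + g_t dW_t,
    with beta_t = int_0^t g_s^2 ds (Eq. (4)) and X_0 = x0.
    """
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    dt = ts[1] - ts[0]
    g2 = np.array([g(t) ** 2 for t in ts])
    beta = np.concatenate([[0.0], np.cumsum(0.5 * (g2[1:] + g2[:-1]) * dt)])  # trapezoid rule
    X = np.empty((n_steps + 1,) + np.shape(x0))
    X[0] = x0
    for k in range(n_steps):
        drift = g2[k] * (x1 - X[k]) / (beta[-1] - beta[k] + 1e-8)  # avoid blow-up at t = 1
        X[k + 1] = X[k] + drift * dt + np.sqrt(g2[k] * dt) * rng.standard_normal(np.shape(x0))
    return ts, X

# ts, path = sample_bridge(x0=np.zeros(2), x1=np.ones(2))  # a 2D bridge from 0 to 1
```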
In short, assuming that, for each \(\theta\), we can compute \(h_{t}^{\theta}\)_based only on \(b_{t}^{\theta}\)_, we can then backprop through (7) and optimize it using any off-the-shelf algorithm. A slightly modified (7).Even with infinite data and a neural network with sufficient capacity, the loss function defined in (7) does converge to 0. For the purpose of numerical stability, we instead propose to modify (7) to: \[L(\theta):=\mathbb{E}\Bigg{[}\int_{0}^{1}\!\left\|\frac{\mathbf{x}_{1}-X_{t}}{ \beta_{1}-\beta_{t}}-\big{(}b_{t}^{\theta}+\nabla\log h_{t}^{\theta}(X_{t}) \big{)}\right\|^{2}\mathrm{d}t\Bigg{]} \tag{8}\] which is clearly equivalent to (7) at the true solution of \(b_{t}\). Notice that (8) bears a similar form as the popular score-matching objective employed in previous works (Song and Ermon, 2019; Song et al., 2021): \[L(\theta):=\mathbb{E}\Bigg{[}\int_{0}^{1}\!\left\|\nabla\log p(\mathbf{x}_{t} |\mathbf{x}_{0})-s^{\theta}(X_{t},t)\right\|^{2}\mathrm{d}t\Bigg{]}, \tag{9}\] where the term \(\frac{\mathbf{x}_{1}-X_{t}}{\beta_{1}-\beta_{t}}\) is akin to \(\nabla\log p(\mathbf{x}_{t}|\mathbf{x}_{0})\), while \(\big{(}b_{t}^{\theta}+\nabla\log h_{t}^{\theta}(X_{t})\big{)}\) corresponds to \(s^{\theta}(X_{t},t)\). Computing \(h_{t}^{\theta}\).Inspecting \(h_{t}\) in (6), we see that, given \((\mathbf{x}_{0},\mathbf{x}_{1})\), it can be written as the conditional expectation of an indicator function: \[h_{t}(\mathbf{x})=\mathbb{P}(X_{1}=\mathbf{x}_{1}|X_{t}=\mathbf{x})=\mathbb{E }\big{[}\mathds{1}_{\{\mathbf{x}_{1}\}}\,|X_{t}=\mathbf{x}\big{]} \tag{10}\] where the expectation is over (5). Functions of the form (10) lend itself well to computation since it solves simulating the _unconditioned_ paths. Furthermore, in order to avoid overfitting on the given samples, it is customary to replace the "hard" constraint \(\mathds{1}_{\{\mathbf{x}_{1}\}}\) by its _smoothed_ version (Zhang and Chen, 2022; Holdijk et al., 2022): \[h_{t,\tau}(\mathbf{x}):=\mathbb{E}\bigg{[}\mathrm{exp}\bigg{(}-\frac{1}{2\tau} \|X_{1}-\mathbf{x}_{1}\|^{2}\bigg{)}|X_{t}=\mathbf{x}\bigg{]}. \tag{11}\] Here, \(\tau\) is a regularization parameter that controls how much we "soften" the constraint, and we have \(\lim_{\tau\to 0}h_{t,\tau}=h_{t}\). Although the computation of (11) can be done via a standard application of the Feynman-Kac formula (Rogers and Williams, 2000), an altogether easier approach is to parametrize \(h_{t,\tau}\) by a second neural network \(m^{\phi}\) and perform alternating minimization steps on \(b_{t}^{\theta}\) and \(m^{\phi}\). This way, we can also avoid simulating even the unconditional paths of (5), and thereby further reducing the variance in training. Regularization.Since it is well-known that \(\nabla\log h_{t}\) typically explodes when \(t\to 1\)(Liu et al., 2023), it is important to regularize the behavior of \(m^{\phi}\) for numerical stability, especially when \(t\to 1\). Moreover, in practice, it is desirable to learn a drift \(b_{t}^{\theta}\) that respects the data alignment _in expectation_: If \((\mathbf{x}_{0},\mathbf{x}_{1})\) is an input pair, then multiple runs of the SDE (5) starting from \(\mathbf{x}_{0}\) should, on average, produce samples that are in the proximity of \(\mathbf{x}_{1}\). This observation implies that we should search for drifts whose corresponding \(h\)-transforms are diminishing. 
A simple way to simultaneously achieve the above two requirements is to add an \(\ell^{2}\)-regularization term, resulting in the loss function: \[L(\theta,\phi):=\mathbb{E}\Bigg{[}\int_{0}^{1}\!\left\|\frac{\mathbf{x}_{1}-X_{t}}{\beta_{1}-\beta_{t}}-\big{(}b_{t}^{\theta}+m^{\phi}(X_{t})\big{)}\right\|^{2} \tag{12}\] \[+\lambda_{t}\|m^{\phi}(X_{t})\|^{2}\,\mathrm{d}t\Bigg{]}\] where \(\lambda_{t}\) can either be constant or vary with time. The overall algorithm is depicted in Algorithm 1. ### Paired Schrodinger Bridges as Prior Processes Classical SBs are unsuitable in cases where the alignments are known, because they only consider samples from \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\) and disregard those drawn from the (optimal) coupling \(\pi^{\star}\). However, while our method's reliance on this crucial knowledge is what allows it to avoid IPF-like iterates, it may become a limitation when insufficient information on alignments is available. In such a situation, while it is unrealistic to hope for an accurate solution to the aligned SB problem, the interpolation between \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\) learned by SBalign (5) can potentially still be leveraged to obtain a better reference process when solving a classical SB on the same marginals: the term \(b_{t}(X_{t})\) learned via SBalign can, in fact, be used _as is_ to construct a data-informed alternative \(\hat{\mathbb{Q}}_{t}\) to the standard Brownian motion (2). Improved reference processes, either pre-trained or data-informed, have been previously considered in the literature. For instance, both De Bortoli et al. (2021) and Chen et al. (2022) use a pre-trained reference process for challenging image interpolation tasks. This approach, however, relies on DSBs trained using the classical score-based generative modeling objective between a Gaussian and the data distribution. It therefore pre-trains the reference process on a related (but different) process, i.e., the one mapping Gaussian noise to data rather than \(\hat{\mathbb{P}}_{0}\) to \(\hat{\mathbb{P}}_{1}\). An alternative, proposed by Bunne et al. (2023), draws on the closed-form solution of SBs between two Gaussian distributions, which are chosen to approximate \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\), respectively. Unlike our method, these alternatives construct better prior drifts by falling back to simpler and related tasks, or approximations of the original problem. We instead propose to shape a coarse-grained description of the drift based on alignments sampled directly from \(\mathbb{P}_{0,1}\). ## 4 Experiments In this section, we evaluate SBalign in different settings involving 2-dimensional synthetic datasets, the task of reconstructing cellular differentiation processes, as well as predicting the conformation of a protein structure and its ligand, formalized as a rigid protein docking problem. ### Synthetic Experiments We run our algorithm on two synthetic datasets (figures in § B), and compare the results with classic Schrodinger bridge models, i.e., the forward-backward SB formulation proposed by Chen et al. (2022), herein referred to as fbSB. We equip the baseline with prior knowledge, as elaborated below, to further challenge SBalign. With a plain Brownian-motion prior, fbSB disregards our prior knowledge on the alignment of particles, which is instead reliably reproduced by the dynamics learned by SBalign (Fig. 2b). 
One way of encoding this additional information on the nature of the process is to modify \(\mathbb{Q}_{t}\) by introducing a clockwise radial drift, which describes the prior tangential velocity of particles moving circularly around the center. Solving the classical SB with this updated reference process indeed generates trajectories that respect most alignments (Fig. 2b), but requires a hand-crafted expression of the drift that is only possible in very simple cases. T dataset.In most real-world applications, it is very difficult to define an appropriate reference process \(\mathbb{Q}_{t}\) which respects the known alignment without excessively distorting the trajectories from a solution to (SB). This is already visible in simple examples like the one in Fig. 2d-f, in which the value of good candidate prior drifts at a specific location needs to vary wildly in time. In this dataset, \(\hat{\mathbb{P}}_{0}\) and \(\hat{\mathbb{P}}_{1}\) are both bi-modal distributions, each supported on two of the four extremes of an imaginary T-shaped area. We target alignments that connect the two arms of the T as well as the top cloud with the bottom one. We succeed in learning them with SBalign (Fig. 2e) but unsurprisingly fail when using the baseline fbSB (Fig. 2d) with a Brownian motion prior. In this case, however, attempts at designing a better reference drift for fbSB must take into account the additional constraint that the horizontal and vertical particle trajectories intersect (see Fig. 2e), i.e., they cross the same area at times \(t_{h}\) and \(t_{v}\) (with \(t_{h}>t_{v}\)). This implies that the drift \(b_{t}\), which initially points downwards (when \(t<t_{v}\)), should swiftly turn rightwards (for \(t>t_{h}\)). Setting imprecise values for one of \(t_{h}\) and \(t_{v}\) when defining custom reference drifts for classical SBs would hence not lead to the desired result and, worse, would actively disturb the flow of the other particle group. As described in § 3.2, in the presence of hard-to-capture requirements on the reference drift, the use of SBalign offers a remarkably easy and efficient way of learning a parameterization of it. For instance, when using the drift obtained by SBalign as reference drift for the computation of the SB baseline (fbSB), we find the desired alignments (Fig. 2f). ### Cell Differentiation Biological processes are determined through heterogeneous responses of single cells to external stimuli, e.g., developmental factors or drugs. Understanding and predicting the dynamics of single cells subject to a stimulus is thus crucial to enhance our understanding of health and disease, and is the focus of this task. Most single-cell high-throughput technologies are destructive assays, i.e., they destroy cells upon measurement, allowing us to only measure _unaligned_ snapshots of the evolving cell population. Recent methods address this limitation by proposing (lower-throughput) technologies that keep cells alive after transcriptome profiling (Chen et al., 2022) or that genetically tag cells to obtain a clonal trace upon cell division (Weinreb et al., 2020). Dataset.To showcase SBalign's ability to make use of such (partial) alignments when inferring cell differentiation processes, we take advantage of the genetic barcoding system developed by Weinreb et al. (2020). Figure 3: Cell differentiation trajectories based on (**a**) the ground truth and (**b-d**) learned drifts. 
SBalign is able to learn an appropriate drift underlying the true differentiation process while respecting the alignment. (**d**) Using the learned drift from SBalign as a reference process helps improve the drift learned by other training methods. With a focus on fate determination in hematopoiesis, Weinreb et al. (2020) use expressed DNA barcodes to clonally trace single-cell transcriptomes over time. The dataset consists of two snapshots: the first, recorded on day 2, when most cells are still undifferentiated (see Fig. 4(a)), and a second, on day 4, comprising many different mature cell types (see Fig. 4(b)). Using SBalign as well as the baseline fbSB, we attempt to reconstruct cell evolution between day 2 and day 4, all while capturing the heterogeneity of emerging cell types. For details on the dataset, see § B. Baselines.We benchmark SBalign against previous DSBs such as fbSB (Chen et al., 2022). In addition, we compare SBalign in the setting of learning a prior reference process. Naturally, cell division processes and subsequently the propagation of the barcodes are very noisy. While this genetic annotation provides some form of assignment, it does not capture the full developmental process. We thus test SBalign in a setting where it learns a prior from such partial alignments and, plugged into fbSB, is fine-tuned on the full dataset. Evaluation metrics.To assess the performance of SBalign and the baselines, we monitor several metrics, which include distributional distances, i.e., MMD (Gretton et al., 2012) and W\({}_{\varepsilon}\) (Cuturi, 2013), as well as average scores, i.e., \(\ell_{2}(\text{PS})\) (Bunne et al., 2021) and RMSD. Moreover, we also train a simple neural network-based classifier to annotate the cell type on day 4, and we report the accuracy of the predicted vs. true cell type for all the models. See § C.1 for further details. Results.SBalign accurately predicts cellular differentiation processes in hematopoiesis from day 2 to day 4, as visible from the (2D projections of the) learned trajectories and alignments (Fig. 3(c)) and the quantitative evaluation in Table 1. SBalign outperforms fbSB in all but the cell-type accuracy metric: Remarkably, our method also exceeds the baseline on distributional metrics, not only on alignment-based ones. Further, we evaluate how well SBalign recovers the heterogeneity of emerging cell types throughout the developmental process on day 4. The results are displayed in Fig. 4(d) and show that, while capturing the overall differentiation trend, SBalign (as well as fbSB) struggles to isolate rare cell types. Lastly, we employ SBalign to learn a prior process from noisy alignments based on genetic barcode annotations. When using this reference process within fbSB, we learn an SB which compensates for inaccuracies stemming from the stochastic nature of cell division and barcode redistribution, and which achieves better scores on distributional metrics (see Tab. 1). Further results can be found in § A. ### Protein Docking In (_computational_) protein docking, the goal is to predict the 3D structure of the bound (docked) state of a protein pair, given the unbound states of the corresponding proteins. These proteins are denoted (arbitrarily) as the ligand and receptor respectively. For the scope of this paper, and following previous work, we focus on the rigid docking setup. However, our algorithm can also be applied to flexible protein docking, and we leave a full treatment of this problem to future work. 
Experimental setup.Our setup follows a convention similar to EquiDock (Ganea et al., 2022). To summarize, the unbound structure of the ligand is derived by applying a random rotation and translation to the corresponding bound structure, while the receptor is held fixed w.l.o.g. Applying a different rotation and translation to each ligand can, however, result in a different Brownian bridge for each complex, resulting in limited meaningful signal for learning \(b_{t}^{\theta}\). To avoid this, we sample a rotation and translation at the start of training and apply the same rotation and translation to all complexes across training, validation, and testing. Additional details regarding this setup can be found in § B. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{5}{c}{**Cell Differentiation**} \\ \cline{2-6} & MMD \(\downarrow\) & W\({}_{\varepsilon}\) \(\downarrow\) & \(\ell_{2}(\text{PS})\) \(\downarrow\) & RMSD \(\downarrow\) & Class. Acc. \(\uparrow\) \\ \hline fbSB & 1.58e-2 & 12.6 & 4.07 & 9.63e-1 & 58.0\% \\ fbSB with SBalign & 5.15e-3 & 10.6 & 0.95 & 9.88e-1 & 49.0\% \\ \hline **SBalign** & 9.77e-3 & 11.2 & 1.24 & 9.28e-1 & 56.0\% \\ \hline \hline \end{tabular} \end{table} Table 1: **Cell differentiation prediction results.** Shown are distributional metrics (MMD, W\({}_{\varepsilon}\)), alignment-based metrics (\(\ell_{2}\), RMSD), and cell type classification accuracy for different methods on the cell differentiation dataset. Figure 4: Cell type prediction on the differentiation dataset. All distributions are plotted on the first two principal components. **a-b:** Ground truth cell types on day 2 and day 4 respectively. **c-d:** fbSB and SBalign cell type predictions on day 4. SBalign is able to better model the underlying differentiation processes and capture the diversity in cell types. Dataset.We use the DB5.5 dataset (Vreven et al., 2015) for our empirical evaluation. The DB5.5 dataset is a standard dataset used in protein-protein docking; however, it only has 253 complexes. We utilize the same splits as EquiDock (Ganea et al., 2022), with 203 complexes in the training set, 25 complexes in the validation set, and 25 complexes in the test set. For the evaluation in Table 2, we use the full DB5.5 test set. For ligands in the test set, we generate the corresponding unbound versions by applying the rotation and translation sampled during training. Baselines.We compare our method to the GNN-based model EquiDock as well as traditional docking software including Attract (Schindler et al., 2017; de Vries et al., 2015), HDock (Yan et al., 2020), ClusPro (Desta et al., 2020; Kozakov et al., 2017), and PatchDock (Mashiach et al., 2010; Schneidman-Duhovny et al., 2005). As mentioned in the paragraph above, for ligands in the test set, we generate the corresponding unbound versions by applying the rotation and translation sampled during training. We evaluate the trained models from EquiDock and SBalign on these unbound structures and report the corresponding evaluation metrics. For the remaining baselines, we include the numbers from (Ganea et al., 2022). These baselines typically sample several candidate complexes by considering small increments of rotation angles. We expect this makes them somewhat invariant to arbitrary initialization, and the corresponding docking scores to not be severely impacted. Evaluation metrics.We report two metrics, Complex Root Mean Square Deviation (Complex RMSD) and Interface Root Mean Square Deviation (Interface RMSD). 
Following (Ganea et al., 2022), the ground truth and predicted complex structures are first superimposed using the Kabsch algorithm (Kabsch, 1976), and the Complex RMSD is then computed between the superimposed versions. A similar procedure is used for computing Interface RMSD, but only using the residues from the two proteins that are within \(8\,\text{\AA}\) of each other. More details in § C.1. Results. The model performance is summarized in Table 2. Our method SBalign considerably outperforms EquiDock across all metrics. SBalign also achieves comparable or better performance than traditional docking software without relying on extensive candidate sampling and re-ranking or learning surface templates from parts of the current test set. An example of docked structures, in direct comparison with EquiDock, is displayed in Fig. 5. Further visualizations and results can be found in § A. ## 5 Conclusion In this paper, we propose a new framework to tackle the interpolation task with aligned data via diffusion Schrödinger bridges. Our central contribution is a novel algorithmic framework derived from the Schrödinger bridge theory and Doob's \(h\)-transform. Via a combination of the two notions, we derive novel loss functions which, unlike all prior methods for solving diffusion Schrödinger bridges, do not rely on the iterative proportional fitting procedure and are hence numerically stable. We verify our proposed algorithm on various synthetic and real-world tasks and demonstrate noticeable improvement over the previous state-of-the-art, thereby substantiating the claim that data alignment is a highly relevant feature that warrants further research. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{**DB5.5 Test Set**} \\ \cline{2-7} & \multicolumn{3}{c}{Complex RMSD} & \multicolumn{3}{c}{Interface RMSD} \\ \cline{2-7} **Methods** & Median & Mean & Std & Median & Mean & Std \\ \hline Attract\({}^{*}\) & 9.55 & 10.09 & 9.88 & 7.48 & 10.69 & 10.90 \\ HDock\({}^{*}\) & 0.30 & 5.34 & 12.04 & 0.24 & 4.76 & 10.83 \\ ClusPro\({}^{*}\) & 3.38 & 8.25 & 7.92 & 2.31 & 8.71 & 9.89 \\ PatchDock\({}^{*}\) & 18.26 & 18.00 & 10.12 & 18.88 & 18.75 & 10.06 \\ EquiDock\({}^{*}\) & 14.13 & 14.72 & 5.31 & 11.97 & 13.23 & 4.93 \\ \hline EquiDock & 14.12 & 14.73 & 5.31 & 11.97 & 13.23 & 4.93 \\ **SBalign** & 6.59 & 6.69 & 2.04 & 7.69 & 8.11 & 2.39 \\ \hline \hline \end{tabular} \end{table} Table 2: **Rigid docking results.** Complex and interface RMSD between predicted and true bound structures (after Kabsch alignment). \({}^{*}\) denotes methods for which we use values directly from (Ganea et al., 2022). All other results show the performance on our test set. Figure 5: Ground truth and predicted bound structures for the complex with PDB ID: 1QA9. SBalign is able to find the true binding interface compared to EquiDock. ## Acknowledgements This publication was supported by the NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation, as well as the European Union's Horizon 2020 research and innovation programme (grant 826121). We thank Caroline Uhler for introducing us to the dataset by Weinreb et al. (2020), which was instrumental in this research.
2308.03046
On a factorization result of Ştefănescu -- II
Ştefănescu proved an elegant factorization result for polynomials over discrete valuation domains [CASC'2014, Lecture Notes in Computer Science, Ed. by V. Gerdt, W. Koepf, W. Mayr, and E. Vorozhtsov, Springer, Berlin, Vol. 8660, pp. 460–471, 2014.] In this paper, a generalization of Ştefănescu's result is proved to cover a larger class of polynomials over discrete valuation domains. Such results are useful in devising algorithms for polynomial factorization.
Sanjeev Kumar, Jitender Singh
2023-08-06T08:14:45Z
http://arxiv.org/abs/2308.03046v2
# On a factorization result of Ştefănescu-II ###### Abstract. Ştefănescu proved an elegant factorization result for polynomials over discrete valuation domains [CASC'2014, Lecture Notes in Computer Science, Ed. by V. Gerdt, W. Koepf, W. Mayr, and E. Vorozhtsov, Springer, Berlin, Vol. **8660**, pp. 460-471, 2014.] In this paper, a generalization of Ştefănescu's result is proved to cover a larger class of polynomials over discrete valuation domains. Such results are useful in devising algorithms for polynomial factorization. 2010 Mathematics Subject Classification: Primary 30C10; 12E05; 11C08. \({}^{*}\)Corresponding author: [email protected]; [email protected] ## 1. Introduction Let \((R,v)\) be a discrete valuation domain. Let \(f=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) be a nonconstant polynomial. The Newton polygon \(N_{f}\) of the polynomial \(f\) is defined as the lower convex hull of the set \(\{(i,v(a_{i}))\mid a_{i}\neq 0\}\); the slopes of the Newton polygon are the slopes of the line segments constituting it. Note that the slope of the line joining the points \((n,v(a_{n}))\) and \((i,v(a_{i}))\) is \(m_{i}(f)=(v(a_{n})-v(a_{i}))/(n-i)\) for each \(i=0,1,\ldots,n-1\). The Newton index \(e(f)\) of the polynomial \(f\) is defined as \[e(f)=\max_{0\leq i\leq n-1}m_{i}(f).\] It follows from the definition of \(N_{f}\) and \(e(f)\) that for nonconstant polynomials \(f,g\in R[x]\), one has \(e(fg)=\max(e(f),e(g))\). From the application point of view, the Newton index has been used in devising algorithms for factoring polynomials [1]. As a generalization of the classical result of Dumas [2], Ştefănescu [3] proved a factorization result for polynomials over a discrete valuation domain using the Newton index. Further, using the method of [3], Kumar and Singh [4] extended the result of Ştefănescu to include a wider class of polynomials over discrete valuation domains. In [1], Ştefănescu proved the following elegant factorization results. **Theorem A**.: _Let \((R,v)\) be a discrete valuation domain. Let \(f=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) be a nonconstant polynomial with \(a_{0}a_{n}\neq 0\) and \(n\geq 2\). Assume that there exists an index \(s\in\{0,1,2,\ldots,n-1\}\) for which each of the following conditions is satisfied._ 1. \(m_{i}(f)<m_{s}(f)\) _for all_ \(i\in\{0,1,2,\ldots,n-1\},\ i\neq s\)_,_ 2. \(n(n-s)(m_{s}(f)-m_{0}(f))=1\)_,_ 3. \(\gcd(v(a_{s})-v(a_{n}),n-s)=1\)_._ _Then the polynomial \(f\) is either irreducible in \(R[x]\), or \(f\) has a factor whose degree is a multiple of \(n-s\)._ **Theorem B**.: _Let \((R,v)\) be a discrete valuation domain. Let \(f=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) be a nonconstant polynomial with \(a_{0}a_{n}\neq 0\) and \(n\geq 2\). Assume that there exists an index \(s\in\{0,1,2,\ldots,n-1\}\) for which each of the following conditions is satisfied._ 1. \(m_{i}(f)<m_{s}(f)\) _for all_ \(i\in\{0,1,2,\ldots,n-1\},\ i\neq s\)_,_ 2. \(u=n(n-s)(m_{s}(f)-m_{0}(f))\geq 2\)_,_ 3. \(\gcd(v(a_{s})-v(a_{n}),n-s)=1\)_._ _Then either \(f\) is irreducible in \(R[x]\), or \(f\) has a divisor whose degree is a multiple of \(n-s\), or \(f\) admits a factorization \(f=f_{1}f_{2}\) such that \(\alpha_{2}\deg(f_{1})-\alpha_{1}\deg(f_{2})\) is a multiple of \(n-s\) for some \(\alpha_{1},\alpha_{2}\in\{1,\ldots,u-1\}\)._ This note ameliorates and extends the aforementioned factorization results on the lines of [4].
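To make these definitions concrete, the following is a minimal Python sketch, ours and not from the paper, that computes the slopes \(m_{i}(f)\) and the Newton index \(e(f)\) for an integer polynomial with respect to the \(p\)-adic valuation on \(\mathbb{Z}\):

```python
from fractions import Fraction

def vp(a, p):
    """p-adic valuation of a nonzero integer a."""
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def newton_index(coeffs, p):
    """Slopes m_i(f) = (v(a_n) - v(a_i))/(n - i) and Newton index
    e(f) = max_i m_i(f) for f = a_0 + a_1 x + ... + a_n x^n, given
    coeffs = [a_0, ..., a_n] with a_0, a_n nonzero; zero coefficients
    in between are skipped, and slopes are kept as exact rationals."""
    n = len(coeffs) - 1
    vn = vp(coeffs[n], p)
    slopes = {i: Fraction(vn - vp(a, p), n - i)
              for i, a in enumerate(coeffs[:-1]) if a != 0}
    return slopes, max(slopes.values())

# Example: f = 2 + 2x + 8x^2 + x^3 with the 2-adic valuation.
slopes, e = newton_index([2, 2, 8, 1], p=2)
print(slopes, e)  # m_0 = -1/3, m_1 = -1/2, m_2 = -3, so e(f) = -1/3
```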
Our main results are the following: **Theorem 1**.: _Let \((R,v)\) be a discrete valuation domain and let \(f=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) be a nonconstant polynomial with \(a_{0}a_{n}\neq 0\) and \(n\geq 2.\) Assume that there exists an index \(s\in\{0,1,2,\ldots,n-1\}\) such that the following conditions are satisfied:_ 1. \(m_{i}(f)<m_{s}(f)\) _for all_ \(i\in\{0,1,2,\ldots,n-1\},\ i\neq s\)_,_ 2. \(d=\gcd(v(a_{s})-v(a_{n}),n-s)\) _satisfies_ \[d=\begin{cases}n(n-s)(m_{s}(f)-m_{0}(f)),&\text{ if }s\neq 0;\\ 1,&\text{ if }s=0.\end{cases}\] _Then the polynomial \(f\) is either irreducible in \(R[x]\), or \(f\) has a factor in \(R[x]\) whose degree is zero or a multiple of \((n-s)/d\)._ **Theorem 2**.: _Let \((R,v)\) be a discrete valuation domain and let \(f=a_{0}+a_{1}x+\cdots+a_{n}x^{n}\in R[x]\) be a nonconstant polynomial with \(a_{0}a_{n}\neq 0\) and \(n\geq 2\). Assume that there exists an index \(s\in\{0,1,2,\ldots,n-1\}\) such that each of the following conditions is satisfied._ 1. \(m_{i}(f)<m_{s}(f)\) _for all_ \(i\in\{0,1,2,\ldots,n-1\},\ i\neq s\)_,_ 2. \(d=\gcd(v(a_{s})-v(a_{n}),n-s)\) _satisfies the following:_ \[d=\begin{cases}\text{a proper divisor of }u,\ \text{where }u=n(n-s)(m_{s}(f)-m_{0}(f))\geq 2,&\text{ if }s\neq 0;\\ 1,&\text{ if }s=0.\end{cases}\] _Then either \(f\) is irreducible in \(R[x]\), or \(f\) has a divisor whose degree is zero or a multiple of \((n-s)/d\), or \(f\) admits a factorization \(f=f_{1}f_{2}\) such that \(\alpha_{2}\deg(f_{1})-\alpha_{1}\deg(f_{2})\) is a multiple of \((n-s)/d\) for some \(\alpha_{1},\alpha_{2}\in\{1,\ldots,(u/d)-1\}\) with \(\alpha_{1}+\alpha_{2}=u/d\)._ We observe that Theorems 1 and 2 reduce to Theorems A and B, respectively, for \(d=1\) and \(s\neq 0\). Further, Theorem 1 reduces to the main result of [4] in case \(v(a_{n})=0\). In view of Theorem 1, if we take \(d=2\) and \(0<s<n/2\) so that \((n-s)>n/2\), then either \(f\) is irreducible, or \(f\) has a factor whose degree is a multiple of \((n-s)/2\), say \(m(n-s)/2\) for some positive integer \(m\). If possible, suppose that \(m\geq 4\); then we have \(n>m(n-s)/2\geq mn/4\geq n\), which is absurd, and so, we must have \(m<4\). So, either \(f\) is irreducible, or \(f\) has a factor whose degree is equal to one of \((n-s)/2\) and \(n-s\). **Example 1**.: For a prime \(p\), let \(v=v_{p}\) denote the \(p\)-adic valuation on \(\mathbb{Q}\). For \(n\geq 5\), consider the polynomial \[f = a_{0}+p^{n-4}a_{1}x+p^{n(n-3)-1}(a_{2}x^{2}+a_{n}x^{n})\in\mathbb{Z}[x],\] where \(a_{0},a_{1},a_{2},a_{n}\in\{1,2,\ldots,p-1\}\). Here, we have \[m_{2}(f) = \frac{v_{p}(p^{n(n-3)-1}a_{n})-v_{p}(p^{n(n-3)-1}a_{2})}{n-2}=0,\] \[m_{1}(f) = \frac{v_{p}(p^{n(n-3)-1}a_{n})-v_{p}(p^{n-4}a_{1})}{n-1}=n-3,\] \[m_{0}(f) = \frac{v_{p}(p^{n(n-3)-1}a_{n})-v_{p}(a_{0})}{n-0}=n-3-\frac{1}{n},\] which shows that \(e(f)=m_{1}(f)\), and so, \(s=1\). Further, we have \(n(n-1)(m_{1}(f)-m_{0}(f))=n-1\), and \(\gcd(v_{p}(p^{n-4}a_{1})-v_{p}(p^{n(n-3)-1}a_{n}),n-1)=\gcd((n-1)(n-3),n-1)=n-1\). By Theorem 1, the polynomial \(f\) is either irreducible, or \(f\) has a factor whose degree is a multiple of \(n-1\). **Example 2**.: For a prime \(p\), let \(v_{p}\) be the \(p\)-adic valuation on \(\mathbb{Q}\). For a positive integer \(d\geq 2\), we consider the polynomial \[Y_{d} = p^{d+1}+p^{d-1}x^{d+1}+x^{d(d+1)}\in\mathbb{Z}[x].\] Here \(n=d(d+1)\), \(a_{0}=p^{d+1}\), \(a_{d+1}=p^{d-1}\), \(a_{n}=1\), and \(a_{j}=0\) for all \(j\not\in\{0,d+1,d(d+1)\}\).
So, we have \[m_{d+1}(Y_{d}) = \frac{v_{p}(a_{n})-v_{p}(a_{d+1})}{d(d+1)-(d+1)}=-\frac{1}{d+1},\] \[m_{0}(Y_{d}) = \frac{v_{p}(a_{n})-v_{p}(a_{0})}{d(d+1)-0}=-\frac{1}{d},\] which shows that \(e(Y_{d})=m_{d+1}(Y_{d})\), and so, \(s=d+1\). Further, \(u=(d-1)(d+1)\geq 2\), since \(d\geq 2\). Furthermore, \(\gcd(v_{p}(a_{s})-v_{p}(a_{n}),n-s)=\gcd(d-1,d(d+1)-d-1)=d-1\), which divides \(u\). Thus, by Theorem 2, the polynomial \(Y_{d}\) is irreducible, or has a factor whose degree is a multiple of \(d+1\), or \(Y_{d}\) admits a factorization \(Y_{d}=f_{1}f_{2}\) in \(\mathbb{Z}[x]\) such that \(\alpha_{2}\deg f_{1}-\alpha_{1}\deg f_{2}\) is a multiple of \(d+1\), for some \(\alpha_{1},\alpha_{2}\in\{1,2,\ldots,d\}\) with \(\alpha_{1}+\alpha_{2}=d+1\). ## 2. Proof of Theorems 1 and 2 **Proof of Theorem 1.** Our method of proof is similar to that of [4]. If \(s=0\), then the Newton polygon of \(f\) is a straight line segment joining the points \((0,v(a_{0}))\) and \((n,v(a_{n}))\), and so, by the classical result of Dumas [2] on factorization of polynomials via Newton polygons, it follows that \(f\) is either irreducible, or \(f\) has a factor of degree zero. Now assume that \(s>0\). Suppose that \(f\) is not irreducible in \(R[x]\) so that \(f\) admits a factorization \(f=f_{1}f_{2}\) in \(R[x]\) with \(\min\{\deg f_{1},\deg f_{2}\}\geq 1\). For each \(i=1,2\), let \(n_{i}=\deg(f_{i})\) so that \(n=n_{1}+n_{2}\), and \(f_{i}=\sum_{j=0}^{n_{i}}a_{ij}x^{j}\). Consequently, we have \(a_{0}=a_{10}a_{20}\) and \(a_{n}=a_{1n_{1}}a_{2n_{2}}\) so that \(v(a_{0})=v(a_{10})+v(a_{20})\) and \(v(a_{n})=v(a_{1n_{1}})+v(a_{2n_{2}})\). If we let \[c_{s}=v(a_{n})-v(a_{s}),\ c_{n}=v(a_{n})-v(a_{0}),\ c_{i0}=v(a_{in_{i}})-v(a_{i0}),\ i=1,2,\] then it follows that \(c_{n}=c_{10}+c_{20}\). By hypothesis 1 and the identity \(e(f_{1}f_{2})=\max(e(f_{1}),e(f_{2}))\), we have \[\frac{c_{s}}{n-s}=\frac{v(a_{n})-v(a_{s})}{n-s}=e(f)\geq e(f_{1})\geq m_{0}(f_{1})=\frac{c_{10}}{n_{1}}.\] We then have \(\frac{c_{s}}{n-s}-\frac{c_{10}}{n_{1}}\geq 0\), and so \(\frac{c_{s}}{d}n_{1}-\frac{n-s}{d}c_{10}\geq 0\). Note that \(c_{s}/d\) and \((n-s)/d\) are both integers, since \(d=\gcd(c_{s},n-s)\). Since \(e(f)\geq e(f_{2})\), we must have \(c_{s}n_{2}-(n-s)c_{20}\geq 0\) and \(\frac{c_{s}}{d}n_{2}-\frac{(n-s)}{d}c_{20}\geq 0\). By hypothesis 2, we have the following: \[1=\frac{c_{s}}{d}n-\frac{(n-s)}{d}c_{n}=\Big{(}\frac{c_{s}}{d}n_{1}-\frac{(n-s)}{d}c_{10}\Big{)}+\Big{(}\frac{c_{s}}{d}n_{2}-\frac{(n-s)}{d}c_{20}\Big{)}, \tag{1}\] which shows that one of the nonnegative integers \(\frac{c_{s}}{d}n_{1}-\frac{(n-s)}{d}c_{10}\) and \(\frac{c_{s}}{d}n_{2}-\frac{(n-s)}{d}c_{20}\) is zero. First assume that \(\frac{c_{s}}{d}n_{1}-\frac{(n-s)}{d}c_{10}=0\), that is, \(\frac{c_{s}}{d}n_{1}=\frac{(n-s)}{d}c_{10}\). Since \(\gcd(c_{s}/d,(n-s)/d)=1\), the integer \((n-s)/d\) must divide \(n_{1}\). Similarly, if we assume that \(\frac{c_{s}}{d}n_{2}=\frac{(n-s)}{d}c_{20}\), then it follows that \((n-s)/d\) must divide \(n_{2}\). This completes the proof. **Proof of Theorem 2.** Assume that \(f=f_{1}f_{2}\) for some nonconstant polynomials \(f_{1}\) and \(f_{2}\) in \(R[x]\). For \(s=0\), Theorem 2 reduces to Theorem 1. So, we assume that \(s>0\).
We use the same notation as in the proof of Theorem 1, so that we arrive at the following: \[\frac{c_{s}}{d}n-\frac{(n-s)}{d}c_{n}=\frac{u}{d};\ \frac{c_{s}}{d}n_{i}-\frac{(n-s)}{d}c_{i0}\geq 0,\ i=1,2,\] where \(n=n_{1}+n_{2},c_{n}=c_{10}+c_{20}\), and \[\Big{(}\frac{c_{s}}{d}n_{1}-\frac{(n-s)}{d}c_{10}\Big{)}+\Big{(}\frac{c_{s}}{d}n_{2}-\frac{(n-s)}{d}c_{20}\Big{)}=\frac{u}{d}. \tag{2}\] If \(\frac{c_{s}}{d}n_{i}-\frac{(n-s)}{d}c_{i0}=0\) for any \(i\in\{1,2\}\), then as in Theorem 1, we deduce that the degree of a divisor of \(f\) must be divisible by \((n-s)/d\). If \(\frac{c_{s}}{d}n_{1}-\frac{(n-s)}{d}c_{10}=1\), then \(\frac{c_{s}}{d}n_{2}-\frac{(n-s)}{d}c_{20}=\frac{u}{d}-1\), and so, from (2), we have \[\frac{c_{s}}{d}\big{(}n_{2}-\big{(}\frac{u}{d}-1\big{)}n_{1}\big{)}=\frac{(n-s)}{d}\big{(}c_{20}-\big{(}\frac{u}{d}-1\big{)}c_{10}\big{)},\] which, in view of the fact that \(c_{s}/d\) and \((n-s)/d\) are coprime, shows that \((n-s)/d\) divides \(n_{2}-(\frac{u}{d}-1)n_{1}\). More generally, if we let \(\alpha_{i}=\frac{c_{s}}{d}n_{i}-\frac{(n-s)}{d}c_{i0}\), \(i=1,2\), with \(\alpha_{1}+\alpha_{2}=u/d\), then using (2) one has the following: \[\frac{c_{s}}{d}(\alpha_{2}n_{1}-\alpha_{1}n_{2})=\frac{(n-s)}{d}(\alpha_{2}c_{10}-\alpha_{1}c_{20}).\] This, in view of the fact that \(\gcd((n-s)/d,c_{s}/d)=1\), shows that \((n-s)/d\) must divide \(\alpha_{2}n_{1}-\alpha_{1}n_{2}\). This completes the proof. **Acknowledgments.** The present research is supported by the Science and Engineering Research Board (SERB), a statutory body of the Department of Science and Technology (DST), Govt. of India, through project grant no. MTR/2017/000575 awarded to the second author under the MATRICS scheme. **Disclosure.** The authors declare that they have no competing interests.
2310.16167
iNVS: Repurposing Diffusion Inpainters for Novel View Synthesis
We present a method for generating consistent novel views from a single source image. Our approach focuses on maximizing the reuse of visible pixels from the source image. To achieve this, we use a monocular depth estimator that transfers visible pixels from the source view to the target view. Starting from a pre-trained 2D inpainting diffusion model, we train our method on the large-scale Objaverse dataset to learn 3D object priors. While training we use a novel masking mechanism based on epipolar lines to further improve the quality of our approach. This allows our framework to perform zero-shot novel view synthesis on a variety of objects. We evaluate the zero-shot abilities of our framework on three challenging datasets: Google Scanned Objects, Ray Traced Multiview, and Common Objects in 3D. See our webpage for more details: https://yashkant.github.io/invs/
Yash Kant, Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Guler, Jian Ren, Sergey Tulyakov, Igor Gilitschenski
2023-10-24T20:33:19Z
http://arxiv.org/abs/2310.16167v1
# _iNVS_: Repurposing Diffusion Inpainters for Novel View Synthesis ###### Abstract. We present a method for generating consistent novel views from a single source image. Our approach focuses on maximizing the reuse of visible pixels from the source image. To achieve this, we use a monocular depth estimator that transfers visible pixels from the source view to the target view. Starting from a pre-trained 2D inpainting diffusion model, we train our method on the large-scale _Objaverse_ dataset to learn 3D object priors. While training we use a novel masking mechanism based on epipolar lines to further improve the quality of our approach. This allows our framework to perform zero-shot novel view synthesis on a variety of objects. We evaluate the zero-shot abilities of our framework on three challenging datasets: Google Scanned Objects, Ray Traced Multiview, and Common Objects in 3D. See our webpage for more details: [https://yashkant.github.io/invs/](https://yashkant.github.io/invs/)
Learning priors from different objects can be a solution for accurately reconstructing objects. Recently, the emerging efforts on large-scale text-to-image diffusion models (Ramesh et al., 2022, 2021; Rombach et al., 2022; Saharia et al., 2022) prove the capability of learning a generic object prior by training on large-scale image datasets, _e.g._, LAION (Schuhmann et al., 2022). However, these models operate in the 2D domain and lack precise control over camera view directions, limiting their effectiveness in view synthesis tasks. This work empowers the pre-trained large-scale text-to-image diffusion model with camera viewpoint control to generate novel views. We make the following contributions: first, we attempt to reuse pixels from the input view when camera views are not significantly far away. This is achieved by back-projecting such pixels into the 3D space using monocular depth and reprojecting them back onto the novel view. Second, we apply inpainting to recover the missing regions by leveraging Inpainting Stable Diffusion (_ISD_) (Rombach et al., 2022). However, naively applying _ISD_ fails to generalize well to the masks that arise from the reprojection procedure, since the _ISD_ model is trained with masks that randomly cover a part of the image. Therefore, we propose to train _ISD_ on a dataset in which we can compute such masks easily. One prominent choice is a dataset of 3D assets, _e.g._, _Objaverse_ (Deitke et al., 2023), which can be rendered from multiple views and for which such masks can be computed. After training, our method can predict missing pixels in the novel view image, while at the same time preserving pixels that are initially visible. We abbreviate our method as _iNVS_, which stands for inpainting-driven Novel View Synthesis. We conduct experiments on synthetic and real datasets, and find that our method can achieve strong novel view synthesis (NVS) results from single images, as shown in Figure 1. We conduct ablative and failure mode analyses, which demonstrate that a good monocular depth estimator is important to preserve structure and allow maximal reuse of source pixels. ## 2. Related Works **Novel View Synthesis in Space.** Novel view synthesis is a longstanding problem in computer vision and graphics. Early methods rely on images from multiple viewpoints and attempt to incorporate knowledge from epipolar geometry to perform smooth interpolation between the different views (Chen and Williams, 1993; Debevec et al., 1996). One of the important milestones in novel view synthesis is the introduction of Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020). NeRFs can synthesize smooth interpolations between different views with the help of volumetric rendering.
Since then, numerous improvements have been introduced to improve the original design (Barron et al., 2021; Chen et al., 2021; Kuang et al., 2022; Wang et al., 2021; Zhang et al., 2020). However, most of them share the same limitation of relying on multiple views for learning a 3D representation. Newer works have demonstrated that using deep networks is a promising approach for synthesizing novel views from few images owing to their generalization capabilities (Chan et al., 2023; Deng et al., 2023; Mirzaei et al., 2023; Sajjadi et al., 2022; Zhou and Tulsiani, 2023). In its limit, this approach allows for generating novel views given exactly a _single_ image (Gu et al., 2023; Shen et al., 2023; Shih et al., 2020; Tang et al., 2023; Wiles et al., 2020), and we adopt this setting in our work. Zero123 (Liu et al., 2023) proposes to fine-tune Stable Diffusion for the NVS task. They condition diffusion both on the source image and on the CLIP embedding of the source image. However, this method largely ignores the inability of U-Net (Ronneberger et al., 2015) networks to generate output that is not aligned with the source (Siarohin et al., 2019a,b). In contrast, our method relies on geometry clues to align the source and target views. This helps us to preserve the content from the source image well. **Novel View Synthesis in Time.** Video generation conditional on one or more input image(s) can be seen as a novel view synthesis task with generated images unrolling both in space and time. Prior works (Denton and Fergus, 2018; Finn et al., 2016; Hsieh et al., 2018; Villegas et al., 2017; Vondrick and Torralba, 2017; Wang et al., 2017) in this domain proposed the use of spatiotemporal conditioning to handle dynamic scenes. More recent works have trained on large indoor scene and video datasets, further improving the quality of generations (Koh et al., 2021; Lee et al., 2021; Ye et al., 2019; Yu et al., 2022). Our work is particularly inspired by the InfiniteNature and InfiniteNature-Zero works (Li et al., 2022; Liu et al., 2021), which utilize softmax-splatting (Niklaus and Liu, 2020) for synthesizing infinite videos of nature with a fly-through camera. Recent works in this space have tackled generation of novel views with full 360-degree camera control (Chai et al., 2023), as well as learning domain-specific dynamics of abstract scenes (Mahapatra and Kulkarni, 2022), and, very recently, a general dynamics prior from large-scale videos (Li et al., 2023). **3D Generative Models.** The recent surge in the quality and diversity of generations orchestrated by 2D image diffusion models poses the question of whether the prior knowledge learned by these models can be used for generating 3D objects and scenes. Indeed, diffusion models have some textual control over the viewpoint. For example, DreamBooth (Ruiz et al., 2023) shows that the diffusion model can properly react to the words "front", "back", and "side" in the prompt. The seminal work that exploits 2D diffusion for 3D generation, DreamFusion (Poole et al., 2023), proposes to optimize a NeRF representation by judging the novel view generations with a large-scale pre-trained text-to-image diffusion model (Saharia et al., 2022). Several follow-up works (Chen et al., 2023; Lin et al., 2023) improve the resolution and quality of the resulting 3D assets. On the other hand, Dreambooth3D (Raj et al., 2023) introduces additional image control. Although these works can generate reasonable novel views, they require a lengthy optimization process.
Additionally, several works (Chen et al., 2023; Richardson et al., 2023) have proposed to utilize Stable Diffusion for mesh texturing. TEXTure (Richardson et al., 2023) uses 2D diffusion models to sequentially inpaint novel regions over the existing mesh, projecting results via a differentiable renderer. Text2Tex (Chen et al., 2023) extends this strategy with an automatic viewpoint-finding approach for optimized re-projection. In both of these works, Stable Diffusion utilizes text as conditioning, which provides only limited control over the generation. ## 3. Method In this section, we introduce the overall task setup (Sec. 3.1), the strategy used for generating inputs to the inpainting network (Sec. 3.2), three different losses used throughout training (Sec. 3.3), and our inference technique (Sec. 3.4). ### Overview **Novel View Synthesis Task.** Given a single RGB image of a 3D asset (source view) \(\mathbf{I}_{\mathrm{s}}\in\mathbb{R}^{h\times w\times 3}\), and the corresponding camera pose \(\mathbf{C}_{\mathrm{s}}\in\mathbb{R}^{3\times 4}\), we aim to generate a target view \(\mathbf{I}_{t}\in\mathbb{R}^{h\times w\times 3}\) of this asset from a novel viewpoint, say \(\mathbf{C}_{t}\in\mathbb{R}^{3\times 4}\). **Inpainter Inputs.** We start by preparing inputs for the inpainting model, which takes in a partial view of the scene as well as a binary mask that denotes the region to be inpainted. We then obtain the source view depth \(\mathbf{D}_{\mathrm{s}}\in\mathbb{R}^{h\times w}\), which is available for our synthetic training set, and can be calculated using an off-the-shelf monocular depth estimator (Bhat et al., 2023) during inference. Using this depth map, we warp the pixels from the source to the target viewpoint, creating a partial target view \(\mathbf{I}_{\mathrm{s}\to t}\in\mathbb{R}^{h\times w\times 3}\). We train our _ISD_ model to inpaint the missing regions of this partial view. Additionally, we provide the inpainter network with a mask \(\mathbf{M}_{\mathrm{s}\to t}\in\mathbb{R}^{h\times w}\) that indicates parts of the image which require inpainting. **Training and Inference.** We train our _iNVS_ model initialized from the Stable Diffusion inpainting checkpoint1 on source-target pairs sampled at random from 20M rendered views of the large-scale synthetic _Objaverse_ (Deitke et al., 2023) dataset. The finetuning process is outlined in Sec. 3.3. Finally, we also modify the DDIM inference, which helps to significantly reduce artifacts in the NVS generations, as described in Sec. 3.4. Footnote 1: [https://huggingface.co/runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting) ### Generating Partial View and Epipolar Mask **Warping source view using depth.** Next, we describe how to unproject the pixels from the source view \(\mathbf{I}_{\mathrm{s}}\) into 3D space, and then reproject them into the target view \(\mathbf{I}_{t}\) (a.k.a. warping). Let any source pixel from \(\mathbf{I}_{\mathrm{s}}\) be \(\mathbf{p}_{\mathrm{s}}=[x,y,1]\), defined in homogeneous coordinates.
We can unproject it into 3D world space by: \[\mathbf{p}_{w}=\mathbf{R}_{\mathrm{s}}\cdot d_{\mathrm{s}}\cdot\mathbf{K}_{\mathrm{s}}^{-1}\mathbf{p}_{\mathrm{s}}+\mathbf{T}_{\mathrm{s}}, \tag{1}\] where \(\mathbf{K}_{\mathrm{s}}\in\mathbb{R}^{3\times 3}\) is the source view camera intrinsic, \(\mathbf{C}_{\mathrm{s}}=[\mathbf{R}_{\mathrm{s}}|\mathbf{T}_{\mathrm{s}}]\in\mathbb{R}^{3\times 4}\) is the source camera, and \(d_{\mathrm{s}}\in\mathbb{R}\) is the scalar depth value for the point \(\mathbf{p}_{\mathrm{s}}\). Finally, the world space point \(\mathbf{p}_{w}\) can be reprojected into the target view as: \[\mathbf{p}_{t}=\mathbf{K}_{t}\cdot d_{t}^{-1}\cdot\mathbf{R}_{t}^{-1}\cdot(\mathbf{p}_{w}-\mathbf{T}_{t}), \tag{2}\] where \(\mathbf{C}_{t}=[\mathbf{R}_{t}|\mathbf{T}_{t}]\) is the target camera, \(d_{t}\in\mathbb{R}\) is the scalar target depth, and \(\mathbf{p}_{t}\) is the target pixel in homogeneous coordinates. Applying the above transform to all foreground pixels in the source view yields the partial target view \(\mathbf{I}_{\mathrm{s}\to t}\). Additionally, when reprojecting points to the target view, we use forward softmax-splatting (Niklaus and Liu, 2020) similar to (Li et al., 2022) to handle overlapping points using \(z\)-values. In Figures 2 and 3, we visualize the warped target outputs. **How to create the inpainting mask?** Reprojecting source pixels to the target view only gives us information about the visible pixels of the object. However, it does not tell the inpainting network anything regarding which regions in the image are newly discovered and which already exist. The simplest method to construct an inpainting mask would be to use all pixels that are not part of the object in the partial target view. However, we find that using such a strategy creates a very large inpainting mask, and subsequently _ISD_ struggles to generalize or maintain consistency with the source view. We show results for this variant in Sec. 4. Figure 3. **Inpainter model training using denoising and boundary losses.** Inpainting Stable Diffusion (_ISD_) accepts a noised target view as well as a partial target view and epipolar mask. All three of these inputs are concatenated before they are fed into the diffusion model. Instead of the text condition we use the CLIP (Radford et al., 2021) embedding of the source view, which is provided to the _ISD_ through cross-attention layers. The final generated image is compared against the ground truth with an \(L_{2}\) loss; moreover, to enforce object shape discovery, we introduce an additional boundary mask loss. Figure 2. **Epipolar mask and partial view generation strategy.** Starting with a source view, we use a pre-existing monocular depth estimator to calculate the distance of each point in the image from the camera, creating a depth map. We then use this depth map to “unproject” the 2D image into 3D space, generating a partial point cloud. Next, we take the partial point cloud and “reproject” it onto the target view, essentially projecting the 3D points back onto a 2D image from a different angle. As we do this, we also generate a “visibility mask” which identifies any new areas that become visible in the target view that were not visible in the source view. **Inpainting mask with Epipolar Geometry.** When a light ray falls onto a particular pixel \(\mathbf{p}_{s}\) in the source view, it corresponds to a line \(l_{t}\) in the target view. This line is known as an epipolar line, as illustrated in Fig. 2.
It is worth noting that only a portion of this line in the target view is obstructed, while the rest remains visible. The point \(\mathbf{p}_{w}\) precisely determines which part is obstructed: anything preceding \(\mathbf{p}_{w}\) is visible, whereas anything following it is obstructed. To create the inpainting or visibility mask, we generate rays from each pixel in the source view to the target view until they intersect with the reprojected point. This process yields \(\mathbf{M}_{s\to t}\), which we refer to as the Epipolar Mask. Fig. 2 provides a visual depiction of this procedure along with the resulting mask. **Smooth inpainting mask using ray angles.** When creating the epipolar inpainting mask, instead of using a binary value of 0/255 (black/white) at each pixel, we use a smooth value (linearly interpolated) between the source and target camera ray angles at the corresponding 3D world point (projected onto this pixel). Providing this inpainting mask indicates to the _ISD_ _how much camera angle variation has happened at each point_ (180 degrees is black, 0 degrees is white). This information is used while training and helps the inpainter ignore/overwrite flipped pixels. Figure 5 shows a visual example of this. ### Training _iNVS_: Inpainter for Novel View Synthesis **Denoising Novel Views.** Equipped with the partial target view and the epipolar mask highlighting regions to be inpainted, we can now train our inpainting model. Concretely, our _ISD_ takes as conditioning the epipolar mask \(\mathbf{M}_{s\to t}\), the partial target view \(\mathbf{I}_{s\to t}\), as well as the CLIP embedding of the source view \(X_{s}\), and it is trained to denoise a noisy target view \(\mathbf{I}_{t}+\epsilon\). Following previous works (Dhariwal and Nichol, 2021; Rombach et al., 2022), we utilize the epsilon parameterization of diffusion and optimize our network with the following loss: \[L_{2}=\|\epsilon-\textit{ISD}(\mathbf{I}_{t}+\epsilon,\mathbf{M}_{s\to t},\mathbf{I}_{s\to t},X_{s})\|^{2}\,. \tag{3}\] **Encoding Source Views with CLIP.** We replace the text-encoder of CLIP (Radford et al., 2021) used in the Stable Diffusion model with its image-encoder, to condition generation on the source view \(\mathbf{I}_{s}\). Since the source view does not align well with the target view in RGB image space, we choose to formulate the conditioning via cross-attention instead of using concatenation, unlike previous work (Watson et al., 2022). **Stricter Boundary Loss.** The Stable Diffusion inpainting model is trained on real images with diverse backgrounds, and we find that it struggles to generate uniform solid color (white or black) backgrounds. It shows an affinity towards inpainting backgrounds with patterns, or enlarges the object boundaries to cover the entirety of the inpainting mask. To tackle this issue, we propose a loss re-weighting that puts more emphasis on target regions where the model has to discover the object boundary. Concretely, we introduce a re-weighting coefficient with \(W[\mathbf{p}_{t}]=1\) if \(\mathbf{p}_{t}\) is a pixel that falls within the boundary of known regions, and \(W[\mathbf{p}_{t}]=2\) if \(\mathbf{p}_{t}\) is a pixel where the boundary is unknown (as shown in Fig. 3). Finally, we obtain the re-weighted loss: \[L_{W}=\|W\left(\epsilon-\textit{ISD}(\mathbf{I}_{t}+\epsilon,\mathbf{M}_{s\to t},\mathbf{I}_{s\to t},X_{s})\right)\|^{2}\,. \tag{4}\]
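A compact PyTorch sketch of this re-weighted objective follows; it is our illustration, where `eps_model`, its signature, and the tensor shapes are assumptions (in the actual _ISD_, denoising operates on \(64\times 64\times 4\) latents, cf. Sec. 4.1), and the loss is mean-reduced rather than summed.

```python
import torch

def reweighted_denoising_loss(eps_model, x_t, eps, mask, partial, clip_emb, W):
    """Boundary re-weighted epsilon-prediction loss in the spirit of
    Eqs. (3)-(4): W is 1 on pixels whose boundary is already known and
    2 where the object boundary must be discovered.

    x_t     : noised target view I_t + eps, shape (B, C, H, W)
    eps     : the injected Gaussian noise, same shape as x_t
    mask    : epipolar mask M_{s->t}, shape (B, 1, H, W)
    partial : partial target view I_{s->t}, shape (B, C, H, W)
    W       : per-pixel weight map, shape (B, 1, H, W)
    """
    eps_pred = eps_model(x_t, mask, partial, clip_emb)
    return torch.mean((W * (eps - eps_pred)) ** 2)
```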
**Training on early denoising steps.** We observe that _ISD_ mostly struggles to decode the shape of the object, while reasonable textures can be obtained even with a non-finetuned _ISD_ when using ground truth boundary masks (Chen et al., 2023). We find that during inference denoising, the shape boundary is discovered much earlier compared to the texture, so we additionally fine-tune _ISD_ by sampling noise levels in the first 10% of the denoiser schedule. ### Inference **Rescale and Recenter.** At inference time we wish to generate a novel view of the object from a single source view. Since we do not have ground truth source view depth, we rely on the monocular depth predictor ZoeDepth (Bhat et al., 2023). However, depth estimators can predict object depth only up to an unknown scale; hence, we recenter the projected world points to the origin and rescale them into a cube, which follows the setup used in rendering our dataset. **Guiding Inference using Partial Target View.** We observe that instead of starting the backward denoising process from pure noise, we can significantly boost the quality of the generated views by starting with a noisy version of the image \(\mathbf{I}_{s\to t}\) (see Sec. 4.5). ## 4. Experiments In this section, we provide details on our training and evaluation datasets (Sec. 4.1, Sec. 4.2), compare our method to three NVS models (Sec. 4.3), and provide an ablation study (Sec. 4.5) of each component. ### Training Setup **Objaverse and Rendering Setup.** To train _iNVS_ we require paired data consisting of source and target views. To generate this data, we utilize the extensive _Objaverse_ dataset, which contains nearly 800,000 3D assets. We employ Blender as our rendering software and begin by recentering all scenes at the origin. Additionally, we rescale the bounding box of each scene to fit within a \([-1,1]^{3}\) cube. For each object in the dataset, we randomly generate 24 camera viewpoints within predefined boundaries. The radius of the viewpoint is sampled from a range of 3 to 4, and the field of view (FoV) is set to 50 degrees. Using these viewpoints, we render both the images and corresponding depth maps. We utilize the Cycles engine in Blender and employ 128 samples per ray for rendering. All images are rendered at a resolution of \(512\times 512\) pixels. **Selecting good camera poses.** To add diversity to the lighting conditions, we randomly sample lighting from a collection of 100 environmental maps. These maps provide a range of indoor and outdoor lighting conditions with varying intensity levels. It is worth noting that in the Objaverse dataset, most assets are oriented with the \(Z\)-axis pointing upwards. Consequently, synthesizing the object from extreme bottom or top view angles can be challenging. For instance, when objects are placed on a platform, viewing them from below makes it almost impossible to accurately determine the opposite view without additional information. To address this issue, we empirically determine that sampling the polar angle \(\theta\) from a uniform distribution between -65 and 75 degrees provides reasonably accurate views on average. We sample the azimuth angle \(\phi\) randomly between 0 and 360 degrees. By utilizing all 800,000 assets in the Objaverse dataset, we render a total of 19 million images to train our _iNVS_ model. **Model and Training details.** We performed fine-tuning on the pretrained Inpainting Stable Diffusion (_ISD_) v1.5 checkpoint to adapt it for our task.
This model is capable of generating high-resolution images with dimensions of \(512\times 512\), utilizing a latent space of dimensions \(64\times 64\times 4\). During the fine-tuning process, we employed a sequential training approach consisting of three stages with the separate losses previously introduced: a) denoising, b) boundary loss, and c) early steps training. Our final model was trained on 96 A100 GPUs, with each stage training over 7 days. ### Evaluation **Datasets.** We evaluate how well _iNVS_ generalizes at generating novel target views across three different datasets. Specifically, we use two synthetic datasets, _Google Scanned Objects_ (_GSO_) [Downs et al., 2022], and _Ray-Traced Multi-View_ (_RTMV_) [Tremblay et al., 2022]. The _GSO_ dataset contains nearly one thousand photorealistic 3D models, which we render using Blender following the same setup used for generating training data (described in Sec. 4.1). The _RTMV_ dataset contains high-quality renderings of nearly 2000 scenes from 4 different sources, and we filter out the scenes that contain _GSO_ objects. Finally, we also evaluate on real videos from the _Common Objects in 3D_ (_CO3D_) [Reizenstein et al., 2021] dataset, which is a dataset of 19,000 videos of common objects spanning 50 categories. **Metrics.** Following prior work [Liu et al., 2023], we compare _iNVS_ and baselines with three different metrics covering different aspects of image similarity: PSNR, SSIM [Wang et al., 2004], and LPIPS [Zhang et al., 2018]. During evaluation, for every object (or scene) we sample two random views \(\mathbf{I_{s}}\) and \(\mathbf{I_{t}}\) along with their camera poses \(\mathbf{C_{s}}\) and \(\mathbf{C_{t}}\). Next, starting from \(\mathbf{I_{s}}\), we compute the relative camera transformation and generate a target view. **Masked Metrics.** For the PSNR and SSIM metrics we find that filtering out the background (using the ground truth mask) before comparison helps to avoid spurious gains; hence we report masked metrics. ### Baseline Comparisons _Zero-1-to-3_ [Liu et al., 2023] is an image- and camera-pose-conditioned diffusion model which leveraged a pretrained Stable Diffusion variant called Image Variations [Labs, 2023] and finetuned it on Objaverse renders. Unlike our method, Zero-1-to-3 is trained to generate novel views from scratch, and is prone to causing inconsistency between source and target views (see results for more details). We use the official codebase2 and checkpoints with our datasets for evaluation. _Point-E_ [Nichol et al., 2022] is an image-conditioned diffusion model which operates over 3D point clouds to generate objects. We use the official codebase and checkpoint3, and use the settings mentioned in the paper to generate point clouds with 4,000 points. We render the point cloud from the target viewpoints for novel views. Footnote 2: [https://github.com/cvlab-columbia/zero123](https://github.com/cvlab-columbia/zero123) Footnote 3: [https://github.com/openai/point-e](https://github.com/openai/point-e) _Shap-E_ [Jun and Nichol, 2023] is a conditional generative model for 3D, which directly outputs the parameters of implicit functions that can be rendered directly as neural radiance fields. It is a two-stage model that involves generating a latent code for each 3D asset and then uses a diffusion model to denoise this latent code. We use the official codebase and checkpoint4, and render outputs as a neural radiance field from the target viewpoint.
Footnote 4: [https://github.com/openai/shap-e](https://github.com/openai/shap-e) **Our method achieves good PSNR and comparable LPIPS.** We find that our method achieves higher PSNR and comparable LPIPS scores relative to all other baselines on the Google Scanned Objects (_GSO_) and Common Objects in 3D (_CO3D_) benchmarks (Tables 1 and 3). This indicates that our method performs well in terms of noise reduction and perceptual similarity in both synthetic and real data scenarios. The _CO3D_ dataset consists of real-world views captured from free-form videos, while the _GSO_ dataset contains virtual scans of 3D photorealistic assets. On the Ray-traced Multiview (_RTMV_) dataset, we find that our method is able to outperform all baselines in PSNR, but falls short on the SSIM and LPIPS metrics (shown in Table 2). We attribute this low performance to out-of-distribution variations in lighting across our rendering setup (described in Sec. 4.1) compared to _RTMV_. **Structural Similarity is compromised in generated views.** It is worth mentioning that our method consistently underperforms on the SSIM metric across all datasets. We find that this occurs primarily due to misalignment in the monocular depth estimator. We observe that under significant viewpoint variations, the monocular depth estimator fails to generate consistent depth across different parts of the objects. This inconsistency leads to distortions in the generated images and lower SSIM scores (see Section 4.6). \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**\# Method**} & \multicolumn{2}{c}{**PSNR \(\uparrow\)**} & \multicolumn{2}{c}{**SSIM \(\uparrow\)**} & \multirow{2}{*}{**LPIPS \(\downarrow\)**} \\ & mask & unmask & mask & unmask & \\ \hline 1 _Point-E_ & 8.90 & 12.04 & 0.18 & 0.82 & 0.25 \\ 2 _Shap-E_ & 10.39 & 12.18 & 0.30 & 0.82 & 0.29 \\ 3 _Zero-1-to-3_ & 14.74 & 14.70 & **0.34** & **0.84** & 0.25 \\ \hline 4 _Original_ ISD & 15.03 & 13.25 & 0.09 & 0.49 & 0.38 \\ 5 _iNVS_ (ours) & **18.95** & **19.83** & 0.30 & 0.80 & **0.24** \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison with baselines on **Google Scanned Objects** dataset. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**\# Method**} & \multicolumn{2}{c}{**PSNR \(\uparrow\)**} & \multicolumn{2}{c}{**SSIM \(\uparrow\)**} & \multirow{2}{*}{**LPIPS \(\downarrow\)**} \\ & mask & unmask & mask & unmask & \\ \hline 1 _Point-E_ & 7.40 & 10.44 & 0.14 & **0.67** & **0.41** \\ 2 _Shap-E_ & 8.35 & 9.74 & **0.17** & 0.65 & 0.48 \\ 3 _Zero-1-to-3_ & 9.09 & 8.29 & 0.16 & 0.58 & 0.50 \\ 4 _Original_ ISD & 14.61 & 11.25 & 0.09 & 0.27 & 0.65 \\ 5 _iNVS_ (ours) & **16.83** & **17.82** & 0.09 & 0.50 & 0.49 \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison with baselines on **Ray-traced Multiview** data. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**\# Method**} & \multicolumn{2}{c}{**PSNR \(\uparrow\)**} & \multicolumn{2}{c}{**SSIM \(\uparrow\)**} & \multirow{2}{*}{**LPIPS \(\downarrow\)**} \\ & mask & unmask & mask & unmask & \\ \hline 1 _Point-E_ & 9.37 & 10.10 & 0.22 & 0.72 & 0.38 \\ 2 _Shap-E_ & 10.67 & 10.01 & **0.33** & **0.73** & 0.42 \\ 3 _Zero-1-to-3_ & 12.32 & 9.91 & **0.33** & 0.69 & 0.42 \\ \hline 4 _Original_ ISD & 16.43 & 13.56 & 0.24 & 0.46 & 0.44 \\ 5 _iNVS_ (ours) & **17.58** & **17.39** & **0.33** & 0.65 & **0.36** \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison with baselines on **Common Objects in 3D** dataset.
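The masked PSNR and SSIM variants reported above restrict the comparison to foreground pixels. A minimal sketch of masked PSNR, our illustration assuming NumPy images in \([0,1]\) and a boolean ground-truth foreground mask; the unmasked variant simply averages over all pixels:

```python
import numpy as np

def masked_psnr(pred, target, mask, max_val=1.0):
    """PSNR computed over foreground pixels only, so that a trivially
    correct white background earns no credit.

    pred, target : (H, W, 3) float arrays in [0, max_val]
    mask         : (H, W) boolean foreground mask
    """
    diff = (pred - target)[mask]          # (N, 3): foreground pixels only
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / max(mse, 1e-12))
```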
**Masked metrics help disambiguate performance across baselines.** We notice that Shap-E and Point-E often produce tiny objects, and their white background pixels (matching the target) lead to the majority of their unmasked gains, thus outperforming Zero123 quantitatively. However, using masked metrics we notice these trends change (see PSNR and SSIM in Tables 2 and 3). ### Qualitative Results We present the visualization of the results obtained from our method and the baselines in Figure 4 and draw the following conclusions: **Preservation of Text and Fine Details.** We employ monocular depth to unproject pixels into 3D space and warp them to the target viewpoint. This technique enables us to preserve a significant amount of textual information and finer details across multiple viewpoints. This is clearly demonstrated in Row 1, where our method successfully retains the text “Whey Protein” between the source and target views, unlike the _Zero-1-to-3_ baseline. Additionally, our method shows less distortion in the structural details of objects, as seen in Row 2, where _Zero-1-to-3_ alters the toaster oven into grills, while our method accurately preserves this detail. The benefits of this technique are further highlighted in Fig. 5, where we show that for many viewpoints most of the partial target view can be directly reused, which significantly simplifies the job of the inpainting network. Figure 4. **Comparison with SoTA methods on NVS task (GSO dataset).** The first column is the input, columns two to four are baselines: _Point-E_ (Nichol et al., 2022), _Shap-E_ (Jun and Nichol, 2023) and _Zero-1-to-3_ (Liu et al., 2023). The fifth column is untrained _ISD_, and the last two columns are _iNVS_ and ground truth. Figure 5. **Partial target view and epipolar masks on CO3D dataset.** The first column shows the input. The second column shows the warped source view. The third column demonstrates the smooth inpainting epipolar mask. Notice that the inpainting mask for the cement dump truck (second row) is much darker compared to the race car (first row) due to larger angle variation (details in Sec. 3.2). The last column shows the generated result. **Consistent synthesis for single object across multiple views.** In Figures 8 and 9, we show NVS results across many different objects from a fixed source view and six randomly selected target views. We find our generations to be largely coherent across viewpoints, and more stochastic under large viewpoint variations. **Faithful Compliance with Viewpoint Variations.** Another challenge in novel view synthesis is maintaining control over the generated views, particularly when dealing with significant viewpoint variations. Rows 3 and 4 exemplify this issue, showing that _Zero-1-to-3_ struggles to exercise accurate control over the generated images, resulting in the android and the shoe being created from incorrect viewpoints. **Outperforming 3D Diffusion Models.** Our method surpasses the performance of the _Point-E_ and _Shap-E_ baselines, both of which are variants of 3D diffusion models. We believe that the utilization of a large, pretrained inpainting model in our method contributes to the generation of visually superior results. ### Ablation Study We conduct a series of ablation experiments and analyses to evaluate the effectiveness of our proposed changes and additional training. The quantitative performance results for all metrics are reported in Table 4, and visuals are presented in Figure 6.
**Epipolar Mask helps to constrain NVS generation.** The epipolar mask allows us to control the extent of the inpainting necessary in the warped image, as depicted in Figures 2 and 3. When we omit the use of this mask, the inpainting model faces challenges in understanding the relative orientation between the source view and the target view. As a result, the performance of the model is compromised, leading to distorted and exaggerated generated images. **Guiding denoising inference with partial target view yields significant benefits.** We find that during the 500-step DDIM inference, _iNVS_ generates a rough outline of the target view within the first 100 steps. Given that we have a partial target view available after warping the source pixels to the target viewpoint, we utilize it as guidance. Specifically, we replace the output of the first 10 DDIM steps with a noised version of the partial target view, similar to RePaint (Lugmayr et al., 2022). This approach proves to be immensely helpful in preventing the inpainting model from generating arbitrary boundaries, as shown in Figure 6, third column. Figure 6. **Ablation Study of _iNVS_ on GSO dataset.** We describe each column from left to right. First, we show the input image. Second, we show the generation without the epipolar mask. Third, we show the generation without inference guidance (Section 3.4). Fourth, we skip source-view conditioning via CLIP during inference. Fifth, we show a variant of our method without the boundary loss \(L_{W}\). The last two columns show the result from _iNVS_ and the ground truth. **Without image conditioning our model is unable to exploit source view information well.** When there is a significant variation in viewpoint between the target and source images, the warped image becomes less informative for the inpainting process. In such scenarios, it becomes essential for the inpainting model to heavily rely on the source view and the learned 3D priors for accurate autocompletion. By removing the conditioning on the source view, the model's ability to generate high-quality and consistent views is compromised, as demonstrated in Figure 6, fourth column. **Boundary loss enhances generations.** Furthermore, we find that adding the boundary loss improves the generation quality, leading to consistent visuals, evident from Figure 6, fifth column. ### Current Limitations **Imprecise monocular depth can lead to structure and texture problems.** We notice that ZoeDepth (Bhat et al., 2023) can generate depth maps that distort flat surfaces, which leads to unrealistically deformed surfaces or incorrect texture predictions. We show visuals of these in failure modes #1 and #2 in Figure 7. **Inference guidance can occasionally lead to incomplete or flipped output images.** Recall from Section 3.4 that we use the noisy partial target view for the first 10 DDIM steps. However, since this view is incomplete and may contain flipped pixels (under high camera variations), this trick occasionally creates incomplete outputs or leads to reversed-view generated images. We show visuals of these in failure modes #3 and #4 in Figure 7. ## 5. Conclusion In this work we propose an approach for novel view synthesis that provides a significant advancement in terms of the quality of the results and the coverage of different object categories. Our approach combines recent advancements of diffusion models with epipolar geometry. By training on the large-scale _Objaverse_ dataset we were able to re-purpose Inpainting Stable Diffusion for the novel view synthesis task.
Surprisingly, we found that after finetuning, our method gained an understanding of the underlying shape, even for objects that were not seen during training. Our approach demonstrates sizable improvement over state-of-the-art novel view synthesis methods, especially when considering texture preservation from the input image. One limitation of our approach is its inability to generate consistent textures in regions that are not visible in the original image. This, however, opens an exciting opportunity for future research, which can explore auto-regressive schemes of novel view generation. #### Acknowledgments. We thank Ziyi Wu for helping with aligning visuals of the Point-E and Shap-E baselines, as well as organizing cited works. We also thank Colin Eles for helping with the infrastructure required for large-scale training, and Prakitsha Bhattacharyaamant for reviewing early drafts of this work. Finally, we would like to thank the SIGGRAPH Asia reviewing committee for their invaluable feedback. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline _iNVS (ours)_ & **19.83** & **0.80** & **0.24** \\ - _epipolar mask_ & 17.39\({}_{-2.44}\) & 0.65\({}_{-0.15}\) & 0.36\({}_{+0.12}\) \\ - _inference guidance_ & 17.48\({}_{-2.35}\) & 0.70\({}_{-0.10}\) & 0.31\({}_{+0.07}\) \\ - _image conditioning_ & 16.57\({}_{-3.26}\) & 0.70\({}_{-0.10}\) & 0.30\({}_{+0.06}\) \\ - _boundary loss_ & 19.10\({}_{-0.73}\) & 0.77\({}_{-0.03}\) & 0.27\({}_{+0.03}\) \\ - _early steps training_ & 19.70\({}_{-0.13}\) & 0.78\({}_{-0.02}\) & 0.26\({}_{+0.02}\) \\ \hline _Original_ ISD & 13.25\({}_{-6.58}\) & 0.49\({}_{-0.31}\) & 0.38\({}_{+0.14}\) \\ \hline \hline \end{tabular} \end{table} Table 4. **Ablation Study.** First row is our method _iNVS_. The second row shows our method where the epipolar mask is replaced with a full mask that covers all non-splatted pixels. The third row is our method without using a partial target view as diffusion input in the first steps as guidance. The fourth row is our method without conditioning on the source image. The fifth row demonstrates our method without \(L_{W}\). Finally, we highlight our model with the original training schedule. Also, for reference, we show the performance of the original unaltered _ISD_. Figure 8: **Multiple novel views from single image.** We show six randomly sampled camera views given an input image, and the corresponding ground truth. Figure 7: **Failure Modes. Left:** Imperfect depth maps cause issues in structure and texture. **Right:** Inference-time tricks can occasionally hinder generations. Figure 9: **Multiple novel views from single image.** We show six randomly sampled camera views given an input image, and the corresponding ground truth.
2305.11496
On evolution equations with white-noise boundary conditions
In this paper, we delve into the study of evolution equations that exhibit white-noise boundary conditions. Our primary focus is to establish a necessary and sufficient condition for the existence of solutions, by utilizing the concept of admissible observation operators and the Yosida extension for such operators. By employing this criterion, we can derive an existence result, which directly involves the Dirichlet operator. In addition, we also introduce a Desch-Schappacher perturbation result, which proves to be instrumental in further understanding these equations. Overall, our paper presents a comprehensive analysis of evolution equations with white-noise boundary conditions, providing new insights and contributing to the existing body of knowledge in this field.
Mohamed Fkirine, Said Hadd, Abdelaziz Rhandi
2023-05-19T07:52:07Z
http://arxiv.org/abs/2305.11496v1
# On evolution equations with white-noise boundary conditions ###### Abstract. In this paper, we delve into the study of evolution equations that exhibit white-noise boundary conditions. Our primary focus is to establish a necessary and sufficient condition for the existence of solutions, by utilizing the concept of admissible observation operators and the Yosida extension for such operators. By employing this criterion, we can derive an existence result, which directly involves the Dirichlet operator. In addition, we also introduce a Desch-Schappacher perturbation result, which proves to be instrumental in further understanding these equations. Overall, our paper presents a comprehensive analysis of evolution equations with white-noise boundary conditions, providing new insights and contributing to the existing body of knowledge in this field. Key words and phrases: Stochastic evolution equations, boundary conditions, white-noise, unbounded perturbations 2020 Mathematics Subject Classification: 60H15, 93E03, 47D06 This article is based upon work from COST Action 18232 MAT-DYN-NET, supported by COST (European Cooperation in Science and Technology), www.cost.eu. The third author is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). This work has been supported by the National Center for Scientific and Technical Research (CNRST) via the FINCOME program. Overall, our work contributes to the existing literature on boundary white-noise conditions and sheds light on the underlying mathematical properties of these equations. The equations of the form (1.1) were first studied by Balakrishnan [2] and further developed by Da Prato and Zabczyk [6], with extensive coverage in [5, Chap. 13]. Since then, several authors have shown interest in this problem, including [13], [7], [10], [22], [15], [3], and [12]. In the literature (e.g., [6]), the standard conditions for (1.1) are that \(A:=A_{m}\) with \(D(A)=\ker(G)\) is the generator of a strongly continuous semigroup \(\mathbb{T}\) on \(H\), and that the stationary boundary value problem \[(A_{m}-\lambda)z=0,\quad Gz=u,\] has a unique solution \(z=\mathbb{D}_{\lambda}u\in H\) for some \(\lambda\) and all \(u\in U\). Here, \(\mathbb{D}_{\lambda}\) is the Dirichlet operator associated with \(A_{m}\) and \(G\). Using this operator, the problem (1.1) can be equivalently expressed as the stochastic Cauchy problem \[(\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\ \begin{cases}dX(t)=A_{-1}X(t)dt+BdW(t),\quad t\in[0,T],\\ X(0)=X_{0},\end{cases}\] in \(H_{-1}\), where \(H_{-1}\) is the extrapolation space associated with \(A\) and \(H\), \(A_{-1}\) with domain \(D(A_{-1})=H\) is the extension of \(A\) to \(H\), and \(B:=(\lambda-A_{-1})\mathbb{D}_{\lambda}\in\mathcal{L}(U,H_{-1})\), as explained in Section 2. We define a solution of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) as any \(H\)-valued process \(X(\cdot)\) satisfying \[X(t)=\mathbb{T}(t)X_{0}+\int_{0}^{t}\mathbb{T}_{-1}(t-s)BdW(s),\qquad t\in[0,T],\] where \(\mathbb{T}_{-1}\) is the extrapolation semigroup associated with \(\mathbb{T}\) and \(H\), which is the semigroup generated by \(A_{-1}\). This paper aims to extend the existing literature on this topic by providing new insights into these equations and establishing necessary and sufficient conditions for the existence of solutions.
To study the unique mild solution of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) and (1.1), we introduce the input maps \(\Phi_{t}:L^{2}([0,t];U)\to H_{-1}\), associated with the control operator \(B\) and \(A\) (refer to Section 2 for details). The existence of a unique mild solution for \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) and (1.1) is guaranteed if and only if \(\Phi_{t}\) admits an extension to a Hilbert-Schmidt operator from \(L^{2}([0,t];U)\) to \(H\). This result was first established in Hilbert spaces by Da Prato and Zabczyk in [6], and later refined in [1] for Banach spaces. Note that the notion of Hilbert-Schmidt operators is an essential tool in the study of infinite-dimensional systems. In the context of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) and (1.1), the requirement of an extension of \(\Phi_{t}\) to a Hilbert-Schmidt operator is a strong condition, ensuring the existence and uniqueness of solutions. Let us consider the perturbed stochastic Cauchy problem obtained by adding a linear bounded operator \(\mathscr{P}:H\to H_{-1}\) to the generator \(A\) in the original problem: \[(\mathbf{SCP})_{\mathbf{A}+\mathscr{P},\mathbf{B}}\ \begin{cases}dX(t)=(A_{-1}+\mathscr{P})\,X(t)dt+BdW(t),\quad t\in[0,T],\\ X(0)=X_{0}.\end{cases}\] The first question that arises is determining the conditions under which the operator \((A_{-1}+\mathscr{P})_{|H}\) is a generator in \(H\). This question has already been addressed by Desch and Schappacher, who introduced an appropriate class of perturbations in their work [8] and in [9, Chapter III.3.a]. Specifically, if \(\mathscr{P}\in\mathcal{L}(H,H_{-1})\) is an admissible control operator for \(A\) (as defined in Section 2), then the part of \(A_{-1}+\mathscr{P}\) in \(H\) generates a strongly continuous semigroup \(\mathscr{T}:=(\mathscr{T}(t))_{t\geq 0}\) on \(H\). Despite the importance of studying generator perturbations of stochastic Cauchy problems, the existing literature on this topic is limited. Only a few results have been reported in the literature, namely [24], [16] and [20]. In [24], Peszat studied the existence and uniqueness of solutions, as well as the absolute continuity of the laws of the solutions of the problem \((\mathbf{SCP})_{\mathbf{A}+\mathbf{P},\mathbf{I}}\) where the operator \(P\in\mathcal{L}(D(A),H)\) is closed, \[\bigcup_{t>0}\mathrm{Ran}\mathbb{T}(t)\subset D(A)\quad\text{ and }\quad\int_{0}^{T}\|P\mathbb{T}(t)\|^{2}dt<\infty.\] In [20], a variation of constants formula for \((\mathbf{SCP})_{\mathbf{A}+\mathbf{P},\mathbf{B}}\) is given when \((W(t))_{t\geq 0}\) is a one-dimensional Wiener process and \(B\in\mathcal{L}(U,H)\), see also [11]. In [16], the authors studied the existence of solutions, as well as the invariant measure of \((\mathbf{SCP})_{\mathbf{A}+\mathbf{P},\mathbf{B}}\) in the context of Banach spaces for \(P\in\mathcal{L}(H)\) and \(B\in\mathcal{L}(U,H)\). This paper has two main goals. The first one is to provide a characterization of the existence and uniqueness of solutions to the stochastic Cauchy problem \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) using the notion of admissible observation operators and the Yosida extension of unbounded linear operators (see Section 2). This characterization will allow us to derive a necessary and sufficient condition for the existence and uniqueness of solutions in terms of the Dirichlet operator, which is more accessible than the condition on the input maps, since it only depends on the operator \(B\) and the resolvent of \(A\).
The second aim of this paper is to establish the existence and uniqueness of solutions to \((\mathbf{SCP})_{\mathbf{A}+\mathscr{P},\mathbf{B}}\). To do so, we assume that \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a solution and \(\mathscr{P}\) is an admissible control operator for \(A\). However, both the operators \(B\) and \(\mathscr{P}\) are unbounded, so we need to address this issue. Specifically, we prove that \((\mathbf{SCP})_{\mathbf{A}+\mathscr{P},\mathbf{B}}\) has a solution if and only if the input maps associated with \((A_{-1}+\mathscr{P})_{|H}\) and \(B\) are Hilbert-Schmidt from \(L^{2}([0,T];U)\) to \(H\). Unfortunately, we do not know an explicit expression for the extrapolation semigroup of \(\mathscr{T}\) and \(H\) in terms of the semigroup \(\mathbb{T}\). Therefore, we introduce a new notion of \(\mathcal{S}\)-admissible observation operators (see Definition 3.3) to address this problem using a dual approach. The rest of the paper is organized as follows: In Section 2, we present some preliminaries about admissible control and observation operators, as well as the results needed in the following sections. Section 3 presents sufficient and necessary conditions for the existence of solutions to \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\). In Proposition 3.4, we use the notion of admissible control/observation operators and the Yosida extensions of such operators. In Proposition 3.8, we provide a condition that characterizes the existence and uniqueness of solutions directly in terms of the Dirichlet operator. In Section 4, we address the second aim of the paper and prove the existence and uniqueness of solutions to the perturbed stochastic Cauchy problem \((\mathbf{SCP})_{\mathbf{A}+\mathscr{P},\mathbf{B}}\). The last section is devoted to an application to a one-dimensional heat equation with a white-noise Neumann boundary control. **Notation.** Throughout the paper, we use the following notation. \(H\), \(U\), and \(Y\) always denote separable Hilbert spaces. For \(T>0\), we denote the spaces \(L^{2}(0,T;U)\) and \(L^{2}(0,T;Y)\) by \(\mathcal{U}_{T}\) and \(\mathcal{Y}_{T}\), respectively. The scalar product is denoted by \(\langle\cdot,\cdot\rangle\), equipped with a subscript to specify the space if necessary. \(\mathcal{L}(U,H)\) denotes the space of bounded linear operators from \(U\) to \(H\), endowed with the usual operator norm. \(\mathcal{L}_{2}(U,H)\) denotes the space of Hilbert-Schmidt operators from \(U\) to \(H\). We use \(\|\cdot\|_{\mathcal{L}_{2}(U,H)}\) to denote the Hilbert-Schmidt norm for these operators or simply \(\|\cdot\|_{2}\) when there is no ambiguity about the spaces. We denote by \(\mathbb{C}_{\alpha}\) the half-plane of all \(\lambda\in\mathbb{C}\) with \(\mathrm{Re}(\lambda)>\alpha\). Given a separable Hilbert space \(\mathscr{X}\), \(H^{2}(\mathscr{X})\) denotes the Hardy space of all analytic functions \(f:\mathbb{C}_{0}\to\mathscr{X}\) that satisfy \[\sup_{\alpha>0}\int_{-\infty}^{+\infty}\|f(\alpha+i\beta)\|^{2}d\beta<\infty.\] The norm of \(f\) in this space is, by definition, \[\|f\|_{H^{2}}=\frac{1}{2\pi}\left(\sup_{\alpha>0}\int_{-\infty}^{+\infty}\|f( \alpha+i\beta)\|^{2}d\beta\right)^{\frac{1}{2}}.\] In this work, we require some classical background on semigroup theory. Let \(A\) be the generator of a strongly continuous semigroup \(\mathbb{T}=(\mathbb{T}(t))_{t\geq 0}\) on a Hilbert space \(H\), with growth bound \(\omega_{0}(\mathbb{T})\). 
We denote by \(H_{1}\) the space \(D(A)\) with the norm \(\|x\|_{1}=\|(\beta-A)x\|\), where \(\beta\) is an arbitrary (but fixed) element in the resolvent set \(\rho(A)\). We also denote by \(H_{-1}\) the completion of \(H\) with respect to the norm \(\|x\|_{-1}=\|R(\beta,A)x\|\), where \(R(\beta,A):=(\beta-A)^{-1}\), and \(\beta\) is as before. It is easy to verify that \(H_{1}\) and \(H_{-1}\) are independent of the choice of \(\beta\), since different values of \(\beta\) lead to equivalent norms on \(H_{1}\) and \(H_{-1}\). Note that the norm \(\|\cdot\|_{1}\) is equivalent to the graph norm of \(A\). We have \(H_{1}\subset H\subset H_{-1}\) densely and with continuous embedding. The semigroup \(\mathbb{T}\) extends to a strongly continuous semigroup on \(H_{-1}\), whose generator is an extension of \(A\), with domain \(H\). We denote the extensions of \(\mathbb{T}\) and \(A\) by \(\mathbb{T}_{-1}:=(\mathbb{T}_{-1}(t))_{t\geq 0}\) and \(A_{-1}\), respectively. Furthermore, we use \(H_{1}^{d}\) to denote \(D(A^{*})\) with the norm \(\|x\|_{1}^{d}=\|(\bar{\beta}-A^{*})x\|\) and \(H_{-1}^{d}\) to denote the completion of \(H_{1}^{d}\) with respect to the norm \(\|x\|_{-1}^{d}=\|R(\bar{\beta},A^{*})x\|\), where \(A^{*}\) is the adjoint operator of \(A\) and \(\bar{\beta}\) is an arbitrary (but fixed) element in the resolvent set \(\rho(A^{*})\). Note that \(H_{-1}\) is the dual of \(H_{1}^{d}\) with respect to the pivot space \(H\). ## 2. Background on admissible control/observation operators In this section, we present general information about admissible control and observation operators. The content of this section can be found in greater detail, along with numerous references, in [26], [27], and [25, Chapter 4]. Throughout this section, we assume that \(H\), \(U\), \(Y\), and \(Z\) are separable Hilbert spaces, with \(Z\) continuously and densely embedded in \(H\), and \(T>0\) is a real number. We consider the following boundary control problem: \[\begin{cases}\dot{x}(t)=A_{m}x(t),&t\in[0,T],\\ Gx(t)=u(t),&t\in[0,T],\\ x(0)=x_{0}\in H,\end{cases} \tag{2.1}\] where \(u\in\mathcal{U}_{T}\) is a control function, \(A_{m}:Z\subset H\to H\) is a closed linear operator and \(G:Z\to U\) is a linear operator such that the following assumptions * **(A1)** The operator \(G:Z\to U\) is surjective, * **(A2)** The operator \(A:=(A_{m})_{|D(A)}\) with \(D(A):=\ker G\) generates a \(C_{0}\)-semigroup \(\mathbb{T}:=(\mathbb{T}(t))_{t\geq 0}\) on \(H\), are satisfied. It is shown by Greiner [14, Lemmas 1.2, 1.3] that under the assumptions **(A1)** and **(A2)**, the domain \(D(A_{m})\) can be viewed as the direct sum of \(D(A)\) and \(\ker(\lambda-A_{m})\) for any \(\lambda\in\rho(A)\). Moreover, the operator \(G_{|\ker(\lambda-A_{m})}\) is invertible and the inverse \[\mathbb{D}_{\lambda}:=\left(G_{|\ker(\lambda-A_{m})}\right)^{-1}:U\to\ker(\lambda-A_{m})\subset H,\qquad\lambda\in\rho(A),\] is bounded. The operator \(\mathbb{D}_{\lambda}\) is called the Dirichlet operator associated with \(A_{m}\) and \(G\). We consider the following operator \[B:=(\lambda-A_{-1})\mathbb{D}_{\lambda}\in\mathcal{L}(U,H_{-1}),\qquad\lambda\in\rho(A).\] Since for any \(u\in U\) and \(\lambda\in\rho(A)\) we have \(\mathbb{D}_{\lambda}u\in\ker(\lambda-A_{m})\), it follows that \(\lambda\mathbb{D}_{\lambda}u=A_{m}\mathbb{D}_{\lambda}u\).
Hence, \[(A_{m}-A_{-1})\,\mathbb{D}_{\lambda}u=(\lambda-A_{-1})\mathbb{D}_{\lambda}u= Bu.\] Since \(\mathbb{D}_{\lambda}\) is the inverse of \(G_{|\ker(\lambda-A_{m})}\) and \(D(A_{m})=D(A)\oplus\ker(\lambda-A_{m})\), it follows that \[A_{m}=(A_{-1}+BG)_{|Z}.\] Thus, the boundary control system (2.1) can be reformulated as \[\begin{cases}\dot{x}(t)=A_{-1}x(t)+Bu(t),&t\in[0,T],\\ x(0)=x_{0}\in H.\end{cases} \tag{2.2}\] The mild solution of the equation (2.2) is given by \[x(t)=\mathbb{T}(t)x_{0}+\Phi_{t}u,\qquad u\in\mathcal{U}_{T},\ \ x_{0}\in H,\] where \(\Phi_{t}\in\mathcal{L}(\mathcal{U}_{T},H_{-1})\) is defined by \[\Phi_{t}u:=\int_{0}^{t}\mathbb{T}_{-1}(t-s)Bu(s)ds. \tag{2.3}\] Notice that in the above formula, \(\mathbb{T}_{-1}\) acts on \(H_{-1}\) and the integration is carried out in \(H_{-1}\). This motivates the following definition. **Definition 2.1**.: The operator \(B\in\mathcal{L}(U,H_{-1})\) is called an admissible control operator for \(A\) if \(\operatorname{Ran}(\Phi_{\tau})\subset H\) for some \(\tau>0\). It is worth noting that if \(B\) is an admissible control operator for \(A\), then the closed graph theorem guarantees that \(\Phi_{t}\in\mathcal{L}(\mathcal{U}_{T},H)\) for all \(t\geq 0\). As a result, for any \(u\in\mathcal{U}_{T}\) and \(x_{0}\in H\), the solutions \(x(\cdot)\) of (2.2) stay in \(H\) and form a continuous \(H\)-valued function of \(t\). The operators \(\Phi_{t}\) are commonly referred to as input maps associated with the pair \((A,B)\). Now we deal with the concept of admissible observation operators, a dual concept of admissible control operators when we work in reflexive Banach spaces, in particular Hilbert spaces. This concept is introduced and developed in [26]. We have the following definition: **Definition 2.2**.: The operator \(C\in\mathcal{L}(H_{1},Y)\) is called an admissible observation operator for \(A\) if for some (hence all) \(\alpha>0\) there is a constant \(\gamma:=\gamma(\alpha)>0\) such that \[\int_{0}^{\alpha}\|C\mathbb{T}(t)x\|^{2}dt\leq\gamma^{2}\|x\|^{2}\] for any \(x\in D(A)\). Suppose \(C\in\mathcal{L}(H_{1},Y)\) is an admissible observation operator for \(A\). Then, the map \(\Psi_{T}\) defined by \[(\Psi_{T}x)(t)=C\mathbb{T}(t)x,\ \ \ x\in D(A),\ \ \ t\in[0,T],\] has an extension to an operator \(\Psi_{T}\in\mathcal{L}(H,\mathcal{Y}_{T})\). The operators \(\Psi_{T}\) are called output maps corresponding to the pair \((A,C)\). As shown in [25, Section 4.4], \(B\in\mathcal{L}(U,H_{-1})\) is an admissible control operator for \(\mathbb{T}\) if and only if \(B^{*}\in\mathcal{L}\left(H_{1}^{d},U\right)\) is an admissible observation operator for the dual semigroup \(\mathbb{T}^{*}\). Moreover, for every \(T>0\) the adjoint \(\Phi_{T}^{*}\in\mathcal{L}(H,\mathcal{U}_{T})\) of the operator \(\Phi_{T}\) introduced in (2.3) is given by \[\left(\Phi_{T}^{*}x\right)(t)=\left(\Psi_{T}^{d}x\right)(T-t),\qquad t\in[0,T],\ \ x\in H,\] where \((\Psi_{T}^{d})_{T\geq 0}\) are the output maps corresponding to the pair \((A^{*},B^{*})\). Now, we introduce the Yosida extension of \(C\), denoted \(C_{\Lambda}\), by \[C_{\Lambda}x=\lim_{\lambda\rightarrow+\infty}C\lambda R(\lambda, A)x,\] \[D(C_{\Lambda})=\left\{x\in H,\ \ \ \lim_{\lambda\rightarrow+\infty}C \lambda R(\lambda,A)x\ \ \text{exists in}\ Y\right\}.\] Clearly, \(H_{1}\subset D(C_{\Lambda})\subset H\). 
We note that if \(C\) is an admissible observation operator for \(A\), the representation theorem of Weiss [26, Theorem 4.5] shows that \(\text{Ran}(\mathbb{T}(t))\subset D(C_{\Lambda})\) and \[(\Psi_{T}x)(t)=C_{\Lambda}\mathbb{T}(t)x,\] for all \(x\in H\) and for almost every \(t\in(0,T]\). ## 3. Equations with white-noise boundary conditions Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space with a right-continuous increasing family \(\mathbf{F}=(\mathcal{F}_{t})_{t\geq 0}\) of sub-\(\sigma\)-fields of \(\mathcal{F}\), each containing the \(\mathbb{P}\)-null sets. Let \((e_{n})_{n\in\mathbb{N}}\) be an orthonormal basis in \(U\) and let \(\{\beta_{n}\}\) be a sequence of independent real-valued \(\mathbf{F}\)-Wiener processes. We define a cylindrical Wiener process on \(U\) by the series \[W(t)=\sum_{n=0}^{\infty}\beta_{n}(t)e_{n},\quad t\geq 0,\] which converges in a Hilbert space \(\tilde{U}\) containing \(U\) with a Hilbert-Schmidt embedding. We consider the following problem with boundary white-noise condition \[\begin{cases}dX(t)=A_{m}X(t)dt,&t\in[0,T],\\ GX(t)=\dot{W}(t),&t\in[0,T],\\ X(0)=X_{0}\in H.\end{cases} \tag{3.1}\] Under the assumptions **(A1)** and **(A2)**, Da Prato and Zabczyk [6] (see also [5, Chap. 13]) proved that the boundary problem (3.1) can be reformulated as the following stochastic Cauchy problem \[(\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\begin{cases}dX(t)=A_{-1}X(t)dt+BdW(t),&t\in[0,T],\\ X(0)=X_{0}\in H,\end{cases}\] in \(H_{-1}\), where \(B\in\mathcal{L}(U,H_{-1})\) is the control operator associated with \(A_{m}\) and \(G\). It is well known, see [4, Theorem 5.4], that \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a unique mild solution \[X(t)=\mathbb{T}(t)X_{0}+\int_{0}^{t}\mathbb{T}_{-1}(t-s)BdW(s),\qquad t\in[0,T],\] in \(H_{-1}\) if and only if for some \(T>0\) \[\int_{0}^{T}\|\mathbb{T}_{-1}(t)B\|^{2}_{\mathcal{L}_{2}(U,H_{-1})}\,dt<\infty.\] We mention that in the above formula \(\mathbb{T}_{-1}\) acts on \(H_{-1}\) and hence the stochastic convolution is carried out in \(H_{-1}\). Here, we are interested in \(H\)-valued solutions to the stochastic Cauchy problem \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\). This motivates the following definition. **Definition 3.1**.: Assume that \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a mild solution \(X\) in \(H_{-1}\). We say that \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has an \(H\)-valued mild solution if for each \(t\in[0,T]\) the process \(X(t)\) takes values in \(H\). The weak solution of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) can also be defined using the same terminology. In fact, \(X\) is a mild solution of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) if and only if it is a weak solution of \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\). It is worth mentioning that if a solution exists, it is unique. For more details about this fact, see [1, Proposition 2.4]. Therefore, to simplify the terminology, we will simply speak of a _solution_. The following theorem from [6] (in Hilbert spaces) and [1, Proposition 2.4] (in Banach spaces) gives a necessary and sufficient condition for the existence and uniqueness of a solution for \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\).
**Theorem 3.2**.: _The stochastic Cauchy problem \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a solution if and only if \(\Phi_{T}\) is Hilbert-Schmidt from \(\mathcal{U}_{T}\) to \(H\)._ To characterize the existence of solutions to \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) using the notion of admissibility introduced in Section 2, we introduce the following definition. **Definition 3.3**.: Let \(B\in\mathcal{L}(U,H_{-1})\) and \(C\in\mathcal{L}(H_{1},Y)\). 1. The operator \(B\) is said to be an \(\mathcal{S}\)-admissible control operator for \(A\) if the operator \(\Phi_{T}\) has an extension to a Hilbert-Schmidt operator from \(\mathcal{U}_{T}\) into \(H\). 2. The operator \(C\) is said to be an \(\mathcal{S}\)-admissible observation operator for \(A\) if the operator \(\Psi_{T}\) has an extension to a Hilbert-Schmidt operator from \(H\) into \(\mathcal{Y}_{T}\). Since every Hilbert-Schmidt operator is bounded, it follows that \(\mathcal{S}\)-admissibility implies admissibility. For the converse implication we have the following proposition. **Proposition 3.4**.: _Suppose that \(B\in\mathcal{L}(U,H_{-1})\) and \(C\in\mathcal{L}(H_{1},Y)\)._ 1. \(C\) _is an_ \(\mathcal{S}\)_-admissible observation operator for_ \(A\) _if and only if_ \(C\) _is an admissible observation operator for_ \(A\) _and_ \[\gamma(T):=\int_{0}^{T}\|C_{\Lambda}\mathbb{T}(t)\|_{2}^{2}dt<\infty\] _for some_ \(T>0\)_._ 2. \(B\) _is an_ \(\mathcal{S}\)_-admissible control operator for_ \(A\) _if and only if_ \(B^{*}\) _is an_ \(\mathcal{S}\)_-admissible observation operator for_ \(A^{*}\)_._ 3. \(B\) _is an_ \(\mathcal{S}\)_-admissible control operator for_ \(A\) _if and only if_ \(B^{*}\) _is an admissible observation operator for_ \(A^{*}\) _and_ \[\int_{0}^{T}\|B^{*}_{\Lambda}\mathbb{T}^{*}(t)\|_{2}^{2}dt<\infty\] (3.2) _for some_ \(T>0\)_._ Proof.: If \(C\in\mathcal{L}(H_{1},Y)\) is \(\mathcal{S}\)-admissible, then \(\Psi_{T}\) is a bounded linear operator from \(H\) to \(\mathcal{Y}_{T}\). Therefore, \(C\) is an admissible observation operator for \(A\) and \((\Psi_{T}x)(t)=C_{\Lambda}\mathbb{T}(t)x\) for a.e. \(t\geq 0\) and \(x\in H\). Now, let \((e_{k})_{k\in\mathbb{N}}\) be an orthonormal basis of \(H\). Then, we have \[\|\Psi_{T}\|_{2}^{2} = \sum_{k\in\mathbb{N}}\|\Psi_{T}e_{k}\|^{2} = \sum_{k\in\mathbb{N}}\int_{0}^{T}\|\left(\Psi_{T}e_{k}\right)(t)\|^{2}dt = \sum_{k\in\mathbb{N}}\int_{0}^{T}\|C_{\Lambda}\mathbb{T}(t)e_{k}\|^{2}dt = \int_{0}^{T}\|C_{\Lambda}\mathbb{T}(t)\|_{2}^{2}dt.\] Thus (i) is satisfied. Moreover, it is shown in [25, Theorem 4.4.3] that \(B\) is an admissible control operator for \(A\) if and only if \(B^{*}\) is an admissible observation operator for \(A^{*}\). In this case, we have \[\left(\Phi_{T}^{*}x\right)(t)=B_{\Lambda}^{*}\mathbb{T}^{*}\left(T-t\right)x,\] for every \(x\in H\) and a.e. \(t\in[0,T]\). Thus, \[\int_{0}^{T}\|B_{\Lambda}^{*}\mathbb{T}^{*}(t)\|_{2}^{2}dt=\|\Phi_{T}^{*}\|_{2}^{2}.\] Consequently, (ii) and (iii) follow from (i). It follows from (iii) of the above proposition that the stochastic Cauchy problem \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a solution if and only if (2.2) has a solution and the condition (3.2) holds. **Remark 3.5**.: Let \(C\) be an \(\mathcal{S}\)-admissible observation operator for \(A\) and let \((\mathbb{T}(t))_{t\geq 0}\) be exponentially stable. Then \[\int_{0}^{\infty}\|C_{\Lambda}\mathbb{T}(t)\|_{2}^{2}dt<\infty.\] In fact, we choose \(t_{0}>0\) large enough such that \(\|\mathbb{T}(t_{0})\|<1\).
Then, \[\int_{0}^{\infty}\|C_{\Lambda}\mathbb{T}(t)\|_{2}^{2}dt = \sum_{k=0}^{\infty}\int_{kt_{0}}^{(k+1)t_{0}}\|C_{\Lambda}\mathbb{ T}(t)\|_{2}^{2}dt\] \[= \sum_{k=0}^{\infty}\int_{0}^{t_{0}}\|C_{\Lambda}\mathbb{T}(t) \mathbb{T}(kt_{0})\|_{2}^{2}dt\] \[\leq \gamma(t_{0})\sum_{k=0}^{\infty}\|\mathbb{T}(t_{0})\|^{2k}<\infty.\] **Proposition 3.6**.: _Assume that \(C\in\mathcal{L}(H_{1},Y)\) is \(\mathcal{S}\)-admissible observation operator for \(A\). Then for every \(\omega>\omega_{0}(\mathbb{T})\) there exists \(K_{\omega}\geq 0\) such that_ \[\|CR(\lambda,A)\|_{2}\leq\frac{K_{\omega}}{\sqrt{Re(\lambda)-\omega}},\qquad \forall\lambda\in\mathbb{C}_{\omega}. \tag{3.3}\] Proof.: Let \(\lambda\in\mathbb{C}\) with \(Re(\lambda)>\omega_{0}(\mathbb{T})\). We choose \(\omega\in(\omega_{0}(\mathbb{T}),Re(\lambda))\) and set \(\epsilon=Re(\lambda)-\omega\). Let \((e_{k})_{k\in\mathbb{N}}\) be an orthonormal basis of \(H\). It follows from [25, Theorem 4.3.7] that the Laplace transform of \(\Psi_{T}z\) exists at \(\lambda\) for any \(z\in H\) and it is given by \(\overline{(\Psi_{T}z)}(\lambda)=CR(\lambda,A)z\). Then, we have \[\|CR(\lambda,A)\|_{2}^{2} = \sum_{k=1}^{\infty}\|CR(\lambda,A)e_{k}\|^{2}\] \[= \sum_{k=1}^{\infty}\left\|\int_{0}^{\infty}e^{-\lambda t}C_{ \Lambda}\mathbb{T}(t)e_{k}dt\right\|^{2}\] \[\leq \sum_{k=1}^{\infty}\left(\int_{0}^{\infty}|e^{-\epsilon t}|\cdot \|e^{-\omega t}C_{\Lambda}\mathbb{T}(t)e_{k}\|dt\right)^{2}.\] It follows from [25, Proposition 4.3.6], that \(t\mapsto e^{-\omega t}C_{\Lambda}\mathbb{T}(t)e_{k}\in L^{2}(0,+\infty;Y)\). Then, the Cauchy-Schwarz inequality implies \[\|CR(\lambda,A)\|_{2}^{2} \leq\int_{0}^{\infty}|e^{-\epsilon t}|^{2}dt\cdot\sum_{k=1}^{ \infty}\int_{0}^{\infty}\|e^{-\omega t}C_{\Lambda}\mathbb{T}(t)e_{k}\|^{2}dt.\] Consequently, Remark 3.5 implies that there exists \(K_{\omega}\geq 0\) such that \[\|CR(\lambda,A)\|_{2}\leq\frac{K_{\omega}}{\sqrt{\epsilon}}=\frac{K_{\omega}}{ \sqrt{Re(\lambda)-\omega}}.\] **Remark 3.7**.: The converse implication of the above proposition is known as the stochastic Weiss conjecture: 1. It is shown in [19, Theorem 4.5], that if \(\mathbb{T}\) is a contraction semigroup on \(H\), \(Y\) is a Hilbert space and \(C\in\mathcal{L}(H_{1},Y)\) satisfies the estimate (3.3) for every \(\lambda\in\mathbb{C}_{\omega}\) then \(C\) is an admissible observation operator for \(A\). We ask if \(C\) can be an \(\mathcal{S}\)-admissible observation operator for \(A\). 2. A dual version of the estimate (3.3) was considered in [1]. In fact, assuming that \(-A\) is sectorial, injective with dense range and admits a bounded \(H^{\infty}\)-calculus of angle less than \(\frac{\pi}{2}\) on \(H\), the authors showed that if for any \(\lambda>0\), \[\|R(\lambda,A_{-1})B\|_{2}<\infty\quad\text{and}\quad\sum_{n\in\mathbb{Z}}2^{ n}\|R(2^{n},A_{-1})B\|_{2}^{2}<\infty,\] then \(B\) is an \(\mathcal{S}\)-"infinite" admissible control operator for \(A\), that is, the operator \(\widetilde{\Phi}\) defined by \[\widetilde{\Phi}u:=\lim_{T\to\infty}\int_{0}^{T}\mathbb{T}_{-1}(t)Bu(t)\,dt, \quad u\in L^{2}(0,\infty;U),\] exists in \(H\) and it is Hilbert-Schmidt from \(L^{2}(0,+\infty;U)\) to \(H\). Now, we give a criterion of the \(\mathcal{S}\)-admissibility of \(B\) in terms of the Dirichlet operator. **Proposition 3.8**.: _Let \(\omega>\omega_{0}(\mathbb{T})\). Then we have the following:_ 1. 
\(C\) _is an_ \(\mathcal{S}\)_-admissible observation operator for_ \(A\) _if and only if_ \[\sum_{n\in\mathbb{Z}}\left\|CR\left(\omega+\frac{2\pi in}{T},A\right)\right\|_{2}^{2}<\infty.\] (3.4) 2. \(B\) _is an_ \(\mathcal{S}\)_-admissible control operator for_ \(A\) _if and only if_ \[\sum_{n\in\mathbb{Z}}\left\|\mathbb{D}_{\omega+\frac{2\pi in}{T}}\right\|_{2}^{2}<\infty.\] Proof.: (i) Let \(\omega>\omega_{0}(\mathbb{T})\). Define the operator \(A_{\omega}:=A-\omega\) with domain \(D(A_{\omega})=D(A)\) and let \(\Psi_{T}^{\omega}\) be the output map associated with \((A_{\omega},C)\). Now, let \((e_{k})_{k\in\mathbb{N}}\) and \((h_{j})_{j\in\mathbb{N}}\) be orthonormal bases of \(H\) and \(Y\), respectively. Choose \((f_{n}:=e^{-\frac{2\pi in}{T}(\cdot)})_{n\in\mathbb{Z}}\) as an orthonormal basis of \(L^{2}(0,T)\). Then, \((f_{n}\otimes h_{j})_{n\in\mathbb{Z},j\in\mathbb{N}}\) is an orthonormal basis of \(L^{2}(0,T;Y)\). Using Parseval's identity, we have \[\left\|\Psi_{T}^{\omega}\right\|_{2}^{2} =\sum_{k\in\mathbb{N}}\left\|\Psi_{T}^{\omega}e_{k}\right\|^{2} =\sum_{k,j\in\mathbb{N}}\sum_{n\in\mathbb{Z}}\left\langle\Psi_{T}^{\omega}e_{k},f_{n}\otimes h_{j}\right\rangle_{\mathcal{Y}_{T}}^{2} =\sum_{k,j\in\mathbb{N}}\sum_{n\in\mathbb{Z}}\left(\int_{0}^{T}\langle(\Psi_{T}^{\omega}e_{k})(t),(f_{n}\otimes h_{j})(t)\rangle_{Y}dt\right)^{2} =\sum_{k,j\in\mathbb{N}}\sum_{n\in\mathbb{Z}}\left(\int_{0}^{T}\langle e^{-\frac{2\pi in}{T}t}(\Psi_{T}^{\omega}e_{k})(t),h_{j}\rangle_{Y}dt\right)^{2}.\] Using the fact that \(\left(\Psi_{T}^{\omega}e_{k}\right)(t)=e^{-\omega t}\left(\Psi_{T}e_{k}\right)(t)\), we have \[\int_{0}^{T}e^{-\frac{2\pi in}{T}t}\left(\Psi_{T}^{\omega}e_{k}\right)(t)dt =\int_{0}^{T}e^{-\left(\omega+\frac{2\pi in}{T}\right)t}\left(\Psi_{T}e_{k}\right)(t)dt =CR\left(\omega+\frac{2\pi in}{T},A\right)Je_{k},\] where \(J:=Id-e^{-\omega T}\mathbb{T}(T)\). Thus, \[\left\|\Psi_{T}^{\omega}\right\|_{2}^{2} =\sum_{k,j\in\mathbb{N}}\sum_{n\in\mathbb{Z}}\left(\left\langle CR\left(\omega+\frac{2\pi in}{T},A\right)Je_{k},h_{j}\right\rangle_{Y}\right)^{2} =\sum_{n\in\mathbb{Z}}\left\|CR\left(\omega+\frac{2\pi in}{T},A\right)J\right\|_{2}^{2}.\] Taking into account that the operator \(J\) is invertible, since the spectral radius of \(e^{-\omega T}\mathbb{T}(T)\) is less than one, it follows that (3.4) holds if and only if \(\Psi_{T}^{\omega}\) is Hilbert-Schmidt. This happens if and only if \(\Psi_{T}\) is Hilbert-Schmidt. (ii) follows from Proposition 3.4 and (i). Part (ii) of the above proposition is an extension, in the Hilbert setting, of the result in [23, Corollary 7.4] (where the case \(B\in\mathcal{L}(U,H)\) was considered). **Example 3.9**.: Let \(U\) be a separable Hilbert space. We consider the following transport equation with boundary-noise \[\begin{cases}\frac{\partial X(t,\theta)}{\partial t}=\frac{\partial X(t,\theta)}{\partial\theta},&\theta\in[-r,0],\ \ t\in[0,T],\\ X(0,\theta)=\varphi(\theta),&\theta\in[-r,0],\\ X(t,0)=\dot{W}(t),&t\in[0,T],\end{cases} \tag{3.5}\] where \(\varphi\in H:=L^{2}([-r,0];U)\) and \((W(t))_{t\in[0,T]}\) is a cylindrical Wiener process over \(U\).
On the space \(H\), we define the operators \[Q_{m}\varphi=\partial_{\theta}\varphi,\qquad\varphi\in D(Q_{m})=W^{1,2}([-r,0];U),\] \[G\varphi=\varphi(0),\qquad\varphi\in W^{1,2}([-r,0];U).\] Thus, the problem (3.5) is reformulated in \(H\) as follows \[\begin{cases}\dot{X}(t)=Q_{m}X(t),&t\in[0,T],\\ X(0)=\varphi,\\ GX(t)=\dot{W}(t).\end{cases}\] It is clear that the operator \(G:D(Q_{m})\to U\) is surjective. Moreover, it is well known that the operator \[Q:=Q_{m},\qquad D(Q):=\left\{\varphi\in D(Q_{m}),\quad\varphi(0)=0\right\},\] generates the left shift semigroup \((S(t))_{t\geq 0}\) on \(H\) defined by \[(S(t)\varphi)(\theta)=\begin{cases}0,&-t\leq\theta\leq 0,\\ \varphi(t+\theta),&-r\leq\theta<-t.\end{cases}\] By a simple calculation, the Dirichlet operator associated with \(Q_{m}\) and \(G\) is \(\mathbb{D}_{\lambda}:U\to H\) given by \[(\mathbb{D}_{\lambda}v)(\theta)=e^{\lambda\theta}v,\qquad\theta\in[-r,0],\ \ v\in U,\ \ \lambda\in\mathbb{C}.\] Thus, the control operator \(B\) associated with \(Q_{m}\) and \(G\) is given by \[B=(\lambda-Q_{-1})\,\mathbb{D}_{\lambda}\in\mathcal{L}(U,H_{-1}),\qquad\lambda\in\mathbb{C}.\] Now, let \((u_{k})_{k\in\mathbb{N}}\) be an orthonormal basis of \(U\). Then, for any \(\omega>\omega_{0}(S)\), \[\sum_{n\in\mathbb{Z}}\left\|\mathbb{D}_{\omega+\frac{2\pi in}{T}}\right\|_{2}^{2} =\sum_{n\in\mathbb{Z}}\sum_{k\in\mathbb{N}}\int_{-r}^{0}\left\|\left(\mathbb{D}_{\omega+\frac{2\pi in}{T}}u_{k}\right)(\theta)\right\|^{2}d\theta =\sum_{n\in\mathbb{Z}}\sum_{k\in\mathbb{N}}\int_{-r}^{0}\left\|e^{\left(\omega+\frac{2\pi in}{T}\right)\theta}u_{k}\right\|^{2}d\theta =\sum_{n\in\mathbb{Z}}\sum_{k\in\mathbb{N}}\int_{-r}^{0}e^{2\omega\theta}d\theta=+\infty.\] It follows from Proposition 3.8 that \(B\) is not \(\mathcal{S}\)-admissible for \(Q\). Thus, the white-noise boundary problem (3.5) does not have a solution in \(H\). ## 4. Desch-Schappacher perturbation In this section, we are interested in studying an unbounded perturbation result that concerns the existence of a solution for the perturbed problem. Let \(\mathscr{P}\in\mathcal{L}(H,H_{-1})\) be an admissible control operator for \(A\). Then, the operator \[\mathscr{A}:=A_{-1}+\mathscr{P},\qquad D(\mathscr{A})=\left\{x\in H,\quad(A_{-1}+\mathscr{P})x\in H\right\},\] generates a strongly continuous semigroup \(\mathscr{T}:=\left(\mathscr{T}(t)\right)_{t\geq 0}\) on \(H\) given by \[\mathscr{T}(t)x=\mathbb{T}(t)x+\int_{0}^{t}\mathbb{T}_{-1}(t-s)\mathscr{P}\mathscr{T}(s)xds\] for any \(x\in H\). The operator \(\mathscr{P}\in\mathcal{L}(H,H_{-1})\) is called a Desch-Schappacher perturbation for \(A\), see [8] and [9, Chap. III, Corollary 3.4]. For \(\mathscr{P}\in\mathcal{L}(H,H_{-1})\), we shall consider the perturbed stochastic Cauchy problem \[(\mathbf{SCP})_{\mathscr{A},\mathbf{B}}\ \begin{cases}dX(t)=\mathscr{A}X(t)dt+BdW(t),\quad t\in[0,T],\\ X(0)=X_{0}.\end{cases}\] It follows from Theorem 3.2 that \((\mathbf{SCP})_{\mathscr{A},\mathbf{B}}\) has a solution if and only if the input map \[\Phi_{T}^{\mathscr{P}}u=\int_{0}^{T}\mathscr{T}_{-1}(T-s)Bu(s)ds,\quad u\in\mathcal{U}_{T},\] associated with \((\mathscr{A},B)\) is Hilbert-Schmidt from \(\mathcal{U}_{T}\) to \(H\). It is not immediately apparent how to establish the Hilbert-Schmidt property for \(\Phi_{T}^{\mathscr{P}}\) through a direct approach. This is due to the lack of a simple expression for the semigroup \(\mathscr{T}_{-1}\). However, we can overcome this problem by using a dual argument.
Specifically, we will rely on the characterization of the \(\mathcal{S}\)-admissible control operators developed in the previous section. Let \(P\in\mathcal{L}(H_{1},H)\) be an admissible observation operator for \(A\). It is well-known that the perturbed operator \(\mathcal{A}:=A+P\) with domain \(D(\mathcal{A})=D(A)\) generates a strongly continuous semigroup \(\mathbb{S}:=(\mathbb{S}(t))_{t\geq 0}\) on \(H\), see [18], see also [25, Theorem 5.4.2]. The following result shows the invariance of \(\mathcal{S}\)-admissibility for \(A\) under the unbounded perturbation \(P\). **Theorem 4.1**.: _Let \(C\in\mathcal{L}(H_{1},Y)\) be an \(\mathcal{S}\)-admissible observation operator for \(A\). Assume that \(P\in\mathcal{L}(H_{1},H)\) is an admissible observation operator for \(A\). Then, \(C\) is an \(\mathcal{S}\)-admissible observation operator for \(\mathcal{A}\)._ Proof.: Let \(\omega>\max\left\{\omega_{0}(\mathbb{T}),\omega_{0}(\mathbb{S})\right\}\). Define the operators \(A_{\omega}:=A-\omega\) and \(\mathcal{A}_{\omega}:=A_{\omega}+P\) with domain \(D(\mathcal{A}_{\omega})=D(A_{\omega})=D(A)\). Denote by \(\mathbb{T}_{\omega}\) and \(\mathbb{S}_{\omega}\) the semigroups generated by \(A_{\omega}\) and \(\mathcal{A}_{\omega}\), respectively. By Remark 3.5, we have \(\int_{0}^{+\infty}\|C_{\Lambda}\mathbb{T}_{\omega}(t)\|_{2}^{2}dt<\infty\) and \(C_{\Lambda}\mathbb{T}_{\omega}(t)\in\mathcal{L}_{2}(H,Y)\) for a.e. \(t\geq 0\). Now, we define the operator \[\mathfrak{S}:\mathbb{R}^{+}\to\mathcal{L}_{2}(H,Y),\qquad t\mapsto C_{\Lambda}\mathbb{T}_{\omega}(t).\] Thus, since \(\mathcal{L}_{2}(H,Y)\) is a separable Hilbert space it follows from the Paley-Wiener theorem that the Laplace transform \(\widehat{\mathfrak{S}}\) of \(\mathfrak{S}\) is in \(H^{2}\left(\mathcal{L}_{2}(H,Y)\right)\). Moreover, for any \(x\in H\) and any \(\lambda\in\mathbb{C}_{0}\), we have \(\widehat{\mathfrak{S}}(\lambda)x=CR(\lambda,A_{\omega})x\), and hence \(CR(\cdot,A_{\omega})\in H^{2}\left(\mathcal{L}_{2}(H,Y)\right)\). On the other hand, we know that \[CR(\lambda,\mathcal{A}_{\omega})=CR(\lambda,A_{\omega})\left[I+PR(\lambda,\mathcal{A}_{\omega})\right] \tag{4.1}\] for any \(\lambda\in\mathbb{C}_{0}\). Moreover, \(P\) is an admissible observation operator for \(\mathcal{A}_{\omega}\), see [18]. Thus, it follows from the exponential stability of \(\mathbb{S}_{\omega}\) that \[\sup_{\lambda\in\mathbb{C}_{0}}\|PR(\lambda,\mathcal{A}_{\omega})\|<\infty.\] According to (4.1), we obtain that \(CR(\cdot,\mathcal{A}_{\omega})\in H^{2}\left(\mathcal{L}_{2}(H,Y)\right)\). Using again the Paley-Wiener theorem, we obtain that \(\int_{0}^{+\infty}\|C_{\Lambda}\mathbb{S}_{\omega}(t)\|_{2}^{2}dt<\infty\). This means that \(C\) is \(\mathcal{S}\)-admissible for \(\mathcal{A}_{\omega}\), and hence also for \(\mathcal{A}\). As a consequence of this result, we obtain the following perturbation result for \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) under Desch-Schappacher perturbations. **Corollary 4.2**.: _Let \(\mathscr{P}\in\mathcal{L}(H,H_{-1})\) be an admissible control operator for \(A\). If \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a solution, then \((\mathbf{SCP})_{\mathscr{A},\mathbf{B}}\) has a solution as well._ Proof.: According to Proposition 3.4, it suffices to prove that \(B^{*}\) is an \(\mathcal{S}\)-admissible observation operator for \(\mathscr{A}^{*}\). Observe that \(\mathscr{A}^{*}=A^{*}+\mathscr{P}^{*}\) with domain \(D(\mathscr{A}^{*})=D(A^{*})\).
(This is due to the fact that \(\mathscr{A}\) is exactly the generator of the closed-loop system associated with the regular triple \((A,\mathscr{P},I)\) with the identity as admissible feedback. Thus \(\mathscr{A}^{*}\) is the generator of the closed-loop system associated with the triple \((A^{*},I,\mathscr{P}^{*})\) with the identity operator as admissible feedback.) Here we recall that \(\mathscr{P}^{*}\) is an admissible observation operator for \(A^{*}\). From Theorem 3.2 and Proposition 3.4 we deduce that \(B^{*}\) is an \(\mathcal{S}\)-admissible observation operator for \(A^{*}\). Therefore, Theorem 4.1 implies that \(B^{*}\) is an \(\mathcal{S}\)-admissible observation operator for \(\mathscr{A}^{*}\). This ends the proof. As bounded operators \(\mathscr{P}\in\mathcal{L}(H)\) are admissible control operators for \(A\), we have the following result as a consequence of Corollary 4.2. It extends, in the Hilbert setting, the result in [16, Theorem 3.3] (where the case \(B\in\mathcal{L}(U,H)\) was considered). **Corollary 4.3**.: _Let \(\mathscr{P}\in\mathcal{L}(H)\). If \((\mathbf{SCP})_{\mathbf{A},\mathbf{B}}\) has a solution, then \((\mathbf{SCP})_{\mathbf{A}+\mathscr{P},\mathbf{B}}\) has a solution as well._ ## 5. An application to a one-dimensional heat equation with a white-noise Neumann boundary control We consider the following heat equation with Neumann boundary control of white-noise type: \[(\mathbf{HE})\ \begin{cases}\frac{\partial X}{\partial t}(t,\xi)=\frac{\partial^{2}X}{\partial\xi^{2}}(t,\xi),&0<\xi<\pi,\ t\in[0,T],\\ \frac{\partial X}{\partial\xi}(t,0)=MX(t,\cdot),&t\in[0,T],\\ \frac{\partial X}{\partial\xi}(t,\pi)=\dot{W}(t),&t\in[0,T],\\ X(0,\xi)=X_{0}(\xi),&0<\xi<\pi,\end{cases}\] where \(M\in\mathcal{L}\left(L^{2}(0,\pi),\mathbb{R}\right)\) and \((W(t))_{t\in[0,T]}\) is a real Wiener process. To reformulate the system in the abstract setting, we consider the following operators \[\mathscr{A}_{m}:=\frac{\partial^{2}}{\partial\xi^{2}},\qquad D(\mathscr{A}_{m}):=\left\{\varphi\in H^{2}(0,\pi),\quad\varphi^{\prime}(0)=M\varphi\right\},\] \[\mathscr{G}:D(\mathscr{A}_{m})\to\mathbb{R},\qquad\mathscr{G}\varphi=\varphi^{\prime}(\pi).\] With these operators, the problem \((\mathbf{HE})\) is reformulated in \(H:=L^{2}(0,\pi)\) as follows \[\begin{cases}\dot{X}(t)=\mathscr{A}_{m}X(t),&t\in[0,T],\\ \mathscr{G}X(t)=\dot{W}(t),&t\in[0,T],\\ X(0)=X_{0}.\end{cases} \tag{5.1}\] In order to reformulate (5.1) as a stochastic Cauchy problem, we verify assumptions \((\mathbf{A1})\) and \((\mathbf{A2})\). Since the operator \(\mathscr{G}\) is surjective, it remains to verify that the operator \[\mathscr{A}:=\mathscr{A}_{m},\qquad D(\mathscr{A}):=\left\{\varphi\in H^{2}(0,\pi),\quad\varphi^{\prime}(0)=M\varphi,\ \ \varphi^{\prime}(\pi)=0\right\},\] generates a strongly continuous semigroup on \(H\). Let us first define the operators \[A_{m}:=\frac{\partial^{2}}{\partial\xi^{2}},\qquad D(A_{m}):=\left\{\varphi\in H^{2}(0,\pi),\quad\varphi^{\prime}(\pi)=0\right\},\] \[G:D(A_{m})\to\mathbb{R},\qquad G\varphi=\varphi^{\prime}(0).\] It is well-known that the operator \[A:=A_{m},\qquad D(A):=\left\{\varphi\in H^{2}(0,\pi),\quad\varphi^{\prime}(0)=0,\ \varphi^{\prime}(\pi)=0\right\},\] generates a strongly continuous semigroup \(\mathbb{T}:=(\mathbb{T}(t))_{t\geq 0}\) on \(H\) and the operator \(G\) is surjective.
Then, the Dirichlet operator \(\mathbb{D}_{\lambda}:\mathbb{R}\to H\) associated with \(A_{m}\) and \(G\) satisfies \[\mathbb{D}_{\lambda}\alpha=\varphi\iff\left\{\lambda\varphi-\varphi^{\prime\prime}=0\ \text{ in }[0,\pi],\quad\varphi^{\prime}(0)=\alpha,\ \varphi^{\prime}(\pi)=0\right\}.\] Define \(B:=-A_{-1}\mathbb{D}_{0}\in\mathcal{L}(\mathbb{R},H_{-1})\). **Lemma 5.1**.: \(B\) _is an admissible control operator for \(A\) and its adjoint \(B^{*}\in\mathcal{L}(D(A),\mathbb{R})\) is given by_ \[B^{*}\varphi=-\varphi(0).\] Proof.: See for example [21, Chap. 3]. We select the operator \(\mathscr{P}:=BM\in\mathcal{L}(H,H_{-1})\). We have the following lemma. **Lemma 5.2**.: 1. \(\mathscr{P}\) _is an admissible control operator for_ \(A\)_. 2. _The operator_ \(\mathscr{A}\) _coincides with the operator_ \[\mathfrak{A}:=A_{-1}+\mathscr{P},\qquad D(\mathfrak{A})=\{\varphi\in H,\quad(A_{-1}+\mathscr{P})\varphi\in H\}\,.\] _Moreover, it generates a strongly continuous semigroup_ \(\mathscr{T}:=\left(\mathscr{T}(t)\right)_{t\geq 0}\) _on_ \(H\)_._ Proof.: (i) Let \(\Phi_{t}^{\mathscr{P}}\) be the input map associated with \(A\) and \(\mathscr{P}\) \[\Phi_{t}^{\mathscr{P}}u:=\int_{0}^{t}\mathbb{T}_{-1}(t-s)\mathscr{P}u(s)ds,\qquad u\in\mathcal{U}_{T}.\] Since \(B\) is admissible for \(A\), we have \[\left\|\Phi_{t}^{\mathscr{P}}u\right\| =\left\|\int_{0}^{t}\mathbb{T}_{-1}(t-s)BMu(s)ds\right\| \leq\gamma(t)\|M\|\left(\int_{0}^{t}\|u(s)\|^{2}ds\right)^{\frac{1}{2}}\] for any \(u\in\mathcal{U}_{T}\). Thus, \(\mathscr{P}\) is an admissible control operator for \(A\). For (ii) see for example [17]. Now, let \(\mathscr{D}_{\lambda}\in\mathcal{L}(\mathbb{R},H)\), \(\lambda\in\rho(\mathscr{A})\), be the Dirichlet operator associated with \(\mathscr{A}_{m}\) and \(\mathscr{G}\). Then for any \(\alpha\in\mathbb{R}\) and \(\varphi\in H\) we have \[\mathscr{D}_{\lambda}\alpha=\varphi\iff\left\{\lambda\varphi-\varphi^{\prime\prime}=0\ \text{ in }[0,\pi],\quad\varphi^{\prime}(0)=M\varphi,\ \ \varphi^{\prime}(\pi)=\alpha\right\}.\] We set \(\mathscr{B}:=-\mathscr{A}_{-1}\mathscr{D}_{0}\in\mathcal{L}(\mathbb{R},H_{-1})\). Thus, the problem (5.1) is reformulated as \[\begin{cases}dX(t)=\mathscr{A}_{-1}X(t)dt+\mathscr{B}dW(t),&t\in[0,T],\\ X(0)=X_{0}.\end{cases} \tag{5.2}\] **Lemma 5.3**.: \(\mathscr{B}\) _is an \(\mathcal{S}\)-admissible control operator for \(A\)._ Proof.: First we remark that \(D(\mathscr{A}^{*})\supseteq D(A^{*})=D(A)\), since \(A\) is self-adjoint. Using Lemmas 5.1 and 5.2, we get \[\langle\mathscr{B}\alpha,\varphi\rangle =-\langle\mathscr{A}_{-1}\mathscr{D}_{0}\alpha,\varphi\rangle =-\langle\mathscr{D}_{0}\alpha,\mathscr{A}^{*}\varphi\rangle =-\langle\mathscr{D}_{0}\alpha,A^{*}\varphi\rangle-\langle\mathscr{D}_{0}\alpha,M^{*}B^{*}\varphi\rangle =-\langle\mathscr{D}_{0}\alpha,A\varphi\rangle+\left(M\mathscr{D}_{0}\alpha\right)\varphi(0)\] for any \(\alpha\in\mathbb{R}\) and any \(\varphi\in D(A)\).
On the other hand, using integration by parts twice and the definition of \(\mathscr{D}_{0}\), we get that \[\langle\mathscr{D}_{0}\alpha,A\varphi\rangle=-\varphi(\pi)\alpha+\left(M\mathscr{D}_{0}\alpha\right)\varphi(0).\] Thus, \[\langle\mathscr{B}\alpha,\varphi\rangle=\varphi(\pi)\alpha,\] which means that the adjoint operator of \(\mathscr{B}\) is given by \[\mathscr{B}^{*}\phi=\phi(\pi),\quad\forall\phi\in D(A).\] We recall that the family \((\phi_{n})_{n\in\mathbb{N}}\), defined by \[\phi_{n}(x)=\sqrt{\frac{2}{\pi}}\cos\left(nx\right),\quad\forall n\in\mathbb{N},\ \ x\in(0,\pi),\] is an orthonormal basis in \(H\) formed by the eigenvectors of \(A\), and the corresponding eigenvalues are \(\lambda_{n}=-n^{2}\), with \(n\in\mathbb{N}\). Thus, the semigroup \(\mathbb{T}\) is given by \[\mathbb{T}(t)f=\sum_{n\in\mathbb{N}}e^{-n^{2}t}\langle f,\phi_{n}\rangle\phi_{n},\] and \(\mathscr{B}^{*}\) satisfies \(\mathscr{B}^{*}\phi_{n}=\sqrt{\frac{2}{\pi}}\cdot(-1)^{n}\). An easy calculation shows that \(\mathscr{B}^{*}\) is an admissible observation operator for \(A\). Then, by (iii) of Proposition 3.4, \(\mathscr{B}\) is \(\mathcal{S}\)-admissible for \(A\) if and only if \[\int_{0}^{T}\|\mathscr{B}^{*}_{\Lambda}\mathbb{T}^{*}(t)\|_{2}^{2}dt<\infty.\] For any \(T>0\), we have \[\int_{0}^{T}\|\mathscr{B}^{*}_{\Lambda}\mathbb{T}^{*}(t)\|_{2}^{2}dt =\sum_{n\in\mathbb{N}}\int_{0}^{T}|\mathscr{B}^{*}\mathbb{T}^{*}(t)\phi_{n}|^{2}dt =\sum_{n\in\mathbb{N}}\int_{0}^{T}|\mathscr{B}^{*}e^{-n^{2}t}\phi_{n}|^{2}dt =\frac{2}{\pi}\sum_{n\in\mathbb{N}}\int_{0}^{T}e^{-2n^{2}t}dt<\infty.\] Consequently, \(\mathscr{B}\) is an \(\mathcal{S}\)-admissible control operator for \(A\). **Theorem 5.4**.: _The problem_ (**HE**) _has a unique mild solution \(X\) given by_ \[X(t)=\mathscr{T}(t)X_{0}+\int_{0}^{t}\mathscr{T}_{-1}(t-s)\mathscr{B}dW(s),\qquad t\in[0,T],\] _for any \(X_{0}\in H\)._ Proof.: Using Lemma 5.2 and (5.2), the problem (**HE**) is reformulated as \[\begin{cases}dX(t)=\mathfrak{A}X(t)dt+\mathscr{B}dW(t),&t\in[0,T],\\ X(0)=X_{0}.\end{cases}\] The result then follows from Lemmas 5.2 and 5.3 and Corollary 4.2.
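As a quick numerical sanity check of the convergence claimed in Lemma 5.3, the following small sketch (our illustration, not part of the paper) truncates the eigenfunction expansion and evaluates \(\frac{2}{\pi}\sum_{n}\int_{0}^{T}e^{-2n^{2}t}dt\); the \(n=0\) term contributes \(\frac{2}{\pi}T\), and for \(n\geq 1\) the integral equals \(\frac{1-e^{-2n^{2}T}}{2n^{2}}\), so the tail is dominated by \(\sum_{n\geq 1}\frac{1}{2n^{2}}\) and the partial sums stabilize quickly.

```python
import math

def hs_norm_integral(T=1.0, n_max=10_000):
    """Truncation of (2/pi) * sum_{n>=0} int_0^T exp(-2 n^2 t) dt.
    For n >= 1 the integral equals (1 - exp(-2 n^2 T)) / (2 n^2)."""
    total = (2.0 / math.pi) * T  # n = 0 term
    for n in range(1, n_max + 1):
        total += (2.0 / math.pi) * (1.0 - math.exp(-2.0 * n * n * T)) / (2.0 * n * n)
    return total

for n_max in (10, 100, 1000, 10000):
    print(n_max, hs_norm_integral(T=1.0, n_max=n_max))
```

The limit is bounded by \(\frac{2}{\pi}\left(T+\frac{\pi^{2}}{12}\right)\), consistent with the Hilbert-Schmidt admissibility of \(\mathscr{B}\); by contrast, in Example 3.9 the analogous sum carries no decay in \(n\) and diverges.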
2305.01685
Baryons, multi-hadron systems, and composite dark matter in non-relativistic QCD
We provide a formulation of potential non-relativistic quantum chromodynamics (pNRQCD) suitable for calculating binding energies and matrix elements of generic hadron and multi-hadron states made of heavy quarks in $SU(N_c)$ gauge theory using quantum Monte Carlo techniques. We compute masses of quarkonium and triply-heavy baryons in order to study the perturbative convergence of pNRQCD and validate our numerical methods. Further, we study $SU(N_c)$ models of composite dark matter and provide simple power series fits to our pNRQCD results that can be used to relate dark meson and baryon masses to the fundamental parameters of these models. For many systems comprised entirely of heavy quarks, the quantum Monte Carlo methods employed here are less computationally demanding than lattice field theory methods, although they introduce additional perturbative approximations. The formalism presented here may therefore be particularly useful for predicting composite dark matter properties for a wide range of $N_c$ and heavy fermion masses.
Benoît Assi, Michael L. Wagman
2023-05-02T18:00:05Z
http://arxiv.org/abs/2305.01685v2
# Baryons, multi-hadron systems, and composite dark matter in non-relativistic QCD ###### Abstract We provide a formulation of potential non-relativistic quantum chromodynamics (pNRQCD) suitable for calculating binding energies and matrix elements of generic hadron and multi-hadron states made of heavy quarks in \(SU(N_{c})\) gauge theory using quantum Monte Carlo techniques. We compute masses of quarkonium and triply-heavy baryons in order to study the perturbative convergence of pNRQCD and validate our numerical methods. Further, we study \(SU(N_{c})\) models of composite dark matter and provide simple power series fits to our pNRQCD results that can be used to relate dark meson and baryon masses to the fundamental parameters of these models. For many systems comprised entirely of heavy quarks, the quantum Monte Carlo methods employed here are less computationally demanding than lattice field theory methods, although they introduce additional perturbative approximations. The formalism presented here may therefore be particularly useful for predicting composite dark matter properties for a wide range of \(N_{c}\) and heavy fermion masses. + Footnote †: preprint: FERMILAB-PUB-23-127-T ###### Contents * I Introduction * II pNRQCD for multi-hadron systems * II.1 pNRQCD formalism * II.2 Quark-antiquark potential * II.3 Quark-quark potential * II.4 Three-quark potentials * II.5 Four- and more-quark potentials * II.6 pNRQCD Hamiltonian * III Many-body methods * III.1 Variational Monte-Carlo * III.2 Green's Function Monte Carlo * IV Coulombic trial wavefunctions * IV.1 Quarkonium * IV.2 Baryons * V QCD binding energy results * V.1 Heavy quarkonium * V.2 Triply-heavy baryons * VI Dark hadrons * VI.1 Dark Mesons * VI.2 Dark Baryons * VII Outlook ## I Introduction Heavy quark systems provide a theoretically clean laboratory for studying quantum chromodynamics (QCD) because of the large separation of scales between the heavy quark mass and the confinement scale. Spurred initially by the discovery of the doubly-heavy mesons \(J/\psi\) [1; 2] and \(\Upsilon\) [3], the use of non-relativistic (NR) effective field theory (EFT) to study heavy quarkonium in QCD [4; 5; 6; 7], analogous to the previous treatment of positronium in NR quantum electrodynamics (NRQED) [8], has been investigated extensively [5; 6; 9; 10; 11; 12; 13]. Prior to this first-principles treatment of quarkonia with EFTs derived from QCD, studies mainly relied on potential quark models [14; 15; 16; 17; 18; 19]. Such models rely on phenomenological input whose connection with QCD parameters is obscure and thus cannot be systematically improved. Beyond quarkonium, there has been recent excitement about understanding the properties of baryons and exotic hadrons containing heavy quarks, including tetraquarks, hadronic molecules, hybrid states containing explicit gluon degrees of freedom, and more [20; 21; 22; 23; 24]. Theoretically calculating the spectra of baryons and exotic states experimentally observed so far, and predicting the presence of other states, provides tests of our understanding of QCD in more complex systems than quarkonium. In particular, doubly-heavy baryons have recently been experimentally observed [25; 26; 27], and triply-heavy baryons, although not yet observed experimentally, have long been of theoretical interest as probes of confining QCD dynamics that are free from light quark degrees of freedom requiring relativistic treatment [28].
Additionally, one can consider generic composite states analogous to those of QCD, bound under a confining \(SU(N_{c})\) gauge theory. Such states have received particular attention recently as attractive dark matter (DM) candidates [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. Motivated by the stability of the proton in the Standard Model (SM), a dark sector with non-Abelian gauge interactions can give rise to a stable, neutral dark matter candidate. Simple models of an \(SU(N_{c})\) dark sector with one heavy quark can provide UV-complete and phenomenologically viable models of composite DM [50; 51]. It would therefore be interesting to probe masses, lifetimes, and self-interactions in composite DM theory to make predictions for experiments. In this work, we study the description of generic hadronic bound states composed entirely of heavy quarks that are well-described by the EFT of potential NRQCD (pNRQCD) [4; 6; 8; 9; 52]. This EFT takes advantage of the experimental evidence that heavy quark bound-state splittings are smaller than the quark mass, \(m_{Q}\). Thus, all dynamical scales are small relative to \(m_{Q}\). The quark velocity is therefore small, \(v\ll 1\), and one can exploit the hierarchy of scales \(m_{Q}\gg p_{Q}\sim m_{Q}v\gg E_{Q}\sim m_{Q}v^{2}\) in the system [8]. NRQCD is obtained from QCD by integrating out the hard scale, \(m_{Q}\), and pNRQCD is obtained from integrating out the soft scale \(p_{Q}\sim m_{Q}v\). The inverse of the soft scale gives the typical size of the bound state, analogous to the Bohr radius in the Hydrogen atom. In QCD, one has to consider the confinement scale \(\sim\Lambda_{\rm QCD}\), below which non-perturbative effects other than resummation of potential gluons must be included. Here, we will work in the so-called weak-coupling regime [13], \(m_{Q}v\gg\Lambda_{\rm QCD}\), which is valid for the treatment of top and bottom bound states and starts to become less reliable for charm-like masses and below. Both the weak- and strong-coupling regimes can be studied using lattice QCD (LQCD), and in particular lattice calculations of NRQCD are useful for studying heavy quark systems. The advantage of using pNRQCD to study the weak-coupling regime is that precise results can be obtained using modest computational resources: the quantum Monte Carlo (QMC) calculations below use ensembles of 5,000 configurations with \(3N_{Q}\) degrees of freedom representing the spatial coordinates of \(N_{Q}\) heavy quarks, in contrast to LQCD calculations that commonly use ensembles of hundreds or thousands of configurations with \(10^{8}\) or more degrees of freedom representing the quark and gluon fields at each lattice site. In many previous studies of pNRQCD, the main focus was heavy quarkonia in QCD [9; 10; 13; 53; 54]. The heavy quarkonium spectrum, as well as other properties such as decay widths, were studied in detail to N\({}^{3}\)LO. Ultrasoft effects were also considered, as they play a role beyond NNLO [11]. Additionally, pNRQCD was extended to doubly- and triply-heavy baryons in QCD [55]. The three-quark potential was also recently determined for baryon states and was shown to contribute at NNLO [56; 57]. In this work, we employ a pNRQCD formalism previously developed for the case of heavy quarkonia [9; 13], in which we take the operators to depend on heavy quark and antiquark fields. In particular, we generalize this formalism to apply to arbitrary hadronic systems comprised entirely of heavy quarks.
Thus, we can probe exotic states and multi-hadron systems such as tetraquarks, meson-meson molecules, and the deuteron in the heavy quark limit. Moreover, we generalize all the components of the EFT to treat arbitrary bound systems of heavy fermions charged under \(SU(N_{c})\). We determine the operators and matching coefficients describing the action of two- and three-quark potentials on arbitrary hadronic states up to NNLO for general \(N_{c}\). Our formalism is then applicable to extract properties of the bound states such as binding energies and matrix elements with the use of variational Monte-Carlo (VMC) and Green's function Monte-Carlo (GFMC) methods [58; 59; 60]. Both VMC and GFMC are state-of-the-art in nuclear physics simulations, and we apply them to study heavy-quark bound states in QCD and \(SU(N_{c})\) gauge theories in general. Recently, VMC was employed to determine the binding energy and mass spectra of triply-heavy bottom and charm baryons in QCD [61; 62]. The results are mass-scheme dependent, and in this work, we tie our heavy quark mass to the spin-averaged mass of the measured \(1S\) state of the associated quarkonia. After tuning the charm and bottom quark masses to reproduce the quarkonia masses, we predict the mass spectrum of triply-heavy bottom and charmed baryons and compare with previous LQCD results for the same masses. As for the dark sector, we study the spectra of heavy dark mesons and baryons in \(SU(N_{c})\) gauge theory for \(N_{c}\in\{3,\ldots,6\}\) and extrapolate to large \(N_{c}\). We demonstrate that QMC calculations using pNRQCD can provide predictions for composite DM observables that enable efficient scanning over a wide range of mass scales. The computational simplicity of this approach is beneficial for studying composite DM, in which the fundamental parameters of the underlying theory are not yet known. Further, we fit our QMC pNRQCD results for dark meson and baryon masses to power series in the dark strong coupling constant and \(1/N_{c}\) that provide analytic approximations that can be used straightforwardly in phenomenological studies of composite DM. The remainder of this work is organized as follows. Section II introduces pNRQCD in a formulation suitable for studying multi-hadron systems. Section III reviews QMC methods that can be used to compute matrix elements of the Hamiltonian and other operators. In Section IV, we describe and justify the choice of initial trial wavefunctions used as inputs for VMC and GFMC calculations of heavy quarkonium and triply-heavy baryons. Results of these calculations for heavy mesons and baryons in QCD are described in Section V, and results for \(SU(N_{c})\) dark mesons and baryons are described in Section VI. We discuss some prospects for future investigations in Section VII. ## II PNRQCD for multi-hadron systems \(SU(N_{c})\) gauge theory with \(n_{f}\) light fermions and \(n_{h}\) heavy fermions is a straightforward generalization of QCD at the perturbative level. In this section, this theory will be referred to as QCD with "quark" and "gluon" degrees of freedom; however, all the formalism we present is relevant for the more general case of \(SU(N_{c})\) gauge theory discussed for dark hadrons in Sec. VI. 
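As a preview of the many-body methods of Sec. III, the following is a minimal VMC sketch (our illustration with hypothetical parameter values, not the calculation performed below) for a single Coulombic two-body bound state: Metropolis sampling of \(|\psi_{T}|^{2}\) for \(\psi_{T}(r)=e^{-ar}\) and averaging the local energy \(E_{L}=-\frac{1}{2\mu}\frac{\nabla^{2}\psi_{T}}{\psi_{T}}-\frac{C}{r}\), which is minimized at \(a=\mu C\), where it reproduces the exact Coulomb ground-state energy \(-\mu C^{2}/2\).

```python
import numpy as np

def vmc_coulomb_energy(a, mu=1.0, C=1.0, n_walkers=5000, n_steps=400, step=0.5, seed=0):
    """VMC estimate of <H> for psi_T(r) = exp(-a r) with H = -lap/(2 mu) - C/r.
    For this trial state lap(psi)/psi = a^2 - 2a/r, so
    E_L(r) = -(a^2 - 2a/r)/(2 mu) - C/r."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=1.0 / a, size=(n_walkers, 3))   # initial walkers
    r = np.linalg.norm(x, axis=1)
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=step, size=x.shape)
        r_new = np.linalg.norm(x_new, axis=1)
        # Metropolis accept/reject with ratio |psi(r_new)/psi(r)|^2 = exp(-2a(r_new - r))
        accept = rng.random(n_walkers) < np.exp(-2.0 * a * (r_new - r))
        x[accept], r[accept] = x_new[accept], r_new[accept]
    e_local = -(a * a - 2.0 * a / r) / (2.0 * mu) - C / r
    return e_local.mean(), e_local.std() / np.sqrt(n_walkers)

for a in (0.6, 1.0, 1.4):   # a = mu*C = 1.0 is the variational optimum
    print(a, vmc_coulomb_energy(a))
```

The same strategy, with correlated multi-quark trial wavefunctions and the pNRQCD potentials below in place of the single Coulomb term, underlies the VMC and GFMC calculations of Secs. III-V.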
### pNRQCD formalism

The QCD Lagrange density is given by \[\mathcal{L}_{\rm QCD}=\mathcal{L}_{g}+\mathcal{L}_{l}+\mathcal{L}_{h}, \tag{1}\] where the gluon, light quark, and heavy quark terms are \[\mathcal{L}_{g}(x) =-\frac{1}{2}\operatorname{tr}\left[G_{\mu\nu}(x)G^{\mu\nu}(x)\right], \tag{2}\] \[\mathcal{L}_{l}(x) =\sum_{f=1}^{n_{f}}\overline{q}_{f}(x)[i\not{D}-m_{f}]q_{f}(x),\] (3) \[\mathcal{L}_{h}(x) =\sum_{h=1}^{n_{h}}\overline{Q}_{h}(x)[i\not{D}-m_{h}]Q_{h}(x), \tag{4}\] where \(G_{\mu\nu}=[D_{\mu},D_{\nu}]/(ig_{s})=G_{\mu\nu}^{a}T^{a}\) is the gluon field-strength tensor, \(D_{\mu}=\partial_{\mu}+ig_{s}A_{\mu}^{a}T^{a}\) is the gauge-covariant derivative, \(A_{\mu}^{a}\) is the gluon field, \(g_{s}\) is the strong coupling, \(m_{f}\) and \(m_{h}\) are light and heavy quark masses respectively, and the \(T^{a}\) are generators of \(\mathfrak{su}(N_{c})\) normalized as \(\operatorname{tr}[T^{a}T^{b}]=\frac{1}{2}\delta^{ab}\). Light quarks with \(m_{f}\ll\Lambda_{\rm QCD}\) contribute to the RG evolution of \(\alpha_{s}(\mu)=g_{s}(\mu)^{2}/(4\pi)\) and will be approximated as massless below. Heavy quarks with \(m_{h}\gg\Lambda_{\rm QCD}\) have negligible effects on the RG evolution of \(\alpha_{s}\) for \(\mu\lesssim m_{h}\), and in systems where the heavy quarks are nonrelativistic, EFT methods can be used to expand observables in power series of \(\Lambda_{\rm QCD}/m_{h}\). The \(\overline{\rm MS}\) renormalization scheme is used throughout this work for simplicity. In the \(\overline{\rm MS}\) scheme, effective interactions between heavy quarks depend on \(n_{f}\) and \(n_{h}\) only through the number of flavors with mass less than \(\mu\) entering the RG evolution of \(\alpha_{s}(\mu)\) and the values of other EFT couplings. Defining this number to be \(n_{f}\) leads to a decoupling of the heavy quarks from one another, and we therefore omit heavy flavor indices and denote the heavy quark mass by \(m_{Q}\) below. NRQCD is the EFT employed to study systems of two or more heavy quarks. The Lagrangian is determined by integrating out degrees of freedom with energies of the order of the heavy-quark masses [4; 63; 64]. The Lagrangian operators are determined by QCD symmetries and are organized as a power series in the inverse quark mass \(m_{Q}\), with \(m_{Q}\gg\Lambda_{\rm QCD}\). The NRQCD Lagrangian including light quarks reads [8; 12], \[\mathcal{L}_{\rm NRQCD}=\mathcal{L}_{\psi}+\mathcal{L}_{\chi}+\mathcal{L}_{\psi\chi}+\mathcal{L}_{\psi\psi}+\mathcal{L}_{\chi\chi}+\mathcal{L}_{g}+\mathcal{L}_{l}, \tag{5}\] where \(\psi\) is the Pauli spinor field that annihilates a heavy quark and \(\chi\) is the Pauli spinor field that creates a heavy antiquark (with charge conjugate \(\chi_{c}=-i\sigma_{2}\chi^{*}\)); these are related to the QCD heavy quark field by \[Q(x)=\sqrt{Z}\begin{pmatrix}e^{-im_{Q}t}\psi(x)\\ e^{im_{Q}t}\chi(x)\end{pmatrix}, \tag{6}\] in the Dirac basis in which \(\gamma^{0}=\text{diag}(1,1,-1,-1)\); for further discussion see Ref. [13]. In Eq. (5), the NRQCD gauge and light quark terms \(\mathcal{L}_{g}^{\rm NRQCD}\) and \(\mathcal{L}_{l}^{\rm NRQCD}\) are identical to their QCD counterparts \(\mathcal{L}_{g}\) and \(\mathcal{L}_{l}\) in Eqs. (2) and (3) up to \(\mathcal{O}(1/m_{Q}^{2})\) corrections. Interaction terms with light degrees of freedom are suppressed by \(\mathcal{O}(\alpha_{s}/m_{Q}^{2})\) and are given in Ref. [5].
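As a concrete illustration of the running and decoupling just described, the following minimal sketch integrates the two-loop RG equation for \(\alpha_{s}(\mu)\) at fixed \(n_{f}\), using the coefficients \(\beta_{0}\) and \(\beta_{1}\) quoted in Eqs. (21)-(22) below. The reference input \(\alpha_{s}(M_{Z})=0.1179\) and the neglect of flavor-threshold matching are simplifying assumptions made only for illustration.

```python
import numpy as np

def beta0(nf, Nc=3):
    # Eq. (21) with C_A = Nc and T_F = 1/2
    return 11.0 / 3.0 * Nc - 2.0 / 3.0 * nf

def beta1(nf, Nc=3):
    # Eq. (22) with C_F = (Nc^2 - 1) / (2 Nc) and T_F = 1/2
    CF = (Nc**2 - 1) / (2.0 * Nc)
    return 34.0 / 3.0 * Nc**2 - 2.0 * CF * nf - 10.0 / 3.0 * Nc * nf

def run_alpha_s(alpha_ref, mu_ref, mu, nf, n_steps=10000):
    """Two-loop running of alpha_s at fixed nf, integrating
    d(alpha)/d(ln mu^2) = -alpha^2 * (b0 + b1 * alpha) with RK4."""
    b0 = beta0(nf) / (4.0 * np.pi)
    b1 = beta1(nf) / (4.0 * np.pi) ** 2
    rhs = lambda a: -a**2 * (b0 + b1 * a)
    h = (np.log(mu**2) - np.log(mu_ref**2)) / n_steps
    a = alpha_ref
    for _ in range(n_steps):
        k1 = rhs(a)
        k2 = rhs(a + 0.5 * h * k1)
        k3 = rhs(a + 0.5 * h * k2)
        k4 = rhs(a + h * k3)
        a += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

# Illustration: evolve alpha_s(M_Z) = 0.1179 down to a bottomonium-like soft
# scale of 1.5 GeV with nf = 4, ignoring threshold matching; the result is
# roughly 0.4, large enough that higher-order potential corrections matter.
print(run_alpha_s(0.1179, 91.19, 1.5, nf=4))
```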
The effects of the heavy-light interaction terms mentioned above on heavy-heavy interactions are suppressed by the square of this factor, i.e., they are \(\mathcal{O}(\alpha_{s}^{2}/m_{Q}^{4})\), and are neglected below; however, these interactions could be relevant for studies of heavy-light systems. The heavy quark one-body term is given by \[\mathcal{L}_{\psi}=\psi^{\dagger}\left\{iD_{0}+c_{2}\frac{\mathbf{D}^{2}}{2m_{Q}}+c_{4}\frac{\mathbf{D}^{4}}{8m_{Q}^{3}}+c_{F}g_{s}\frac{\mathbf{\sigma}\cdot\mathbf{B}}{2m_{Q}}+c_{D}g_{s}\frac{[\mathbf{D}\cdot\mathbf{E}]}{8m_{Q}^{2}}+ic_{S}g_{s}\frac{\mathbf{\sigma}\cdot(\mathbf{D}\times\mathbf{E}-\mathbf{E}\times\mathbf{D})}{8m_{Q}^{2}}\right\}\psi, \tag{7}\] where \(c_{2}=c_{4}=1\) is guaranteed by reparameterization invariance [65], and the remaining Wilson coefficients to \(\mathcal{O}(1/m_{Q}^{3})\) are given in Ref. [64]. The corresponding heavy antiquark one-body term \(\mathcal{L}_{\chi}\) is equal to \(\mathcal{L}_{\psi}\) with \(\psi\to\chi\). There are also four-quark operators involving the heavy quark and antiquark [5; 52], \[\mathcal{L}_{\psi\chi}=\frac{d_{\mathbf{11}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\psi_{j}\chi_{k}^{\dagger}\chi_{l}\,\delta_{ij}\delta_{kl}+\frac{d_{\mathbf{13}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\mathbf{\sigma}\psi_{j}\cdot\chi_{k}^{\dagger}\mathbf{\sigma}\chi_{l}\,\delta_{ij}\delta_{kl}+\frac{d_{\mathbf{81}}}{m_{Q}^{2}}\psi_{i}^{\dagger}T_{ij}^{a}\psi_{j}\,\chi_{k}^{\dagger}T_{kl}^{a}\chi_{l}+\frac{d_{\mathbf{83}}}{m_{Q}^{2}}\psi_{i}^{\dagger}T_{ij}^{a}\mathbf{\sigma}\psi_{j}\cdot\chi_{k}^{\dagger}T_{kl}^{a}\mathbf{\sigma}\chi_{l}, \tag{8}\] as well as operators involving only quarks or only antiquarks, \[\mathcal{L}_{\psi\psi}=\frac{d_{\mathbf{31}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\psi_{j}\,\psi_{k}^{\dagger}\psi_{l}\,\epsilon_{ikm}\epsilon_{jlm}+\frac{d_{\mathbf{33}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\mathbf{\sigma}\psi_{j}\cdot\psi_{k}^{\dagger}\mathbf{\sigma}\psi_{l}\,\epsilon_{ikm}\epsilon_{jlm}+\frac{d_{\mathbf{61}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\psi_{j}\,\psi_{k}^{\dagger}\psi_{l}(\delta_{il}\delta_{jk}+\delta_{jl}\delta_{ik})+\frac{d_{\mathbf{63}}}{m_{Q}^{2}}\psi_{i}^{\dagger}\mathbf{\sigma}\psi_{j}\cdot\psi_{k}^{\dagger}\mathbf{\sigma}\psi_{l}(\delta_{il}\delta_{jk}+\delta_{jl}\delta_{ik}),\qquad\mathcal{L}_{\chi\chi}=\mathcal{L}_{\psi\psi}(\psi\leftrightarrow\chi_{c}). \tag{9}\] The Wilson coefficients \(d_{rr^{\prime}}\), subscripted by the color and spin representations, are given for both the equal and unequal mass cases in Ref. [52]. The covariant derivative is \(D^{\mu}=\partial^{\mu}+ig_{s}A^{a\mu}T^{a}\equiv(D_{t},-\mathbf{D})\), such that \(iD_{t}=i\partial_{t}-g_{s}A_{0}\) and \(i\mathbf{D}=i\mathbf{\partial}+g_{s}\mathbf{A}\). The chromoelectric and chromomagnetic fields are defined as \(B^{i}=\frac{i}{2g_{s}}\epsilon^{ijk}[D_{j},D_{k}]\) and \(\mathbf{E}=-\frac{i}{g_{s}}[D_{t},\mathbf{D}]\), respectively. The matching coefficients \(c_{i}\) and \(d_{rr^{\prime}}\) for the equal and unequal mass cases are known to two- and one-loop order in QCD and the SM, respectively [66; 52; 67]. Note that Eqs. (7) and (8) are constructed by including all parity-preserving, rotationally invariant, Hermitian combinations of \(iD_{t}\), \(\mathbf{D}\), \(\mathbf{E}\), \(i\mathbf{B}\), and \(i\mathbf{\sigma}\).
Although NRQCD is a powerful tool for studying heavy quarkonium, it does not exploit the full hierarchy of scales in such a system, namely the momentum \(|\mathbf{p}|\sim m_{Q}|\mathbf{v}|\ll m_{Q}\) and the binding energy \(E\sim m_{Q}|\mathbf{v}|^{2}\ll|\mathbf{p}|\). As we are interested in physics at the scale of the binding energies, we can further expand NRQCD in powers of \(E/|\mathbf{p}|\). The resulting EFT is known as potential NRQCD (pNRQCD) [9; 52]. Interactions in the pNRQCD Lagrangian that are suppressed by powers of \(E/|\mathbf{p}|\) are local in time but non-local in space and are therefore equivalent to nonrelativistic (two- or more-body) potentials. Non-potential quark-gluon interactions are also present in pNRQCD but are suppressed by powers of \(\alpha_{s}(\mu)\). The renormalization scale \(\mu\) should ideally be chosen in the range \(|\mathbf{p}|<\mu<m_{Q}\) for typical momentum scales, since logarithms of \(|\mathbf{p}|/\mu\) arise in matching NRQCD to pNRQCD and logarithms of \(m_{Q}/\mu\) arise in matching QCD to NRQCD. There are two different kinematic regions with different pNRQCD descriptions: the weak- (\(|\mathbf{p}|\gg\Lambda_{\rm QCD}\)) and strong- (\(|\mathbf{p}|\sim\Lambda_{\rm QCD}\)) coupling regimes. In the strong-coupling regime, matching between NRQCD and pNRQCD must be performed non-perturbatively and has been studied using lattice QCD results in matching calculations to determine the pNRQCD potentials; for a review see Ref. [5]. In this work, we will consider only the weak-coupling regime, where matching between NRQCD and pNRQCD can be performed perturbatively in a dual expansion in \(\alpha_{s}\) and \(1/m_{Q}\), as reviewed in Ref. [13]. Weak-coupling pNRQCD has been used extensively to study heavy quarkonium, with the degrees of freedom typically taken to be a composite field describing the heavy \(Q\overline{Q}\) system, light quarks, and gluons. Analogous composite \(QQQ\) fields have been used in pNRQCD studies of baryons [55; 56]. It is also possible to use the nonrelativistic quark spinor degrees of freedom of NRQCD as the heavy quark degrees of freedom of pNRQCD [13]. This latter choice of degrees of freedom is not commonly used; however, it permits a unified construction of the pNRQCD operators relevant for describing arbitrary multi-hadron states composed of heavy quarks, and the construction of the pNRQCD Lagrangian with explicit heavy quark degrees of freedom is therefore pursued below. With this choice of degrees of freedom, the fields of pNRQCD are identical to those of NRQCD. The theories differ in that pNRQCD includes spatially non-local heavy quark "potential" interactions in its Lagrangian, \[L_{\rm pNRQCD}=L_{\rm NRQCD}^{\rm us}+L_{\rm pot}. \tag{10}\] The potential piece, \(L_{\rm pot}\), is given by a sum of a quark-antiquark potential as well as quark-quark, three-quark, and higher-body potentials relevant for baryon and multi-hadron systems composed of heavy quarks, \[L_{\rm pot}=L_{\psi\chi}^{\rm pot}+L_{\psi\psi}^{\rm pot}+L_{3\psi}^{\rm pot}+\ldots. \tag{11}\] The different terms in \(L_{\rm pot}\) will be discussed below.
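To make the hierarchy \(m_{Q}\gg m_{Q}v\gg m_{Q}v^{2}\) quantitative, one can estimate \(v\) self-consistently from the Bohr-model relation for a Coulombic bound state, \(v\sim C_{F}\alpha_{s}(m_{Q}v)/2\). The sketch below solves this fixed-point condition with damped iteration and a one-loop coupling; the quark masses and \(\Lambda_{\rm QCD}\approx 0.3\) GeV are illustrative assumptions rather than fitted inputs.

```python
import math

def alpha_s_1loop(mu, Lambda=0.3, nf=4):
    """One-loop coupling; illustrative only (no threshold matching)."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (4.0 * math.pi)
    return 1.0 / (b0 * math.log(mu**2 / Lambda**2))

def bohr_velocity(mQ, CF=4.0 / 3.0, damping=0.5):
    """Solve v = CF * alpha_s(mQ * v) / 2 by damped fixed-point iteration."""
    v = 0.3  # initial guess
    for _ in range(200):
        v = (1 - damping) * v + damping * 0.5 * CF * alpha_s_1loop(mQ * v)
    return v

for name, mQ in [("charm", 1.5), ("bottom", 4.7), ("top", 173.0)]:
    v = bohr_velocity(mQ)
    print(f"{name}: v ~ {v:.2f}, p ~ {mQ * v:.2f} GeV, E ~ {mQ * v**2:.3f} GeV")
```

For bottom-like masses this gives \(v^{2}\approx 0.1\), and for charm-like masses \(v^{2}\approx 0.3\), consistent with standard quarkonium power counting and with the observation above that the weak-coupling regime becomes marginal near the charm mass.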
The remaining term \(L_{\rm NRQCD}^{\rm us}(t)\) in Eq. (10) corresponds to \(L_{\rm NRQCD}(t)\equiv\int d^{3}x\;\mathcal{L}_{\rm NRQCD}(t,\mathbf{x})\) with only ultrasoft gluon modes included: in other words, a multipole expansion of the quark-gluon vertices is performed, and contributions that are not suppressed by \(E/|\mathbf{p}|\) are explicitly removed, since they correspond to the soft modes whose effects are described by \(L_{\rm pot}\) [6; 11; 68]. The remaining subleading multipole contributions correspond to ultrasoft modes, and since by construction they do not include infrared singular contributions, they can be included perturbatively. Ultrasoft contributions to meson and baryon masses in pNRQCD have been studied and found to be N\({}^{3}\)LO effects, suppressed by \(\mathcal{O}(\alpha_{s}^{3})\) compared to the LO binding energies [6; 11]. The state-dependence of ultrasoft gluon effects arises through integrals over coordinate space involving the initial- and final-state wavefunctions, and these effects are therefore N\({}^{3}\)LO for arbitrary hadron or multi-hadron systems. Ultrasoft gluon effects will be neglected below since we work to NNLO accuracy. We note, however, that they could be included as perturbative corrections to the binding energies computed in Secs. V-VI by determining the baryonic analogs of the ultrasoft gluon corrections to quarkonium energy levels, as discussed in Refs. [11; 13].

### Quark-antiquark potential

A sum of color-singlet and color-adjoint terms gives the quark-antiquark potential for arbitrary \(N_{c}\), \[L_{\psi\chi}^{\rm pot}=-\int d^{3}\mathbf{r}_{1}d^{3}\mathbf{r}_{2}\,\psi_{i}^{\dagger}(t,\mathbf{r}_{1})\chi_{j}(t,\mathbf{r}_{2})\chi_{k}^{\dagger}(t,\mathbf{r}_{2})\psi_{l}(t,\mathbf{r}_{1})\times\left[\frac{1}{N_{c}}\delta_{ij}\delta_{kl}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{kl}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{12})\right], \tag{12}\] where \(\mathbf{r}_{12}\equiv\mathbf{r}_{1}-\mathbf{r}_{2}\), \(T_{F}=1/2\), and the fermion spin indices are implicitly contracted with the indices of the potential. The potential depends on the renormalization scale \(\mu\) as well as on \(\mathbf{p}_{a}=-i\mathbf{\nabla}_{a}\) and \(\mathbf{S}_{a}=\mathbf{\sigma}_{a}/2\), but these dependencies will generally be kept implicit except where otherwise noted. Here and below, \(i,j,k,\ldots\) denote fundamental color indices, and \(a,b,c,\ldots\) denote adjoint color indices. The color-singlet potential is expanded as a power series in \(1/m_{Q}\), \[V_{\mathbf{1}}^{\psi\chi}(r)=V_{\mathbf{1}}^{\psi\chi,(0)}(r)+\frac{V_{\mathbf{1}}^{\psi\chi,(1)}(r)}{m_{Q}}+\frac{V_{\mathbf{1}}^{\psi\chi,(2)}(r)}{m_{Q}^{2}}+\mathcal{O}(1/m_{Q}^{3}). \tag{13}\] The \(\mathcal{O}(1/m_{Q}^{0})\) and \(\mathcal{O}(1/m_{Q})\) potentials are given by \[V_{\mathbf{1}}^{\psi\chi,(0)}(r,\mu)=-C_{F}\frac{\alpha_{V_{\mathbf{1}}}(r,\mu)}{r}, \tag{14}\] \[V_{\mathbf{1}}^{\psi\chi,(1)}(r,\mu)=-\frac{C_{F}C_{A}}{2r^{2}}D^{(1)}(\mu), \tag{15}\] where the \(\mu\) dependence is shown explicitly, the perturbative expansion of \(\alpha_{V_{\mathbf{1}}}(r,\mu)\) is discussed below, \(C_{A}=N_{c}\), \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\), and \(D^{(1)}(\mu)=\alpha_{s}(\mu)^{2}+\mathcal{O}(\alpha_{s}^{3})\) in Coulomb gauge, as discussed in Refs. [5; 69].
At \(\mathcal{O}(1/m_{Q}^{2})\), both spin-independent and spin-dependent potentials arise, \[V_{\mathbf{1}}^{\psi\chi,(2)}(r)=V_{\mathbf{1},{\rm SI}}^{\psi\chi,(2)}(r)+V_{\mathbf{1},{\rm SD}}^{\psi\chi,(2)}(r), \tag{16}\] \[V_{\mathbf{1},{\rm SI}}^{\psi\chi,(2)}(r)=-\frac{C_{F}D_{1}^{(2)}}{2}\left\{\frac{1}{r},\mathbf{p}^{2}\right\}+\frac{C_{F}D_{2}^{(2)}}{2r^{3}}\mathbf{L}^{2}+\pi C_{F}D_{\delta}^{(2)}\delta^{(3)}(\mathbf{r}), \tag{17}\] \[V_{\mathbf{1},{\rm SD}}^{\psi\chi,(2)}(r)=\frac{4\pi C_{F}D_{S^{2}}^{(2)}}{3}\mathbf{S}^{2}\delta^{(3)}(\mathbf{r})+\frac{3C_{F}D_{LS}^{(2)}}{2r^{3}}\mathbf{L}\cdot\mathbf{S}+\frac{C_{F}D_{S_{12}}^{(2)}}{4r^{3}}\mathbf{S}_{12}(\hat{\mathbf{r}}), \tag{18}\] where \(\mathbf{S}=\mathbf{S}_{1}+\mathbf{S}_{2}\), \(\mathbf{S}_{12}(\hat{\mathbf{r}})=3\,\hat{\mathbf{r}}\cdot\mathbf{\sigma}_{1}\,\hat{\mathbf{r}}\cdot\mathbf{\sigma}_{2}-\mathbf{\sigma}_{1}\cdot\mathbf{\sigma}_{2}\), \(\mathbf{L}=\mathbf{r}\times\mathbf{p}\), the overall factors of \(1/m_{Q}^{2}\) are supplied by the expansion in Eq. (13), and the \(D^{(2)}\) coefficients are given in Refs. [5; 70]. The singlet potential is known to N\({}^{3}\)LO in QCD and to NLO in the SM [10; 71]. The adjoint (octet for \(N_{c}=3\)) potential is also known to N\({}^{3}\)LO and is given in Ref. [72]. The potentials such as \(V_{\mathbf{1}}^{(0)}(r)\) appearing in Eqs. (14) and (15) are Wilson coefficients of pNRQCD, which can be obtained by matching with NRQCD. The color-singlet potential has been computed to N\({}^{3}\)LO in the \(\overline{\rm MS}\) scheme for the case of heavy quarks with equal masses (the unequal mass case is not fully known to \(\mathcal{O}(1/m_{Q}^{2})\), although various pieces have been computed [73]) and has the perturbative expansion \[\alpha_{V_{\mathbf{1}}}(r,\mu)=\alpha_{s}(\mu)\left(1+\sum_{n=1}^{3}\left(\frac{\alpha_{s}(\mu)}{4\pi}\right)^{n}\tilde{a}_{n}(r;\mu)\right), \tag{19}\] where \[\begin{split}\tilde{a}_{1}(r;\mu)&=a_{1}+2\beta_{0}\ln(r\mu e^{\gamma_{\rm E}}),\\ \tilde{a}_{2}(r;\mu)&=a_{2}+\frac{\pi^{2}}{3}\beta_{0}^{2}+(4a_{1}\beta_{0}+2\beta_{1})\ln(r\mu e^{\gamma_{\rm E}})+4\beta_{0}^{2}\ln^{2}(r\mu e^{\gamma_{\rm E}}),\\ \tilde{a}_{3}(r;\mu)&=a_{3}+a_{1}\beta_{0}^{2}\pi^{2}+\frac{5\pi^{2}}{6}\beta_{0}\beta_{1}+16\zeta_{3}\beta_{0}^{3}+\left(2\pi^{2}\beta_{0}^{3}+6a_{2}\beta_{0}+4a_{1}\beta_{1}+2\beta_{2}+\frac{16}{3}C_{A}^{3}\pi^{2}\right)\ln(r\mu e^{\gamma_{\rm E}})\\ &\quad+\left(12a_{1}\beta_{0}^{2}+10\beta_{0}\beta_{1}\right)\ln^{2}(r\mu e^{\gamma_{\rm E}})+8\beta_{0}^{3}\ln^{3}(r\mu e^{\gamma_{\rm E}}).\end{split} \tag{20}\] The coefficients up to N\({}^{3}\)LO are given in Ref. [10]. The numerical calculations presented below are carried out to NNLO accuracy and therefore require the coefficients \[\beta_{0}=\frac{11}{3}C_{A}-\frac{4}{3}T_{F}n_{f}, \tag{21}\] \[\beta_{1}=\frac{34}{3}C_{A}^{2}-4C_{F}T_{F}n_{f}-\frac{20}{3}C_{A}T_{F}n_{f}, \tag{22}\] and \[a_{1}=\frac{31}{9}C_{A}-\frac{20}{9}T_{F}n_{f}, \tag{23}\] \[\begin{split}a_{2}&=\left(\frac{4343}{162}+4\pi^{2}-\frac{\pi^{4}}{4}+\frac{22}{3}\zeta_{3}\right)C_{A}^{2}-\left(\frac{55}{3}-16\zeta_{3}\right)C_{F}T_{F}n_{f}\\ &\quad+\frac{400}{81}T_{F}^{2}n_{f}^{2}-\left(\frac{1798}{81}+\frac{56}{3}\zeta_{3}\right)C_{A}T_{F}n_{f}.\end{split} \tag{24}\] Note that in obtaining the pNRQCD Lagrangian presented here, a single fixed renormalization scale \(\mu\) is assumed to be used during the matching from QCD to NRQCD and from NRQCD to pNRQCD.
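For orientation, the NNLO truncation of Eqs. (19)-(24) is simple enough to evaluate directly. The following sketch implements \(\alpha_{V_{\mathbf{1}}}(r,\mu)\) through relative order \(\alpha_{s}^{2}\) together with the corresponding static potential of Eq. (14); the input value of \(\alpha_{s}(\mu)\) is an assumption to be supplied externally (e.g., from a running-coupling routine such as the sketch above).

```python
import numpy as np

GAMMA_E = 0.5772156649015329   # Euler-Mascheroni constant
ZETA_3 = 1.2020569031595943    # Apery's constant

def alpha_V1_NNLO(r, mu, alpha_s, Nc=3, nf=3):
    """Eq. (19) truncated at n = 2, with the coefficients of Eqs. (21)-(24).
    Units: r in GeV^-1, mu in GeV (hbar = c = 1)."""
    CA, CF, TF = float(Nc), (Nc**2 - 1) / (2.0 * Nc), 0.5
    b0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TF * nf
    b1 = 34.0 / 3.0 * CA**2 - 4.0 * CF * TF * nf - 20.0 / 3.0 * CA * TF * nf
    a1 = 31.0 / 9.0 * CA - 20.0 / 9.0 * TF * nf
    a2 = ((4343.0 / 162.0 + 4 * np.pi**2 - np.pi**4 / 4 + 22.0 / 3.0 * ZETA_3) * CA**2
          - (55.0 / 3.0 - 16 * ZETA_3) * CF * TF * nf
          + 400.0 / 81.0 * TF**2 * nf**2
          - (1798.0 / 81.0 + 56.0 / 3.0 * ZETA_3) * CA * TF * nf)
    L = np.log(r * mu) + GAMMA_E                 # ln(r mu e^gamma_E)
    at1 = a1 + 2 * b0 * L                        # tilde a_1 of Eq. (20)
    at2 = (a2 + np.pi**2 / 3.0 * b0**2           # tilde a_2 of Eq. (20)
           + (4 * a1 * b0 + 2 * b1) * L + 4 * b0**2 * L**2)
    x = alpha_s / (4 * np.pi)
    return alpha_s * (1 + x * at1 + x**2 * at2)

def V0_singlet(r, mu, alpha_s, Nc=3, nf=3):
    """LO-in-1/m_Q color-singlet potential, Eq. (14), at NNLO accuracy."""
    CF = (Nc**2 - 1) / (2.0 * Nc)
    return -CF * alpha_V1_NNLO(r, mu, alpha_s, Nc, nf) / r

# Example with the natural scale choice mu = 1/r, which sets L = gamma_E;
# the NLO and NNLO terms give sizable corrections, reflecting the well-known
# slow convergence of the static-potential series.
r = 0.2  # GeV^-1
print(V0_singlet(r, 1.0 / r, alpha_s=0.25))
```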
This single renormalization scale therefore acts as an effective cutoff both for heavy-quark momenta \(\mathbf{p}\) satisfying \(|\mathbf{p}|\ll m_{Q}\) and for the four-momenta \(\ell^{\mu}\) of the light degrees of freedom satisfying \(\ell^{\mu}\sim\mathbf{p}^{2}/m_{Q}\ll|\mathbf{p}|\). Further refinements of pNRQCD can be achieved by renormalization-group evolving the NRQCD Wilson coefficients to resum logarithms of \(m_{Q}/\mu\), or by performing renormalization-group improvement of the pNRQCD potentials to resum logarithms of \(|\mathbf{p}|/\mu\) [13]. However, such improvement is not straightforward to implement in the QMC approaches to studying multi-quark systems in pNRQCD discussed below, and it is not pursued in this work.

### Quark-quark potential

The quark-quark potential appearing in Eq. (11) is given by a sum of color-antisymmetric and color-symmetric terms, \[L_{\psi\psi}^{\rm pot}=-\int d^{3}\mathbf{r}_{1}d^{3}\mathbf{r}_{2}\,\psi_{i}^{\dagger}(t,\mathbf{r}_{1})\psi_{j}^{\dagger}(t,\mathbf{r}_{2})\psi_{k}(t,\mathbf{r}_{2})\psi_{l}(t,\mathbf{r}_{1})\times\left[\frac{N_{c}-1}{4}\left(\mathcal{F}_{ij}^{{\rm A}m}\right)^{*}\mathcal{F}_{kl}^{{\rm A}m}V_{\rm A}^{\psi\psi}(\mathbf{r}_{12})+\frac{1}{2}\left(\mathcal{F}_{ij}^{{\rm S}\eta}\right)^{*}\mathcal{F}_{kl}^{{\rm S}\eta}V_{\rm S}^{\psi\psi}(\mathbf{r}_{12})\right], \tag{25}\] where \(V_{\mathbf{\rho}}^{\psi\psi}(r)\) with \(\mathbf{\rho}={\rm A}\) and \(\mathbf{\rho}={\rm S}\) denotes the potential for quark-quark states in the antisymmetric and symmetric representations, respectively, presented explicitly below. The antisymmetric and symmetric color tensors \(\mathcal{F}_{ij}^{{\rm A}m}=-\mathcal{F}_{ji}^{{\rm A}m}\) and \(\mathcal{F}_{ij}^{{\rm S}\eta}=\mathcal{F}_{ji}^{{\rm S}\eta}\) are orthogonal and satisfy \(\mathcal{F}_{ij}^{{\rm A}m}\mathcal{F}_{ij}^{{\rm A}m^{\prime}}=\delta^{mm^{\prime}}\) and \(\mathcal{F}_{ij}^{{\rm S}\eta}\mathcal{F}_{ij}^{{\rm S}\eta^{\prime}}=\delta^{\eta\eta^{\prime}}\), where \(m\in\{1,\ldots,N_{c}(N_{c}-1)/2\}\) and \(\eta\in\{1,\ldots,N_{c}(N_{c}+1)/2\}\). Explicit representations of \(\mathcal{F}_{ij}^{{\rm A}m}\) and \(\mathcal{F}_{ij}^{{\rm S}\eta}\) can be found in Appendix B of Ref. [55] but will not be needed below; the products appearing in Eq. (25) are given by \[\mathcal{F}^{{\rm A}m}_{ij}\mathcal{F}^{{\rm A}m}_{kl}=\frac{1}{(N_{c}-1)!}\epsilon_{ij\sigma_{1}\ldots\sigma_{N_{c}-2}}\epsilon_{kl\sigma_{1}\ldots\sigma_{N_{c}-2}}, \tag{26}\] \[\mathcal{F}^{{\rm S}\eta}_{ij}\mathcal{F}^{{\rm S}\eta}_{kl}=\frac{1}{2}\left(\delta_{il}\delta_{jk}+\delta_{jl}\delta_{ik}\right). \tag{27}\] The coefficients of the operators appearing in Eq. (25) are chosen so that the action of \(L^{\rm pot}_{\psi\psi}\) on a quark-quark state in either the antisymmetric or symmetric representation, \(\ket{\psi_{i}(\mathbf{x}_{1})\psi_{j}(\mathbf{x}_{2})}\mathcal{F}^{{\rm A}m}_{ij}\) or \(\ket{\psi_{i}(\mathbf{x}_{1})\psi_{j}(\mathbf{x}_{2})}\mathcal{F}^{{\rm S}\eta}_{ij}\), is equivalent to multiplying that state by \(V^{\psi\psi}_{\rm A}(\mathbf{r}_{12})\) or \(V^{\psi\psi}_{\rm S}(\mathbf{r}_{12})\), respectively, as detailed in Sec. II.6 below. The pNRQCD quark-quark potentials \(V^{\psi\psi}_{\mathbf{\rho}}(r)\) have the same shape as the quark-antiquark potential up to NLO and differ only in the color factors governing the sign and normalization of the potential.
To determine the appropriate color factors, the tensors associated with the two quark fields and the two conjugate quark fields in each operator, \(\mathcal{F}^{\rho\zeta}_{ij}\) and \(\mathcal{F}^{\rho\zeta}_{kl}\), can be used as creation and annihilation operators for initial and final states in particular representations (here \(\zeta\) denotes a generic irrep row index). The color factor for a given representation is obtained by contracting these initial- and final-state color tensors with the color structure resulting from a given NRQCD Feynman diagram, denoted \(\mathcal{D}^{2\psi,d}_{ijkl}\), where the superscript \(d\) labels the particular diagram, and normalizing the result [74], \[\mathcal{C}^{\psi\psi,d}_{\mathbf{\rho}}=\frac{\mathcal{F}^{\rho\zeta}_{ij}\mathcal{D}^{2\psi,d}_{ijkl}\left(\mathcal{F}^{\rho\zeta}_{kl}\right)^{*}}{\sqrt{\left(\mathcal{F}^{\rho\zeta^{\prime}}_{i^{\prime}j^{\prime}}\right)^{*}\mathcal{F}^{\rho\zeta^{\prime}}_{i^{\prime}j^{\prime}}\left(\mathcal{F}^{\rho\zeta^{\prime\prime}}_{k^{\prime}l^{\prime}}\right)^{*}\mathcal{F}^{\rho\zeta^{\prime\prime}}_{k^{\prime}l^{\prime}}}}. \tag{28}\] Summing over all relevant diagrams gives \[\mathcal{C}^{\psi\psi}_{\mathbf{\rho}}=\sum_{d}\mathcal{C}^{\psi\psi,d}_{\mathbf{\rho}}. \tag{29}\] The tree-level color factor can be determined by applying Eq. (28) to the tree-level diagram \[\mathcal{D}^{\psi\psi,{\rm tree}}_{ijkl}=(T^{a})_{ij}(T^{a})_{kl}, \tag{30}\] to give [55; 56] \[\mathcal{C}^{\psi\psi,{\rm tree}}_{\rm A}=-\frac{C_{F}}{N_{c}-1}, \tag{31}\] \[\mathcal{C}^{\psi\psi,{\rm tree}}_{\rm S}=\frac{C_{F}}{N_{c}+1}. \tag{32}\] The antisymmetric quark-quark potential is therefore attractive, while the symmetric quark-quark potential is repulsive. No further representation-dependence arises in the potential at NLO, and so, for example, the antisymmetric quark-quark potential is related to the quark-antiquark potential by [56] \[V^{\psi\psi}_{\rm A}=\frac{1}{N_{c}-1}V^{\psi\chi}_{\mathbf{1}}+\mathcal{O}(\alpha_{s}^{3}). \tag{33}\] The same proportionality holds at NLO for generic color representations, \[V^{\psi\psi}_{\mathbf{\rho}}=-\frac{\mathcal{C}^{\psi\psi,{\rm tree}}_{\mathbf{\rho}}}{C_{F}}V^{\psi\chi}_{\mathbf{1}}+\mathcal{O}(\alpha_{s}^{3}). \tag{34}\] At NNLO, the correction to the two-body potential of a general color representation \(\mathbf{\rho}\) is known to have the form [75] \[V^{\psi\psi}_{\mathbf{\rho}}(r)=-\mathcal{C}^{\psi\psi,{\rm tree}}_{\mathbf{\rho}}\left(\frac{1}{C_{F}}V^{\psi\chi}_{\mathbf{1}}(r)-\frac{\alpha_{s}^{3}}{(4\pi)^{2}}\frac{\delta a_{\mathbf{\rho}}}{r}\right). \tag{35}\] The NNLO correction \(\delta a_{\mathbf{\rho}}\) has been determined for various color representations [75; 76] and varies with the color factor of the H-diagram in Fig. 1, first computed in Ref. [77]. The value of this diagram, modulo coupling and color structure, is \(1/r\) times \(\mathcal{H}=2\pi^{2}(\pi^{2}-12)\). The color tensor of the H-diagram shown in Fig. 1 is \[\mathcal{D}^{2\psi,H}_{ijkl}=(T^{a}T^{c})_{ij}(T^{e}T^{b})_{kl}f^{abd}f^{ced}. \tag{36}\] The color factors \(\mathcal{C}^{\psi\psi,H}_{\mathbf{\rho}}\) can be determined by projecting onto the color-symmetric and color-antisymmetric representations using Eq. (28).
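The tree-level color factors of Eqs. (31) and (32) can be verified numerically for any \(N_{c}\). The sketch below (an illustrative cross-check, not part of the matching calculation itself) builds generalized Gell-Mann generators, forms the single-gluon-exchange operator \(\sum_{a}T^{a}\otimes T^{a}\) corresponding to Eq. (30), and reads off its eigenvalues on the antisymmetric and symmetric two-quark subspaces; this is equivalent to the projection in Eq. (28).

```python
import numpy as np

def su_n_generators(N):
    """Generalized Gell-Mann matrices, normalized to tr[T^a T^b] = delta^{ab}/2."""
    Ts = []
    for i in range(N):
        for j in range(i + 1, N):
            S = np.zeros((N, N), dtype=complex); S[i, j] = S[j, i] = 0.5
            A = np.zeros((N, N), dtype=complex); A[i, j] = -0.5j; A[j, i] = 0.5j
            Ts += [S, A]
    for k in range(1, N):                     # diagonal (Cartan) generators
        D = np.zeros((N, N), dtype=complex)
        D[:k, :k] = np.eye(k)
        D[k, k] = -k
        Ts.append(D / np.sqrt(2.0 * k * (k + 1)))
    return Ts

def tree_color_factors(N):
    """Eigenvalues of sum_a T^a (x) T^a on the (anti)symmetric qq subspaces."""
    O = sum(np.kron(T, T) for T in su_n_generators(N))
    SWAP = np.zeros((N * N, N * N))
    for i in range(N):
        for k in range(N):
            SWAP[i * N + k, k * N + i] = 1.0
    PA = (np.eye(N * N) - SWAP) / 2.0         # antisymmetric projector
    PS = (np.eye(N * N) + SWAP) / 2.0         # symmetric projector
    CA_ = (np.trace(O @ PA) / np.trace(PA)).real
    CS_ = (np.trace(O @ PS) / np.trace(PS)).real
    return CA_, CS_

for N in range(3, 7):
    CF = (N**2 - 1) / (2.0 * N)
    # numerical factors vs. the closed forms of Eqs. (31)-(32)
    print(N, tree_color_factors(N), (-CF / (N - 1), CF / (N + 1)))
```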
The NNLO correction factor is then given by \(\delta a_{\mathbf{\rho}}=\mathcal{H}\mathcal{C}^{\psi\psi,H}_{\mathbf{\rho}}/\mathcal{C}^{\psi\psi,{\rm tree}}_{\mathbf{\rho}}\) as \[\delta a_{\rm A}=\frac{N_{c}(N_{c}-2)}{2}\pi^{2}(\pi^{2}-12), \tag{37}\] \[\delta a_{\rm S}=\frac{N_{c}(N_{c}+2)}{2}\pi^{2}(\pi^{2}-12). \tag{38}\] This completes the two-body potentials needed to study generic multi-hadron systems in pNRQCD at NNLO. What remain are the higher-body potentials, which, as discussed in Ref. [56] and below, also arise at NNLO.

Figure 1: NRQCD Feynman diagram that contributes to the representation-dependent potential \(\delta a_{\mathbf{\rho}}\) when matching to pNRQCD. Dotted and curly lines correspond to longitudinal and transverse gluons in Coulomb gauge.

### Three-quark potentials

Three-quark forces first appear in NRQCD at \(\mathcal{O}(\alpha_{s}^{3})\). Non-zero contributions in Coulomb gauge arise from the two diagrams shown in Fig. 2 and their permutations, as discussed in Ref. [56]. Specializing first to \(N_{c}=3\), the three-quark potential for a generic representation \(\mathbf{\rho}\) arising in \(\mathbf{3}\otimes\mathbf{3}\otimes\mathbf{3}=(\overline{\mathbf{3}}\oplus\mathbf{6})\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{8}_{\rm A}\oplus\mathbf{8}_{\rm S}\oplus\mathbf{10}\) is given by \[V^{3\psi}_{\mathbf{\rho}uv}=\alpha\left(\frac{\alpha}{4\pi}\right)^{2}\left[\mathcal{C}^{3\psi,1}_{\mathbf{\rho}uv}v_{3}(\mathbf{r}_{12},\mathbf{r}_{13})+\mathcal{C}^{3\psi,2}_{\mathbf{\rho}uv}v_{3}(\mathbf{r}_{12},\mathbf{r}_{23})+\mathcal{C}^{3\psi,3}_{\mathbf{\rho}uv}v_{3}(\mathbf{r}_{13},\mathbf{r}_{23})\right], \tag{39}\] where \(\mathbf{r}_{IJ}\equiv\mathbf{r}_{I}-\mathbf{r}_{J}\); the indices \(u,v\in\{{\rm A},{\rm S}\}\) label the octet color tensors, which are antisymmetric or symmetric respectively in their first two indices, and are omitted for \(\mathbf{\rho}\in\{\mathbf{1},\mathbf{10}\}\), where only one operator appears; and \(\mathcal{C}^{3\psi,q}_{\mathbf{\rho}uv}\) is the color factor for permutation \(q\) of the three-quark diagrams shown in Fig. 2 and discussed further below. Here, \(v_{3}(\mathbf{r},\mathbf{r}^{\prime})\) describes the spatial structure of the three-quark potential diagrams, which takes the universal form [56] \[v_{3}(\mathbf{r},\mathbf{r}^{\prime})=16\pi\int_{0}^{1}dx\,dy\left[\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}^{\prime}\,\mathcal{I}_{1}+\hat{\mathbf{r}}^{i}\hat{\mathbf{r}}^{\prime j}\,\mathcal{I}_{2}^{ij}\right], \tag{40}\] where \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}^{ij}\) are defined in terms of \(\mathbf{R}=x\mathbf{r}-y\mathbf{r}^{\prime}\), \(R=|\mathbf{R}|\), and \(A=|\mathbf{r}|\sqrt{x(1-x)}+|\mathbf{r}^{\prime}|\sqrt{y(1-y)}\) by \[\mathcal{I}_{1}=\frac{1}{R}\left[\left(1-\frac{A^{2}}{R^{2}}\right)\arctan\frac{R}{A}+\frac{A}{R}\right], \tag{41}\] \[\mathcal{I}_{2}^{ij}=\frac{\hat{\mathbf{R}}^{i}\hat{\mathbf{R}}^{j}}{R}\left[\left(1+\frac{3A^{2}}{R^{2}}\right)\arctan\frac{R}{A}-3\frac{A}{R}\right]. \tag{42}\] The color factors in Eq.
(39) can be expressed as contractions of the tensors associated with the three quark fields and three conjugate quark fields in each operator, \(\mathcal{F}^{\mathbf{\rho}u\zeta}_{ijk}\) and \(\mathcal{F}^{\mathbf{\rho}v\zeta}_{lmn}\), with the color tensor relevant for a particular diagram, \[\mathcal{C}^{3\psi,q}_{\mathbf{\rho}uv}=\frac{\left(\mathcal{F}^{\mathbf{\rho}u\zeta}_{ijk}\right)^{*}\mathcal{D}^{3\psi,q}_{ijklmn}\mathcal{F}^{\mathbf{\rho}v\zeta}_{lmn}}{\sqrt{\left(\mathcal{F}^{\mathbf{\rho}u\zeta^{\prime}}_{i^{\prime}j^{\prime}k^{\prime}}\right)^{*}\mathcal{F}^{\mathbf{\rho}u\zeta^{\prime}}_{i^{\prime}j^{\prime}k^{\prime}}\left(\mathcal{F}^{\mathbf{\rho}v\zeta^{\prime\prime}}_{l^{\prime}m^{\prime}n^{\prime}}\right)^{*}\mathcal{F}^{\mathbf{\rho}v\zeta^{\prime\prime}}_{l^{\prime}m^{\prime}n^{\prime}}}}. \tag{43}\] Octet color tensors that are antisymmetric or symmetric, respectively, in their first two indices are defined by \[\mathcal{F}^{\mathbf{8}{\rm A}a}_{ijk}=\frac{1}{\sqrt{2T_{F}}}\epsilon_{ijp}T^{a}_{pk}, \tag{44}\] \[\mathcal{F}^{\mathbf{8}{\rm S}a}_{ijk}=\frac{1}{\sqrt{6T_{F}}}\left(\epsilon_{ikp}T^{a}_{pj}+\epsilon_{jkp}T^{a}_{pi}\right), \tag{45}\] and satisfy \(\mathcal{F}^{\mathbf{8}ua}_{ijk}\mathcal{F}^{\mathbf{8}vb}_{ijk}=\delta^{uv}\delta^{ab}\). Totally antisymmetric and totally symmetric color tensors \(\mathcal{F}^{\mathbf{1}}_{ijk}\) and \(\mathcal{F}^{\mathbf{10}\delta}_{ijk}\), satisfying \(\mathcal{F}^{\mathbf{1}}_{ijk}\mathcal{F}^{\mathbf{1}}_{ijk}=1\) and \(\mathcal{F}^{\mathbf{10}\delta}_{ijk}\mathcal{F}^{\mathbf{10}\delta^{\prime}}_{ijk}=\delta^{\delta\delta^{\prime}}\) with \(\delta\in\{1,\ldots,10\}\), are presented explicitly in Appendix B of Ref. [55]; below we only need the products \[\mathcal{F}^{\mathbf{1}}_{ijk}\mathcal{F}^{\mathbf{1}}_{lmn}=\frac{1}{6}\epsilon_{ijk}\epsilon_{lmn}, \tag{46}\] \[\mathcal{F}^{\mathbf{10}\delta}_{ijk}\mathcal{F}^{\mathbf{10}\delta}_{lmn}=\frac{1}{6}\left(\delta_{il}\delta_{jm}\delta_{kn}+\delta_{il}\delta_{jn}\delta_{km}+\delta_{im}\delta_{jl}\delta_{kn}+\delta_{im}\delta_{jn}\delta_{kl}+\delta_{in}\delta_{jm}\delta_{kl}+\delta_{in}\delta_{jl}\delta_{km}\right). \tag{47}\] The color tensor relevant for the particular diagram shown in Fig. 2 is \[\mathcal{D}^{3\psi,3}_{ijklmn}=\frac{1}{2}\left[T^{d}_{im}T^{a}_{jl}T^{b}_{kr}T^{e}_{rn}+T^{d}_{im}T^{a}_{jl}T^{e}_{kr}T^{b}_{rn}\right]f^{bdc}f^{aec}, \tag{48}\] and the tensors for its permutations can be obtained using \(\mathcal{D}^{3\psi,3}_{ijklmn}=\mathcal{D}^{3\psi,1}_{mnklij}\) and \(\mathcal{D}^{3\psi,2}_{ijklmn}=\mathcal{D}^{3\psi,3}_{ijmnkl}\). Evaluating Eq. (43) for these diagrams shows that the \(\mathbf{1}\) and \(\mathbf{10}\) color factors do not depend on the permutation label \(q\) and are given by \(\mathcal{C}^{3\psi,q}_{\mathbf{1}}=-\frac{1}{2}\) and \(\mathcal{C}^{3\psi,q}_{\mathbf{10}}=-\frac{1}{4}\) [56]. Evaluating Eq.
(43) for the adjoint operators leads to \[\begin{pmatrix}\mathcal{C}^{3\psi,1}_{\mathbf{8}{\rm AA}}&\mathcal{C}^{3\psi,1}_{\mathbf{8}{\rm AS}}\\ \mathcal{C}^{3\psi,1}_{\mathbf{8}{\rm SA}}&\mathcal{C}^{3\psi,1}_{\mathbf{8}{\rm SS}}\end{pmatrix}=\begin{pmatrix}\frac{1}{16}&-\frac{\sqrt{3}}{8}\\ -\frac{\sqrt{3}}{8}&\frac{5}{16}\end{pmatrix}, \tag{49}\] \[\begin{pmatrix}\mathcal{C}^{3\psi,2}_{\mathbf{8}{\rm AA}}&\mathcal{C}^{3\psi,2}_{\mathbf{8}{\rm AS}}\\ \mathcal{C}^{3\psi,2}_{\mathbf{8}{\rm SA}}&\mathcal{C}^{3\psi,2}_{\mathbf{8}{\rm SS}}\end{pmatrix}=\begin{pmatrix}\frac{1}{16}&\frac{\sqrt{3}}{8}\\ \frac{\sqrt{3}}{8}&\frac{5}{16}\end{pmatrix}, \tag{50}\] \[\begin{pmatrix}\mathcal{C}^{3\psi,3}_{\mathbf{8}{\rm AA}}&\mathcal{C}^{3\psi,3}_{\mathbf{8}{\rm AS}}\\ \mathcal{C}^{3\psi,3}_{\mathbf{8}{\rm SA}}&\mathcal{C}^{3\psi,3}_{\mathbf{8}{\rm SS}}\end{pmatrix}=\begin{pmatrix}\frac{7}{16}&0\\ 0&-\frac{1}{16}\end{pmatrix}, \tag{51}\] which completes the construction of \(L^{{\rm pot},N_{c}=3}_{3\psi}\) to NNLO. The potential for three-quark states in the adjoint representation was computed at LO in Ref. [55], and the mixing between states created with \(\mathbf{8}_{\rm A}\) and \(\mathbf{8}_{\rm S}\) operators is discussed in Ref. [56]. The NNLO \(3\psi\) potentials for the adjoint representation are reported here for the first time.

Figure 2: Leading NRQCD Feynman diagrams in Coulomb gauge that lead to non-vanishing contributions to the three-quark potential.

While the \(\mathbf{1}\) and \(\mathbf{10}\) three-quark potentials are always attractive, the adjoint three-quark potentials are repulsive for some configurations. The action of this three-quark potential can be reproduced using the pNRQCD Lagrangian term \[\begin{split}L^{{\rm pot},N_{c}=3}_{3\psi}=&-\int d^{3}\mathbf{r}_{1}d^{3}\mathbf{r}_{2}d^{3}\mathbf{r}_{3}\,\psi_{i}^{\dagger}(t,\mathbf{r}_{1})\psi_{j}^{\dagger}(t,\mathbf{r}_{2})\psi_{k}^{\dagger}(t,\mathbf{r}_{3})\psi_{l}(t,\mathbf{r}_{3})\psi_{m}(t,\mathbf{r}_{2})\psi_{n}(t,\mathbf{r}_{1})\\ &\times\left[\mathcal{F}^{\mathbf{1}}_{ijk}\mathcal{F}^{\mathbf{1}}_{lmn}\frac{1}{6}V^{3\psi}_{\mathbf{1}}+\mathcal{F}^{\mathbf{10}\delta}_{ijk}\mathcal{F}^{\mathbf{10}\delta}_{lmn}\frac{1}{6}V^{3\psi}_{\mathbf{10}}+\mathcal{F}^{\mathbf{8}{\rm A}a}_{ijk}\mathcal{F}^{\mathbf{8}{\rm A}a}_{lmn}W^{3\psi}_{\mathbf{8}{\rm A}}+\mathcal{F}^{\mathbf{8}{\rm S}a}_{ijk}\mathcal{F}^{\mathbf{8}{\rm S}a}_{lmn}W^{3\psi}_{\mathbf{8}{\rm S}}\right],\end{split} \tag{52}\] where the functions \(W^{3\psi}_{\mathbf{8}u}\) defined below are related to, but not identical to, the adjoint potentials \(V^{3\psi}_{\mathbf{8}uv}\). The action of either the symmetric or antisymmetric adjoint potential operator on the corresponding symmetric or antisymmetric adjoint state leads to a linear combination of symmetric and antisymmetric adjoint states arising from non-trivial Wick contractions.
Direct computation of the matrix elements of the adjoint operators in \(L^{{\rm pot},N_{c}=3}_{3\psi}\) between states created by operators involving \(\mathcal{F}^{\mathbf{8}{\rm A}a}_{ijk}\) and \(\mathcal{F}^{\mathbf{8}{\rm S}a}_{ijk}\) shows that the desired potentials \(V^{3\psi}_{\mathbf{8}uv}\) are reproduced using \[W^{3\psi}_{\mathbf{8}{\rm A}}=\alpha\left(\frac{\alpha}{4\pi}\right)^{2}\left[-\frac{1}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{13})-\frac{1}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{23})+\frac{11}{48}v_{3}(\mathbf{r}_{13},\mathbf{r}_{23})\right], \tag{53}\] and \[W^{3\psi}_{\mathbf{8}{\rm S}}=\alpha\left(\frac{\alpha}{4\pi}\right)^{2}\left[\frac{7}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{13})+\frac{7}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{23})-\frac{5}{48}v_{3}(\mathbf{r}_{13},\mathbf{r}_{23})\right]. \tag{54}\] This construction can be generalized to \(N_{c}\geq 3\).1 Mixed-symmetry color adjoint tensors satisfying the same normalization condition as in the \(N_{c}=3\) case above can be defined in general by \[\mathcal{F}^{{\rm MA}a}_{ijkq_{1}\ldots q_{N_{c}-3}}=\frac{1}{\sqrt{T_{F}(N_{c}-1)!}}\epsilon_{ijpq_{1}\ldots q_{N_{c}-3}}T^{a}_{pk}, \tag{55}\] \[\mathcal{F}^{{\rm MS}a}_{ijkq_{1}\ldots q_{N_{c}-3}}=\frac{1}{\sqrt{2T_{F}N_{c}(N_{c}-2)!}}\left(\epsilon_{ikpq_{1}\ldots q_{N_{c}-3}}T^{a}_{pj}+\epsilon_{jkpq_{1}\ldots q_{N_{c}-3}}T^{a}_{pi}\right). \tag{56}\]

Footnote 1: The case of \(N_{c}=2\) must be treated separately and is not considered here.

The totally antisymmetric and totally symmetric tensors satisfy [56] \[\mathcal{F}^{\rm A}_{ijk}\mathcal{F}^{\rm A}_{lmn}=\frac{1}{N_{c}!}\epsilon_{ijko_{1}\ldots o_{N_{c}-3}}\epsilon_{lmno_{1}\ldots o_{N_{c}-3}}, \tag{57}\] \[\mathcal{F}^{{\rm S}\delta}_{ijk}\mathcal{F}^{{\rm S}\delta}_{lmn}=\mathcal{S}(N_{c})\left(\delta_{il}\delta_{jm}\delta_{kn}+\delta_{il}\delta_{jn}\delta_{km}+\delta_{im}\delta_{jl}\delta_{kn}+\delta_{im}\delta_{jn}\delta_{kl}+\delta_{in}\delta_{jm}\delta_{kl}+\delta_{in}\delta_{jl}\delta_{km}\right), \tag{58}\] where \[\mathcal{S}(N_{c})=\frac{1}{N_{c}^{3}+3N_{c}^{2}+2N_{c}}\binom{2N_{c}-1}{N_{c}}=\frac{(2N_{c}-1)!}{(N_{c}!)^{2}(N_{c}^{2}+3N_{c}+2)}. \tag{59}\] The structure of the potential in all cases is given by Eq. (39). Color factors can be obtained for general \(N_{c}\geq 3\) using Eq. (43), with the results \[\mathcal{C}^{3\psi,q}_{\rm A}=-\frac{N_{c}+1}{8}, \tag{60}\] \[\mathcal{C}^{3\psi,q}_{\rm S}=-\frac{N_{c}-1}{8}, \tag{61}\] which agree with the general \(N_{c}\) results of Ref.
[56], and \[\begin{pmatrix}\mathcal{C}^{3\psi,1}_{\rm MAA}&\mathcal{C}^{3\psi,1}_{\rm MAS}\\ \mathcal{C}^{3\psi,1}_{\rm MSA}&\mathcal{C}^{3\psi,1}_{\rm MSS}\end{pmatrix}=\begin{pmatrix}\frac{1}{8(N_{c}-1)}&-\frac{\sqrt{N_{c}}}{4\sqrt{2}\sqrt{N_{c}-1}}\\ -\frac{\sqrt{N_{c}}}{4\sqrt{2}\sqrt{N_{c}-1}}&\frac{N_{c}+2}{16}\end{pmatrix}, \tag{62}\] \[\begin{pmatrix}\mathcal{C}^{3\psi,2}_{\rm MAA}&\mathcal{C}^{3\psi,2}_{\rm MAS}\\ \mathcal{C}^{3\psi,2}_{\rm MSA}&\mathcal{C}^{3\psi,2}_{\rm MSS}\end{pmatrix}=\begin{pmatrix}\frac{1}{8(N_{c}-1)}&\frac{\sqrt{N_{c}}}{4\sqrt{2}\sqrt{N_{c}-1}}\\ \frac{\sqrt{N_{c}}}{4\sqrt{2}\sqrt{N_{c}-1}}&\frac{N_{c}+2}{16}\end{pmatrix}, \tag{63}\] \[\begin{pmatrix}\mathcal{C}^{3\psi,3}_{\rm MAA}&\mathcal{C}^{3\psi,3}_{\rm MAS}\\ \mathcal{C}^{3\psi,3}_{\rm MSA}&\mathcal{C}^{3\psi,3}_{\rm MSS}\end{pmatrix}=\begin{pmatrix}\frac{2N_{c}+1}{8(N_{c}-1)}&0\\ 0&\frac{N_{c}-4}{16}\end{pmatrix}. \tag{64}\] These potentials are attainable with a pNRQCD Lagrangian term \[\begin{split}L^{\rm pot}_{3\psi}=&-\int d^{3}\mathbf{r}_{1}d^{3}\mathbf{r}_{2}d^{3}\mathbf{r}_{3}\,\psi_{i}^{\dagger}(t,\mathbf{r}_{1})\psi_{j}^{\dagger}(t,\mathbf{r}_{2})\psi_{k}^{\dagger}(t,\mathbf{r}_{3})\psi_{l}(t,\mathbf{r}_{3})\psi_{m}(t,\mathbf{r}_{2})\psi_{n}(t,\mathbf{r}_{1})\\ &\times\left[\frac{N_{c}}{36}(N_{c}-1)(N_{c}-2)\mathcal{F}^{\rm A}_{ijk}\mathcal{F}^{\rm A}_{lmn}V^{3\psi}_{\rm A}+\frac{1}{36\mathcal{S}(N_{c})}\mathcal{F}^{{\rm S}\delta}_{ijk}\mathcal{F}^{{\rm S}\delta}_{lmn}V^{3\psi}_{\rm S}+\mathcal{F}^{{\rm MA}a}_{ijk}\mathcal{F}^{{\rm MA}a}_{lmn}W^{3\psi}_{\rm MA}+\mathcal{F}^{{\rm MS}a}_{ijk}\mathcal{F}^{{\rm MS}a}_{lmn}W^{3\psi}_{\rm MS}\right],\end{split} \tag{65}\] where, in analogy with the \(N_{c}=3\) case, the mixed-symmetry potentials \(V^{3\psi}_{{\rm MA}uv}\) and \(V^{3\psi}_{{\rm MS}uv}\) are reproduced using \[W^{3\psi}_{\rm MA}=\alpha\left(\frac{\alpha}{4\pi}\right)^{2}\left[-\frac{N_{c}-1}{16((N_{c}-2)N_{c}+3)}v_{3}(\mathbf{r}_{12},\mathbf{r}_{13})-\frac{N_{c}-1}{16((N_{c}-2)N_{c}+3)}v_{3}(\mathbf{r}_{12},\mathbf{r}_{23})+\frac{(N_{c}-1)(2(N_{c}-2)N_{c}+5)}{16((N_{c}-2)N_{c}+3)}v_{3}(\mathbf{r}_{13},\mathbf{r}_{23})\right], \tag{66}\] and \[W^{3\psi}_{\rm MS}=\alpha\left(\frac{\alpha}{4\pi}\right)^{2}\left[\frac{N_{c}+4}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{13})+\frac{N_{c}+4}{48}v_{3}(\mathbf{r}_{12},\mathbf{r}_{23})+\frac{N_{c}-8}{48}v_{3}(\mathbf{r}_{13},\mathbf{r}_{23})\right]. \tag{67}\] This completes the construction of the pNRQCD Lagrangian required to describe three-quark forces in generic hadron or multi-hadron states at NNLO. Three-antiquark potentials are identical to three-quark potentials by symmetry. However, it is noteworthy that additional \(\psi\psi\chi\) and \(\psi\chi\chi\) potentials with distinct color factors are required to describe tetraquarks and other multi-hadron states containing both heavy quarks and heavy antiquarks at NNLO. Even higher-body potentials involving combinations of four quark and antiquark fields are also relevant for such systems and are discussed next.
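Because the kernel \(v_{3}\) of Eqs. (40)-(42) enters every three-quark potential above, it is useful to have a direct numerical evaluation. The following sketch performs the two-dimensional integral with Gauss-Legendre quadrature, handling the removable \(R\to 0\) limit on the diagonal explicitly; the equal-argument closed form \(v_{3}(\mathbf{r},\mathbf{r})=-4\pi^{2}(\pi^{2}-12)/|\mathbf{r}|\), quoted in Eq. (91) below, provides a convenient cross-check.

```python
import numpy as np

def v3(r_vec, rp_vec, n=100):
    """Gauss-Legendre evaluation of the three-quark kernel, Eqs. (40)-(42)."""
    r = np.asarray(r_vec, float)
    rp = np.asarray(rp_vec, float)
    rn, rpn = np.linalg.norm(r), np.linalg.norm(rp)
    rh, rph = r / rn, rp / rpn
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w           # map nodes/weights to [0, 1]
    total = 0.0
    for xi, wi in zip(x, w):
        for yj, wj in zip(x, w):
            R = xi * r - yj * rp
            Rn = np.linalg.norm(R)
            A = rn * np.sqrt(xi * (1 - xi)) + rpn * np.sqrt(yj * (1 - yj))
            if Rn < 1e-10 * (rn + rpn):
                # removable R -> 0 limit of the integrand: (rhat.rhat') 4/(3A)
                integrand = (rh @ rph) * 4.0 / (3.0 * A)
            else:
                t = np.arctan(Rn / A)
                I1 = ((1 - A**2 / Rn**2) * t + A / Rn) / Rn          # Eq. (41)
                I2 = ((1 + 3 * A**2 / Rn**2) * t - 3 * A / Rn) / Rn  # Eq. (42)
                Rh = R / Rn
                integrand = (rh @ rph) * I1 + (rh @ Rh) * (rph @ Rh) * I2
            total += wi * wj * integrand
    return 16 * np.pi * total

# Cross-check against the equal-argument closed form of Eq. (91);
# the two numbers should agree up to quadrature error.
r = np.array([1.0, 0.0, 0.0])
print(v3(r, r), -4 * np.pi**2 * (np.pi**2 - 12))
```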
### Four- and more-quark potentials

Four-quark and higher-body potentials that do not factorize into iterated insertions of two-quark and three-quark potentials must arise at some order in \(\alpha_{s}\) during matching between pNRQCD and NRQCD and will need to be included in pNRQCD calculations of multi-hadron states at that order. Perhaps surprisingly, the \(\mathcal{O}(\alpha_{s})\) suppression of three-quark potentials relative to quark-quark potentials does not extend to four-quark potentials: for generic multi-hadron systems, four-quark potentials arise at NNLO, and therefore at the same order as three-quark potentials. This can be seen by considering the diagrams in Fig. 3. The transverse gluon propagator in the four-quark analog of the H-diagram shown leads to momentum dependence that does not factorize into products of fewer-body potentials, and for both diagrams shown, the color structures of all four quarks are correlated by the gluon interactions in a way that does not factorize. Matching the contributions from these diagrams in pNRQCD therefore requires the introduction of four-quark potentials at NNLO. Barring unexpected cancellations between diagrams, this four-quark potential (and the analogous four-body potentials involving one or more heavy antiquarks) must be obtained and included in pNRQCD calculations of generic multi-hadron systems at NNLO. Although a complete determination of the NNLO four-quark potential is beyond the scope of this work, it is straightforward to show that the potentials relevant for quarks in \(SU(N_{c})\) single-baryon systems simplify greatly, and that four-quark potentials vanish at NNLO for these special cases. For \(N_{c}\leq 3\), there are fewer than four quarks in a baryon, and it follows trivially that four-quark forces do not contribute to single-baryon observables.2 For \(N_{c}\geq 4\), the absence of four-quark forces at NNLO for single-baryon systems is non-trivial, and we argue below that it follows from the color structure of single-baryon states. These single-baryon states contain \(N_{c}\) quarks in a color-singlet configuration and can therefore be constructed from linear combinations of states of the form \[\ket{B(\mathbf{r}_{1},\ldots,\mathbf{r}_{N_{c}})}\equiv\frac{\epsilon_{i_{1}\ldots i_{N_{c}}}}{\sqrt{N_{c}!}}\psi_{i_{1}}^{\dagger}(\mathbf{r}_{1})\cdots\psi_{i_{N_{c}}}^{\dagger}(\mathbf{r}_{N_{c}})\ket{0}. \tag{68}\]

Footnote 2: Three- and four-body forces do not contribute to single-meson observables for any \(N_{c}\) for the same reason.

Figure 3: Example NNLO diagrams for \(N=4\) which contribute to the four-body potential and demonstrate that \(\alpha_{s}^{N}\) suppression at each higher body order is not necessarily respected beyond three-body.

The antisymmetry of \(\epsilon_{i_{1}\ldots i_{N_{c}}}\) implies that contributions from any potential operator to single-baryon observables will be totally antisymmetrized over the color indices of all \(\psi_{i}\) and \(\psi_{i}^{\dagger}\) fields arising in the operator. This means that only \(V_{\rm A}^{\psi\psi}\) and \(V_{\rm A}^{3\psi}\) contribute to the quark-quark and three-quark potentials for single-baryon states, respectively. Further, the color structures of the four-quark potential diagrams shown in Fig. 3 involve factors of \[T_{ik}^{a}T_{jl}^{b}f^{abc}, \tag{69}\] where \(i\) and \(j\) (\(k\) and \(l\)) label the color indices of any two of the incoming (outgoing) quark lines.
Contracting with the color tensors for single-baryon initial and final states leads to \[\begin{split}&T_{ik}^{a}T_{jl}^{b}f^{abc}\epsilon^{ikm_{1}\ldots m_{N_{c}-2}}\epsilon^{jlm_{1}\ldots m_{N_{c}-2}}\\ &=-T_{ik}^{b}T_{jl}^{a}f^{abc}\epsilon^{ikm_{1}\ldots m_{N_{c}-2}}\epsilon^{jlm_{1}\ldots m_{N_{c}-2}}\\ &=-T_{jl}^{b}T_{ik}^{a}f^{abc}\epsilon^{ikm_{1}\ldots m_{N_{c}-2}}\epsilon^{jlm_{1}\ldots m_{N_{c}-2}}\\ &=0,\end{split} \tag{70}\] where the antisymmetry of \(f^{abc}\) has been used in going from the first to the second line, and the antisymmetry of \(\epsilon_{i_{1}\ldots i_{N_{c}}}\) has been used in subsequently going to the third line. For \(N_{c}\geq 4\) single-baryon systems, diagrams with additional gluon propagators3 lead to four-body forces at N\({}^{3}\)LO that are not expected to vanish.

Footnote 3: An example of such a diagram can be obtained from Fig. 2 by adding a fourth quark interacting with a potential gluon that is connected to the transverse gluon by a three-gluon interaction.

For multi-hadron systems, including tetraquarks and bound or scattering states of heavy baryons, total color antisymmetry of the initial- and final-state quarks does not apply, and we emphasize that these four-quark potentials, which have not yet been determined, are required for complete NNLO calculations. Five-quark (and higher-body) interactions require an additional gluon propagator compared to four-quark interactions and do not arise until N\({}^{3}\)LO.

### pNRQCD Hamiltonian

The Lagrangian formulation of pNRQCD described above can be readily converted to a nonrelativistic Hamiltonian form. The generic kinetic and potential operators needed to construct the pNRQCD Hamiltonian are explicitly defined below. The action of the potential operator simplifies greatly when acting on quarkonium and baryon states, and the particular structures of these states are also discussed in this section. For concreteness, unit-normalized quarkonium states are defined by \[\ket{Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})}=\frac{1}{\sqrt{N_{c}}}\ket{\psi_{m}(\mathbf{r}_{1}),\chi_{n}^{\dagger}(\mathbf{r}_{2})}\delta_{mn}, \tag{71}\] while baryon states are defined by Eq. (68). The nonrelativistic potential operator \(\hat{V}^{\psi\chi}\) appearing in the pNRQCD Hamiltonian is simply \(-L_{\psi\chi}^{\rm pot}\) with the fermion fields replaced by Hilbert space operators.
Its action on a quark-antiquark state is given by \[\begin{split}&\hat{V}^{\psi\chi}\ket{\psi_{m}(\mathbf{r}_{1}),\chi_{n}^{\dagger}(\mathbf{r}_{2})}\\ &=\int d^{3}\mathbf{s}_{1}d^{3}\mathbf{s}_{2}\left[\frac{1}{N_{c}}\delta_{ij}\delta_{kl}V_{\mathbf{1}}^{\psi\chi}(\mathbf{s}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{kl}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{s}_{12})\right]\psi_{i}^{\dagger}(\mathbf{s}_{1})\chi_{j}(\mathbf{s}_{2})\chi_{k}^{\dagger}(\mathbf{s}_{2})\psi_{l}(\mathbf{s}_{1})\ket{\psi_{m}(\mathbf{r}_{1}),\chi_{n}^{\dagger}(\mathbf{r}_{2})}\\ &=\left[\frac{1}{N_{c}}\delta_{ij}\delta_{kl}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{kl}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{12})\right]\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2})}\delta_{km}\delta_{ln}.\end{split} \tag{72}\] The action of the potential operator on quarkonium states therefore simplifies to \[\begin{split}\hat{V}^{\psi\chi}\ket{Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})}&=\frac{1}{\sqrt{N_{c}}}\hat{V}^{\psi\chi}\ket{\psi_{m}(\mathbf{r}_{1}),\chi_{n}^{\dagger}(\mathbf{r}_{2})}\delta_{mn}\\ &=\frac{1}{\sqrt{N_{c}}}\left[\frac{1}{N_{c}}\delta_{ij}\delta_{kl}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{kl}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{12})\right]\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2})}\delta_{km}\delta_{ln}\delta_{mn}\\ &=V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})\,\frac{1}{\sqrt{N_{c}}}\,\delta_{ij}\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2})}\\ &=V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})\ket{Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})},\end{split} \tag{73}\] where \(\delta_{km}\delta_{ln}\delta_{mn}=\delta_{kl}\), the contraction \(\frac{1}{N_{c}}\delta_{ij}\delta_{kl}\delta_{kl}=\delta_{ij}\), and \(T_{kl}^{a}\delta_{kl}=\operatorname{tr}[T^{a}]=0\) have been used to eliminate the color-adjoint term. The action on a color-adjoint \(Q\overline{Q}\) state \(\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2})}T_{ji}^{a}/\sqrt{T_{F}}\) analogously eliminates the color-singlet piece and is equivalent to multiplying the state by \(V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{12})\), because \(T_{kl}^{a}T_{lk}^{b}=T_{F}\delta^{ab}\). This establishes that the terms in \(-L_{\psi\chi}^{\rm pot}\) are correctly normalized to reproduce pNRQCD matching calculations for color-singlet and color-adjoint quark-antiquark states [75; 76; 72]. The action of the quark-antiquark potential on more complicated multi-hadron states is given by applying the same operator \(\hat{V}^{\psi\chi}\) to these states.
For instance, the potential for a heavy tetraquark state is given by a color contraction of the action of the potential on a generic state with two heavy quarks and two heavy antiquarks, \[\begin{split}&\hat{V}^{\psi\chi}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\chi_{n_{2}}^{\dagger}(\mathbf{r}_{2}),\psi_{n_{3}}(\mathbf{r}_{3}),\chi_{n_{4}}^{\dagger}(\mathbf{r}_{4})}\\ &=\int d^{3}\mathbf{s}_{1}d^{3}\mathbf{s}_{2}\left[\frac{1}{N_{c}}\delta_{ij}\delta_{kl}V_{\mathbf{1}}^{\psi\chi}(\mathbf{s}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{kl}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{s}_{12})\right]\psi_{i}^{\dagger}(\mathbf{s}_{1})\chi_{j}(\mathbf{s}_{2})\chi_{k}^{\dagger}(\mathbf{s}_{2})\psi_{l}(\mathbf{s}_{1})\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\chi_{n_{2}}^{\dagger}(\mathbf{r}_{2}),\psi_{n_{3}}(\mathbf{r}_{3}),\chi_{n_{4}}^{\dagger}(\mathbf{r}_{4})}\\ &=\left[\frac{1}{N_{c}}\delta_{ij}\delta_{n_{1}n_{2}}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{12})+\frac{1}{T_{F}}T_{ij}^{a}T_{n_{1}n_{2}}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{12})\right]\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2}),\psi_{n_{3}}(\mathbf{r}_{3}),\chi_{n_{4}}^{\dagger}(\mathbf{r}_{4})}\\ &\quad+\left[\frac{1}{N_{c}}\delta_{ij}\delta_{n_{1}n_{4}}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{14})+\frac{1}{T_{F}}T_{ij}^{a}T_{n_{1}n_{4}}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{14})\right]\ket{\psi_{i}(\mathbf{r}_{1}),\chi_{n_{2}}^{\dagger}(\mathbf{r}_{2}),\psi_{n_{3}}(\mathbf{r}_{3}),\chi_{j}^{\dagger}(\mathbf{r}_{4})}\\ &\quad+\left[\frac{1}{N_{c}}\delta_{ij}\delta_{n_{3}n_{2}}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{32})+\frac{1}{T_{F}}T_{ij}^{a}T_{n_{3}n_{2}}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{32})\right]\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\chi_{j}^{\dagger}(\mathbf{r}_{2}),\psi_{i}(\mathbf{r}_{3}),\chi_{n_{4}}^{\dagger}(\mathbf{r}_{4})}\\ &\quad+\left[\frac{1}{N_{c}}\delta_{ij}\delta_{n_{3}n_{4}}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r}_{34})+\frac{1}{T_{F}}T_{ij}^{a}T_{n_{3}n_{4}}^{a}V_{\rm Ad}^{\psi\chi}(\mathbf{r}_{34})\right]\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\chi_{n_{2}}^{\dagger}(\mathbf{r}_{2}),\psi_{i}(\mathbf{r}_{3}),\chi_{j}^{\dagger}(\mathbf{r}_{4})}.\end{split} \tag{74}\] The action of a generic \(SU(N_{c})\) quark-quark potential operator \(\hat{V}^{\psi\psi}\) on an \(N_{Q}\)-quark state is analogously given by \(-L_{\psi\psi}^{\rm pot}\) in Eq.
(25) with the fermion fields replaced by Hilbert-space operators, and has the color decomposition \[\begin{split}&\hat{V}^{\psi\psi}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}\\ &=\sum_{\mathbf{\rho}\in\{{\rm A},{\rm S}\}}\int d^{3}\mathbf{s}_{1}d^{3}\mathbf{s}_{2}\,V_{\mathbf{\rho}}^{\psi\psi}(\mathbf{s}_{12})\left(\mathcal{F}_{ij}^{\mathbf{\rho}}\right)^{*}\mathcal{F}_{kl}^{\mathbf{\rho}}\,\psi_{i}^{\dagger}(\mathbf{s}_{1})\psi_{j}^{\dagger}(\mathbf{s}_{2})\psi_{k}(\mathbf{s}_{2})\psi_{l}(\mathbf{s}_{1})\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}\\ &=\sum_{I\neq J}\sum_{\mathbf{\rho}\in\{{\rm A},{\rm S}\}}V_{\mathbf{\rho}}^{\psi\psi}(\mathbf{r}_{IJ})\left(\mathcal{F}_{m_{I}m_{J}}^{\mathbf{\rho}}\right)^{*}\mathcal{F}_{n_{I}n_{J}}^{\mathbf{\rho}}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{m_{I}}(\mathbf{r}_{I}),\ldots,\psi_{m_{J}}(\mathbf{r}_{J}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}.\end{split} \tag{75}\] The action of a three-quark potential operator is analogous, \[\begin{split}&\hat{V}^{3\psi}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}\\ &=\sum_{\mathbf{\rho}\in\{{\rm A},{\rm S},{\rm MA},{\rm MS}\}}\int d^{3}\mathbf{s}_{1}d^{3}\mathbf{s}_{2}d^{3}\mathbf{s}_{3}\,V_{\mathbf{\rho}}^{3\psi}(\mathbf{s}_{12},\mathbf{s}_{13},\mathbf{s}_{23})\left(\mathcal{F}_{ijk}^{\mathbf{\rho}}\right)^{*}\mathcal{F}_{lmn}^{\mathbf{\rho}}\,\psi_{i}^{\dagger}(\mathbf{s}_{1})\psi_{j}^{\dagger}(\mathbf{s}_{2})\psi_{k}^{\dagger}(\mathbf{s}_{3})\psi_{l}(\mathbf{s}_{3})\psi_{m}(\mathbf{s}_{2})\psi_{n}(\mathbf{s}_{1})\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}\\ &=\sum_{I\neq J\neq K}\sum_{\mathbf{\rho}\in\{{\rm A},{\rm S},{\rm MA},{\rm MS}\}}V_{\mathbf{\rho}}^{3\psi}(\mathbf{r}_{IJ},\mathbf{r}_{IK},\mathbf{r}_{JK})\left(\mathcal{F}^{\mathbf{\rho}}_{m_{I}m_{J}m_{K}}\right)^{*}\mathcal{F}_{n_{I}n_{J}n_{K}}^{\mathbf{\rho}}\\ &\quad\times\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{m_{I}}(\mathbf{r}_{I}),\ldots,\psi_{m_{J}}(\mathbf{r}_{J}),\ldots,\psi_{m_{K}}(\mathbf{r}_{K}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}.\end{split} \tag{76}\] The four-quark potential operator \(\hat{V}^{4\psi}\) can be defined analogously, although its explicit form at NNLO has not yet been computed. These can be combined to define a total potential operator \[\hat{V}=\hat{V}^{\psi\chi}+\hat{V}^{\psi\psi}+\hat{V}^{3\psi}+\hat{V}^{4\psi}+\hat{V}^{\psi\psi\chi}+\hat{V}^{\psi\chi\chi}+\hat{V}^{\psi\psi\chi\chi}+(\psi\leftrightarrow\chi)+\ldots, \tag{77}\] where five-quark and higher-body potentials that do not contribute at NNLO are omitted, and \(\psi\leftrightarrow\chi\) refers to the antiquark-antiquark, three-antiquark, and four-antiquark potentials obtained by taking \(\psi\leftrightarrow\chi\) in the quark-quark, three-quark, and four-quark potential operators. Note that, besides the three-quark and four-quark operators described above, there are analogs of three-quark and four-quark potentials in which only some of the quarks are replaced with antiquarks; these also enter the total potential at NNLO and arise, for example, in heavy tetraquark systems.
In conjunction with the usual nonrelativistic kinetic energy operator \[\begin{split}\hat{T}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}&=\sum_{I}\frac{\mathbf{p}_{I}^{2}}{2m_{Q}}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})}\\ &=-\sum_{I}\frac{\mathbf{\nabla}_{I}^{2}}{2m_{Q}}\ket{\psi_{n_{1}}(\mathbf{r}_{1}),\ldots,\psi_{n_{N_{Q}}}(\mathbf{r}_{N_{Q}})},\end{split} \tag{78}\] this potential operator can be used to construct the pNRQCD Hamiltonian operator \[\hat{H}=\hat{T}+\hat{V}, \tag{79}\] which is the basic ingredient used in the many-body calculations discussed below. The eigenvalues of the nonrelativistic Hamiltonian \(\hat{H}\) are equal to the total energies of the corresponding eigenstates minus the rest masses of any heavy quarks and antiquarks appearing in the state, since the rest mass is removed from the Hamiltonian by the transformation in Eq. (6). The ground state of the sector of the pNRQCD Hilbert space containing \(N_{Q}\) heavy quarks, denoted \(\ket{Q_{1}\ldots Q_{N_{Q}},0}\), with mass or total energy \(M_{Q_{1}\ldots Q_{N_{Q}}}\), therefore has the Hamiltonian matrix element \[\Delta E_{Q_{1}\ldots Q_{N_{Q}}}\equiv\bra{Q_{1}\ldots Q_{N_{Q}},0}\hat{H}\ket{Q_{1}\ldots Q_{N_{Q}},0}=M_{Q_{1}\ldots Q_{N_{Q}}}-N_{Q}m_{Q}. \tag{80}\] The pNRQCD Hamiltonian, and therefore \(\Delta E_{Q_{1}\ldots Q_{N_{Q}}}\), will depend on the definition of \(m_{Q}\) above and, in particular, on whether it is a bare or renormalized mass. Although the unphysical nature of the pole mass \(m_{Q}\) appearing in Eq. (6) and in the pNRQCD Hamiltonian therefore leads to ambiguities in the definition of the nonrelativistic energy \(\Delta E_{Q_{1}\ldots Q_{N_{Q}}}\), the total energy \(M_{Q_{1}\ldots Q_{N_{Q}}}\) is independent of the prescription used to define \(m_{Q}\) up to perturbative truncation effects. Analogous considerations apply to pNRQCD states containing both heavy quarks and antiquarks (assuming their separate number conservation), for example, \[\Delta E_{Q\overline{Q}}\equiv\bra{Q\overline{Q},0}\hat{H}\ket{Q\overline{Q},0}=M_{Q\overline{Q}}-2m_{Q}. \tag{81}\] Once the value of \(m_{Q}\) in a given scheme is determined, for example by matching \(M_{Q\overline{Q}}\) or another hadron mass to experimental data or lattice QCD calculations, it can be used to predict other physical hadron masses from pNRQCD calculations of \(\hat{H}\) eigenvalues, for example predicting \(M_{Q_{1}\ldots Q_{N_{Q}}}\) from \(\Delta E_{Q_{1}\ldots Q_{N_{Q}}}\). For baryon states, the quark-quark potential involves the color-tensor contraction \(\mathcal{F}^{\mathbf{\rho}}_{m_{I}m_{J}}\mathcal{F}^{\mathbf{\rho}}_{n_{I}n_{J}}\epsilon_{n_{1}\ldots n_{N_{c}}}\), which vanishes for the symmetric potential involving \(\mathcal{F}^{\rm S}_{n_{I}n_{J}}\) and is equal to \(\epsilon_{n_{1}\ldots m_{I}\ldots m_{J}\ldots n_{N_{c}}}/2\) for the antisymmetric potential, using the color tensors defined in Eqs. (26) and (27). Inserting this into Eq. (75) applied to the baryon state defined in Eq.
(68) gives \[\hat{V}^{\psi\psi}\ket{B}=\frac{1}{2}\sum_{I\neq J}V_{\rm A}^{\psi\psi}(\mathbf{r}_{IJ})\ket{B}=\sum_{I<J}V_{\rm A}^{\psi\psi}(\mathbf{r}_{IJ})\ket{B}, \tag{82}\] where the coordinate dependence of \(\ket{B(\mathbf{r}_{1},\ldots,\mathbf{r}_{N_{c}})}\) has been suppressed for brevity, and the \(I\leftrightarrow J\) symmetry of the potential has been used in going from the first to the second equality. The analogous contraction for the three-quark potential, \(\mathcal{F}^{\mathbf{\rho}}_{m_{I}m_{J}m_{K}}\mathcal{F}^{\mathbf{\rho}}_{n_{I}n_{J}n_{K}}\epsilon_{n_{1}\ldots n_{N_{c}}}\), vanishes for all potentials except the totally antisymmetric case with \(\mathbf{\rho}={\rm A}\). In this case the color-tensor contraction is equal to \(\epsilon_{n_{1}\ldots m_{I}\ldots m_{J}\ldots m_{K}\ldots n_{N_{c}}}/3!\), which gives \[\hat{V}^{3\psi}\ket{B}=\frac{1}{3!}\sum_{I\neq J\neq K}V_{\rm A}^{3\psi}(\mathbf{r}_{IJ},\mathbf{r}_{IK},\mathbf{r}_{JK})\ket{B}=\sum_{I<J<K}V_{\rm A}^{3\psi}(\mathbf{r}_{IJ},\mathbf{r}_{IK},\mathbf{r}_{JK})\ket{B}. \tag{83}\] Since the four-quark interaction color tensors are orthogonal to \(\epsilon_{ij\ldots}\), as shown in Eq. (70), \[\hat{V}^{4\psi}\ket{B}=0 \tag{84}\] at NNLO, with non-zero contributions possible at N\({}^{3}\)LO. These results establish that the color-antisymmetric two- and three-quark potential operators are correctly normalized to reproduce the pNRQCD matching calculations performed using baryon-level Lagrangian operators in Refs. [55; 56]. It can be shown similarly that the mixed-symmetry adjoint potential operators defined above are correctly normalized so that their action on an adjoint baryon state is equivalent to matrix multiplication by \(V_{\mathbf{\rho}uv}^{3\psi}\). We end this section with an interesting cross-check, discussed for \(N_{c}=3\) in Ref. [56]: the antisymmetric two-quark potential can be obtained to NNLO (which includes two-loop diagrams) from the NNLO three-body potential (which only includes one-loop diagrams) by setting \(N_{c}-1\) quarks at the same position. These \(N_{c}-1\) quarks then behave as an antiquark in color space, and thus a color-singlet quarkonium-like state arises. Baryon states with \(N_{c}-1\) co-located quarks can be defined by \[\ket{M(\mathbf{r}_{1},\mathbf{r}_{2})}\equiv\ket{B(\mathbf{r}_{1},\mathbf{r}_{2}=\ldots=\mathbf{r}_{N_{c}})}. \tag{85}\] The correspondence between an \((N_{c}-1)\)-quark color source and an antiquark color source suggests that matrix elements can be equated between quarkonium states \(\ket{Q\overline{Q}}\) and heavy baryon states with \(N_{c}-1\) quark positions identified, \[\bra{Q\overline{Q}}\hat{V}\ket{Q\overline{Q}}=\bra{M}\hat{V}\ket{M}, \tag{86}\] at least to leading order in \(1/m_{Q}\), where heavy quarks are equivalent to static color sources.
This provides a relation between the quark-antiquark and multi-quark potentials in each representation that make non-zero contributions in quarkonium and baryon states, \[\begin{split}&\left\langle Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right|\hat{V}_{\mathbf{1}}^{\psi\chi}\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right\rangle\\ &=\left\langle M(\mathbf{r}_{1},\mathbf{r}_{2})\right|\hat{V}_{A}^{\psi\psi}+\hat{V}_{A}^{3\psi}\left|M(\mathbf{r}_{1},\mathbf{r}_{2})\right\rangle\\ &=\sum_{I}V_{\mathrm{A}}^{\psi\psi}(\mathbf{r}_{1I})+\sum_{I<J}V_{A}^{3\psi}(\mathbf{r}_{1I},\mathbf{r}_{1J},\mathbf{0}),\end{split} \tag{87}\] where potentials with all quark fields located at the same point have been removed since these correspond to local counterterms. There is then only one independent quark separation \(\mathbf{r}=\mathbf{r}_{12}=\mathbf{r}_{13}=\ldots\), and so the sums can be evaluated as \[\begin{split}V_{\mathbf{1}}^{\psi\chi}(\mathbf{r})&=(N_{c}-1)V_{\mathrm{A}}^{\psi\psi}(\mathbf{r})\\ &+\frac{1}{2}(N_{c}-1)(N_{c}-2)V_{A}^{3\psi}(\mathbf{r},\mathbf{r},\mathbf{0}),\end{split} \tag{88}\] where the counting factor arises from the \(\binom{N_{c}-1}{2}=(N_{c}-1)(N_{c}-2)/2\) three-body interactions between the \(N_{c}-1\) identically located quarks and the quark at a specific position. Solving for the quark-quark antisymmetric potential, inserting the form of the three-quark potential in Eq. (39) with singular factors of \(v_{3}(\mathbf{r},\mathbf{0})\) removed (by local counterterms), and noting that the three-quark color factor given in Eq. (61) and the quark-antiquark color factor \(-C_{F}\) are related by \[C_{\mathrm{A}}^{3\psi,q}=-C_{F}\left[\frac{N_{c}}{4(N_{c}-1)}\right], \tag{89}\] the quark-quark potential can be obtained in terms of the quark-antiquark potential and the three-quark potential function as \[\begin{split}V_{\mathrm{A}}^{\psi\psi}(\mathbf{r})&=\frac{1}{N_{c}-1}\ \left[V_{\mathbf{1}}^{\psi\chi}(\mathbf{r})\right.\\ &\left.+C_{F}\alpha_{s}\left(\frac{\alpha_{s}}{4\pi}\right)^{2}\frac{N_{c}(N_{c}-2)}{8}v_{3}(\mathbf{r},\mathbf{r})\right].\end{split} \tag{90}\] The three-quark potential function with equal arguments simplifies to \[v_{3}(\mathbf{r},\mathbf{r})=-\frac{4\pi^{2}(\pi^{2}-12)}{|\mathbf{r}|}, \tag{91}\] which relates the quark-quark and quark-antiquark potentials at NNLO as \[\begin{split}V_{\mathrm{A}}^{\psi\psi}(\mathbf{r})&=\frac{1}{N_{c}-1}\ \left[V_{\mathbf{1}}^{\psi\chi}(\mathbf{r})\right.\\ &\left.-\frac{\alpha_{s}C_{F}}{|\mathbf{r}|}\left(\frac{\alpha_{s}}{4\pi}\right)^{2}\frac{N_{c}(N_{c}-2)}{2}\pi^{2}(\pi^{2}-12)\right].\end{split} \tag{92}\] This is consistent with the antisymmetric quark-quark potential obtained previously in Eq. (38) and matches the result obtained for the case of \(N_{c}=3\) in Ref. [56]. Note that for \(N_{c}\geq 4\), this agreement is further consistent with the result above that four-quark potentials do not contribute to \(N_{c}\)-color baryon states at NNLO. ## III Many-body methods A wide range of techniques have been developed for solving nonrelativistic quantum many-body problems in nuclear and condensed matter physics. Quantum Monte Carlo (QMC) methods provide stochastic estimates of energy spectra and other observables of quantum many-body states with systematic uncertainties that can be quantified and reduced with increased computational resources [58; 59; 60].
In particular, the variational Monte Carlo (VMC) approach allows upper bounds to be placed on many-body ground-state energies that can be numerically optimized using a parameterized family of trial wavefunctions. The Green's function Monte Carlo (GFMC) approach augments VMC by including imaginary-time evolution that exponentially suppresses excited-state contributions and allows exact ground-state energy results to be obtained from generic trial wavefunctions (more precisely, any trial wavefunction not orthogonal to the ground state) in the limit of large imaginary-time evolution. The statistical precision of GFMC calculations is greatly improved by a good choice of trial wavefunction that has a large overlap with the ground state, and often the optimized wavefunctions resulting from VMC calculations are used as the initial trial wavefunctions in subsequent GFMC calculations [58; 60]. Ground-state energy results obtained using GFMC are themselves variational upper bounds on the true ground-state energy, as discussed further below. This combination of methods leverages the desirable features of VMC while using GFMC to remove hard-to-quantify systematic uncertainties associated with the Hilbert space truncation induced by a wavefunction parameterization with a finite number of parameters. Previous works have used few-body methods, for example based on Faddeev equations, and variational methods to calculate quarkonium and baryon masses using potential models [78; 79; 80; 81; 82]. Two previous works have applied variational methods to calculate baryon masses using pNRQCD potentials: Ref. [61] uses the LO potential and a one-parameter family of analytically integrable variational wavefunctions, and Ref. [62] uses potentials up through NNLO with a two-parameter family of variational wavefunctions. Here, we extend these results by performing GFMC calculations with trial wavefunctions obtained using VMC in order to obtain reliable predictions for quarkonium and triply-heavy baryon masses across a wide range of \(m_{Q}\) for QCD as well as \(SU(N_{c})\) gauge theories of dark mesons and baryons with \(N_{c}\in\{2,\ldots,6\}\). The methods used here are very computationally efficient: by generating Monte Carlo ensembles for VMC and GFMC by applying the Metropolis algorithm with optimized trial wavefunctions used for importance sampling, we achieve more than an order of magnitude more precise results than previous calculations with modest computational resources. The techniques developed here can further be applied straightforwardly to systems with more than three heavy quarks. The remainder of this section discusses the formalism required to apply VMC and GFMC methods to pNRQCD for these systems and beyond. ### Variational Monte Carlo The quantum mechanical state \(\ket{\Psi}\) of a system containing \(N_{Q}\) heavy quarks/antiquarks can be described by a coordinate-space wavefunction \(\Psi(\mathbf{R})\equiv\langle\mathbf{R}|\Psi\rangle\) where \(\mathbf{R}\equiv(\mathbf{r}_{1},\ldots,\mathbf{r}_{N_{Q}})\) is a vector of coordinates. The normalization condition \[1=\langle\Psi|\Psi\rangle=\int d\mathbf{R}\,\langle\Psi|\mathbf{R}\rangle\langle\mathbf{R}|\Psi\rangle=\int d\mathbf{R}\,|\Psi(\mathbf{R})|^{2}, \tag{93}\] will be used throughout this work. The LO pNRQCD Hamiltonian is simply the Coulomb Hamiltonian, which is known to be bounded from below, and this boundedness will be assumed for the pNRQCD Hamiltonian at higher orders below and verified _a posteriori_.
This implies that there is a set of unit-normalized energy eigenstates \(\ket{n}\) with \(H\ket{n}=\Delta E_{n}\ket{n}\) (note that we continue using \(\Delta E\) to denote nonrelativistic energies here and below) that can be ordered such that \(\Delta E_{0}\leq\Delta E_{1}\leq\ldots\), from which the well-known Rayleigh-Ritz variational bound follows, \[\bra{\Psi}H\ket{\Psi}=\sum_{n}|\langle\Psi|n\rangle|^{2}\Delta E_{n}\geq\Delta E_{0}. \tag{94}\] This variational principle is the starting point for VMC methods. Any trial wavefunction \(\Psi_{T}(\mathbf{R};\mathbf{\omega})\) depending on a set of parameters \(\mathbf{\omega}=(\omega_{1},\ldots)\) satisfies the variational principle, \[\begin{split}\Delta E_{0}&\leq\bra{\Psi_{T}(\mathbf{\omega})}H\ket{\Psi_{T}(\mathbf{\omega})}\\ &=\int d\mathbf{R}\,\Psi_{T}(\mathbf{R};\mathbf{\omega})^{*}H(\mathbf{R})\Psi_{T}(\mathbf{R};\mathbf{\omega}),\end{split} \tag{95}\] where \(\bra{\mathbf{R}}H\ket{\mathbf{R}^{\prime}}=H(\mathbf{R})\delta(\mathbf{R}-\mathbf{R}^{\prime})\). By iteratively varying \(\mathbf{\omega}\) using a numerical optimization procedure, the upper bound on \(\Delta E_{0}\) provided by a parameterized family of trial wavefunctions can be successively improved. If the trial wavefunction is sufficiently expressive as to describe the true ground-state wavefunction for some set of parameters, then the true ground-state energy and wavefunction can be determined using such an optimization procedure. This is generally not the case for complicated many-body Hamiltonians and numerically tractable trial wavefunctions, and in this generic case, variational methods provide an upper bound on \(\Delta E_{0}\) rather than a rigorous determination of the ground-state energy. The integral in Eq. (95) is \(3N_{Q}\) dimensional and is challenging to compute exactly for many-body systems. Instead, VMC methods apply Monte Carlo integration techniques to stochastically approximate the integral in Eq. (95). The magnitude of the trial wavefunction can be used to define a probability distribution, \[\mathcal{P}(\mathbf{R};\mathbf{\omega})=|\Psi_{T}(\mathbf{R};\mathbf{\omega})|^{2}, \tag{96}\] from which coordinates \(\mathbf{R}\) can be sampled. The standard Metropolis algorithm can then be used to approximate the integral in Eq. (95): coordinates \(\mathbf{R}_{0}\) are sampled from \(\mathcal{P}(\mathbf{R};\mathbf{\omega})\), and updated coordinates \(\mathbf{R}_{1}=\mathbf{R}_{0}+\varepsilon\mathbf{x}\) are chosen using, for example, zero-mean and unit-variance Gaussian random variables \(\mathbf{x}\) and a step size \(\varepsilon\) discussed further below. The updated coordinates are accepted with probability \(w_{1}=\mathcal{P}(\mathbf{R}_{1};\mathbf{\omega})/\mathcal{P}(\mathbf{R}_{0};\mathbf{\omega})\), or with probability \(1\) if \(w_{1}>1\), and they are rejected otherwise. If the coordinates are accepted, then \(\mathbf{R}_{1}\) is added to an ensemble of coordinate values, while if they are rejected, then \(\mathbf{R}_{0}\) is added. This procedure is repeated with coordinates \(\mathbf{R}_{i+1}\) updated analogously from the latest coordinates \(\mathbf{R}_{i}\) in the ensemble. The new coordinates are accepted with probability \(w_{i+1}=\mathcal{P}(\mathbf{R}_{i+1};\mathbf{\omega})/\mathcal{P}(\mathbf{R}_{i};\mathbf{\omega})\) (or probability \(1\) if \(w_{i+1}>1\)).
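The accept/reject update just described is short to implement in code. The following is a minimal sketch in Python, assuming a user-supplied function `trial_psi(R, omega)` that evaluates \(\Psi_{T}(\mathbf{R};\mathbf{\omega})\); the function name, the proposal scale `eps`, and the ensemble layout are illustrative choices, not fixed by anything above:

```python
import numpy as np

def metropolis_ensemble(trial_psi, omega, R0, n_samples, eps, rng):
    """Sample coordinates R from P(R; omega) = |trial_psi(R, omega)|^2,
    Eq. (96), using the Metropolis accept/reject update described above."""
    R = np.array(R0, dtype=float)              # shape (N_Q, 3)
    p_old = abs(trial_psi(R, omega)) ** 2
    ensemble = []
    for _ in range(n_samples):
        # propose R' = R + eps * x with x zero-mean, unit-variance Gaussian
        R_new = R + eps * rng.standard_normal(R.shape)
        p_new = abs(trial_psi(R_new, omega)) ** 2
        # accept with probability min(1, w), where w = P(R')/P(R)
        if rng.random() < p_new / p_old:
            R, p_old = R_new, p_new
        ensemble.append(R.copy())              # a rejected step repeats R
    return np.array(ensemble)
```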
The resulting ensemble is approximately a set of random variables drawn from \(\mathcal{P}(\mathbf{R};\mathbf{\omega})\) if the coordinates from an initial thermalization period of \(N_{\text{therm}}\) updates are omitted, and they are approximately statistically independent if \(N_{\text{skip}}\) update steps are skipped between successive members of the final coordinate ensemble, where \(N_{\text{skip}}\) is chosen to be longer than the autocorrelation times of observables of interest.\({}^{4}\) (Footnote 4: Below, we find \(N_{\text{skip}}\gtrsim 100\) to be sufficient to achieve negligible autocorrelations in \(\bra{\Psi_{T}(\mathbf{\omega})}H\ket{\Psi_{T}(\mathbf{\omega})}\) using \(\varepsilon\) on the order of the Bohr radius of the Coulombic trial wavefunctions discussed in Sec. IV.) An ensemble of \(N_{\text{var}}\) such coordinates can then be used to approximate the integral in Eq. (95) as \[\bra{\Psi_{T}(\mathbf{\omega})}H\ket{\Psi_{T}(\mathbf{\omega})}\approx\frac{1}{N_{\text{var}}}\sum_{i=1}^{N_{\text{var}}}H(\mathbf{R}_{i}). \tag{97}\] In VMC methods, this approximation of \(\bra{\Psi_{T}(\mathbf{\omega})}H\ket{\Psi_{T}(\mathbf{\omega})}\) is used as a loss function to be minimized using numerical optimization techniques. For a complete review of VMC and its implementation, see Ref. [58]. In the VMC calculations below, we use the Adam optimizer [83] to update our trial wavefunction parameters iteratively. Default Adam hyperparameters are used with a step size initially chosen to be \(10^{-2}\). After the loss function fails to improve for 10 updates, the step size is reduced by a factor of 10. After two such reductions of the step size, optimization is restarted using the best trial wavefunction parameters from the previous optimization round and step sizes of \(10^{-3}\) and subsequently \(10^{-4}\) in order to refresh the Adam momenta and improve convergence to optimal parameters without overshooting. Gradients of the loss function are stochastically estimated in analogy to Eq. (97) using auto-differentiation techniques implemented in the Python package PyTorch [84]. ### Green's Function Monte Carlo The optimal trial wavefunction \(\Psi_{T}(\mathbf{R},\mathbf{\omega})\) obtained using VMC methods still may not provide an accurate determination of \(\Delta E_{0}\) because of the limited expressiveness of a finite-parameter function suitable for numerical optimization. To overcome this limitation, we use the standard QMC strategy of taking the optimal trial wavefunction obtained from VMC as the starting point for a subsequent GFMC calculation [58; 60]. GFMC calculations use evolution in imaginary time \(\tau\) to exponentially suppress excited-state components of \(\ket{\Psi_{T}}\), which is analogous to the imaginary-time evolution used in lattice QCD calculations.\({}^{5}\) (Footnote 5: This evolution is often described as diffusion because the free-particle nonrelativistic imaginary-time Schrödinger equation is the diffusion equation.) In the limit of infinite imaginary-time evolution, the ground state with a given set of quantum numbers can be obtained from any trial wavefunction with the same quantum numbers, \[\ket{0}=\lim_{\tau\to\infty}e^{-H\tau}\ket{\Psi_{T}}. \tag{98}\] In our case, imaginary-time evolution can be used to determine the ground-state energy and wavefunction of a system with \(N_{Q}\) heavy quarks/antiquarks using a pNRQCD Hamiltonian with conserved heavy quark/antiquark numbers.
In general, directly computing the propagator in Eq. (98) is not feasible for arbitrary \(\tau\). However, the evolution can be broken into small imaginary-time steps \(\delta\tau=\tau/N\) with \(N\gg 1\), and the full projection at large time can be recovered through a Lie-Trotter product [85], \[\Psi(\tau,\mathbf{R}_{N})= \int\prod_{i=0}^{N-1}d\mathbf{R}_{i}\bra{\mathbf{R}_{N}}e^{-H\delta\tau}\ket{\mathbf{R}_{N-1}}\times\cdots\times\bra{\mathbf{R}_{1}}e^{-H\delta\tau}\ket{\mathbf{R}_{0}}\langle\mathbf{R}_{0}|\Psi_{T}\rangle, \tag{99}\] making the computation feasible. We can then define the GFMC wavefunction in integral form at an imaginary time \(\tau+\delta\tau\), \[\Psi(\tau+\delta\tau,\mathbf{R})=\int d\mathbf{R}^{\prime}\,G_{\delta\tau}(\mathbf{R},\mathbf{R}^{\prime})\Psi(\tau,\mathbf{R}^{\prime}), \tag{100}\] in terms of a Green's function, \[G_{\delta\tau}(\mathbf{R},\mathbf{R}^{\prime})=\bra{\mathbf{R}}e^{-H\delta\tau}\ket{\mathbf{R}^{\prime}}. \tag{101}\] Practically, one approximates short-time propagation with the Trotter-Suzuki expansion, \[G_{\delta\tau}(\mathbf{R},\mathbf{R}^{\prime})= e^{-V(\mathbf{R})\delta\tau/2}\bra{\mathbf{R}}e^{-T\delta\tau}\ket{\mathbf{R}^{\prime}}e^{-V(\mathbf{R}^{\prime})\delta\tau/2}+\mathcal{O}(\delta\tau^{2}), \tag{102}\] where \(V\) is the potential in configuration space and \(T\) is the kinetic energy defining the free-particle propagator, which for nonrelativistic systems is expressible as a Gaussian distribution in configuration space, \[\bra{\mathbf{R}}e^{-T\delta\tau}\ket{\mathbf{R}^{\prime}}=\left(\frac{1}{\lambda^{3}\pi^{3/2}}\right)^{N_{Q}}e^{-(\mathbf{R}-\mathbf{R}^{\prime})^{2}/\lambda^{2}}, \tag{103}\] where \(\lambda^{2}=2\delta\tau/m_{Q}\) [58; 60]. The integral in Eq. (100) describing the action of a single Trotter step on the wavefunction is therefore computed by sampling \(\mathbf{R}-\mathbf{R}^{\prime}\) from Eq. (103) and then explicitly multiplying by the potential factors appearing in Eq. (102). In order to reduce the variance of GFMC results, a further resampling step is applied in which \(\mathbf{R}-\mathbf{R}^{\prime}\) and \(-(\mathbf{R}-\mathbf{R}^{\prime})\) are both proposed as possible updates, and a Metropolis sampling step is used to select one proposed update as described in more detail in Ref. [60]. If the action of the pNRQCD potential on a given state can be described by a spin- and color-independent potential depending only on \(\mathbf{R}\), then it is straightforward to exponentiate the potential as indicated in Eq. (102). Conveniently, precisely this situation arises for meson and baryon states at \(\mathcal{O}(m_{Q}^{0})\) as shown in Sec. II.6. In applications of pNRQCD to multi-hadron systems, this will not usually be the case because generic states are not eigenstates of a single color tensor operator but instead will include contributions from multiple color tensor operators in the potential. Calculations including \(\mathcal{O}(1/m_{Q}^{2})\) effects will also have spin-dependent potentials even in the single-meson and single-baryon cases. In generic applications including color- and spin-dependent potentials it will be necessary to expand the exponential, for instance as a Taylor series \(e^{-V\delta\tau/2}\approx 1-V\delta\tau/2+V^{2}(\delta\tau)^{2}/8+\ldots\). Since the potential appearing in these expressions is a \(2N_{c}N_{Q}\times 2N_{c}N_{Q}\) matrix, the accuracy of this expansion will have to be balanced against the computational cost of its evaluation when deciding how many terms to include.
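For the spin- and color-diagonal potentials relevant to single mesons and baryons at \(\mathcal{O}(m_{Q}^{0})\), a single Trotter step of Eqs. (100)-(103) amounts to a Gaussian drift of each walker plus multiplicative weights from the two half-step potential factors. A minimal sketch, assuming a user-supplied configuration-space potential `V(R)`; the symmetric \(\pm(\mathbf{R}-\mathbf{R}^{\prime})\) resampling step described above is omitted for brevity:

```python
import numpy as np

def gfmc_trotter_step(walkers, weights, V, m_Q, dtau, rng):
    """One Trotter step, Eq. (102): exp(-V dtau/2) at the old coordinates,
    a free-propagator drift sampled from Eq. (103), then exp(-V dtau/2)
    at the new coordinates. walkers has shape (n_walkers, N_Q, 3)."""
    lam = np.sqrt(2.0 * dtau / m_Q)            # lambda^2 = 2 dtau / m_Q
    weights = weights * np.exp(-0.5 * dtau * V(walkers))
    # exp(-(R-R')^2/lambda^2) is Gaussian with variance lambda^2/2 per component
    walkers = walkers + (lam / np.sqrt(2.0)) * rng.standard_normal(walkers.shape)
    weights = weights * np.exp(-0.5 * dtau * V(walkers))
    return walkers, weights
```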
More details on including matrix-valued potentials in GFMC calculations can be found in Refs. [58; 60]. A different treatment will be required for momentum-dependent potentials at \(\mathcal{O}(1/m_{Q}^{2})\). Applying an operator \(\mathcal{O}\) to the imaginary-time-evolved wavefunction \(\Psi_{T}(\mathbf{R},\tau)\) leads to the mixed expectation values \[\left\langle\Psi_{T}\right|\mathcal{O}\left|\Psi_{T}(\tau)\right\rangle=\left\langle\Psi_{T}\right|\mathcal{O}e^{-H\tau}\left|\Psi_{T}\right\rangle. \tag{104}\] Expectation values involving symmetric insertions of imaginary-time evolution operators can also be computed from the mixed expectation values \(\left\langle\Psi_{T}\right|\mathcal{O}\left|\Psi_{T}(\tau)\right\rangle\) and \(\left\langle\Psi_{T}(\tau)\right|\mathcal{O}\left|\Psi_{T}\right\rangle\) [58; 86]. Since \(H\) commutes with \(e^{-H\tau}\), Hamiltonian matrix elements are automatically symmetric, \[\left\langle\Psi_{T}\right|H\left|\Psi_{T}(\tau)\right\rangle =\left\langle\Psi_{T}\right|e^{-H\tau/2}He^{-H\tau/2}\left|\Psi_{T}\right\rangle=\left\langle\Psi_{T}(\tau/2)\right|H\left|\Psi_{T}(\tau/2)\right\rangle. \tag{105}\] By Eq. (95), this implies that GFMC binding-energy determinations provide variational upper bounds on the energy \(\Delta E_{0}\) of the ground state with quantum numbers of \(\Psi_{T}\). It further implies that GFMC Hamiltonian matrix elements have the spectral representation \[\left\langle\Psi_{T}\right|H\left|\Psi_{T}(\tau)\right\rangle=\sum_{n}\Delta E_{n}|Z_{n}|^{2}e^{-\Delta E_{n}\tau}, \tag{106}\] where \(Z_{n}=\left\langle n\right|\left.\Psi_{T}\right\rangle\). In the large-\(\tau\) limit, dependence on \(Z_{0}\) can be removed by dividing by \(\left\langle\Psi_{T}\right|\left.\Psi_{T}(\tau)\right\rangle\) since \[\left\langle\Psi_{T}\right|\left.\Psi_{T}(\tau)\right\rangle=\sum_{n}|Z_{n}|^{2}e^{-\Delta E_{n}\tau}. \tag{107}\] Defining the GFMC approximation to the Hamiltonian matrix element as \[\left\langle H(\tau)\right\rangle\equiv\frac{\left\langle\Psi_{T}\right|H\left|\Psi_{T}(\tau)\right\rangle}{\left\langle\Psi_{T}\right|\left.\Psi_{T}(\tau)\right\rangle}, \tag{108}\] and the excitation gap \[\delta\equiv\Delta E_{1}-\Delta E_{0}, \tag{109}\] this shows that in the large \(\tau\) limit \[\left\langle H(\tau)\right\rangle =\frac{\sum_{n}\Delta E_{n}|Z_{n}|^{2}e^{-\Delta E_{n}\tau}}{\sum_{n}|Z_{n}|^{2}e^{-\Delta E_{n}\tau}}=\Delta E_{0}+\left|\frac{Z_{1}}{Z_{0}}\right|^{2}\delta\ e^{-\delta\tau}+\ldots, \tag{110}\] where \(\ldots\) denotes terms from \(n>1\) that are even more strongly exponentially suppressed. Corrections to \(\left\langle H(\tau)\right\rangle\approx\Delta E_{0}\) are therefore exponentially suppressed by \(e^{-\delta\tau}\), and GFMC calculations can achieve accurate ground-state energy estimates even if \(Z_{1}/Z_{0}\) is not small as long as \(\tau\gg 1/\delta\). The computational simplicity of pNRQCD, particularly for mesons and baryons at \(\mathcal{O}(m_{Q}^{0})\), makes it straightforward to achieve \(\tau\gg 1/\delta\) in the numerical calculations below. Constant fits to \(\left\langle H(\tau)\right\rangle\) using correlated \(\chi^{2}\)-minimization are therefore used below to extract ground-state energies from GFMC results. To avoid contamination from \(\mathcal{O}(e^{-\delta\tau})\) excited-state effects, the minimum imaginary time used for fitting, \(\tau_{\text{min}}\), was varied; in particular, 30 different \(\tau_{\text{min}}\) were chosen from \([0,L_{\tau}-1]\).
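The approach of \(\left\langle H(\tau)\right\rangle\) to \(\Delta E_{0}\) in Eq. (110) is easy to visualize with a toy spectrum. A short sketch with invented values of \(\Delta E_{n}\) and \(|Z_{n}|^{2}\) (purely illustrative, not taken from any fit in this work):

```python
import numpy as np

dE = np.array([-0.5, -0.2, 0.1])   # toy Delta E_n, ordered dE_0 <= dE_1 <= ...
Z2 = np.array([0.9, 0.08, 0.02])   # toy overlaps |Z_n|^2 = |<n|Psi_T>|^2

def H_of_tau(tau):
    """Eq. (110): ratio of the spectral sums in Eqs. (106) and (107)."""
    boltz = Z2 * np.exp(-dE * tau)
    return np.sum(dE * boltz) / np.sum(boltz)

for tau in [0.0, 2.0, 5.0, 10.0, 20.0]:
    # converges to dE[0], with corrections ~ (Z2[1]/Z2[0]) * delta * exp(-delta*tau)
    print(tau, H_of_tau(tau))
```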
The covariance matrices for these fits are ill-conditioned due to the large number of imaginary time steps used, and results are therefore averaged over windows of consecutive \(\tau\) before performing fits. Linear shrinkage [87; 88] is used with the diagonal of the covariance matrix as the shrinkage target in order to further improve the numerical stability of covariance matrix estimation. The results \(\Delta E^{f}\) obtained from \(\chi^{2}\)-minimization for each choice of fit range \([\tau_{\text{min}}^{f},L_{\tau}-1]\) enumerated by \(f=1,\ldots,30\) with corresponding \(\chi^{2}\) minima \(\chi_{f}^{2}\) are then averaged in order to penalize fits with poor goodness-of-fit (arising from non-negligible excited-state effects) using the Bayesian model averaging method of Ref. [89] with flat priors. This corresponds to \[\Delta E=\sum_{f}w_{f}\Delta E^{f}, \tag{111}\] where the normalized weights \(w_{f}\) are defined by \[\begin{split}\tilde{w}_{f}&=\exp\left[-\frac{1}{2}\left(\chi_{f}^{2}+2\tau_{\text{min}}^{f}\right)\right],\\ w_{f}&=\frac{\tilde{w}_{f}}{\sum_{g}\tilde{w}_{g}},\end{split} \tag{112}\] where a constant factor of two times the number of parameters, which cancels from the weighted average defined in Eq. (111), has been omitted. The model-averaged fit uncertainties \(\delta\Delta E\) are then given in terms of the individual fit uncertainties \(\delta\Delta E^{f}\) by [89] \[(\delta\Delta E)^{2}=\sum_{f}w_{f}(\delta\Delta E^{f})^{2}+\sum_{f}w_{f}(\Delta E^{f})^{2}-\left(\sum_{f}w_{f}\Delta E^{f}\right)^{2}, \tag{113}\] where the last two terms provide a measure of systematic uncertainty arising from the variance of the ensemble of fit results. Finally, the size of the \(\tau\) averaging window is varied in order to test the stability of the covariance matrix determination: the stability of the final model-averaged fit results is tested for window sizes starting at 2 and then doubling (4, 8, \(\ldots\)) until \(1\sigma\) consistency between consecutive window-size choices is achieved. In this manner, the model-averaged \(\Delta E\) from the first \(\tau\) window size consistent with the previous \(\tau\) window size is taken as the final GFMC result quoted for all parameter choices below. ## IV Coulombic trial wavefunctions The QMC methods above require a parameterized family of trial wavefunctions \(\Psi_{T}(\mathbf{R};\mathbf{C})\) as the starting point for VMC. At LO, the color-singlet quark-antiquark potential is identical to a rescaled Coulomb potential, and the ground-state wavefunction is known analytically. Beyond LO, there are logarithmic corrections to the Coulombic shape of the potential. To assess how accurately a given variational family of trial wavefunctions has described the ground state of these higher-order potentials, GFMC calculations are performed using these variationally optimized trial wavefunctions. The amount of imaginary-time evolution required to converge toward the true ground-state energy, as well as the statistical precision of the GFMC calculations with a given trial wavefunction, provide quantitative measures of how close a given trial wavefunction is to the true ground state. Several families of trial wavefunctions are considered for these systems below.
A simple variational ansatz corresponding to Coulomb ground-state wavefunctions with appropriately tuned Bohr radii provides relatively stringent variational bounds on NLO and NNLO quarkonium energies while also leading to computationally efficient GFMC calculations. Analogous variational and GFMC calculations for baryons show that products of Coulomb ground-state wavefunctions with appropriately tuned Bohr radii provide simple but remarkably effective trial wavefunctions for heavy baryons. ### Quarkonium The pNRQCD potential for quarkonium states is given at \(\mathcal{O}(m_{Q}^{0})\) by Eq. (73) and Eq. (16) as \[\hat{V}\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right> =V_{\mathbf{1}}^{\psi\chi,(0)}(\mathbf{r}_{12})\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right>=-\frac{C_{F}\alpha_{V}(|\mathbf{r}_{12}|,\mu)}{|\mathbf{r}_{12}|}\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right>. \tag{114}\] At LO, \(\alpha_{V}(|\mathbf{r}_{12}|,\mu)=\alpha_{s}(\mu)\) and Eq. (114) takes the Coulombic form \[\hat{V}^{\rm(LO)}\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right>=-\frac{C_{F}\alpha_{s}}{|\mathbf{r}_{12}|}\left|Q\overline{Q}(\mathbf{r}_{1},\mathbf{r}_{2})\right>. \tag{115}\] Therefore, the pNRQCD Hamiltonian for quarkonium at LO is identical to a rescaled version of the Hamiltonian for positronium [90]. The energy eigenstate wavefunctions \(\psi_{nlm}(\mathbf{r}_{12})\) can therefore be classified by the same quantum numbers as the Hydrogen atom, \(n\in\mathbb{N}\), \(l=0,\ldots,n-1\) and \(m=-l,\ldots,l\). They further share the same functional form as the Hydrogen atom wavefunctions with \[\psi_{100}(\mathbf{r};a)=\frac{1}{\sqrt{\pi}a^{3/2}}e^{-|\mathbf{r}|/a}, \tag{116}\] where \(a\) is a constant analogous to the Hydrogen atom Bohr radius that for quarkonium at LO is equal to \[a^{\rm(LO)}=\frac{2}{\alpha_{s}C_{F}}. \tag{117}\] The corresponding quarkonium ground-state energy is equal to \[\Delta E^{\rm(LO)}_{Q\overline{Q}}=-\frac{\alpha_{s}^{2}C_{F}^{2}m_{Q}}{4}. \tag{118}\] Knowledge of the exact ground-state wavefunction for this case provides a powerful test of numerical QMC methods because \[\hat{H}^{\rm(LO)}\left|\psi_{i}(\mathbf{r}_{1})\chi_{i}(\mathbf{r}_{2})\right>\psi_{100}\left(\mathbf{r}_{12};a=\frac{2}{\alpha_{s}C_{F}}\right)=\Delta E^{\rm(LO)}_{Q\overline{Q}}\left|\psi_{i}(\mathbf{r}_{1})\chi_{i}(\mathbf{r}_{2})\right>\psi_{100}\left(\mathbf{r}_{12};a=\frac{2}{\alpha_{s}C_{F}}\right), \tag{119}\] for any \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\). Therefore, QMC results must reproduce \(\Delta E^{\rm(LO)}_{Q\overline{Q}}\) with zero variance when using \(\psi_{100}\) with \(a=2/(\alpha_{s}C_{F})\) as a trial wavefunction. A generic quarkonium wavefunction can be expanded in a basis of hydrogen wavefunctions as \[\Psi_{T}(\mathbf{r}_{1},\mathbf{r}_{2};\mathbf{C},a)=\sum_{n=0}^{\Lambda}\sum_{l=0}^{n-1}\sum_{m=-l}^{l}C_{nlm}\psi_{nlm}(\mathbf{r}_{12},a), \tag{120}\] where \(\Lambda\) provides a truncation of the complete infinite family of wavefunctions, leading to a finite-dimensional family of trial wavefunctions suitable for VMC calculations. We have verified that variational calculations using the LO potential and \(\Lambda\in\{0,1,2\}\) reproduce the exact LO ground-state energy within uncertainties and are consistent with \(C_{nlm}\propto\delta_{n0}\delta_{l0}\delta_{m0}\) and \(a=2/(\alpha_{s}C_{F})\).
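The zero-variance property in Eq. (119) is simple to verify numerically: with the Coulombic trial wavefunction, the local energy \(H(\mathbf{R})\) entering Eq. (97) is constant in \(\mathbf{R}\). A minimal sketch in units where \(m_{Q}=1\) (an assumption adopted here so that \(a^{\rm(LO)}=2/(\alpha_{s}C_{F})\) as in Eq. (117); the relative-coordinate kinetic operator is then \(-\mathbf{\nabla}_{r}^{2}\)):

```python
import numpy as np

alpha_s, N_c = 0.3, 3
C_F = (N_c**2 - 1) / (2 * N_c)
a = 2.0 / (alpha_s * C_F)            # Eq. (117), in units where m_Q = 1

def local_energy(r):
    """(H psi)/psi for psi_100(r; a) = exp(-r/a)/(sqrt(pi) a^{3/2}):
    grad^2 exp(-r/a) = (1/a^2 - 2/(a r)) exp(-r/a), so the kinetic piece
    -(grad_1^2 + grad_2^2)/2 = -grad_r^2 gives -(1/a^2 - 2/(a r))."""
    kinetic = -(1.0 / a**2 - 2.0 / (a * r))
    potential = -C_F * alpha_s / r   # LO Coulomb potential, Eq. (115)
    return kinetic + potential

r = np.random.default_rng(0).uniform(0.1, 20.0, 10**4)
E_L = local_energy(r)
print(E_L.std())                               # numerically zero: constant local energy
print(E_L.mean(), -alpha_s**2 * C_F**2 / 4)    # both equal Eq. (118) with m_Q = 1
```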
Beyond LO, we find that over a wide range of \(\alpha_{s}\in[0.05,0.5]\) the best variational bounds obtained using generic wavefunctions with \(\Lambda\in\{0,1,2\}\) are consistent with those where \(C_{nlm}\propto\delta_{n0}\delta_{l0}\delta_{m0}\). Since the \(\mathcal{O}(m_{Q}^{0})\) potential is a central potential only depending on \(|\mathbf{r}_{12}|\), orbital angular momentum is a conserved quantum number, and it is not surprising that the ground state is \(S\)-wave with only \(l=0\) wavefunctions present. Contributions to the ground state from wavefunctions with \(n>0\) should arise in principle beyond LO; however, we find that including \(n>0\) wavefunctions in our variational calculations leaves the variational bound on \(\Delta E_{Q\overline{Q}}\) unchanged at few-percent precision over a wide range of \(\alpha_{s}\). Similarly, we find that trial wavefunctions described by sums of 2-3 exponentials or Gaussians do not achieve lower variational bounds than those with a single \(n=0\) Coulomb wavefunction at the level of a few percent precision. These results motivate the simple one-parameter wavefunction ansatz \[\Psi_{T}(\mathbf{r}_{1},\mathbf{r}_{2};a)=\psi_{000}(\mathbf{r}_{12},a). \tag{121}\] Using VMC to determine the optimal \(a\) for NLO and NNLO leads to significantly lower ground-state energies than those obtained with \(a^{\rm(LO)}\). The optimal \(a\) are smaller than \(a^{\rm(LO)}\), which is to be expected if the NLO potential is approximately Coulombic because \(\alpha_{V}(|\mathbf{r}_{12}|,\mu)>\alpha_{s}(\mu)\) at NLO and beyond. Assuming that \(\mu\) is chosen to be on the order of \(1/|\mathbf{r}_{12}|\) for distances where the wavefunction is peaked, contributions to the NLO potential proportional to \(\ln(\mu|\mathbf{r}_{12}|e^{\gamma_{E}})\) can be approximated as a constant denoted \(L_{\mu}\). This corresponds to an approximation of the NLO potential as a Coulomb potential with \(\alpha_{s}(\mu)\) replaced by the \(|\mathbf{r}_{12}|\)-independent constant \(\alpha_{V}(|\mathbf{r}_{12}|,\mu=e^{L_{\mu}-\gamma_{E}}/|\mathbf{r}_{12}|)\). The ground-state wavefunction under this approximation is \(\psi_{000}(\mathbf{r}_{12};a(L_{\mu}))\) with \[a(L_{\mu})=\frac{2}{\alpha_{V}(|\mathbf{r}_{12}|,\mu=e^{L_{\mu}-\gamma_{E}}/|\mathbf{r}_{12}|)C_{F}}. \tag{122}\] Without assuming any approximation for the potential, \(\psi_{000}(\mathbf{r}_{12};a(L_{\mu}))\) can be viewed as a variational ansatz that is equivalent to \(\psi_{000}(\mathbf{r}_{12};a)\) with the only difference being that \(L_{\mu}\) is the variational parameter to be explicitly optimized instead of \(a\). The advantage of the \(a(L_{\mu})\) parameterization is that the dependence of the ground-state Bohr radius on \(\alpha_{s}\) is approximately incorporated into \(a(L_{\mu})\) with constant \(L_{\mu}\).

Figure 4: Heavy quarkonium binding energy GFMC results for \(\langle H(\tau)\rangle\) with \(\alpha_{s}=0.2\) as functions of \(\tau m_{Q}\) using LO trial wavefunctions (green) and the trial wavefunctions obtained using VMC calculations (purple). The Hamiltonian includes the \(\mathcal{O}(m_{Q}^{0})\) pNRQCD potential with the different perturbative orders in \(\alpha_{s}\) indicated.

Figure 5: Heavy quarkonium binding energy GFMC results for \(\langle H(\tau)\rangle\) with \(\alpha_{s}=0.3\) analogous to those in Fig. 4.
Empirically, \(\psi_{000}(\mathbf{r}_{12};a(L_{\mu}))\) with \(L_{\mu}=0\) is found to give ground-state energy results that are consistent at the few-percent level with optimal VMC results over a range of \(\alpha_{s}\in[0.1,0.3]\) (somewhat larger \(L_{\mu}\sim 0.5\) are weakly preferred for small \(\alpha_{s}\)). We are therefore led to the simple trial wavefunction ansatz \[\Psi_{T}(\mathbf{r}_{1},\mathbf{r}_{2})=\psi_{000}(\mathbf{r}_{12},a(L_{\mu}=0)). \tag{123}\] GFMC results using the VMC trial wavefunctions \(\Psi_{T}(\mathbf{r}_{1},\mathbf{r}_{2})\) are shown in Figs. 4-5 for quark masses corresponding to \(\alpha_{s}(\mu_{p})=0.2\) and \(\alpha_{s}(\mu_{p})=0.3\) respectively, using the renormalization scale choice \(\mu_{p}=4\alpha_{s}(\mu_{p})m_{Q}\) discussed further below. Results using the exact LO wavefunction with \(a=2/(\alpha_{s}C_{F})\) as GFMC trial wavefunctions are also shown for comparison. Both results are identical at LO and reproduce the exact result, Eq. (118), with zero variance at machine precision. At NLO, the VMC wavefunctions give 3% and 4% lower variational bounds than LO wavefunctions for \(\alpha_{s}=0.2\) and \(\alpha_{s}=0.3\), respectively. After GFMC evolution, both results approach energies 2% lower than the VMC variational bounds for both \(\alpha_{s}\). Slightly less imaginary-time evolution is required to achieve ground-state saturation at a given level of precision for VMC wavefunctions than LO wavefunctions. At NNLO, the VMC wavefunctions achieve more significant 7% and 11% lower variational bounds than LO wavefunctions for \(\alpha_{s}=0.2\) and \(\alpha_{s}=0.3\), respectively. GFMC evolution again leads to 2% lower energies than optimized variational wavefunctions for both \(\alpha_{s}\). Significantly less imaginary-time evolution is required to achieve ground-state saturation using optimized variational wavefunctions at NNLO. For NLO potentials, the variance of \(\langle H(\tau)\rangle\) computed using VMC trial wavefunctions is similar to that obtained using LO trial wavefunctions. For NNLO potentials, the corresponding variance is 50% smaller using VMC trial wavefunctions than using LO trial wavefunctions. Notably, significantly more imaginary-time evolution is required to achieve ground-state saturation with \(\alpha_{s}=0.2\) than with \(\alpha_{s}=0.3\). At both NLO and NNLO, \(1\sigma\) agreement between model-averaged fit results and Hamiltonian matrix elements at particular \(\tau\) is seen for \(\tau\gtrsim 25/m_{Q}\) with \(\alpha_{s}=0.3\) and is only seen for \(\tau\gtrsim 50/m_{Q}\) with \(\alpha_{s}=0.2\). This scaling is consistent with theoretical expectations for a Coulombic system: the energy gap between the ground and the first-excited state at LO is \[\delta^{\text{(LO)}}=\frac{3\alpha_{s}^{2}C_{F}^{2}m_{Q}}{16}, \tag{124}\] and excited-state contributions to GFMC results are suppressed by \(e^{-\delta\tau}\). The observed scaling of \(\delta\) in our GFMC results is consistent with \(\delta\sim\alpha_{s}^{2}m_{Q}\) holding approximately at higher orders. These GFMC results include discretization effects arising from the Trotterization of the imaginary-time evolution operator \(e^{-\hat{H}\tau}\) discussed in Sec. III.2 and were performed using \(\delta\tau=0.4/m_{Q}\). We repeated GFMC calculations using a wide range of \(\delta\tau\in[0.2/m_{Q},6.4/m_{Q}]\) in order to study the size of these discretization effects; results for \(\alpha_{s}=0.2\) are shown in Fig. 6.
Discretization effects are found to be sub-percent level and smaller than our GFMC statistical uncertainties for \(\delta\tau\,m_{Q}\lesssim 2\), with evidence for few-percent discretization effects at larger \(\delta\tau\). Similar results are found for other \(\alpha_{s}\), with the smallest \(\delta\tau\) at which discretization effects are visible found to increase with decreasing \(\alpha_{s}\) roughly as \(1/\alpha_{s}\). To validate this determination, we computed the expectation value of \([\hat{V},\hat{T}]\) and found that the \(\delta\tau\) scales where discretization effects become visible are roughly consistent with \(\Delta E_{Q\overline{Q}}/\left\langle Q\overline{Q}\right|[\hat{V},\hat{T}]\left|Q\overline{Q}\right\rangle\), as expected from the Baker-Campbell-Hausdorff commutator corrections arising from approximating \(e^{-(\hat{T}+\hat{V})\delta\tau}\) as \(e^{-\hat{T}\delta\tau}e^{-\hat{V}\delta\tau}\) [91].

Figure 6: Heavy quarkonium binding energy results obtained from fits to the \(\langle H(\tau)\rangle\) results in Fig. 4 are shown for GFMC calculations with several different choices of Trotterization scale \(\delta\tau\) as functions of \(\delta\tau m_{Q}\) for NLO and NNLO pNRQCD potentials. The LO results are not shown since the results are exact and therefore \(\delta\tau\) independent.

### Baryons The pNRQCD quark-quark potential acting on baryon states is given at \(\mathcal{O}(m_{Q}^{0})\) by Eq. (82) and Eq. (33) as \[\begin{split}\hat{V}^{\psi\psi}\left|B\right\rangle&=\sum_{I<J}V_{\text{A}}^{\psi\psi,(0)}(\mathbf{r}_{IJ})\left|B\right\rangle\\ &=-\sum_{I<J}\frac{C_{B}\alpha_{V}(|\mathbf{r}_{IJ}|,\mu)}{|\mathbf{r}_{IJ}|}\left|B\right\rangle,\end{split} \tag{125}\] where \(C_{B}=C_{F}/(N_{c}-1)\). As discussed above, three-quark potentials arise for baryons at NNLO; however, the quark-quark potential arises at LO and can therefore be expected to play a dominant role. The baryon quark-quark potential has a similar Coulombic form to the quarkonium potential, except that for the baryon case, there is a sum over Coulomb potentials for all relative coordinate differences. A similar (though not identical) summation arises in the kinetic term if the baryon wavefunction is taken to be a linear combination of products of Coulomb wavefunctions, \[\Psi_{T}(\mathbf{R};\mathbf{C},a)=\prod_{I=1}^{N_{c}}\prod_{J<I}\sum_{n=0}^{\Lambda}\sum_{l=0}^{n-1}\sum_{m=-l}^{l}C_{nlm}\psi_{nlm}(\mathbf{r}_{IJ},a), \tag{126}\] where \(\mathbf{R}=(\mathbf{r}_{1},\dots,\mathbf{r}_{N_{c}})\). Although VMC calculations are performed using \(\Lambda\in\{0,1,2\}\), the variational energy bounds obtained for \(N_{c}=3\) baryons are consistent with those obtained using ground-state wavefunctions where \(C_{nlm}\propto\delta_{n0}\delta_{l0}\delta_{m0}\). Similarly, results using sums of one or two exponential or Gaussian corrections to a product of \(n=0\) Coulomb wavefunctions are found to give consistent variational bounds at the one percent level across a wide range of \(\alpha_{s}\). This motivates the simple one-parameter family of trial wavefunctions \[\Psi_{T}(\mathbf{R};a)=\prod_{I=1}^{N_{c}}\prod_{J<I}\psi_{000}(\mathbf{r}_{IJ},a). \tag{127}\] Analogous results are found for (less systematic) VMC studies with \(N_{c}\in\{4,5,6\}\). This VMC ansatz is similar to the exponential wavefunction ansatz used in variational calculations of pNRQCD baryons at LO in Ref. [61]. However, it differs significantly from the ansatz used in analogous NNLO calculations in Ref.
[62], which used a product of momentum-space exponentials that therefore have power-law decays at large separations to describe \(N_{c}=3\) baryons. It is perhaps surprising that baryon ground-state energies are accurately described using a product of Coulomb ground-state wavefunctions even at NNLO with three-quark potentials present; however, as discussed in Sec. V.2 below, the three-quark potentials lead to sub-percent corrections to results using just quark-quark potentials for \(\alpha_{s}\lesssim 0.3\). At LO, the optimal variational bounds obtained from VMC with this one-parameter trial wavefunction family are consistent with \[a^{(\text{LO})}=\frac{2}{\alpha_{s}C_{B}}, \tag{128}\] which is the same Bohr radius appearing in the exact LO quarkonium result rescaled by the color factor appearing in the baryon potential. Beyond LO, we again parameterize the Bohr radius by \(a(L_{\mu})\) defined in Eq. (122), where \(L_{\mu}\) corresponds to the value of \(\ln(\mu re^{\gamma_{E}})\) if logarithmic \(r\) dependence is approximated as constant. The optimal value of \(L_{\mu}\) increases mildly with increasing \(m_{Q}\), but across the range \(0.1\leq\alpha_{s}\leq 0.3\), ground-state energy results with a constant value of \(L_{\mu}=0.5\) are within a few percent of optimal VMC ground-state energies (somewhat larger \(L_{\mu}\sim 1\) are weakly preferred for small \(\alpha_{s}\)). The GFMC calculations of QCD and \(SU(N_{c})\) baryons below therefore use the simple trial wavefunction ansatz \[\Psi_{T}(\mathbf{R})=\prod_{I=1}^{N_{c}}\prod_{J<I}\psi_{000}(\mathbf{r}_{IJ},a(L_{\mu}=0.5)). \tag{129}\] GFMC results using the VMC trial wavefunctions \(\Psi_{T}(\mathbf{R})\) are shown in Figs. 7-8 for the same quark masses and renormalization scales as for quarkonium above. Although the LO baryon wavefunction is not an eigenstate of \(\hat{H}^{(\text{LO})}\), it provides remarkably precise and approximately \(\tau\)-independent Hamiltonian matrix elements with excited-state contamination not visible within \(0.1\%\) statistical uncertainties. Similar results are found with \(N_{c}\in\{4,5,6\}\). This suggests that the product form of the baryon trial wavefunction used here is suitable for describing multi-quark states with identical attractive Coulomb interactions between all quarks. Beyond LO, similar patterns arise as in the quarkonium case above, but excited-state effects are more pronounced for baryons before VMC optimization. VMC wavefunctions give \(6\%\) and \(10\%\) lower variational bounds than LO wavefunctions for NLO potentials with \(\alpha_{s}=0.2\) and \(\alpha_{s}=0.3\), respectively. Excited-state contamination is still visible in GFMC results using VMC wavefunctions for \(\tau\lesssim 50/m_{Q}\) with \(\alpha_{s}=0.2\) and \(\tau\lesssim 25/m_{Q}\) with \(\alpha_{s}=0.3\), which is similar to the corresponding \(\tau\) required for similar suppression of quarkonium excited states and shares the same \(1/(\alpha_{s}^{2}m_{Q})\) scaling expected for Coulombic excited-state effects. At least a factor of two larger \(\tau\) is required to achieve the same level of excited-state suppression using LO baryon wavefunctions. The fitted GFMC ground-state energy is \(1\%\) and \(2\%\) lower than the VMC wavefunction results for \(\alpha_{s}=0.2\) and \(\alpha_{s}=0.3\), respectively. At NNLO, VMC wavefunctions give 10% and 17% lower variational bounds than LO wavefunctions with \(\alpha_{s}=0.2\) and \(\alpha_{s}=0.3\), respectively.
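For concreteness, the product ansatz of Eq. (129) is straightforward to evaluate for use in the VMC and GFMC samplers sketched earlier. A minimal sketch, with the Bohr radius `a` supplied directly rather than computed from \(a(L_{\mu})\):

```python
import numpy as np

def psi_000(r, a):
    """Ground-state Coulomb wavefunction of Eq. (116)."""
    return np.exp(-r / a) / (np.sqrt(np.pi) * a**1.5)

def baryon_trial(R, a):
    """Product trial wavefunction of Eq. (129): product over quark pairs
    I > J of psi_000(|r_I - r_J|, a). R has shape (N_c, 3)."""
    N_c = R.shape[0]
    psi = 1.0
    for I in range(N_c):
        for J in range(I):
            psi *= psi_000(np.linalg.norm(R[I] - R[J]), a)
    return psi
```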
Excited-state effects are mild and similar to NLO using VMC wavefunctions, with 1% differences between VMC and fitted GFMC ground-state energy results, but very large excited-state effects and large variance increase with \(\tau\) are both visible using LO baryon wavefunctions with NNLO potentials. The reduction in variance between VMC and LO baryon wavefunctions is more than an order of magnitude for some \(\tau\), and for large \(\tau\), the signal using LO wavefunctions is lost while VMC wavefunctions have relatively mild variance increases.

Figure 7: Triply-heavy baryon binding energy results for \(\langle H(\tau)\rangle\) with \(\alpha_{s}=0.2\) analogous to those in Fig. 4.

Figure 8: Triply-heavy baryon binding energy results for \(\langle H(\tau)\rangle\) with \(\alpha_{s}=0.3\) analogous to those in Fig. 4.

It is perhaps not surprising that LO baryon wavefunctions do not provide a suitable trial wavefunction for GFMC calculations at NNLO, where in particular three-quark potentials enter. However, it is remarkable that simple VMC optimization of the Bohr radius of a product of Coulomb wavefunctions is sufficient to provide a trial wavefunction leading to high-precision GFMC results with few-percent excited-state effects only for \(\tau\lesssim 2/(\alpha_{s}^{2}m_{Q})\). The dependence of fitted GFMC results on \(\delta\tau\) is shown in Fig. 9 for the example of \(N_{c}=3\) baryons with \(\alpha_{s}=0.2\). Interestingly, LO baryon ground-state energy results are observed to be independent of \(\delta\tau\) to percent-level precision for \(\delta\tau\lesssim 100/m_{Q}\) even though the LO baryon wavefunction is not exactly a LO energy eigenstate. Discretization effects are also not clearly resolved at NLO for \(\delta\tau\lesssim 6/m_{Q}\), although more significant effects appear for larger \(\delta\tau\). At NNLO, there are clear signals of percent-level discretization effects for \(\delta\tau\gtrsim 1/m_{Q}\), but negligible sub-percent discretization effects are seen for smaller \(\delta\tau\). The calculations below target percent-level determinations of ground-state (nonrelativistic) energies and therefore use \(\delta\tau=0.4/m_{Q}\) for QCD and \(\delta\tau\in[0.4/m_{Q},0.8/m_{Q}]\) for exploring strongly coupled dark sectors, for which these discretization effects are expected to be negligible. ## V QCD Binding Energy Results The heavy quarkonium mass \(M_{Q\overline{Q}}=2m_{Q}+\Delta E_{Q\overline{Q}}\) is one of the simplest pNRQCD observables, and matching its calculated value to experimental results provides a way to fix the pNRQCD parameter \(m_{Q}\). The heavy quarkonium spectrum has been previously computed in pNRQCD for \(b\) and \(c\) mesons to N\({}^{3}\)LO [54; 10] using perturbative quark mass definitions. Here, we use an alternative quark-mass definition, analogous to definitions used in lattice QCD, in which we tune the pole mass \(m_{Q}\) to reproduce experimental quarkonium masses. Once \(m_{Q}\) is determined using this tuning procedure, pNRQCD can be used to make predictions for other hadron masses and matrix elements. Below, the masses of triply-heavy baryons containing \(b\) and \(c\) quarks are computed and compared with lattice QCD results [92; 93] in order to validate the methods discussed above.
Further, it is straightforward and relatively computationally inexpensive to extend pNRQCD calculations over a wide range of \(m_{Q}\), allowing the dependence of meson and baryon masses on \(m_{Q}\) to be studied for \(m_{Q}\gg\Lambda_{QCD}\). For each choice of \(m_{Q}\), the renormalization scale \(\mu\) is chosen to be in the range \(\alpha_{s}m_{Q}<\mu<m_{Q}\) so that neither the logs of \(\mu/m_{Q}\) arising in NRQCD matching nor the logs of \(\mu r\) explicitly appearing in the potential become too large [13; 5], since on average \(r\sim 1/(vm_{Q})\sim 1/(\alpha_{s}m_{Q})\), as supported by the success of the Hydrogen wavefunction with this value of the Bohr radius discussed above. In particular, the GFMC results below use a central value of the renormalization scale \[\mu_{p}=4\alpha_{s}(\mu_{p})m_{Q}, \tag{130}\] which can be solved using iterative numerical methods to determine \(\mu_{p}\) for a given value of \(m_{Q}\). In order to study the dependence on this choice of scale, GFMC calculations are performed with \(\mu=2\mu_{p}\) and \(\mu=\mu_{p}/2\) as well as with \(\mu=\mu_{p}\).

Figure 9: Triply-heavy baryon binding energy results obtained from fits to the \(\langle H(\tau)\rangle\) results in Fig. 7 are shown for GFMC calculations with several different choices of Trotterization scale \(\delta\tau\) as functions of \(\delta\tau m_{Q}\) for each perturbative order studied.

The renormalization group evolution of \(\alpha_{s}(\mu)\) is solved using the \(\beta\)-function calculated at one order higher in perturbation theory than the pNRQCD potential; in particular, the one-, two-, and three-loop \(\beta\) functions are used along with the LO, NLO, and NNLO potentials. The \(\beta\)-function coefficients, the values of the Landau pole scale \(\Lambda_{QCD}\) required to reproduce the experimentally precisely constrained value \(\alpha_{s}(M_{Z})=0.1184(7)\) for the three-loop \(\alpha_{s}\), and the quark threshold matching factors relating theories with \(N_{f}\) and \(N_{f}-1\) flavors are reviewed in Ref. [94]; the same initial condition is used to determine the values of \(\Lambda_{QCD}\) used for one- and two-loop \(\alpha_{s}\) in the LO and NLO results of this work. Numerical results in this section use GFMC calculations with the trial wavefunction discussed in Sec. IV. Calculations use 8 equally spaced values of \(m_{Q}\in[m_{c},m_{b}]\) (using the \(\overline{\rm MS}\) masses [95]) for which the \(N_{f}=4\) potential is used (the renormalization scale satisfies \(\mu_{p}>m_{c}\) for this range) and another 8 equally spaced values of \(m_{Q}\in[m_{b},m_{t}]\) for which the \(N_{f}=5\) potential is used. The Trotterization scale \(\delta\tau=0.4/m_{Q}\) is chosen, which is expected to lead to sub-percent discretization effects on binding energies according to the results of Sec. IV. The total imaginary-time length of GFMC evolution is chosen to be \(N_{\tau}\delta\tau=8/(\alpha_{s}^{2}m_{Q})\) in order to ensure that imaginary times much larger than the expected inverse excitation gap \(\delta\sim 1/(\alpha_{s}^{2}m_{Q})\) are achieved, which the results of Sec. IV indicate are sufficient to reduce excited-state contamination to the sub-percent level. This corresponds to \(N_{\tau}\in[200,1400]\) for \(m_{Q}\in[m_{c},m_{t}]\). Relatively modest GFMC ensembles with \(N_{\rm walkers}=5,000\) are found to be sufficient to achieve sub-percent precision on binding energy determinations.
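The self-consistent scale in Eq. (130) is conveniently obtained by fixed-point iteration. A short sketch using one-loop running for illustration only (the \(\Lambda_{QCD}\) value below is a placeholder; as described above, the actual calculations use up to three-loop running with \(\Lambda_{QCD}\) fixed by \(\alpha_{s}(M_{Z})\)):

```python
import numpy as np

def alpha_s_1loop(mu, Lambda_QCD=0.3, N_f=5):
    """One-loop running coupling (illustrative; placeholder Lambda_QCD in GeV)."""
    b0 = (33 - 2 * N_f) / (12 * np.pi)
    return 1.0 / (b0 * np.log(mu**2 / Lambda_QCD**2))

def solve_mu_p(m_Q, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for Eq. (130): mu_p = 4 alpha_s(mu_p) m_Q."""
    mu = m_Q                     # start inside the range alpha_s m_Q < mu < m_Q
    for _ in range(max_iter):
        mu_new = 4.0 * alpha_s_1loop(mu) * m_Q
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    raise RuntimeError("fixed-point iteration did not converge")

print(solve_mu_p(4.8))           # e.g. m_Q near m_b in GeV
```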
### Heavy quarkonium Results for the heavy quarkonium binding energy \(\Delta E_{Q\overline{Q}}\) for the ranges of \(\alpha_{s}\) above with \(N_{f}=4\) and \(N_{f}=5\) at LO, NLO, and NNLO in pNRQCD are obtained from fits to GFMC results as described above and shown as functions of \(\alpha_{s}\) in Fig. 10. At LO, the exact result \(\Delta E_{Q\overline{Q}}^{(\rm LO)}/m_{Q}/\alpha_{s}^{2}=-C_{F}^{2}/4\) is reproduced as discussed above. At NLO and NNLO, clear dependence on \(\alpha_{s}\) can be seen in \(\Delta E_{Q\overline{Q}}/m_{Q}/\alpha_{s}^{2}\). For a Coulombic system, NLO corrections of \({\cal O}(\alpha_{s})\) would lead to \({\cal O}(\alpha_{s})\) and \({\cal O}(\alpha_{s}^{2})\) corrections to the quarkonium binding energy. Further corrections arise from the logarithmic differences between pNRQCD and Coulomb potentials, but as discussed in Sec. IV, these differences are relatively mild for \(\alpha_{s}\lesssim 0.3\) and the renormalization scale \(\mu_{p}\) discussed above. Quadratic fits to the NLO results in Fig. 10 with constant terms fixed to \(-C_{F}^{2}/4\) achieve \(\chi^{2}/{\rm dof}\sim 0.7\) for \(N_{f}=5\) results and \(\chi^{2}/{\rm dof}\sim 2.1\) for \(N_{f}=4\) results with \(\mu=\mu_{p}\), indicating that logarithmic effects are not well-resolved for couplings in the \(N_{f}=5\) range but may be apparent for couplings in the \(N_{f}=4\) range. Similarly, NNLO corrections to the potential should be approximately described by an \({\cal O}(\alpha_{s}^{4})\) polynomial with constant term \(-C_{F}^{2}/4\) and the same linear term as arises at NLO.

Figure 10: Heavy quarkonium binding energy results as functions of \(\alpha_{s}\) (excluding points with \(N_{f}=5\) and \(m_{Q}=m_{c}\) for clarity).
Large differences of 40-70% are seen between LO and NLO over the range of \(\alpha_{s}\) studied here. Smaller but still significant differences of 20-50% are seen between NLO and NNLO results. This suggests that the perturbative expansion in \(\alpha_{s}(\mu_{p})\) does not converge rapidly over the range of \(m_{Q}\) studied here, and even for \(m_{Q}\sim m_{t}\), NLO and NNLO effects on the relation between \(\Delta E_{Q\overline{Q}}\) and \(m_{Q}\) are still 40% and 20% of LO results, respectively. These results for \(\Delta E_{Q\overline{Q}}\) do not provide physical predictions until \(m_{Q}\) has been specified. The parameter \(m_{Q}\) appearing in the pNRQCD Lagrangian is a pole mass that can be fixed once it is related to a known observable. Perturbation theory generally leads to slowly converging relations between pole mass definitions and physical observables due to infrared renormalon ambiguities [96; 97]. Better convergence can be expected for predictions of relationships between physical observables where renormalon effects cancel. We therefore use the nonperturbative (in terms of treatment of the potential) results for \(\Delta E_{Q\overline{Q}}\) provided by the GFMC calculations above to relate \(M_{Q\overline{Q}}\) and \(m_{Q}\) at each order of pNRQCD. In particular, we define \(m_{c}\) and \(m_{b}\) by the values of \(m_{Q}\) for which \(M_{Q\overline{Q}}\) agrees with experimental determinations of the spin-averaged quarkonium mass combination \(M_{Q\overline{Q}}=\frac{3}{4}M_{Q\overline{Q}}^{{}^{3}S_{1}}+\frac{1}{4}M_{Q\overline{Q}}^{{}^{1}S_{0}}\). An iterative tuning procedure is used to determine \(m_{b}\) and \(m_{c}\), in which fits to the GFMC results above are used to provide initial guesses for the masses that are then refined by performing additional GFMC calculations with the current best-fit \(m_{b}\) and \(m_{c}\) and then re-fitting including these results.

Figure 11: Heavy quarkonium binding energy results as functions of \(m_{Q}\). Fitted GFMC results are shown as points with error bars showing the statistical plus fitting systematic uncertainties discussed in the main text. Shaded bands connect results with renormalization scale choices \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\).

Figure 12: Relative differences between heavy quarkonium binding energies calculated at different orders of pNRQCD (excluding points with \(N_{f}=5\) and \(m_{Q}=m_{c}\) for clarity).

This is repeated until the procedure has converged within our GFMC statistical uncertainties, which leads to the values of \(m_{b}\) and \(m_{c}\) at each order of pNRQCD shown in Table 1. Large order-by-order shifts in the values of \(m_{b}\) and \(m_{c}\) needed to reproduce experimental quarkonium results are seen, as expected from the poor perturbative convergence of relations between quark pole masses and quarkonium masses. Analogous effects arise in relations between quark pole masses and other hadron masses. With \(m_{b}\) and \(m_{c}\) fixed to reproduce quarkonium masses, further pNRQCD hadron mass predictions are effectively relations between hadron masses that should have better convergence than the relations between the individual hadron masses and the quark pole masses. ### Triply-heavy baryons Results for triply-heavy baryon binding energies \(\Delta E_{QQQ}\) over the same ranges of \(\alpha_{s}\) with \(N_{f}=4\) and \(N_{f}=5\) are shown in Fig. 13.
The same results for \(\Delta E_{QQQ}/m_{Q}/\alpha_{s}^{2}\) at each order of pNRQCD and with \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\) are shown as functions of \(m_{Q}\) in Fig. 14. The order-by-order differences in the relation between \(\Delta E_{QQQ}/m_{Q}/\alpha_{s}^{2}\) and \(m_{Q}\) are similar to the case of heavy quarkonium discussed above. Although LO results are not exactly renormalization scale independent for baryons, numerical results are found to be scale independent to better than 0.1% precision. Visible scale dependence appears at NLO, with slightly larger scale dependence appearing at NNLO. The similarities between Fig. 11 and Fig. 14 suggest that the large order-by-order shifts in the relations between the pole mass \(m_{Q}\) and both the quarkonium and baryon masses are highly correlated and that predictions of the ratio of the baryon and quarkonium binding energies as a function of \(m_{Q}\) have much better perturbative convergence than either binding energy individually. This is confirmed by directly calculating the perturbative differences of these ratios shown in Fig. 15. Although both quarkonium and baryon binding energies individually have 40-70% differences between LO and NLO over the range of \(\alpha_{s}\) studied here, the corresponding change in the ratio of baryon and meson binding energies, \[R_{QQQ}\equiv\frac{\Delta E_{QQQ}}{\Delta E_{Q\overline{Q}}}, \tag{131}\] is 5-10%. Similarly, both quarkonium and baryon binding energies have 20-50% differences between NNLO and NLO, but \(R_{QQQ}\) differs by only 3-8%. It is further possible to separate the contributions to \(\Delta E_{QQQ}\) arising from three-quark potentials from those arising from quark-quark potentials only. The effects of three-body potentials, which first arise at NNLO, are isolated by performing GFMC calculations using only the NNLO quark-quark potentials and taking the difference with results obtained with three-quark potentials included. The relative size of this difference is shown as a function of \(\alpha_{s}\) in Fig. 16. Interestingly, including three-body potentials leads to sub-percent changes to NNLO heavy baryon binding energies for \(\alpha_{s}\lesssim 0.3\), which is much smaller than the overall difference between NLO and NNLO binding energies. Still, three-body potential effects of around 0.25%-1% of NNLO binding energy results are well-resolved from zero and seen to lower baryon masses in comparison with results obtained using only quark-quark potentials, as expected since the color-antisymmetric three-quark potential is attractive. The binding-energy ratio \(R_{QQQ}\) results are shown in Fig. 17. It is clear that \(R_{QQQ}\) is approximately independent of \(m_{Q}\) over the entire range of quark masses studied here. At LO, constant fits to GFMC results with \(N_{f}=5\) and \(\mu=\mu_{p}\) give \[R_{QQQ}^{(\rm LO)}\approx 1.0717(1), \tag{132}\] with \(\chi^{2}/{\rm dof}=1.6\); consistent results are obtained for other choices of \(\mu\) and for \(N_{f}=4\). Beyond LO, mild \(m_{Q}\) dependence can be resolved in \(R_{QQQ}\) that can be described by an \({\cal O}(\alpha_{s})\) linear correction.
At NLO, a linear fit to GFMC results with \(N_{f}=5\) and \(\mu=\mu_{p}\) gives \[R_{QQQ}^{(\rm NLO)}\approx 1.114(3)+0.33(2)\alpha_{s}, \tag{133}\] with \(\chi^{2}/{\rm dof}=1.0\). Results with other choices of \(\mu\) lead to consistent constant terms with \({\cal O}(\alpha_{s})\) terms ranging from 0.31 - 0.4. Fits to \(N_{f}=4\) results are consistent with \(N_{f}=5\) results but have larger uncertainties and somewhat worse \(\chi^{2}/{\rm dof}\sim 2\). At NNLO, an analogous linear fit to \(N_{f}=5\) results with \(\mu=\mu_{p}\) gives \[R_{QQQ}^{\rm(NNLO)}\approx 1.116(2)+0.60(2)\alpha_{s}, \tag{134}\] with \(\chi^{2}/{\rm dof}=1.4\). Other NNLO results are generally described similarly or slightly worse by linear fits. However, NNLO results with \(N_{f}=4\) and \(\mu=\mu_{p}/2\) show nonlinear features for small \(m_{Q}\) in Fig. 17 that are not accurately described by an \({\cal O}(\alpha_{s})\) polynomial, which is not surprising because the corresponding results for \(\Delta E_{Q\overline{Q}}\) and \(\Delta E_{QQQ}\) show evidence for significant non-Coulombic effects at small \(m_{Q}\).

These pNRQCD results can be compared with general constraints from QCD inequalities. The Weingarten inequality, \(M_{N}\geq m_{\pi}\) [98], was extended by Detmold to \(M_{N}\geq\frac{3}{2}m_{\pi}\) [99] by showing that all maximal-isospin multi-meson interactions are repulsive or vanishing at threshold and do not lead to bound states. The same arguments apply for \(Q\overline{Q}\) multi-meson states if quark-antiquark annihilation is neglected, because identical patterns of quark contractions arise in this case as for \(u\overline{d}\). Since neglecting \(Q\overline{Q}\) annihilation is a valid approximation for heavy quarks up to \({\cal O}(1/m_{Q}^{2})\) [52], the corresponding heavy-quark meson and baryon mass inequality is \[M_{QQQ}\geq\frac{3}{2}M_{Q\overline{Q}}+{\cal O}(1/m_{Q}^{2}). \tag{135}\] These bounds can be directly compared with the pNRQCD results obtained here.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(1S\) mesons & Order & \(\alpha_{s}(\mu)\) & \(m_{Q}\) & \(\chi^{2}/{\rm dof}\) & \(M_{Q\bar{Q}}\) & Measured \(M_{Q\bar{Q}}\) [95] \\ \hline \hline \((J/\psi,\eta_{c})\) & LO (exact) & 0.282678 & 1.56206 & - & 3.06865 & 3.06865(10) \\ \((J/\psi,\eta_{c})\) & NLO & 0.313613 & 1.65234 & 1.12 & 3.06870(33) & 3.06865(10) \\ \((J/\psi,\eta_{c})\) & NNLO & 0.297100 & 1.77092 & 0.59 & 3.06824(45) & 3.06865(10) \\ \((\Upsilon,\eta_{b})\) & LO (exact) & 0.214850 & 4.77050 & - & 9.44295 & 9.44295(90) \\ \((\Upsilon,\eta_{b})\) & NLO & 0.2273183 & 4.86886 & 1.09 & 9.44255(46) & 9.44295(90) \\ \((\Upsilon,\eta_{b})\) & NNLO & 0.224750 & 4.96983 & 1.15 & 9.44386(40) & 9.44295(90) \\ \hline \hline \end{tabular} \end{table} Table 1: Spin-averaged \({}^{1}S\) heavy quarkonium masses computed in this work for \(c\overline{c}\) and \(b\overline{b}\) systems are compared with experimental results. The errors quoted in the \(M_{Q\overline{Q}}\) column show combined statistical and fitting systematic uncertainties (LO results are exact). The quark masses shown in the \(m_{Q}\) column are tuned in order to achieve agreement between calculated and measured masses. The quoted \(\chi^{2}/{\rm dof}\) is a weighted average (using the weights in Eq. (112)) of the individual \(\chi^{2}/{\rm dof}\) from each fit to GFMC results performed as described in the main text.
Comparisons can also be made at the level of the meson and baryon binding energies: since \[\frac{M_{QQQ}}{M_{Q\overline{Q}}}=\frac{3m_{Q}+\Delta E_{QQQ}}{2m_{Q}+\Delta E_{Q\overline{Q}}}\geq\frac{3}{2}, \tag{136}\] multiplying through by \(M_{Q\overline{Q}}\) leads to \[\frac{\Delta E_{QQQ}}{\Delta E_{Q\overline{Q}}}\leq\frac{3}{2}, \tag{137}\] where \(M_{Q\overline{Q}}>0\) and \(\Delta E_{Q\overline{Q}}<0\) have been assumed when forming ratios and \(\mathcal{O}(1/m_{Q}^{2})\) effects have been neglected. Note that Eq. (136) is necessarily saturated as \(m_{Q}\to\infty\), where \(\alpha_{s}\) at scales proportional to \(m_{Q}\) vanishes and therefore \(M_{QQQ}\to 3m_{Q}\), \(M_{Q\overline{Q}}\to 2m_{Q}\), and \(M_{QQQ}/M_{Q\overline{Q}}\to 3/2\). However, the lack of saturation of Eq. (137) for arbitrary \(m_{Q}\) implies that the saturation of Eq. (136) is only logarithmic as \(m_{Q}\to\infty\). Eqs. (132)-(134) show that \(\Delta E_{QQQ}/\Delta E_{Q\overline{Q}}\) is predicted to be 72-74% of the way to saturating the Detmold inequality at LO-NNLO in pNRQCD, demonstrating that in the \(m_{Q}\to\infty\) limit baryons in QCD are almost but not entirely as bound as is allowed by the positivity of the QCD path integral measure.

Precise pNRQCD predictions for \(ccc\), \(ccb\), \(bbc\), and \(bbb\) baryon masses can be made using the values of \(m_{c}\) and \(m_{b}\) tuned to reproduce \(M_{c\overline{c}}\) and \(M_{b\overline{b}}\) and given in Table 1. Due to the exchange symmetry of the Coulomb trial wavefunctions used here, \(\nabla_{I}^{2}\Psi_{T}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\) is independent of \(I\). The correct kinetic-energy operator for \(bbc\) baryons is therefore obtained by considering three equal-mass quarks with mass equal to \[m_{bbc}=\frac{3}{2}\left(\frac{1}{m_{b}}+\frac{1}{2m_{c}}\right)^{-1}=\frac{3m_{b}m_{c}}{m_{b}+2m_{c}}. \tag{138}\] An analogous \(ccb\) reduced mass \(m_{ccb}\) is obtained by taking \(b\leftrightarrow c\) in Eq. (138). Corresponding renormalization scales are defined as usual, for example \(\mu_{bbc}\equiv 4\alpha_{s}(\mu_{bbc})m_{bbc}\). GFMC results for triply-heavy baryon masses using \(m_{Q}\in\{m_{c},m_{ccb},m_{bbc},m_{b}\}\) therefore lead to pNRQCD predictions for \(\Omega_{ccc}\), \(\Omega_{ccb}\), \(\Omega_{bbc}\), and \(\Omega_{bbb}\) baryon masses shown in Table 2. These pNRQCD predictions are compared with LQCD results [92; 93; 100; 101] for these baryon masses and found to underpredict LQCD by about 200 MeV at LO and 100 MeV at NNLO for all baryon masses considered.

Figure 13: Triply-heavy baryon binding energy results as functions of \(\alpha_{s}\) (excluding points with \(N_{f}=5\) and \(m_{q}=m_{c}\) for clarity).

Figure 14: Triply-heavy baryon binding energy results as functions of \(m_{Q}\) with shaded bands connecting results with renormalization scale choices \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\).

Figure 15: Relative differences between triply-heavy baryon binding energies calculated at different orders of pNRQCD (excluding points with \(N_{f}=5\) and \(m_{q}=m_{c}\) for clarity).

Figure 16: Relative differences between triply-heavy baryon binding energies calculated using NNLO two-quark potentials only and full NNLO results including both two- and three-quark potentials.

Figure 17: Ratios of triply-heavy baryon and heavy quarkonium binding energy results as functions of \(m_{Q}\) with shaded bands connecting results with renormalization scale choices \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\).
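For concreteness, the effective equal-mass parameters entering these mixed-flavor predictions can be evaluated directly from the tuned NNLO pole masses in Table 1; a minimal sketch:

```python
# Effective equal-mass parameters for mixed-flavor baryons, Eq. (138):
# 3/m_eff = 2/m_b + 1/m_c for bbc (and b <-> c for ccb).  Inputs are the
# NNLO tuned pole masses from Table 1, in GeV.
m_c, m_b = 1.77092, 4.96983

m_bbc = 3.0 * m_b * m_c / (m_b + 2.0 * m_c)  # two b quarks, one c quark
m_ccb = 3.0 * m_b * m_c / (m_c + 2.0 * m_b)  # two c quarks, one b quark

print(f"m_bbc = {m_bbc:.4f} GeV")  # ~3.10 GeV
print(f"m_ccb = {m_ccb:.4f} GeV")  # ~2.25 GeV
```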
The differences between NNLO and NLO results are significantly smaller than those between NLO and LO results, suggesting good convergence for the \(\alpha_{s}\) expansion of the pNRQCD potential. The remaining differences between NNLO and LQCD results likely arise primarily from the \(1/m_{Q}\) effects neglected in this work. In particular, the calculations of \(\Omega_{bbb}\) in Refs. [92; 93; 101] employ lattice NRQCD actions with \({\cal O}(1/m_{Q}^{2})\) terms included, and therefore the differences in \(\Omega_{bbb}\) mass predictions must arise from \({\cal O}(1/m_{Q})\), \({\cal O}(1/m_{Q}^{2})\), and higher-order \(\alpha_{s}\) corrections. Relative differences between pNRQCD and LQCD baryon mass predictions decrease with increasing quark mass as roughly \(1/m_{Q}\) and at NNLO range from 2% for the \(\Omega_{ccc}\) to 0.7% for the \(\Omega_{bbb}\). It is noteworthy that our GFMC pNRQCD predictions have 10-100 times smaller statistical uncertainties than LQCD results with both relativistic and NR quark actions; however, it is clear that systematic uncertainties from neglected effects in the pNRQCD potential are much larger than statistical uncertainties in either case, and their reduction requires the inclusion of \(1/m_{Q}\) effects.

A further measure of the size of systematic uncertainties arising from perturbative truncation effects is provided by comparing predictions for heavy baryon and meson masses with different choices of \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\). As seen in Fig. 18, the perturbative convergence of \(M_{QQQ}/M_{Q\overline{Q}}\) as a function of \(M_{Q\overline{Q}}\) is better than the convergence of either mass individually, and differences between different scales are reduced. However, significant \(\mu\) dependence arises at NNLO for \(m_{Q}\sim m_{c}\) due to the nonlinear dependence of both \(M_{Q\overline{Q}}\) and \(M_{QQQ}\) on \(\alpha_{s}\) for \(\mu=\mu_{p}/2\) with relatively small \(m_{Q}\). Since \(m_{Q}\) does not enter this comparison, it is straightforward to compare to LQCD results, and the differences between NNLO pNRQCD results and LQCD results are seen to be comparable to the differences between pNRQCD results with different \(\mu\) choices. Both pNRQCD and LQCD results obey Eq. (136).

## VI Dark Hadrons

Inspired by the stability of the proton, a dark sector with non-Abelian gauge interactions can give rise to a stable, neutral dark matter candidate - the dark baryon - as reviewed in Refs. [43; 44; 48; 49]. A simple UV-complete model of dark baryons is a hidden \(SU(N_{c})\) dark sector with \(N_{c}\) dark colors. Including dark quarks yields a dark QCD sector with \(n_{d}\) dark flavors, charged under \(SU(N_{c})\) or \(G_{\rm SM}\times SU(N_{c})\). The pure hidden sector Lagrangian is then given by \[{\cal L}_{D}=-\frac{1}{2}{\rm Tr}\,G_{\mu\nu}^{2}+\sum_{i=1}^{n_{d}}\overline{Q_{d}^{i}}\left[i\not{D}-m_{d}^{i}\right]Q_{d}^{i}, \tag{139}\] with masses \(m_{d}^{i}\), coupling \(\alpha_{d}=g_{d}^{2}/(4\pi)\), dark gauge fields \(A_{d}\), dark fermions \(Q_{d}^{i}\), and \(D^{\mu}=\partial^{\mu}-ig_{d}A_{d}^{\mu,a}T^{a}\). A global \(U(1)\) symmetry leads to a conserved dark baryon number and, therefore, the stability of dark baryons, denoted \(B_{d}\) below. A dark composite sector also arises naturally for BSM extensions in which the Higgs boson is composite [102; 103].
As in QCD, at renormalization scales \(\mu\) well above the dark confinement scale, \(\mu\gg\Lambda_{d}\), the perturbative relation \[\Lambda_{d}^{\rm(LO)}=\mu\exp\bigg{(}-\frac{2\pi}{\beta_{d}\alpha_{d}(\mu)}\bigg{)} \tag{140}\] defines the relationship between \(\alpha_{d}\) and \(\Lambda_{d}\) at lowest order, where \(\beta_{d}\) is the one-loop dQCD beta-function coefficient, with analogous expressions arising at higher order [94]. For \(n_{d}\ll 4N_{c}\) the theory is confining. Below we consider \(n_{d}=1\) for simplicity and denote the dark quark mass as \(m_{d}\). In the regime \(m_{d}\gg\Lambda_{d}\), the pNRQCD formalism and numerical methods discussed above can be used to make reliable perturbative predictions for the masses, lifetimes, and other properties of hidden-sector composite particles referred to as dark hadrons below.

One can further weakly couple the dark sector to the visible sector in various ways, leading to direct detection signatures [104; 105; 31]. If dark sector quarks are charged under parts of the SM, production and decay of dark quarks can result in striking collider phenomenology [106; 107; 108]. If \(m_{d}\sim\sqrt{s}\), dark fermions are frequently produced via Drell-Yan and other SM processes. If the dark quark mass is much larger than the dark confinement scale, \(m_{d}\gg\Lambda_{d}\), the dark color strings do not fragment, and the dark fermions are bound by a dark color string for macroscopic distances. This results in exotic tracks, dependent on the SM charges of the dark fermions, which are unique and not producible by the SM alone [107; 106]. Searches for such long-lived particles have been rapidly increasing at the LHC and beyond [109; 110].

Lattice gauge theory calculations have been performed for dQCD models with several choices of \(N_{c}\): \(SU(2)\) [111; 112; 32; 113], \(SU(3)\) [36], \(SU(4)\) [37], and higher \(N_{c}\) [114; 115; 116; 48], as well as other gauge groups including \(SO(N_{c})\) and \(Sp(N_{c})\) [117; 34]. The primary challenge for using lattice gauge theory to explore dQCD is that there is a vast space of possibilities to explore depending on the gauge group and matter content [44].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Baryon & This work: \(M_{QQQ}\) & This work: \(\chi^{2}\)/dof & Variational methods (\(M_{QQQ}\)) & Lattice QCD (\(M_{QQQ}\)) \\ \hline \hline \(\Omega_{ccc}\) & LO: 4.62672(1) & LO: 1.0 & LO: 4.76(6) [61] & \\ & NLO: 4.66561(69) & NLO: 1.4 & NNLO+mNLO: 4.97(20) [62] & \\ & NNLO: 4.72025(70) & NNLO: 0.9 & & \\ \hline \(\Omega_{ccb}\) & LO: 7.82767(1) & LO: 1.5 & LO: 7.98(7) [61] & 8.007(9)(20) [93] \\ & NLO: 7.87571(24) & NLO: 1.1 & NNLO+mNLO: 8.20(15) [62] & 8.005(6)(11) [100] \\ & NNLO: 7.91300(39) & NNLO: 1.0 & & \\ \hline \(\Omega_{bbc}\) & LO: 11.02370(1) & LO: 1.3 & LO: 11.48(12) [61] & 11.195(8)(20) [93] \\ & NLO: 11.07740(32) & NLO: 1.1 & NNLO+mNLO: 11.34(26) [62] & 11.194(5)(12) [100] \\ & NNLO: 11.11310(29) & NNLO: 1.0 & & \\ \hline \(\Omega_{bbb}\) & LO: 14.20660(8) & LO: 1.2 & NNLO+mNLO: 14.57(25) [62] & 14.371(4)(12) [92] \\ & NLO: 14.25400(187) & NLO: 1.4 & & 14.366(9)(20) [93] \\ & NNLO: 14.26040(148) & NNLO: 1.4 & & \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the triply-heavy baryon mass results obtained here with results from other pNRQCD and LQCD calculations. All masses are given in GeV and obtained using \(\alpha_{s}\) and \(m_{Q}\) from Table 1, and the \(\chi^{2}\)/dof correspond to weighted averages analogous to the quarkonium results.
The utility of pNRQCD is that precise results can be obtained quickly with very modest computational resources, which enables scans over wide ranges of parameters such as \(m_{d}\) and \(N_{c}\). The major downside of pNRQCD is its restriction to theories with dark quark masses \(m_{d}\gg\Lambda_{d}\); however, there are phenomenologically viable dQCD models of DM that land firmly in this regime [50; 51; 45]. Models of composite DM with \(m_{d}\ll\Lambda_{d}\) and models with \(m_{d}\gg\Lambda_{d}\) have distinct phenomenological features. In the regime \(m_{d}\ll\Lambda_{d}\), if one assumes that all \(B_{d}\overline{B}_{d}\) annihilation channels scale simply with the dark baryon mass \(M_{B_{d}}\) as \(\sigma v\sim 100/M_{B_{d}}^{2}\), then matching to the thermal freezeout cross section [118; 119] requires cross sections nearly as strong as allowed by unitarity and \(M_{B_{d}}\sim 200\) TeV [120; 121; 122]. In the heavy dark quark mass regime \(m_{d}\gg\Lambda_{d}\), thermal freezeout occurs before the confinement transition in the dark sector. The subsequent dynamics of the confinement transition involve trapping dark quarks inside pockets of the deconfined phase, which significantly reduces the resulting DM relic abundance [50]. Studies of the dynamics of this phase transition for the case of \(N_{c}=3\) show that the correct relic abundance for dark baryons to account for all of DM can be achieved with \(m_{d}/\Lambda_{d}\in[100,10^{4}]\) and in particular \(m_{d}\in[1,100]\) PeV [50; 51]. This motivates more detailed studies of the dynamics and possible detection signatures of \(SU(N_{c})\) composite DM with \(m_{d}\gg\Lambda_{d}\).

Dark baryon masses, \(M_{B_{d}}\), and dark meson masses, \(M_{\Pi_{d}}\), can be calculated for generic \(SU(N_{c})\) gauge theories in the \(m_{d}\gg\Lambda_{d}\) regime using GFMC calculations of pNRQCD that are entirely analogous to the \(SU(3)\) calculations above. These results can be used to relate these dark hadron observables to the fundamental parameters of the dark-sector Lagrangian, particularly \(m_{d}\) and \(\alpha_{d}\). Since the relation between the pole mass \(m_{d}\) appearing in the pNRQCD Hamiltonian and observables such as hadron masses does not show good convergence in \(\alpha_{d}\), as discussed for the QCD case above, these relations can be used to replace dependence on \(m_{d}\) with dependence on \(M_{\Pi_{d}}\) in other dark hadron quantities and enable better-converging predictions relating different dark-sector observables. Dependence on \(\alpha_{d}\) can similarly be exchanged for dependence on \(\Lambda_{d}\) using Eq. (140) and its higher-order analogs.

Figure 18: The top panel shows triply-heavy baryon masses \(M_{QQQ}\) as functions of \(M_{Q\overline{Q}}\) with renormalization scale choices \(\mu\in\{\mu_{p},2\mu_{p},\mu_{p}/2\}\). The bottom panel shows the ratio \(M_{QQQ}/M_{Q\overline{Q}}\) analogously. The LQCD results of Ref. [93] calculated using NRQCD including \(\mathcal{O}(1/m_{Q}^{2})\) effects are shown for comparison as red points with error bands showing total statistical plus systematic uncertainties. Experimental results for \(M_{N}/m_{\pi}\) are also shown for reference on the top panel as a purple triangle.
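At lowest order this exchange is a one-line inversion of Eq. (140). The sketch below assumes the standard one-loop zero-flavor coefficient \(\beta_{d}=11N_{c}/3\); this normalization convention for Eq. (140) is our assumption, so the coefficient should be adjusted if a different convention is intended.

```python
# One-loop inversion of Eq. (140): given mu/Lambda_d and N_c, return the
# zero-flavor dark coupling alpha_d(mu).  beta_d = 11 N_c / 3 is the assumed
# one-loop pure-gauge beta-function coefficient.
import math

def alpha_d_LO(mu_over_Lambda, N_c):
    beta_d = 11.0 * N_c / 3.0
    return 2.0 * math.pi / (beta_d * math.log(mu_over_Lambda))

# Coupling at mu = m_d for a few dark-quark masses with N_c = 3:
for r in (2, 16, 256):
    print(f"m_d/Lambda_d = {r:4d}: alpha_d = {alpha_d_LO(r, N_c=3):.3f}")
```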
In particular, perturbative expansions for meson and baryon masses as functions of \(N_{c}\) and \(\alpha_{d}\) obtained by fitting to GFMC results are used below to predict the ratios of dark baryon and meson masses for \(SU(N_{c})\) dark sectors as functions of \(N_{c}\) and \(M_{\Pi_{d}}/\Lambda_{d}\). Other observables, such as the dark-sector matching coefficients relating \(N_{c}\) and \(M_{\Pi_{d}}/\Lambda_{d}\) to interaction rates in dark matter direct detection experiments [123; 124; 125; 126; 127], can be studied in future pNRQCD calculations of dark-baryon matrix elements using the optimized wavefunctions obtained here.

### Dark Mesons

The dark meson binding energy \(\Delta E_{\Pi_{d}}\) and mass \(M_{\Pi_{d}}=2m_{d}+\Delta E_{\Pi_{d}}\) can be calculated as functions of \(m_{d}/\Lambda_{d}\) by applying GFMC methods to the pNRQCD Hamiltonian with the appropriate value of \(N_{c}\) and the corresponding zero-flavor strong coupling \(\alpha_{d}\). As above, calculations are performed for renormalization scales \(\mu_{d}\equiv 4m_{d}\alpha_{d}(\mu_{d})\) as well as scales \(\mu_{d}/2\) and \(2\mu_{d}\) in order to study scale dependence. We considered a wide range of dark quark masses \(m_{d}/\Lambda_{d}\in\{2,4,8,16,32,64,128,256\}\) for \(N_{c}\in\{3,4,5,6\}\). Our GFMC calculations used\({}^{6}\) \(\delta\tau\leq 0.8m_{Q}\) and \(N_{\tau}\delta\tau\geq 2/\alpha_{d}^{2}\) with statistical ensembles of size \(N_{\rm walkers}=5,000\). The results for \(\Delta E_{\Pi_{d}}\) are shown as functions of \(m_{d}/\Lambda_{d}\) for each \(N_{c}\in\{3,\ldots,6\}\) in Fig. 19. Similar qualitative features arise as in the QCD results for \(\Delta E_{Q\overline{Q}}\): significant scale dependence arises beyond LO, large order-by-order changes in dependence on \(m_{d}/\Lambda_{d}\) are apparent, and for the smallest \(m_{d}/\Lambda_{d}\) considered the results with \(\mu=\mu_{d}/2\) begin to show significant curvature arising from logarithmic effects in the potential.

Footnote 6: These bounds were saturated except that \(\delta\tau=0.4m_{Q}\) was used for \(N_{c}\in\{3,4\}\) and \(N_{\tau}\delta\tau=4/\alpha_{d}^{2}\) was used for \(N_{c}=3\).

Although precise predictions for dark hadron observables with \(m_{d}\gg\Lambda_{d}\) require pNRQCD calculations with particular choices of \(m_{d}/\Lambda_{d}\), phenomenological estimates of the dependence of dark-sector observables on \(m_{d}/\Lambda_{d}\) can be made more conveniently using analytic parameterizations that have been fit to pNRQCD results over the relevant range of \(m_{d}/\Lambda_{d}\). To provide such a parameterization, we perform fits of these GFMC results for \(\Delta E_{\Pi_{d}}/m_{d}/\alpha_{d}^{2}\) to power series expansions in \(\alpha_{d}\) and \(1/N_{c}\). At LO, the exact result \[\Delta E_{\Pi_{d}}^{\rm(LO)}=-\frac{C_{F}^{2}}{4}m_{d}\alpha_{d}^{2} \tag{141}\] can be cast into this form by dividing by \(N_{c}^{2}\) to remove the leading large-\(N_{c}\) dependence of \(C_{F}\), \[\frac{\Delta E_{\Pi_{d}}^{\rm(LO)}}{m_{d}\alpha_{d}^{2}N_{c}^{2}}=-\frac{C_{F}^{2}}{4N_{c}^{2}},\qquad\frac{C_{F}^{2}}{4N_{c}^{2}}=0.0625-\frac{0.125}{N_{c}^{2}}+\frac{0.0625}{N_{c}^{4}}. \tag{142}\]
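The \(1/N_{c}\) expansion in Eq. (142) is exact and can be verified symbolically; a minimal check:

```python
# Symbolic check of Eq. (142): with C_F = (N_c^2 - 1)/(2 N_c),
# C_F^2/(4 N_c^2) = 1/16 - 1/(8 N_c^2) + 1/(16 N_c^4) exactly.
import sympy as sp

N = sp.symbols("N_c", positive=True)
CF = (N**2 - 1) / (2 * N)
print(sp.expand(CF**2 / (4 * N**2)))
# -> 1/16 - 1/(8*N_c**2) + 1/(16*N_c**4)
```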
At NLO, \(\mathcal{O}(\alpha_{d})\) corrections can be expected to lead to \(\mathcal{O}(\alpha_{d})\) and \(\mathcal{O}(\alpha_{d}^{2})\) corrections to binding energies for an approximately Coulombic potential, and we adopt the power series ansatz \[\frac{\Delta E_{\Pi_{d}}^{\rm(NLO)}}{m_{d}\alpha_{d}^{2}N_{c}^{2}}\approx-\frac{C_{F}^{2}}{4N_{c}^{2}}-\alpha_{d}A_{\Pi_{d}}^{\rm(NLO,1)}-\alpha_{d}^{2}A_{\Pi_{d}}^{\rm(NLO,2)}. \tag{143}\] The coefficients \(A_{\Pi_{d}}^{\rm(NLO,1)}\) and \(A_{\Pi_{d}}^{\rm(NLO,2)}\) can be further expanded as power series in \(1/N_{c}\) that are truncated to include at most three terms, since calculations are only performed for four values of \(N_{c}\). Fits to GFMC results are performed using \(\chi^{2}\)-minimization, using results with all \(m_{d}/\Lambda_{d}\) for \(N_{c}=3\) and all \(m_{d}/\Lambda_{d}\geq 4\) for \(N_{c}\in\{4,5,6\}\), which corresponds to a total of 25 points. Fit parameter uncertainties are determined using bootstrap resampling methods. The Akaike information criterion (AIC) [128] is used to determine whether one, two, or three terms are included in the \(1/N_{c}\) expansion for each coefficient. This leads to the results \[\begin{split} A_{\Pi_{d}}^{\rm(NLO,1)}&\approx 1.1801(23)-\frac{3.051(25)}{N_{c}}+\frac{2.59(4)}{N_{c}^{2}},\\ A_{\Pi_{d}}^{\rm(NLO,2)}&\approx 0.487(6)-\frac{0.721(18)}{N_{c}},\end{split} \tag{144}\] with \(\chi^{2}/{\rm dof}=1.3\).

Analogous fits can be performed at NNLO using a series expansion including two additional orders in \(\alpha_{d}\), \[\begin{split}\frac{\Delta E_{\Pi_{d}}^{\rm(NNLO)}}{m_{d}\alpha_{d}^{2}N_{c}^{2}}&\approx-\frac{C_{F}^{2}}{4N_{c}^{2}}-\alpha_{d}A_{\Pi_{d}}^{\rm(NLO,1)}-\alpha_{d}^{2}A_{\Pi_{d}}^{\rm(NNLO,2)}\\ &\quad-\alpha_{d}^{3}A_{\Pi_{d}}^{\rm(NNLO,3)}-\alpha_{d}^{4}A_{\Pi_{d}}^{\rm(NNLO,4)}.\end{split} \tag{145}\] The constant and \(\mathcal{O}(\alpha_{d})\) terms should be unaffected by NNLO corrections to the potential, and we therefore fix these terms to their lower-order values as indicated in Eq. (145). It is not possible to obtain a fit with \(\chi^{2}/{\rm dof}\sim 1\) using \(\mathcal{O}(1/N_{c}^{3})\) power series expansions; in particular, an \(\mathcal{O}(1/N_{c}^{4})\) term in \(A_{\Pi_{d}}^{\rm(NNLO,2)}\) is required to achieve \(\chi^{2}/{\rm dof}\lesssim 2\). Since such a term would lead to an interpolation rather than a fitting of the \(1/N_{c}\) dependence, we do not include such a term and take this to indicate that a simple power series ansatz is not able to describe the \(N_{c}\) dependence of \(\Delta E_{\Pi_{d}}^{\rm(NNLO)}\) in pNRQCD to the level of precision of our GFMC results. We therefore multiply our GFMC uncertainties on \(\Delta E_{\Pi_{d}}^{\rm(NNLO)}\) by a factor of 5 so that the best \(\mathcal{O}(1/N_{c}^{3})\) fit for \(A_{\Pi_{d}}^{\rm(NNLO,2)}\) obtains a \(\chi^{2}/{\rm dof}\sim 1\). This fit corresponds to \[\begin{split}A_{\Pi_{d}}^{\rm(NNLO,2)}&\approx 25.5(3)-\frac{126(2)}{N_{c}}+\frac{178(6)}{N_{c}^{2}},\\ A_{\Pi_{d}}^{\rm(NNLO,3)}&\approx 13.6(8)-\frac{32(9)}{N_{c}},\\ A_{\Pi_{d}}^{\rm(NNLO,4)}&\approx-1(5).\end{split} \tag{146}\] Comparisons of GFMC results with these fit results for each order are shown in Fig. 20. The \(N_{c}\) scaling behavior of meson masses has previously been studied using LQCD in Refs. [116; 129].

Figure 20: Dark meson binding energy GFMC results as functions of \(m_{d}/\Lambda_{d}\) with \(\mu=\mu_{d}\) and \(N_{c}\in\{3,\ldots,6\}\) are shown in comparison with the power series fit results described in the main text.
However, without computing the relationship between either \(\Lambda_{d}\) or the pole mass \(m_{d}\) used here and another dimensionful observable such as the pion decay constant, it is not possible to compare results for \(M_{\Pi_{d}}/\Lambda_{d}\) or \(M_{\Pi_{d}}/m_{d}\) directly with the LQCD results of these works. Such comparisons are therefore deferred to future studies, including dark meson matrix element calculations in pNRQCD.

### Dark Baryons

Dark baryon binding energies \(\Delta E_{B_{d}}\) and masses \(M_{B_{d}}\) are computed by applying GFMC methods to \(SU(N_{c})\) baryon states with the pNRQCD Hamiltonian at LO, NLO, and NNLO with the same range of masses \(m_{d}/\Lambda_{d}\in[2,256]\) and \(N_{c}\in[3,6]\) as in the dark meson case discussed above. The trial wavefunctions described in Sec. IV are found to provide suitable initial states for GFMC evolution using the same relation between the Bohr radius and \(\alpha_{d}\) as in the QCD case, shown in Eq. (129). Excited-state effects are found to increase only mildly with \(N_{c}\) using this prescription. Results for \(\Delta E_{B_{d}}\) obtained from single-state fits as described above are shown for each \(N_{c}\) as functions of \(m_{d}/\Lambda_{d}\) in Fig. 21.

As in the dark meson case above, we can analytically parameterize our GFMC dark baryon binding-energy results as a power series in \(\alpha_{d}\) and \(1/N_{c}\) [130, 131]. These power series expressions cannot capture the complete non-analytic structure of pNRQCD, but they can provide convenient estimates and accurately describe our pNRQCD results to a relatively high level of precision over the range of quark masses and \(N_{c}\) studied. At LO, it is sufficient to parameterize \(\Delta E_{B_{d}}/m_{d}/\alpha_{d}^{2}\) as a constant that only depends on \(N_{c}\), \[\frac{\Delta E_{B_{d}}^{(\text{LO})}}{m_{d}\alpha_{d}^{2}N_{c}^{4}}\approx-A_{B_{d}}^{(\text{LO},0)}(N_{c}). \tag{147}\] The factor of \(1/N_{c}^{4}\) is included to ensure that the result is finite as \(N_{c}\rightarrow\infty\), based on the following (naive) argument for the scaling of the binding energy with \(N_{c}\): the quark-quark potential is proportional to \(C_{F}/(N_{c}-1)\sim N_{c}^{0}\), and the total potential therefore scales as \(\sum_{I<J}\sim N_{c}^{2}\). Since the binding energy for a Coulombic system is proportional to the potential squared, it can therefore be expected to scale as \(N_{c}^{4}\). However, fits to a constant plus \(\mathcal{O}(1/N_{c})\) and/or \(\mathcal{O}(1/N_{c}^{2})\) corrections lead to a vanishing constant term at LO. Including two additional powers of \(1/N_{c}\) and fitting to the same set of 25 GFMC results with varying \(m_{d}/\Lambda_{d}\) and \(N_{c}\) as in the dark meson case, using the same \(\chi^{2}\)-minimization and bootstrap resampling techniques, leads to \[A_{B_{d}}^{(\text{LO},0)}\approx\frac{0.0132814(16)}{N_{c}}+\frac{0.020772(34)}{N_{c}^{2}}-\frac{0.02307(5)}{N_{c}^{3}}, \tag{148}\] with \(\chi^{2}/\text{dof}=1.4\). This observed scaling, \(\Delta E_{B_{d}}/m_{d}\sim\alpha_{d}^{2}N_{c}^{3}\), is consistent with Witten's large-\(N_{c}\) arguments in Ref. [132]. Since the strong coupling is taken to scale as \(\alpha_{d}\sim 1/N_{c}\) [130], this leads to the usual result that \(\Delta E_{B_{d}}/m_{d}\sim N_{c}\) while \(\Delta E_{\Pi_{d}}/m_{d}\sim N_{c}^{0}\).
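A quick numerical illustration of this scaling, using the central values of Eq. (148) with uncertainties ignored for simplicity:

```python
# The fitted LO coefficient of Eq. (148) times N_c^4 grows approximately like
# N_c^3, so with 't Hooft scaling alpha_d ~ 1/N_c one finds
# Delta E_Bd / m_d ~ N_c, as stated in the text.
def A_LO(N_c):
    return 0.0132814 / N_c + 0.020772 / N_c**2 - 0.02307 / N_c**3

for N_c in (3, 4, 5, 6):
    coeff = A_LO(N_c) * N_c**4  # |Delta E_Bd| / (m_d alpha_d^2)
    print(N_c, round(coeff, 4), round(coeff / N_c**3, 5))  # last column ~ const
```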
At NLO, an \(\mathcal{O}(\alpha_{d}^{2})\) power series analogous to the one used in the dark meson case is given by \[\frac{\Delta E_{B_{d}}^{(\text{NLO})}}{m_{d}\alpha_{d}^{2}N_{c}^{4}}\approx-A_{B_{d}}^{(\text{LO},0)}-\alpha_{d}A_{B_{d}}^{(\text{NLO},1)}-\alpha_{d}^{2}A_{B_{d}}^{(\text{NLO},2)}, \tag{149}\] where \(A_{B_{d}}^{(\text{LO},0)}\) is fixed to its LO value. Expanding the \(\mathcal{O}(\alpha_{d})\) term to \(\mathcal{O}(1/N_{c}^{2})\) and the \(\mathcal{O}(\alpha_{d}^{2})\) term to \(\mathcal{O}(1/N_{c})\) gives \[\begin{split} A_{B_{d}}^{(\text{NLO},1)}\approx&\ 0.01917(10)+\frac{0.2073(8)}{N_{c}}-\frac{0.24181(5)}{N_{c}^{2}},\\ A_{B_{d}}^{(\text{NLO},2)}\approx&\ 0.0456(6)+\frac{0.002(1)}{N_{c}},\end{split} \tag{150}\] where GFMC uncertainties have been inflated by a factor of two before fitting in order to obtain \(\chi^{2}/\text{dof}\sim 1\) since, as in the NNLO dark meson case, deviations from a simple power series ansatz can be seen at the high level of precision of our GFMC results. In this case \(N_{c}^{4}\) scaling is observed for fixed \(\alpha_{d}\); however, since \(\alpha_{d}\sim 1/N_{c}\) in the large-\(N_{c}\) scaling of Ref. [132], the expected scaling \(\Delta E_{B_{d}}/m_{d}\sim\alpha_{d}^{2}N_{c}^{3}\sim N_{c}\) is reproduced by pNRQCD at NLO. The same arguments apply at higher orders, since further powers of \(\alpha_{d}\) contribute additional powers of \(1/N_{c}\) and are, therefore, further subleading corrections in the large-\(N_{c}\) limit.

At NNLO, an analogous power series expansion to the dark meson case is used, \[\begin{split}\frac{\Delta E_{B_{d}}^{\rm(NNLO)}}{m_{d}\alpha_{d}^{2}N_{c}^{4}}&\approx-A_{B_{d}}^{\rm(LO,0)}-\alpha_{d}A_{B_{d}}^{\rm(NLO,1)}-\alpha_{d}^{2}A_{B_{d}}^{\rm(NNLO,2)}\\ &\quad-\alpha_{d}^{3}A_{B_{d}}^{\rm(NNLO,3)}-\alpha_{d}^{4}A_{B_{d}}^{\rm(NNLO,4)},\end{split} \tag{151}\] and fits to our GFMC results give \[\begin{split}A_{B_{d}}^{\rm(NNLO,2)}&\approx 0.985(4)-\frac{2.35(3)}{N_{c}}+\frac{2.41(10)}{N_{c}^{2}},\\ A_{B_{d}}^{\rm(NNLO,3)}&\approx 1.34(2)-\frac{1.34(17)}{N_{c}},\\ A_{B_{d}}^{\rm(NNLO,4)}&\approx-1.00(8),\end{split} \tag{152}\] where uncertainties have again been inflated by a factor of two to achieve \(\chi^{2}/{\rm dof}\sim 1\). Comparisons of these power series fit results with GFMC results for \(N_{c}\in\{3,\ldots,6\}\) dark baryon masses at each perturbative order are shown in Fig. 22.

The ratio \(M_{B_{d}}/M_{\Pi_{d}}\) is shown as a function of \(M_{\Pi_{d}}/\Lambda_{d}\) for GFMC results in Fig. 23 and compared with power series fits in Fig. 24. To obtain hadron mass ratios as functions of \(M_{\Pi_{d}}/\Lambda_{d}\), the functions \(M_{\Pi_{d}}(m_{d})=m_{d}(2-\alpha_{d}^{2}C_{F}^{2}/4-\ldots)\) defined at NLO and NNLO by the series expansions in Eq. (143) and Eq. (145), which implicitly depend on \(m_{d}\) through \(\alpha_{d}(\mu=4\alpha_{d}m_{d})\), are inverted numerically to obtain \(m_{d}(M_{\Pi_{d}})\) and subsequently \(\alpha_{d}(\mu=4\alpha_{d}m_{d}(M_{\Pi_{d}}))\) at each order.

Figure 22: Dark baryon binding energy GFMC results as functions of \(m_{d}/\Lambda_{d}\) with \(\mu=\mu_{d}\) and \(N_{c}\in\{3,\ldots,6\}\) are shown in comparison with the power series fit results described in the main text.

Figure 23: Ratios of dark baryon and meson masses as functions of the dark meson mass with shaded bands connecting results with renormalization scale choices \(\mu\in\{\mu_{d},2\mu_{d},\mu_{d}/2\}\) and \(SU(N_{c})\) gauge groups with \(N_{c}\in\{3,\ldots,6\}\) as indicated.
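A minimal sketch of this inversion at LO is given below, in units where \(\Lambda_{d}=1\). The damped fixed-point solve for \(\mu_{d}=4\alpha_{d}m_{d}\) uses one-loop running via Eq. (140), which is a simplification made only for this sketch; the NLO and NNLO inversions used in this work proceed analogously with the series expansions above.

```python
# LO inversion m_d(M_Pi_d) in units where Lambda_d = 1 (one-loop running and
# the LO meson mass formula are the assumptions of this sketch).
import math
from scipy.optimize import brentq

def alpha_d(m_d, N_c, n_iter=200):
    # Self-consistent coupling at mu_d = 4 alpha_d m_d via damped fixed point.
    beta_d = 11.0 * N_c / 3.0
    a = 0.3
    for _ in range(n_iter):
        a = 0.5 * (a + 2.0 * math.pi / (beta_d * math.log(4.0 * a * m_d)))
    return a

def M_pi_LO(m_d, N_c):
    # LO meson mass M = 2 m_d (1 - C_F^2 alpha_d^2 / 8), cf. Eq. (141).
    CF = (N_c**2 - 1.0) / (2.0 * N_c)
    return 2.0 * m_d * (1.0 - CF**2 * alpha_d(m_d, N_c)**2 / 8.0)

def m_d_of_M_pi(M_target, N_c, lo=5.0, hi=1000.0):
    # Numerical inversion by root finding.
    return brentq(lambda m: M_pi_LO(m, N_c) - M_target, lo, hi)

print(m_d_of_M_pi(100.0, N_c=3))  # m_d/Lambda_d giving M_Pi_d = 100 Lambda_d
```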
The \(m_{d}\) and \(\alpha_{d}\) determined in this way can be inserted in Eqs. (147)-(151) to obtain \(M_{B_{d}}(M_{\Pi_{d}})\). These results have the advantage of only depending on dark hadron masses and the \(\overline{\rm MS}\) Landau pole scale \(\Lambda_{d}\), and are free from ambiguities in the scheme used to define \(m_{d}\), apart from the renormalization scale dependence arising in fixed-order results from perturbative truncation effects.

These results can be compared with generalizations of the QCD inequalities discussed in Sec. V.2. The proof in Ref. [99] that there are no multi-meson bound states with maximal isospin is valid for \(SU(N_{c})\) gauge theory with generic \(N_{c}\) and, if \(1/m_{Q}^{2}\) effects are neglected, for heavy-quark hadrons in \(SU(N_{c})\) gauge theory with generic \(N_{f}\). By the arguments in Section 10 of Ref. [133], this is sufficient to establish that meson and baryon masses in \(SU(N_{c})\) gauge theory satisfy the inequality \[M_{B_{d}}\geq\frac{N_{c}}{2}M_{\Pi_{d}}. \tag{153}\] This bound holds for the lightest meson and baryon constructed from quarks of a given flavor and therefore applies to generic \(SU(N_{c})\) dark sectors. As discussed after Eq. (136), this leads to an equivalent bound on binding energies, \[\Delta E_{B_{d}}\geq\frac{N_{c}}{2}\Delta E_{\Pi_{d}}. \tag{154}\] Both Eq. (153) and Eq. (154) are respected by all GFMC results of this work where \(\Lambda_{d}/m_{d}\) corrections are expected to be perturbative,\({}^{7}\) as seen in Fig. 24. It is noteworthy that pNRQCD results approximately saturate Eq. (153), with \(M_{B_{d}}/M_{\Pi_{d}}/(N_{c}/2)\) within 5% of unity for \(m_{d}/\Lambda_{d}\gtrsim 5\) for \(N_{c}\in\{3,\ldots,6\}\). As in the QCD case discussed above, \(\Delta E_{B_{d}}/\Delta E_{\Pi_{d}}\) is approximately independent of \(m_{d}\) and Eq. (154) is not saturated in the \(m_{d}\to\infty\) limit, which means that \(M_{B_{d}}/M_{\Pi_{d}}\) approaches \(N_{c}/2\) only logarithmically as \(m_{d}\to\infty\). The degree to which Eq. (153) is saturated for a given \(m_{d}/\Lambda_{d}\) is further seen to decrease with increasing \(N_{c}\). This behavior is unsurprising because for \(N_{c}=2\) meson and baryon masses are guaranteed to be identical and therefore saturate Eq. (153), while saturation is not exact for \(N_{c}=3\).

Footnote 7: For sufficiently small \(m_{d}/\Lambda_{d}\), corrections to the static potential considered here from effects suppressed by \(1/m_{Q}\) will be significant, and pNRQCD results using only the static potential may not satisfy general features of QCD. Indeed, our pNRQCD results with \(N_{c}=6\), \(m_{d}/\Lambda_{d}=8\), and \(\mu=\mu_{d}/2\) predict \(\Delta E_{Q\overline{Q}}<-2m_{Q}\) and therefore lead to unphysical predictions of negative meson masses as well as unphysical violations of Eq. (153) and Eq. (154).

In the large-\(N_{c}\) limit, the NLO and NNLO results above provide subleading corrections, and the LO result above simplifies to \[M_{B_{d}}=N_{c}m_{d}\left(1-0.0132814(16)\alpha_{d}^{2}N_{c}^{2}\right)+\mathcal{O}\left(\frac{1}{N_{c}}\right). \tag{155}\] An analogous formula was derived using mean-field results in the joint large quark mass and large \(N_{c}\) limit in Ref. [134].

Figure 24: Ratios of dark baryon and meson masses as functions of the dark meson mass with \(N_{c}\in\{3,\ldots,6\}\) computed using \(\mu=\mu_{d}\) are shown in comparison with the power series fit results described in the main text.
Identical scaling with quark mass, strong coupling, and \(N_{c}\) is obtained here and in Ref. [134]; however, the numerical value of the coefficient obtained there is 0.05426, which is larger than our result by roughly a factor of four. The corresponding LO meson result is known analytically, \[M_{\Pi_{d}}=2m_{d}\left(1-\frac{C_{F}^{2}}{8}\alpha_{d}^{2}\right)+{\cal O}\left(\frac{1}{N_{c}}\right), \tag{156}\] and so the \(SU(N_{c})\) heavy-quark Detmold bound implies that the numerical coefficient in Eq. (155) must be smaller in magnitude than \(C_{F}^{2}/(8N_{c}^{2})=0.03125+{\cal O}(1/N_{c})\). This bound is satisfied by Eq. (155) but not by the results of Ref. [134], which indicates that the discrepancy must arise from uncertainties in the mean-field approach used there. The large-\(N_{c}\) behavior of baryon masses has also been studied in lattice gauge theory calculations [135; 114; 115; 116; 136; 37]. The baryon-to-meson mass ratio provides a well-defined dimensionless observable that can be matched to lattice gauge theory results for each \(N_{c}\), allowing us to select the \(m_{d}/\Lambda_{d}\) that reproduces lattice gauge theory results with any particular quark mass. However, other observables must be calculated to make non-trivial predictions to compare with \(SU(N_{c})\) lattice gauge theory, which is left to future work.

## VII Outlook

We have presented a formulation of pNRQCD suitable for calculating binding energies and matrix elements of generic hadron and multi-hadron states made of heavy quarks in \(SU(N_{c})\) gauge theory using quantum Monte Carlo techniques. The complete two- and three-quark potentials required for generic multi-hadron systems are constructed up to NNLO in the strong coupling. The appearance of four-quark potentials arising at NNLO is pointed out, and a complete construction of these potentials should be pursued in future work. We further employed VMC and GFMC to compute quarkonium and triply-heavy baryon binding energies in pNRQCD at \({\cal O}(m_{Q}^{0})\). Precise results are obtained with modest computational resources, but we underpredict the baryon masses computed using LQCD by 1-2% for all baryons comprised of \(b\) and \(c\) quarks. Differences between perturbative orders demonstrate good convergence for the \(\alpha_{s}\) expansion of the pNRQCD potential. The remaining differences between NNLO and LQCD results likely arise primarily from \(1/m_{Q}\) and \(1/m_{Q}^{2}\) effects in the pNRQCD potential that are neglected in this work. Extending this work by incorporating spin-dependent potentials and determining suitable trial wavefunctions with these potentials included will be an essential step toward improving the predictive power of this framework. It will also be interesting to extend these studies towards heavy exotics such as tetraquarks and multi-baryon systems, as well as quarkonium and baryon excited states.

Applying quantum Monte Carlo methods to pNRQCD may be particularly useful for studies of composite dark matter. An \(SU(N_{c})\) dark sector with one heavy dark quark provides a simple, UV-complete, phenomenologically viable model of composite DM [50; 51]. QMC calculations using pNRQCD can provide computationally simple predictions for composite DM observables that enable efficient scanning over a wide range of mass scales. This is particularly useful in the composite DM context, where the underlying theory's actual parameters are not yet known.
This work provides pNRQCD results and simple analytic parameterizations of the dark meson and dark baryon masses in \(SU(N_{c})\) gauge theory as functions of \(N_{c}\) and the dark sector parameters \(m_{d}\) and \(\Lambda_{d}\). The properties and interactions of these dark hadrons should be studied in future applications of QMC to pNRQCD.

###### Acknowledgements.

We thank Matthew Baumgart, Elias Bernreuther, Nora Brambilla, William Detmold, Jacopo Ghiglieri, Florian Herren, Chia-Hsien Shen, Gurtej Kanwar, Aneesh Manohar, Joan Soto, Daniel Stolarski, and Antonio Vairo for helpful discussions and insightful comments. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2310.14980
A Dimensionally-Reduced Nonlinear Elasticity Model for Liquid Crystal Elastomer Strips with Transverse Curvature
Liquid Crystalline Elastomers (LCEs) are active materials that are of interest due to their programmable response to various external stimuli such as light and heat. When exposed to these stimuli, the anisotropy in the response of the material is governed by the nematic director, which is a continuum parameter that is defined as the average local orientation of the mesogens in the liquid crystal phase. This nematic director can be programmed to be heterogeneous in space, creating a vast design space that is useful for applications ranging from artificial ligaments to deployable structures to self-assembling mechanisms. Even when specialized to long and thin strips of LCEs -- the focus of this work -- the vast design space has required the use of numerical simulations to aid in experimental discovery. To mitigate the computational expense of full 3-d numerical simulations, several dimensionally-reduced rod and ribbon models have been developed for LCE strips, but these have not accounted for the possibility of initial transverse curvature, like carpenter's tape spring. Motivated by recent experiments showing that transversely-curved LCE strips display a rich variety of configurations, this work derives a dimensionally-reduced 1-d model for pre-curved LCE strips. The 1-d model is validated against full 3-d finite element calculations, and it is also shown to capture experimental observations, including tape-spring-like localizations, in activated LCE strips.
Kevin LoGrande, M. Ravi Shankar, Kaushik Dayal
2023-10-23T14:27:04Z
http://arxiv.org/abs/2310.14980v1
A Dimensionally-Reduced Nonlinear Elasticity Model for Liquid Crystal Elastomer Strips with Transverse Curvature

###### Abstract

Liquid Crystalline Elastomers (LCEs) are active materials that are of interest due to their programmable response to various external stimuli such as light and heat. When exposed to these stimuli, the anisotropy in the response of the material is governed by the nematic director, which is a continuum parameter that is defined as the average local orientation of the mesogens in the liquid crystal phase. This nematic director can be programmed to be heterogeneous in space, creating a vast design space that is useful for applications ranging from artificial ligaments to deployable structures to self-assembling mechanisms. Even when specialized to long and thin strips of LCEs - the focus of this work - the vast design space has required the use of numerical simulations to aid in experimental discovery. To mitigate the computational expense of full 3-d numerical simulations, several dimensionally-reduced rod and ribbon models have been developed for LCE strips, but these have not accounted for the possibility of initial transverse curvature, like carpenter's tape spring. Motivated by recent experiments showing that transversely-curved LCE strips display a rich variety of configurations, this work derives a dimensionally-reduced 1-d model for pre-curved LCE strips. The 1-d model is validated against full 3-d finite element calculations, and it is also shown to capture experimental observations, including tape-spring-like localizations, in activated LCE strips.

## 1 Introduction

Liquid crystalline elastomers (LCEs) are active materials that have grown in popularity due to their proposed use in a wide variety of applications, from artificial muscles to deployable structures to soft robotics [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. LCEs, at the molecular level, are made of ellipsoidal molecules (mesogens) typical of the liquid crystal phase, which are further cross-linked by polymer chains. The average direction of alignment of the mesogens at a material point is called the nematic director. When exposed to external stimuli--depending on the molecular makeup of the mesogen, one could use heat, light, or an electric field--LCEs will shrink in the direction of the nematic director and expand in transverse directions. Because the nematic director is highly programmable in the manufacturing of LCEs, a wide range of controlled deformations is possible [8, 11, 12]. Even restricting the design space to the mechanical deformation of long and thin LCE strips, which are the focus of the present work, allows for a large range of deformation mechanisms. Varying the nematic director through the thickness of these strips (e.g., in a twisted or splay-bend geometry) can lead to the spontaneous development of transverse curvature, buckling, instabilities, and coupled bending/twisting behavior, e.g. [13, 14]. However, this highly programmable nature of LCEs necessitates the use of modeling to explore the vast space of design parameters. Fully 3-d models based on nonlinear elasticity can be applied to strips by treating them as slender 3-d continuum bodies, and have been shown to perform well when numerical methods are carefully designed, e.g. [15], but these can be computationally expensive. The present work aims to leverage the small width and thickness dimensions to derive a dimensionally-reduced 1-d ribbon model that can capture the complex 3-d deformations of LCE strips.
In particular, it accounts for the transverse curvature that provides a rich space of configurations [13].

**Prior Work.** Several prior contributions have put forward energy densities for dimensionally-reduced LCE structures such as strips, e.g. [16, 17, 18, 19, 20, 21, 22], but these have not considered the presence of transverse curvature. Recent experiments, particularly [13], however, observe interesting phenomena in LCE strips with transverse curvature: e.g., high torque-density snap-through in pre-curved strips, which is relevant to the use of LCEs as actuators [4]; these observations motivate our work. Further, both flat and pre-curved strips and rods have long been studied, and the slender geometry leads to large nonlinear deformations and a rich variety of instabilities, e.g. [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. Particularly in the context of pre-curved strips, a ubiquitously-observed instability is the classic tape spring instability [34, 35, 36]. While there have been several contributions that put forward dimensionally-reduced models of strips and related structures that account for the tape-spring and other instabilities, these are in the context of mechanical materials without the consideration of active behavior, e.g. [37, 38]. As observed in [13], there is a rich interplay between the geometric feature of transverse curvature in the tape spring geometry and the stimuli-driven spontaneous deformations that are possible in LCEs. There has also been recent work devoted to dimension reduction for models including differential growth for rods with a general cross-sectional shape [39], but these models define the cross-section by a single small parameter, i.e., they consider _rods_, not _strips_. The observations of interest from [13] require thin strips with a hierarchy of dimensions, i.e., the thickness is much smaller than the width, which is in turn much smaller than the length. Accounting for this hierarchy of scales enables our model to capture highly localized strains like tape-spring instabilities.

**Contributions of This Paper.** Several important elements of our approach follow [40], with some notable differences: first, that work begins from 2-d and reduces to 1-d, whereas we begin from 3-d and reduce to 1-d; second, that work considers a purely-mechanical response, whereas we consider the effect of stimuli-driven deformation; and third, that work starts with an isotropic small-strain plate model, whereas we use a nonlinear elastic model in 3-d. We highlight here that we cannot simply adapt the model and approach of [40] to our setting. Rather, we must necessarily start from 3-d because we consider twisted nematics - i.e., the nematic director varies through the thickness - which is the root cause of the observed complexity. Further, the physics of LCE materials requires a nonlinear elastic model to capture the range of observed behavior. We start by assuming an ansatz on the full 3-d deformation field that is inspired by the 2-d membrane ansatz proposed in [40]. Our ansatz also accounts for the variation of the nematic director through the thickness of the strip, and the warping function accounts for the transverse shear that we expect during nematic activation of an LCE strip with a director field that varies through the thickness. Our proposed kinematic ansatz is used in combination with the standard neo-Hookean strain energy density adapted to glassy LCEs [41, 42].
Following the conventional procedure of integrating the energy density over the cross-section of the strip, we arrive at a reduced lineic (per-unit-length) strain energy density that governs the deformation of the centerline of the strip while tracking changes in the shape of the cross-section. We test the reduced model by comparing its predictions against those of the full 3-d model in examples of interest to experiment. We then use the model to predict and explain recent experimental observations from [13].

The success of our dimensionally-reduced model is also notable for its computational efficiency. We highlight a powerful framework for a very general class of cross-sectional deformations proposed in [43]. However, that approach requires intensive computational effort: it requires either a 2-d finite element (FE) calculation for every element along the centerline curve (i.e., an FE\({}^{2}\) method), or it requires training the model on an arbitrarily large class of cross-section deformations to try to learn an effective behavior for implementation into a 1-d model. While potentially much more accurate, both of these approaches are far more computationally intensive than the simple 1-d FE calculations required for the model proposed in this paper. This fast computation is a vital component of our work, since it enables the rapid exploration of the design space needed to guide experimental design.

**Organization.** In Section 2, we provide the 3-d formulation, nematic patterning, and constitutive model that completely describe the 3-d pre-curved LCE strip. In Section 3, we develop the ansatz for the deformation. In Section 4, we apply the ansatz to obtain the dimension reduction of the full 3-d model to the reduced 1-d model. Finally, in Sections 5 and 6, we describe, respectively, the numerical solution procedure and the results from the dimensionally-reduced model.

## 2 Three-dimensional Formulation, Nematic Patterning, and Constitutive Modeling

### Elastic Strain Energy Density

It is usual to model LCEs using the neo-Hookean hyperelasticity model, which is based on the statistical mechanics of polymer chains, but with the stress-free state dependent on the state of the nematic director. Our model described here is standard in the literature [42]. Given a deformation gradient tensor, \(\mathbf{F}\), the compressible neo-Hookean strain energy density is: \[\mathcal{W}_{\text{nh}}\left(\mathbf{F}\right)=\frac{\mu}{2}\left(\operatorname{tr}\left(\mathbf{F}^{T}\mathbf{F}\right)-3-2\log J\right)+\frac{\kappa}{2}\mathcal{W}_{\text{vol}}\left(J\right)\quad\text{where }J=\det\mathbf{F} \tag{2.1}\] where \(\mu\) and \(\kappa\) are positive material constants with the dimension of energy per unit volume, and correspond to the shear and bulk moduli respectively. The \(\mathcal{W}_{\text{vol}}\left(J\right)\) term accounts for compressibility effects, and satisfies: (1) \(\mathcal{W}_{\text{vol}}\left(J\right)=0\iff J=1\); (2) as \(J\to 0^{+}\), \(\mathcal{W}_{\text{vol}}\left(J\right)\rightarrow\infty\); and (3) \(\mathcal{W}_{\text{vol}}^{\prime\prime}\left(1\right)>0\). For the present work, we will use: \[\mathcal{W}_{\text{vol}}\left(J\right)=J^{2}-1-2\log\left(J\right) \tag{2.2}\]
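Eqs. (2.1)-(2.2) are straightforward to transcribe numerically; a minimal sketch (the moduli values below are placeholders):

```python
# Direct numerical transcription of Eqs. (2.1)-(2.2); mu and kappa are the
# shear and bulk moduli (placeholder values).
import numpy as np

def W_vol(J):
    # Volumetric energy, Eq. (2.2): zero iff J = 1, blows up as J -> 0+.
    return J**2 - 1.0 - 2.0 * np.log(J)

def W_nh(F, mu=1.0, kappa=10.0):
    # Compressible neo-Hookean energy density, Eq. (2.1).
    J = np.linalg.det(F)
    return 0.5 * mu * (np.trace(F.T @ F) - 3.0 - 2.0 * np.log(J)) \
           + 0.5 * kappa * W_vol(J)

print(W_nh(np.eye(3)))                     # 0.0 in the undeformed state
print(W_nh(np.diag([1.2, 1/1.2, 1.0])))    # isochoric stretch: J = 1
```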
The effect of the thermal or optical stimulus is to change the nematic ordering, which we denote _nematic activation_. In the thermal case, at low temperature the nematic mesogens have local ordering that can be described by the unit-vector director field \(\mathbf{n}\); at higher temperatures, the nematic ordering is lost and the material becomes isotropic. In the optical case, the LCE specimens are doped with azobenzene - a light-sensitive molecule - that undergoes a _trans-cis_ transformation when exposed to light of the appropriate wavelength [44]. While the molecular-scale mechanisms are different in the thermal and optical cases, at the continuum scale both can be described by a stress-free deformation that we will denote \(\mathbf{F}_{l}\). A key difference between the thermal and optical cases is that the former can be well-approximated as having a uniform temperature - and consequently uniform nematic activation - in the body, whereas absorption of light through the thickness is essential to model the optical case accurately, e.g. [4, 9]. In this work, we assume uniform nematic activation.

We can write \(\mathbf{F}_{l}\) using the spectral representation as: \[\mathbf{F}_{l}=\alpha^{1/3}\mathbf{n}\otimes\mathbf{n}+\alpha^{-1/6}\left(\mathbf{I}-\mathbf{n}\otimes\mathbf{n}\right) \tag{2.3}\] where \(\mathbf{n}\) is the nematic director of unit length, \(\mathbf{I}\) is the identity tensor, and \(\alpha>0\) is the coefficient of stretch along the nematic direction. Notice that \(\det\mathbf{F}_{l}=1\). In the present work, we take our reference to be the low-temperature state. The nematic activation at high temperature results in contraction along the director and isotropic expansion in the plane orthogonal to the director. Consequently, we have \(\alpha<1\). We note that our convention is opposite to others assumed in the literature; e.g., [16] assumes a high-temperature reference, which merely implies that \(\alpha>1\) for them.

Given \(\mathbf{F}_{l}\) from (2.3), we can define the usual multiplicative decomposition of \(\mathbf{F}\) in terms of the elastic deformation \(\mathbf{F}_{e}\): \[\mathbf{F}=\mathbf{F}_{e}\mathbf{F}_{l}\quad\Longleftrightarrow\quad\mathbf{F}_{e}=\mathbf{F}\mathbf{F}_{l}^{-1} \tag{2.4}\] Noting the spectral decomposition of \(\mathbf{F}_{l}\), we can write \(\mathbf{F}_{e}\) as: \[\mathbf{F}_{e}=\alpha^{1/6}\mathbf{F}+\left(\alpha^{-1/3}-\alpha^{1/6}\right)\mathbf{F}\left(\mathbf{n}\otimes\mathbf{n}\right) \tag{2.5}\] Since \(\mathbf{F}_{l}\) changes the stress-free state, the elastic energy density for the LCE can be written: \[\hat{\mathcal{W}}\left(\mathbf{F}\right)=\mathcal{W}_{\text{nh}}\left(\mathbf{F}_{e}\right) \tag{2.6}\]
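A short sketch of the activation kinematics, checking that \(\det\mathbf{F}_{l}=1\) and that Eq. (2.5) agrees with \(\mathbf{F}\mathbf{F}_{l}^{-1}\) (the value of \(\alpha\) below is a placeholder):

```python
# F_l from Eq. (2.3) and the elastic deformation F_e from Eq. (2.5).
# alpha < 1 corresponds to contraction along the director upon activation.
import numpy as np

def F_l(n, alpha):
    # Spectral form, Eq. (2.3); n must be a unit vector.
    nn = np.outer(n, n)
    return alpha**(1/3) * nn + alpha**(-1/6) * (np.eye(3) - nn)

def F_e(F, n, alpha):
    # Eq. (2.5), equivalent to F @ inv(F_l) for the spectral form above.
    nn = np.outer(n, n)
    return alpha**(1/6) * F + (alpha**(-1/3) - alpha**(1/6)) * F @ nn

n, alpha = np.array([1.0, 0.0, 0.0]), 0.9
print(np.linalg.det(F_l(n, alpha)))                              # 1.0 (isochoric)
print(np.allclose(F_e(np.eye(3), n, alpha),
                  np.linalg.inv(F_l(n, alpha))))                 # True
```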
### Change of Reference to a Flat Strip

The strips that we consider in this work are produced as nominally flat, but then annealed to significantly reduce the stresses. In the process of annealing, the strips also develop small, but critical, transverse curvature; this is achieved by adhering the strip lengthwise to a thin cylinder during annealing [13]. Given this process of annealing, we assume in this work that the annealed configuration with transverse curvature is stress-free, i.e., the natural state. Working directly with this pre-curved natural reference poses some difficulties, in terms of algebraic complexity, for deriving a dimensionally-reduced model. Therefore, we use a non-stress-free flat strip as our reference configuration. We discuss here the transformation of the reference configuration, as shown in Figure 1.

Figure 1: Change of reference to an initially flat configuration.

Following Figure 1, we have the relation: \[\mathbf{F}^{*}=\mathbf{F}\mathbf{F}_{eq}^{-1} \tag{2.7}\] Note that if the strip is not pre-curved, \(\mathbf{F}_{eq}=\mathbf{I}\) and we recover \(\mathbf{F}^{*}=\mathbf{F}\). Since the deformation from the natural stress-free configuration to the current configuration is \(\mathbf{F}^{*}\), it follows that we should replace (2.4) by: \[\mathbf{F}^{*}=\mathbf{F}_{e}\mathbf{F}_{l} \tag{2.8}\] Combining (2.7) and (2.8) gives the expression for the elastic deformation using the flat reference configuration: \[\mathbf{F}_{e}=\mathbf{F}\mathbf{F}_{eq}^{-1}\mathbf{F}_{l}^{-1} \tag{2.9}\] which should be used as the argument for the neo-Hookean energy density in (2.6).

### Director Patterning through the Cross-Section

We use in the current work a twisted nematic director pattern. In the flat reference configuration, using the orthonormal frame shown in Figure 2, this pattern has a director that lies in the \(\mathbf{e}_{1}-\mathbf{e}_{2}\) plane and which varies only through the thickness. This director can be represented in the fixed orthonormal frame as: \[\hat{\mathbf{n}}=\cos\hat{\theta}\,\mathbf{e}_{1}+\sin\hat{\theta}\,\mathbf{e}_{2} \tag{2.10}\] where \(\hat{\theta}\) varies linearly through the thickness of the strip. The glassy behavior of the LCE causes the director to be fixed with respect to the polymer matrix background. Therefore, it is assumed to be mapped under the deformation gradient as a material vector, but normalized to remain unit [42]: \[\mathbf{n}=\frac{\mathbf{F}\hat{\mathbf{n}}}{|\mathbf{F}\hat{\mathbf{n}}|} \tag{2.11}\] Note that since \(\hat{\mathbf{n}}\) is defined in the flat reference, it is mapped under \(\mathbf{F}\) and not \(\mathbf{F}^{*}\).

Figure 2: A twisted nematic director pattern in the flat reference state. The length of the strip is aligned along \(\mathbf{e}_{1}\), the width along \(\mathbf{e}_{2}\), and the thickness along \(\mathbf{e}_{3}\). Note that \(\theta_{top}=\pi/2\) here, corresponding to \(\hat{\mathbf{n}}=\mathbf{e}_{2}\) on the top surface and \(\hat{\mathbf{n}}=\mathbf{e}_{1}\) on the bottom surface.

We note that our approach can also be used easily for other nematic director patterns. One popular configuration is the _splay-bend_ pattern, which lies in the \(\mathbf{e}_{1}-\mathbf{e}_{3}\) plane with the nematic director normal to the surface at the top of the strip and parallel to the centerline at the bottom of the strip, with linear variation through the thickness. In the dimension reduction procedure in Section 4, however, we assume that the only variation of the director is in the thickness direction.
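The twisted pattern (2.10) and the convected director (2.11) can be sketched as follows. The linear through-thickness profile \(\hat{\theta}(s_{3})\), interpolating from \(0\) at the bottom surface to \(\theta_{top}\) at the top, is our parameterization of the pattern shown in Figure 2, and the deformation gradient used in the example is a placeholder:

```python
# Twisted director pattern, Eq. (2.10), and its convected, renormalized image
# under the deformation, Eq. (2.11).  h is the strip thickness.
import numpy as np

def n_hat(s3, h, theta_top=np.pi / 2):
    # Linear twist: theta(-h/2) = 0, theta(+h/2) = theta_top.
    theta = theta_top * (s3 / h + 0.5)
    return np.array([np.cos(theta), np.sin(theta), 0.0])

def n_current(F, n_hat_vec):
    # Director convected as a material vector and renormalized, Eq. (2.11).
    v = F @ n_hat_vec
    return v / np.linalg.norm(v)

F = np.diag([1.1, 0.95, 0.96])       # placeholder deformation gradient
print(n_hat(-0.5, h=1.0))            # e1 on the bottom surface
print(n_current(F, n_hat(0.25, h=1.0)))
```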
We construct a fixed orthonormal frame \((O,\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) from the initially flat configuration, such that \(O\) is the centroid of the cross-section at \(s_{1}=0\); \((O,\mathbf{e}_{1})\) contains the centerline of the strip; and \(\mathbf{e}_{2}\) and \(\mathbf{e}_{3}\) align with the directions along the width and thickness of the strip respectively. A material point of the strip in the flat reference configuration has the position \(\mathbf{X}\) given by the expression: \[\mathbf{X}\left(s_{1},s_{2},s_{3}\right)=s_{1}\mathbf{e}_{1}+s_{2}\mathbf{e}_{2}+s_{3}\mathbf{e}_{3}. \tag{3.1}\] In the deformed configuration, we decompose the position vector of each material point into a rigid motion measured in the global coordinate frame and a local deformation of the cross-section, measured in a local orthonormal frame, \((\mathbf{e}_{1}^{\text{loc}},\mathbf{e}_{2}^{\text{loc}},\mathbf{e}_{3}^{\text{loc}})\), that is related to the fixed orthonormal frame \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) as follows: \[\mathbf{e}_{i}^{\text{loc}}(s_{1})=\mathbf{R}(s_{1})\mathbf{e}_{i} \tag{3.2}\] The rigid motion of each cross-section is given by a displacement, \(\mathbf{u}(s_{1})=u_{1}(s_{1})\mathbf{e}_{1}+u_{2}(s_{1})\mathbf{e}_{2}+u_{3}(s_{1})\mathbf{e}_{3}\), and a finite rotation characterized by a rotation tensor, \(\mathbf{R}(s_{1})\). The local cross-section deformation is described by the coordinates, \(y^{\text{loc}}(s_{1},s_{2})\) and \(z^{\text{loc}}(s_{1},s_{2})\), which track the position of the \(s_{2}\) coordinate on the middle surface cross-section curve in the local frame (Fig. 3). Figure 3: Schematic of an LCE strip in the deformed configuration. The red line in the image of the full strip (left) corresponds to the \(s_{1}\) coordinate direction. Each point along this centerline curve will have a corresponding position coordinate, \(\mathbf{u}(s_{1})\), and a rotation matrix, \(\mathbf{R}(s_{1})\), that track the local orthonormal frame at each point. A typical deformed cross-section is shown (right) with the open circular middle plane shown with the dashed line and the \(s_{2}\) coordinate direction in red. The coordinates \(y^{\text{loc}}\) and \(z^{\text{loc}}\) track the position of the \(s_{2}\) coordinate, while the \(s_{3}\) coordinate will traverse through the thickness of the strip in the direction perpendicular to the cross-section curve. We further introduce the \(s_{3}\) coordinate that traverses through the thickness of the strip. Since this direction remains perpendicular to the middle surface cross-section curve, it can be expressed in the local frame by components represented by the derivatives of \(y^{\text{loc}}\) and \(z^{\text{loc}}\) with respect to \(s_{2}\), normalized by a local stretch factor, \(j_{2}(s_{1},s_{2})\), given by: \[j_{2}(s_{1},s_{2})=\sqrt{\left(y^{\text{loc}}_{,2}\right)^{2}+\left(z^{\text{loc}}_{,2}\right)^{2}}. \tag{3.3}\] where \((\bullet)_{,i}\) refers to the partial derivative of \((\bullet)\) with respect to \(s_{i}\). Lastly, we define the out-of-plane warping displacement of the cross-section along the length of the strip, \(\chi(s_{1},s_{2})\), described in detail below in Section 3.A.3.
Putting all this together, we have that the material point \(\mathbf{X}(s_{1},s_{2},s_{3})\) in the flat reference goes to the location \(\mathbf{x}(s_{1},s_{2},s_{3})\) in the deformed configuration, given by: \[\mathbf{x}(s_{1},s_{2},s_{3})=\underbrace{(s_{1}+u_{1})\mathbf{e}_{1}+u_{2}\mathbf{e}_{2}+u_{3}\mathbf{e}_{3}}_{=:\tilde{\mathbf{u}}}+\underbrace{\chi\mathbf{e}^{\text{loc}}_{1}+\left(y^{\text{loc}}-\frac{z^{\text{loc}}_{,2}}{j_{2}}s_{3}\right)\mathbf{e}^{\text{loc}}_{2}+\left(z^{\text{loc}}+\frac{y^{\text{loc}}_{,2}}{j_{2}}s_{3}\right)\mathbf{e}^{\text{loc}}_{3}}_{=:\mathbf{R}^{T}\mathbf{x}^{\text{loc}}} \tag{3.4}\] We note that the introduction here of the \(s_{3}\) coordinate is representative of our three-dimensional generalization of the kinematics developed in [40]. Among other things, explicitly including the thickness coordinate is necessary to allow for a nematic director that varies through the thickness as described in Section 2.C. The inclusion of this parameter is essential to our model, since the through-thickness variation of the nematic director is the root cause of the spontaneous bending and twisting curvatures in the strip, which in turn further drive the large class of deformations possible with LCE strips. The breadth of nonlinear behavior driven by this through-thickness variation requires careful averaging and re-scaling that we describe in Section 4. For ease of notation, we have defined \(\tilde{\mathbf{u}}(s_{1})\) and \(\mathbf{x}^{\text{loc}}(s_{1},s_{2},s_{3})\) as the vectors containing the terms measured in the global frame and local frame respectively; the presence of \(\mathbf{R}\) is to map to the global frame. We can write the deformation gradient as: \[\mathbf{x}=\tilde{\mathbf{u}}+\mathbf{R}^{T}\mathbf{x}^{\text{loc}}\Rightarrow\mathbf{F}=\nabla\tilde{\mathbf{u}}+\nabla\left(\mathbf{R}^{T}\mathbf{x}^{\text{loc}}\right) \tag{3.5}\] Note that because our curvilinear coordinates, \((s_{1},s_{2},s_{3})\), correspond to the material coordinates in the reference configuration, the derivatives used for the construction of the deformation gradient are taken with respect to these curvilinear coordinates. ### Further Simplifying Assumptions We here present a list of assumptions that together simplify the kinematics and allow us to carry through with the dimensionality reduction. #### 3.A.1 A Long, Thin Strip We assume that the thickness of the strip is much smaller than its width, which is in turn much smaller than the length of the strip. The extreme aspect ratio of the cross-section allows us to use a specific warping function for thin-walled open cross-sections, which we describe in Section 3.A.3, while the disparity between width and length of the strip will be important in our dimension reduction procedure given in Section 4, where we will use truncated Taylor expansions in terms of the small dimensionless quantities \(h/L\) and \(a/L\). \[\frac{h}{a}\ll 1\quad\text{ and }\quad\frac{a}{L}\ll 1 \tag{3.6}\] #### 3.A.2 Open Circular Cross-Section Curve We next assume that the curve that makes up the cross-section of the middle surface of the strip remains open circular throughout the deformation.
This means that we can write explicitly a form for \(y^{\text{loc}}(s_{1},s_{2})\) and \(z^{\text{loc}}(s_{1},s_{2})\) in terms of a new variable, \(\beta(s_{1},s_{2})\), which describes the angle the tangent line to the cross-section curve makes with the \(\mathbf{e}^{\text{loc}}_{2}\) direction: \[y^{\text{loc}}_{,2}=\cos\beta\quad,\quad z^{\text{loc}}_{,2}=\sin\beta \tag{3.7}\] The assumption that the cross-section is circular implies that \(\beta(s_{1},s_{2})\) is linear in \(s_{2}\), and hence has the expression: \[\beta=2\beta_{e}\frac{s_{2}}{a} \tag{3.8}\] where \(\beta_{e}(s_{1})\) is the opening angle of the cross-section (Fig. 3, right). We note that a more general cross-section shape would correspond to a more general functional dependence of \(\beta\) on \(s_{2}\). Assuming this form gives the following expressions for \(y^{\text{loc}}(s_{1},s_{2})\) and \(z^{\text{loc}}(s_{1},s_{2})\): \[y^{\text{loc}}=\frac{a}{2\beta_{e}}\sin\left(2\beta_{e}\frac{s_{2}}{a}\right) \tag{3.9}\] \[z^{\text{loc}}=\frac{a}{2\beta_{e}^{2}}\left(\sin\left(\beta_{e}\right)-\beta_{e}\cos\left(2\beta_{e}\frac{s_{2}}{a}\right)\right) \tag{3.10}\] This leads to further simplifications. We note first that: \[\left(y^{\text{loc}}_{,2}\right)^{2}+\left(z^{\text{loc}}_{,2}\right)^{2}=1\implies j_{2}=1 \tag{3.11}\] This is a significant simplification in the expressions (3.4) and (3.5). We can use a similar characterization to simplify \(\mathbf{F}_{eq}\). For a strip in the natural, stress-free configuration with an open-circular cross-section curve of opening angle \(\bar{\beta}_{e}\), we can write the inverse of \(\mathbf{F}_{eq}\) as follows: \[\mathbf{F}_{eq}^{-1}=\begin{bmatrix}1&0&0\\ 0&\frac{a\cos\left(2\bar{\beta}_{e}s_{2}/a\right)}{a-2\bar{\beta}_{e}s_{3}}&\frac{a\sin\left(2\bar{\beta}_{e}s_{2}/a\right)}{a-2\bar{\beta}_{e}s_{3}}\\ 0&-\sin\left(2\bar{\beta}_{e}s_{2}/a\right)&\cos\left(2\bar{\beta}_{e}s_{2}/a\right)\end{bmatrix}.\] Note that if the opening angle \(\bar{\beta}_{e}=0\), then we recover \(\mathbf{F}_{eq}=\mathbf{I}\). Further, the determinant of \(\mathbf{F}_{eq}\) depends on \(s_{3}\) and \(a\): \[\det\mathbf{F}_{eq}^{-1}=\frac{a}{a-2\bar{\beta}_{e}s_{3}}, \tag{3.12}\] but, since \(s_{3}\in[-h/2,h/2]\) and the thickness of the strip is much less than its width from (3.6), we have \(\det\mathbf{F}_{eq}^{-1}\approx 1\), implying that: \[\det\mathbf{F}^{*}=\det\mathbf{F}_{eq}^{-1}\det\mathbf{F}\approx\det\mathbf{F} \tag{3.13}\] #### 3.A.3 Assumptions on the Warping Our final assumptions concern the form of the warping. We assume that the out-of-plane displacement is due only to warping, and the warping is due only to twisting, following the classical Vlassov theory [45] for the warping of thin-walled and open cross-sections. This warping displacement can be written as: \[\chi=\omega k_{t}^{r}, \tag{3.14}\] where \(\omega(s_{1},s_{2})\) is the so-called sectorial coordinate of the middle surface cross-section curve and \(k_{t}^{r}(s_{1})=(\mathbf{e}_{2}^{\text{loc}})_{,1}\cdot\mathbf{e}_{3}^{\text{loc}}\) is the twisting curvature. Under the assumption that the thickness of the strip is much smaller than the width, \(\omega(s_{1},s_{2})\) can be obtained from the following relation [40]: \[\omega_{,2}=y^{\text{loc}}_{,2}z^{\text{loc}}-y^{\text{loc}}z^{\text{loc}}_{,2}\quad,\quad\omega=\int\omega_{,2}\;\mathrm{d}s_{2} \tag{3.15}\] After integrating with respect to \(s_{2}\), the constant of integration can be solved for by noting: \[\int_{-a/2}^{a/2}\omega\;\mathrm{d}s_{2}=0.
\tag{3.16}\] This gives us the following final form for \(\omega(s_{1},s_{2})\): \[\omega=\frac{a}{4\beta_{e}^{3}}\left(a\sin\left(\beta_{e}\right)\sin\left(2\beta_{e}\frac{s_{2}}{a}\right)-2\beta_{e}^{2}s_{2}\right) \tag{3.17}\] Further, since the warping displacement, \(\chi(s_{1},s_{2})\), remains small with respect to the width of the strip, we have that: \[(\chi_{,2})^{2}+(y^{\text{loc}}_{,2})^{2}+(z^{\text{loc}}_{,2})^{2}\approx 1 \tag{3.18}\] In other words, assuming that the cross-section is open circular constrains our middle surface cross-section curve to be effectively inextensible, which is reasonable for the deformations of long, thin strips. ## 4 Dimensionally-Reduced Model Starting from (6) and taking into account the approximate relation in (3.13), we can rewrite the 3-d energy density as follows: \[\begin{split}\hat{\mathcal{W}}&=\frac{\mu}{2}\left(\alpha^{1/3}F_{ij}^{*}F_{ij}^{*}+\left(\alpha^{-2/3}-\alpha^{1/3}\right)F_{ij}^{*}n_{j}F_{ik}^{*}n_{k}-3-2\log J\right)+\frac{\kappa}{2}\mathcal{W}_{\text{vol}}(J)\\ &=\frac{\mu}{2}\left(\alpha^{1/3}F_{ij}^{*}F_{ij}^{*}+\left(\alpha^{-2/3}-\alpha^{1/3}\right)F_{ij}\hat{n}_{j}F_{ik}\hat{n}_{k}-3-2\log J\right)+\frac{\kappa}{2}\mathcal{W}_{\text{vol}}(J)\end{split} \tag{4.1}\] where we use the Einstein summation convention here and in the following. Note that the second version of \(\hat{\mathcal{W}}\) arises from the fact that \(\hat{\mathbf{n}}\) is defined in the flat reference state. Our 3-d energy, \(\mathcal{E}_{\text{3D}}\), is the integral over the body, \(\Omega_{\text{3D}}\), of the strain energy density: \[\mathcal{E}_{\text{3D}}=\int_{\Omega_{\text{3D}}}\hat{\mathcal{W}}(\mathbf{F}_{e}(s_{1},s_{2},s_{3})) \tag{4.2}\] We begin by passing to the limit as \(h\to 0\) to obtain a 2-d shell model. As is typical in methods of dimensionality reduction, in order to avoid trivial results - i.e., either zero or infinite energy - we first re-scale the energy by dividing by the small thickness parameter, \(h\), and introduce the new variable \(s_{3}^{\prime}\) such that \(s_{3}=hs_{3}^{\prime}\). This allows us to write the energy density in a fixed, \(h\)-independent, re-scaled reference configuration occupying the space, \(\Omega_{\text{2D}}\times[-1/2,1/2]\), where \(\Omega_{\text{2D}}:=[0,L]\times[-a/2,a/2]\). \[\mathcal{E}_{h}=\frac{1}{h}\mathcal{E}_{\text{3D}}=\int_{\Omega_{\text{2D}}\times[-\frac{1}{2},\frac{1}{2}]}\hat{\mathcal{W}}\left(\mathbf{F}_{e}(s_{1},s_{2},hs_{3}^{\prime})\right) \tag{4.3}\] Under this rescaling, \(\tilde{\mathbf{u}}\) and \(\mathbf{R}\) remain the same, but \(\mathbf{x}^{\text{loc}}\) and \(\nabla\mathbf{x}^{\text{loc}}\) are transformed to: \[\mathbf{x}^{\text{loc}}=\left\{\begin{array}{c}\chi\\ y^{\text{loc}}-z_{,2}^{\text{loc}}hs_{3}^{\prime}\\ z^{\text{loc}}+y_{,2}^{\text{loc}}hs_{3}^{\prime}\end{array}\right\}\quad,\quad\nabla\mathbf{x}^{\text{loc}}=\left[\begin{matrix}\chi_{,1}&\chi_{,2}&0\\ y_{,1}^{\text{loc}}-z_{,21}^{\text{loc}}hs_{3}^{\prime}&y_{,2}^{\text{loc}}-z_{,22}^{\text{loc}}hs_{3}^{\prime}&-z_{,2}^{\text{loc}}\\ z_{,1}^{\text{loc}}+y_{,21}^{\text{loc}}hs_{3}^{\prime}&z_{,2}^{\text{loc}}+y_{,22}^{\text{loc}}hs_{3}^{\prime}&y_{,2}^{\text{loc}}\end{matrix}\right] \tag{4.4}\] Furthermore, \(\hat{\theta}\) (and by extension \(\hat{\mathbf{n}}\) and \(\mathbf{n}\)) become independent of \(h\).
\[\hat{\theta}(s_{3}^{\prime})=\left(\theta_{\text{top}}+\frac{\pi}{4}\right)-\frac{\pi}{2}s_{3}^{\prime} \tag{4.5}\] Lastly, in order to obtain finite - i.e., non-zero and non-infinite - curvatures in the limit as \(h\to 0\), it is necessary to scale the expansion coefficient, \(\alpha\), linearly in \(h\): \[\alpha=1+\frac{\alpha_{0}}{h_{0}}h, \tag{4.6}\] where \(\alpha_{0}<0\) is a dimensionless parameter and \(h/h_{0}\to 1\) as we take \(h\to 0\). Note that the authors in [16] instead use \(\alpha_{0}>0\) because they use the high-temperature state as their reference, whereas we use the low-temperature state. Next, we use a linear series expansion of the re-scaled energy density about \(h=0\) and then integrate with respect to \(s_{3}^{\prime}\). Keeping in mind our definition of \(\alpha\) from (4.6), the first term in (4.1) has the following expansion: \[\alpha^{1/3}F_{ij}^{*}F_{ij}^{*}=\left[F_{ij}^{*}F_{ij}^{*}\right]_{h=0}+h\frac{\alpha_{0}}{3h_{0}}\left[F_{ij}^{*}F_{ij}^{*}\right]_{h=0}+2h\left[F_{ij}^{*}\right]_{h=0}\frac{\partial F_{ij}^{*}}{\partial h}+O(h^{2}) \tag{4.7}\] We note that the third term in the above expansion is linear in \(s_{3}^{\prime}\), and will therefore integrate to zero over \(s_{3}^{\prime}\in[-1/2,1/2]\). The remaining terms from the expansion are constant with respect to \(s_{3}^{\prime}\) and will therefore carry through to the areal strain energy unchanged. The second term in (4.1) expands as follows: \[\left(\alpha^{-2/3}-\alpha^{1/3}\right)F_{ij}F_{kj}\hat{n}_{i}\hat{n}_{k}=-h\frac{\alpha_{0}}{h_{0}}\left[F_{ij}F_{kj}\hat{n}_{i}\hat{n}_{k}\right]_{h=0}+O(h^{2}) \tag{4.8}\] The only dependency on \(s_{3}^{\prime}\) in this expansion is in \(\hat{n}_{i}\hat{n}_{k}\) so, proceeding without using a specific form for the variation of \(\hat{\mathbf{n}}\) with \(s^{\prime}_{3}\), we have: \[\int_{-\frac{1}{2}}^{\frac{1}{2}}\left(\alpha^{-2/3}-\alpha^{1/3}\right)F_{ij}F_{kj}\hat{n}_{i}\hat{n}_{k}\,\mathrm{d}s^{\prime}_{3}=-\alpha_{0}\left[F_{ij}F_{kj}\right]_{h=0}\mathsf{M}[\hat{n}_{i}\hat{n}_{k}]+O(h^{2}) \tag{4.9}\] where we define the through-the-thickness averaging operator: \[\mathsf{M}[\bullet]=\int_{-\frac{1}{2}}^{\frac{1}{2}}(\bullet)\,\mathrm{d}s^{\prime}_{3} \tag{4.10}\] For the volumetric terms, i.e., those that depend on \(J\), we follow a different approach. Because the quantity \(J\) has a clear geometric interpretation that we aim to preserve, we do not directly work with the volumetric energy density; we instead reduce \(J\) to \(J^{h}\), and then apply the energy to \(J^{h}\). Roughly, our approach is based on the recognition that the volumetric term \(\mathcal{W}_{\text{vol}}\) is a penalty that approximates the incompressibility constraint \(J=1\); therefore, we first identify \(J^{h}\), and then apply the penalty \(\mathcal{W}_{\text{vol}}\) to approximately enforce the constraint on \(J^{h}\). To obtain \(J^{h}\), we first note that after re-scaling by the change of variables \(s_{3}=hs^{\prime}_{3}\), simply setting \(h=0\) would make \(J=0\). The appropriate quantity is \(J/h\). We find the limit as \(h\to 0\) of \(J/h\) and integrate with respect to \(s^{\prime}_{3}\): \[J^{h}=\mathsf{M}\left[\lim_{h\to 0}\frac{J}{h}\right] \tag{4.11}\] The aim of this rescaling is to avoid changing the topological structure of this penalty term in the integrand of the reduced functional. Maintaining this topological structure throughout the dimension reduction procedure also maintains the physical meaning of enforcing incompressibility through a penalty term.
This heuristic idea works out, since the resulting expression satisfies \(J^{h}=\det\mathbf{F}^{h}\) (defined below in (4.13)), which mimics the behavior of the full 3-d description and shows that the reduction of \(J\) to \(J^{h}\) maintains the geometric meaning of compressibility. With the results above, we can take the limit as \(h\to 0\) to obtain a new reduced strain energy density function, \(\hat{\mathcal{W}}^{h}(s_{1},s_{2})\), given by: \[\hat{\mathcal{W}}^{h}=\frac{\mu}{2}\left(\left(1+\frac{\alpha_{0}}{3}\right)\left(F^{*h}_{ij}F^{*h}_{ij}\right)-\alpha_{0}\left(F^{h}_{ji}F^{h}_{jk}\mathsf{M}\left[\hat{n}_{i}\hat{n}_{k}\right]\right)-3-2\log J^{h}\right)+\frac{\kappa}{2}\left(\mathcal{W}_{\text{vol}}\left(J^{h}\right)\right) \tag{4.12}\] where \(\mathbf{F}^{h}(s_{1},s_{2})\) and \(\mathbf{F}^{*h}(s_{1},s_{2})\) are the reduced deformation gradients given by: \[\mathbf{F}^{h}=\left.\mathbf{F}\right|_{s_{3}=0}=\left(\nabla\tilde{\mathbf{u}}+\left(\nabla\mathbf{R}^{T}\right)\mathbf{x}^{\text{loc},h}+\mathbf{R}^{T}\left(\nabla\mathbf{x}^{\text{loc}}\right)^{h}\right) \tag{4.13}\] and \[\mathbf{F}^{*h}=\left.\mathbf{F}^{*}\right|_{s_{3}=0}=\mathbf{F}^{h}\left(\mathbf{F}^{h}_{eq}\right)^{-1} \tag{4.14}\] where the quantities that appear above are given by: \[\mathbf{x}^{\text{loc},h}=\left\{\begin{array}{c}\chi\\ y^{\text{loc}}\\ z^{\text{loc}}\end{array}\right\}\implies\left(\nabla\mathbf{x}^{\text{loc}}\right)^{h}=\begin{bmatrix}\chi_{,1}&\chi_{,2}&0\\ y^{\text{loc}}_{,1}&y^{\text{loc}}_{,2}&-z^{\text{loc}}_{,2}\\ z^{\text{loc}}_{,1}&z^{\text{loc}}_{,2}&y^{\text{loc}}_{,2}\end{bmatrix} \tag{4.15}\] and \[\left(\mathbf{F}^{h}_{eq}\right)^{-1}=\begin{bmatrix}1&0&0\\ 0&\cos\left(2\bar{\beta}_{e}s_{2}/a\right)&\sin\left(2\bar{\beta}_{e}s_{2}/a\right)\\ 0&-\sin\left(2\bar{\beta}_{e}s_{2}/a\right)&\cos\left(2\bar{\beta}_{e}s_{2}/a\right)\end{bmatrix} \tag{4.16}\] Our 2-d energy functional is then given by \[\mathcal{E}_{\text{2D}}=\int_{\Omega_{\text{2D}}}\hat{\mathcal{W}}^{h}(s_{1},s_{2}) \tag{4.17}\] We follow a slightly different procedure to pass to the 1-d model for LCE strips. We still rescale the energy by dividing by the width, \(a\), and introduce the new coordinate \(s_{2}^{\prime}\) such that \(s_{2}=as_{2}^{\prime}\). \[\mathcal{E}_{a}^{h}=\frac{1}{a}\mathcal{E}_{\text{2D}}=\int_{[0,L]\times[-\frac{1}{2},\frac{1}{2}]}\hat{\mathcal{W}}^{h}\left(s_{1},as_{2}^{\prime}\right) \tag{4.18}\] However, we now would like to retain some information about the width of the strip in an attempt to capture the difference in twisting behavior associated with _wide_ vs _narrow_ strips, defined by the value of the aspect ratio \(a/L\). The authors of [46] balanced bending and stretching energies to define a non-dimensional width parameter that depends on the width, length, and a so-called _reference curvature_ which measures the natural curvature associated with saddle-like equilibrium configurations. Though we have eliminated the dependence on the thickness in the previous steps, our reference curvature (i.e., the curvature associated with nematic activation in twisted-nematic configurations) is governed by \(\alpha\), which we scaled with the thickness of the strip in (4.6). Therefore, keeping the width of our strip, \(a\), in the reduced 1-d energy density allows us to capture the behavior exhibited in [46].
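The through-the-thickness average \(\mathsf{M}[\hat{n}_{i}\hat{n}_{k}]\) that enters (4.12) can be evaluated by direct quadrature. The following minimal sketch does this for the twisted director of (10) and (4.5); the offset angle and quadrature grid are illustrative assumptions.

```python
import numpy as np

# A sketch of the averaging operator M[.] of Eq. (4.10) applied to the dyad
# n_hat (x) n_hat used in Eq. (4.12); theta_top and the grid are assumptions.
theta_top = np.deg2rad(40.0)

s3p = np.linspace(-0.5, 0.5, 2001)               # re-scaled thickness coordinate
theta = (theta_top + np.pi/4) - (np.pi/2) * s3p  # twisted director angle, Eq. (4.5)
n_hat = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])

# Component-wise integration of n_i n_k over s3' in [-1/2, 1/2]
M = np.trapz(n_hat[:, None, :] * n_hat[None, :, :], s3p, axis=-1)
print(np.isclose(np.trace(M), 1.0))  # True: n_hat is a unit vector at every s3'
```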
So, to obtain the 1-d energy density, \(\hat{\mathcal{W}}_{a}^{h}\), without taking the limit as \(a\to 0\), we simply integrate the re-scaled 2-d energy density above with respect to \(s_{2}^{\prime}\). With the current construction, all terms are integrable in closed form in \(s_{2}^{\prime}\); however, since this final 1-d energy density does not have a clean representation, we simply use an overbar to represent integration with respect to \(s_{2}^{\prime}\) as follows: \[\overline{(\cdot)}=\int_{-\frac{1}{2}}^{\frac{1}{2}}(\cdot)\;\mathrm{d}s_{2}^{\prime} \tag{4.19}\] The final lineic 1-d strain energy density is given by: \[\hat{\mathcal{W}}_{a}^{h}=\frac{\mu}{2}\left(\left(1+\frac{\alpha_{0}}{3}\right)\left(\overline{F_{ij}^{*h}F_{ij}^{*h}}\right)-\alpha_{0}\left(\overline{F_{ji}^{h}F_{jk}^{h}\mathsf{M}\left[\hat{n}_{i}\hat{n}_{k}\right]}\right)-3-2\overline{\log J^{h}}\right)+\frac{\kappa}{2}\left(\overline{\mathcal{W}_{\text{vol}}\left(J^{h}\right)}\right) \tag{4.20}\] This reduced energy, together with the definitions of \(\mathbf{F}^{h}\) and \(\mathbf{F}^{*h}\) (from (4.13) and (4.14), respectively) and the kinematic descriptions of \(y^{\text{loc}}\), \(z^{\text{loc}}\), and \(\chi\) (from (3.9), (3.10), and (3.14), respectively) makes up our reduced order model. Tracing back these equations, the final energy density has first-order derivatives in the quantities \(\mathbf{u}\), \(\mathbf{R}\), and \(\beta_{e}\). ### Description of Reduced Order Terms It should first be noted that this reduced form clearly separates the classical hyper-elasticity model and the effects of stimuli on the nematic liquid crystals. When \(\alpha_{0}=0\), corresponding to zero external stimuli, the nematic terms vanish, leaving us with a reduced compressible hyper-elasticity model where the effects of the changing cross-sectional shape are integrated into \(\mathbf{F}^{h}\). To gain a better understanding of this reduced energy, we recall the three terms in \(\mathbf{F}^{h}\). 1. \(\nabla\mathbf{\tilde{u}}\) captures the stretching and shear in the ribbon. 2. \(\left(\nabla\mathbf{R}^{T}\right)\mathbf{x}^{\text{loc},h}\) includes a gradient of the cross-section rotation, so it captures bending and twisting. 3. \(\mathbf{R}^{T}\left(\nabla\mathbf{x}^{\text{loc}}\right)^{h}\) captures contributions from the changing cross-section shape through the gradient on \(\mathbf{x}^{\text{loc}}\). When \(\mathbf{F}^{h}\) is contracted with itself in the energy density, there arise couplings between these different behaviors. For example, the first term contracted with itself gives the stretching energy, while the first term contracts with the second term to give the bending-stretching coupling. For appropriate \(\alpha_{0}<0\), by (4.6) we have \(0<\alpha<1\) since both \(h\) and \(h_{0}\) are positive. From (3), it is clear that this would result in a compression along the nematic director and expansion transverse to the director, which is exactly the deformation associated with exposing a nematic LCE to external stimuli. In the reduced model, \(\alpha_{0}<0\) weights the energy density differently, giving less weight to the purely geometric terms (which look like a reduced \(\operatorname{tr}(\mathbf{F}^{T}\mathbf{F})\)) and more weight to the nematic terms (which look like a reduced \(\left|\mathbf{F}\mathbf{n}\right|^{2}\)).
Further, with no nematic activation (i.e., \(\alpha_{0}=0\)), the nematic terms are eliminated, leaving us with a form that looks like a reduced model for compressible neo-Hookean materials. ### Boundary Conditions Dirichlet boundary conditions are easily applied in the traditional sense for each of the reduced kinematic variables. However, the corresponding Neumann boundary conditions do not have an easy geometric or physical interpretation. To illustrate this point, consider the traction boundary condition at the end of the LCE strip. The normal vector at \(s_{1}=L\) is approximately given by \(\mathbf{e}_{1}^{\text{loc}}(L)\), so to apply the traction, \(\mathbf{t}\), at the end of the strip from the fully 3-d perspective, we would need to solve with the boundary constraints from the classical theory, given by: \[\begin{split}\mathbf{t}&=J^{-1}\frac{\partial\hat{\mathcal{W}}}{\partial\mathbf{F}}\mathbf{F}^{T}\mathbf{e}_{1}^{\text{loc}}\\ &=J^{-1}\left(\mu\left(\mathbf{F}-\mathbf{F}^{-T}\right)+\kappa\left(J-J^{-1}\right)J\mathbf{F}^{-T}\right)\mathbf{F}^{T}\mathbf{e}_{1}^{\text{loc}}\\ &=\left(\mu J^{-1}\left(\mathbf{F}\mathbf{F}^{T}-\mathbf{I}\right)+\kappa\left(J-J^{-1}\right)\mathbf{I}\right)\mathbf{e}_{1}^{\text{loc}}\end{split} \tag{4.21}\] To arrive at the appropriate reduced traction at the boundary, we should proceed by averaging the contributions to (4.21) over the entire cross-section of the strip by integrating over \(s_{2}\) and \(s_{3}\) to obtain a point constraint to be applied at \(s_{1}=L\). This would produce a set of algebraic constraints on our reduced kinematic variables that would together describe the reduced boundary conditions. However, this becomes algebraically formidable to compute due to the number of inverses and matrix multiplications on the deformation gradient, and the resulting constraints on the reduced variables would be difficult to implement in non-trivial cases. For this reason, the present work focuses only on calculations with Dirichlet and Neumann boundary conditions - despite the unclear interpretation of the latter - on the reduced kinematic variables. ## 5 Numerical Method ### Numerical Treatment of Finite Rotations To parameterize the 3-d finite rotations, we follow [40] in using unit quaternions for their noted numerical stability and computational efficiency. The 3-d axial rotation vector associated with the rotation tensor, \(\mathbf{R}(s_{1})\), is represented by a four-dimensional vector, \(\mathbf{q}(s_{1})=(q_{0},q_{1},q_{2},q_{3})\), using the following relationship: \[\mathbf{R}=\begin{bmatrix}(1-2q_{2}^{2}-2q_{3}^{2})&2(q_{1}q_{2}-q_{0}q_{3})&2(q_{1}q_{3}+q_{0}q_{2})\\ 2(q_{1}q_{2}+q_{0}q_{3})&(1-2q_{1}^{2}-2q_{3}^{2})&2(q_{2}q_{3}-q_{0}q_{1})\\ 2(q_{1}q_{3}-q_{0}q_{2})&2(q_{2}q_{3}+q_{0}q_{1})&(1-2q_{1}^{2}-2q_{2}^{2})\end{bmatrix} \tag{5.1}\] where \(q_{0}(s_{1})\), \(q_{1}(s_{1})\), \(q_{2}(s_{1})\), and \(q_{3}(s_{1})\) are dimensionless, and \(\mathbf{q}(s_{1})\) is a unit vector: \[q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1. \tag{5.2}\] These quaternions are related to the rotation tensor, \(\mathbf{R}\), through its axial vector, \(\mathbf{r}\), by the following: \[q_{0}=\cos\frac{r}{2}\quad,\quad q_{1}=v_{1}\sin\frac{r}{2}\quad,\quad q_{2}=v_{2}\sin\frac{r}{2}\quad,\quad q_{3}=v_{3}\sin\frac{r}{2} \tag{5.3}\] where \(r=\|\mathbf{r}\|\) is the magnitude of the rotation and \(\mathbf{v}=\mathbf{r}/r\) is the unit vector in the direction of the axis of rotation.
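A minimal sketch of the quaternion-to-rotation map of (5.1)-(5.3) follows; the rotation magnitude and axis are illustrative values. The final line checks that the resulting tensor is a proper rotation.

```python
import numpy as np

def quat_to_R(q):
    """Rotation tensor of Eq. (5.1) from a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [1 - 2*q2**2 - 2*q3**2, 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*q1**2 - 2*q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*q1**2 - 2*q2**2],
    ])

# Eq. (5.3): quaternion from a rotation of magnitude r about a unit axis v
r, v = np.pi/3, np.array([0.0, 0.0, 1.0])   # illustrative rotation
q = np.concatenate([[np.cos(r/2)], v * np.sin(r/2)])

R = quat_to_R(q)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```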
The unit-length constraint on the quaternions will be implemented with Lagrange multipliers in the lineic strain energy density: \[\tilde{\mathcal{W}}_{a}^{h}=\hat{\mathcal{W}}_{a}^{h}+\lambda(q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}-1) \tag{5.4}\] ### Finite Element Discretization For this work, we find the deformation by minimizing the potential energy (subject to appropriate boundary conditions) with respect to the following reduced kinematic variables: * cross-section displacement: \(u_{1},u_{2},u_{3}\) * cross-section rotation: \(q_{0},q_{1},q_{2},q_{3}\) * cross-section opening angle: \(\beta_{e}\) We minimize the energy in this nonlinear problem in the usual way: we start with an initial guess for the kinematic variables, evaluate the functional derivative, and update to move in the steepest descent direction. At each step of the iteration, we calculate the current nematic director, \(\mathbf{n}\), by reconstructing the full 3-d deformation gradient (3.5) and mapping the nematic director as in (11), to provide the required expressions for the director field in the energy. For the finite element discretization, we use standard 1-d finite elements with quadratic interpolation for the reduced kinematic variables, and linear interpolation for the Lagrange multiplier that constrains the quaternions to unit length. This is in line with the heuristic for constrained problems of using a lower-order interpolation for the Lagrange multipliers. ## 6 Numerical Results In this section, we present some results for our 1-d model, computed numerically. We first present examples to validate the model. Since the main goals are to capture (1) the spontaneous curvature behavior of twisted nematic LCEs, and (2) the classic behaviors of tape springs, we test the ability of the 1-d model to capture these effects. We first compute in Section 6.A the spontaneous curvature of twisted nematic LCEs under nematic activation at different offset angles using the 1-d model. Next, in Section 6.B, we present our results from the 1-d model for the tape-spring instability and twist-bend coupling of transversely curved strips. To validate our model, we compare with the results of a full 3-d model in Section 6.C. Finally, after validation, we present further results from our reduced model that we compare qualitatively to experiments from [13]. We highlight that the comparisons to experiment are only qualitative because the experiments of [13] - as well as most others in the literature - do not quantitatively measure deformation, stress, or other quantities that can be used for a quantitative test of the model. ### Heating/Illumination in a Twisted Nematic LCE We capture the experimental observations shown in Figure 4 from [13] using the 1-d model; numerical results are shown in Figure 5 with the colors corresponding to the normalized lineic strain energy density. Figure 5 presents results for two initially-flat LCE strips with a twisted nematic director configuration. The first has an offset angle, \(\theta_{top}=0^{\circ}\), while the second has \(\theta_{top}=40^{\circ}\). We hold the left end fixed with zero displacement (\(u_{1}=u_{2}=u_{3}=0\)), zero rotation (\(q_{0}=1\), \(q_{1}=q_{2}=q_{3}=0\)), and zero transverse curvature (\(\beta_{e}=0\)). After nematic activation, the strip with \(\theta_{top}=0^{\circ}\) bends upward as seen in the experiments in Figure 4; this is because the strip expands along the length on the bottom side of the strip and contracts on the top side.
For the strip with \(\theta_{top}=40^{\circ}\), the model captures the observed combined bending and twisting. Figure 4: This figure from [13] shows experimental observations of heating a twisted nematic LCE strip with different cut angles, \(\theta\). Top right: for an unconstrained strip, \(\theta=0^{\circ}\) shows pure bending, while \(\theta=40^{\circ}\) forms a helical shape with combined bending and twisting. Bottom right: by constraining the ends of the strip, we see that a spontaneous cross-sectional curvature develops from a strip with \(\theta=0^{\circ}\). ### Classical Results in the Analysis of Tape-Springs A key goal of this work is to be able to effectively capture the localizations in LCE strips that are closely related to the classical tape-spring instability observed in the purely mechanical setting in transversely curved strips. Here, we show that the 1-d model is able to capture this instability, as well as the well-known coupling between twisting and bending, in a purely mechanical setting. To model the tape-spring instability, the cross-section is set to have transverse curvature in its natural configuration. The left end of the strip is held fixed, while a downward force is applied to the right end. For ease of visualization, we also applied the equivalent of a roller boundary condition in the middle of the strip, i.e., \(u_{3}(s_{1}=0.5)=0\). As shown in Figure 6, the bending (and strain energy) localizes in a small region, and the transverse curvature vanishes there. The color corresponds to the change in opening angle \(\beta_{e}\) as compared to the natural configuration, to highlight the flattening of the cross-section typical of the tape-spring instability. We note that our lineic energy density contains first-order derivatives of \(\beta_{e}\), which (roughly speaking) keeps \(\beta_{e}\) continuous, so the tape-spring localization does not become singular and can be captured with a standard finite element discretization. Next, we demonstrate the ability of our model to capture coupled behavior. In particular, the cross-section was assigned a transverse curvature in the natural configuration, and a torque was applied to the right end while the left end was held fixed. Due to the cross-sectional curvature, the twisting was coupled with bending in the same sense as the twisting, as shown in Figure 7. Figure 5: (a) An initially-flat LCE strip with no nematic offset angle will undergo bending upon nematic activation. The expansion along the length on the bottom face and the opposing contraction on the top face create this distinct behavior. (b) For an initially-flat, twisted nematic LCE strip with an offset angle, \(\theta_{top}=40^{\circ}\), we obtain a combination of bending and twisting upon nematic activation. In both calculations, the left end of the strip was held fixed (Dirichlet boundary conditions) while the right end was free (Neumann boundary conditions). The color corresponds to the lineic strain energy density, normalized to be between \(0\) and \(1\). Figure 6: The 1-d reduced model captures the classical tape-spring instability in pre-curved LCE strips under purely mechanical loading without nematic activation. The color shows the difference in opening angle, \(\beta_{e}\), between the natural and deformed configurations, normalized to be between \(0\) and \(1\), and highlights the flattening of the cross-section that goes with the localized bending.
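The tape-spring calculations above rely on the open-circular cross-section parameterization of Section 3.A.2 and the Vlassov warping of Section 3.A.3. As a concrete numerical check, the short sketch below evaluates the closed-form cross-section curve (3.9)-(3.10) and warping coordinate (3.17), and verifies the inextensibility relation (3.11) and the zero-mean condition (3.16); the width and opening angle used are illustrative assumptions.

```python
import numpy as np

# Numerical check of the open-circular cross-section curve and warping;
# a and beta_e below are illustrative, not values from the calculations.
a, beta_e = 1.0, 0.6
s2 = np.linspace(-a/2, a/2, 4001)

y_loc = a/(2*beta_e) * np.sin(2*beta_e*s2/a)                              # Eq. (3.9)
z_loc = a/(2*beta_e**2) * (np.sin(beta_e) - beta_e*np.cos(2*beta_e*s2/a)) # Eq. (3.10)
omega = a/(4*beta_e**3) * (a*np.sin(beta_e)*np.sin(2*beta_e*s2/a)
                           - 2*beta_e**2*s2)                              # Eq. (3.17)

# Inextensibility of the middle-surface curve, Eq. (3.11): j_2 = 1
j2 = np.sqrt(np.gradient(y_loc, s2)**2 + np.gradient(z_loc, s2)**2)
print(np.allclose(j2, 1.0, atol=1e-5))
# Zero-mean condition, Eq. (3.16), fixing the constant of integration
print(abs(np.trapz(omega, s2)) < 1e-9)
```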
### Model Validation While our 1-d model qualitatively captures a large range of behaviors, here we quantitatively compare its accuracy against the 3-d model, noting that the experimental observations are not quantitative and hence do not enable a comparison between the 1-d model and experiment. Figure 8 shows the relative error of our reduced model as compared to a full 3-d FEM simulation. The error is defined as the absolute value of the percent difference between the two solutions, and it is averaged through the thickness of the strip to provide an easy visualization of the efficacy of the reduced model. One trend that immediately emerges is that the reduced model performs worse as we move away from the centerline. This is an expected outcome; constraining the cross-section curve to remain open-circular and taking Taylor expansions for small opening angles, \(\beta_{e}\), will have the largest effects far from the centerline. Further, this increase is expected to be more prominent in regions where we have rapidly-varying transverse curvature, as can be seen in the middle of the strip for the tape-spring instability (Figure 8(c)) and the coupled bending and twisting of a tape-spring (Figure 8(d)). Also to be expected, in the case where the strip is flat and only bends (Figure 8(a)), there does not appear to be any contribution to the error away from the centerline. Figure 8: We define the error as the absolute value of the percent difference between the displacement in the 3-d model and the 1-d model. We then average this error through the thickness of the strip to provide a clear picture of the efficacy of the reduced model. We show here this error for (a) the bending of a strip via nematic activation (see Figure 5(a)); (b) the bending and twisting of a strip via nematic activation (see Figure 5(b)); (c) the tape-spring instability (see Figure 6); and (d) the coupled twisting and bending of a tape spring (see Figure 7). Figure 7: The non-linearity in the reduced model can capture the coupling between twisting and bending under a torsional load for transversely curved LCE strips without nematic activation. For this calculation, the left end of the strip is held fixed (\(u_{i}(0)=q_{i}(0)=0\) for \(i=1,2,3\); \(q_{0}(0)=1\)) while the right end of the strip is subjected to a rotation (\(q_{0}(L)=q_{2}(L)=q_{3}(L)=0\); \(q_{1}(L)=1\)) while keeping displacements free. The uneven warping throughout the cross-section results in a same-sense bending that is coupled to the twisting. The color corresponds to the lineic strain energy density, normalized to be between 0 and 1. Another trend evident in the error plots is the increase in error toward the free end of the strip. This is most evident in the case of the tape-spring instability (Figure 8(c)), in which the error seems to increase drastically after the localization. This is likely due to the error in the bend angle at the point of the localization, which then propagates because the rotations of the right portion of the tape-spring are consequently incorrect. It could also be due to the fact that we do not use the carefully constructed reduced form for the traction boundary conditions described in Section 4.B; instead, we have used the natural boundary conditions that arise from our reduced kinematic variables in the weak form of the reduced energy.
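For concreteness, a sketch of this validation metric is given below. The array indexing and the use of displacement magnitudes are our assumptions for illustration; the text only specifies a percent difference averaged through the thickness.

```python
import numpy as np

# A sketch of the validation metric: percent difference between the 3-d and
# 1-d displacement fields, averaged through the thickness coordinate s3.
def thickness_averaged_error(u_3d, u_1d):
    # u_3d, u_1d: hypothetical arrays of shape (n1, n2, n3, 3) sampled on a
    # common (s1, s2, s3) grid; n3 indexes the thickness direction.
    mag_3d = np.linalg.norm(u_3d, axis=-1)
    mag_1d = np.linalg.norm(u_1d, axis=-1)
    pct = 100.0 * np.abs((mag_1d - mag_3d) / mag_3d)
    return pct.mean(axis=2)   # error map over (s1, s2), as plotted in Figure 8
```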
### Development of Cross-Section Curvature in a Constrained LCE In this section, we show that the 1-d model can capture the well-known behavior of spontaneous curvature formation under nematic activation for a strip with the twisted-nematic director alignment; see the bottom of Figure 4 for the experimental observation. In this case, the director is oriented as in Figure 2. This initially flat strip was completely fixed at both ends: zero displacement (\(u_{1}=u_{2}=u_{3}=0\)), zero rotation (\(q_{0}=1\), \(q_{1}=q_{2}=q_{3}=0\)), and held with a flat cross-section (\(\beta_{e}=0\)). As shown in Figure 9, the strip develops transverse curvature as it is prevented from bending along its length. In the transverse direction, the strip elongates on the bottom face while shrinking on the top face. ### Tape-Spring Localizations in Activated LCE We next examine intriguing experiments from [13] in which there is an interplay between the tape-spring localization behavior and the nematic activation. In the experiments shown in Figure 10(a), initially-flat LCE strips are bent into a U-shape. The constrained boundary conditions lead to the formation of transverse curvature, as discussed in Section 6.D. However, the transverse curvature interacts with the U-shape to localize much like the classical tape-spring instability. Depending on how far apart the ends are held fixed, these strips develop either one or two localizations upon nematic activation: with the ends held farther apart, we get a single localization, but when the ends are held closer together, we can obtain two localizations. Figure 10 shows the predictions from the 1-d model that capture this behavior. ### Effect of Aspect Ratio: Wide v. Narrow Ribbons As noted previously, the width of the strip is a parameter in our 1-d model, allowing us to differentiate between strips of different width. As described in [46], narrow ribbons display pure twisting, while wider ribbons display a combination of bending and twisting. All of the results we presented in the previous examples used a non-dimensional width of \(a/L=0.1\), and here we examine the effect of using a narrower ribbon with \(a/L=0.05\). Figure 11 compares the effect of aspect ratio by examining an LCE strip that has all parameters and boundary conditions as in Figure 5(b), except that the strip in Figure 11 has aspect ratio \(a/L=0.05\) (narrow) while that in Figure 5(b) has \(a/L=0.1\) (wide). We see that the narrow ribbon in Figure 11 shows pure twisting, while the wide ribbon in Figure 5(b) showed a combination of bending and twisting, broadly in accord with [46]. Figure 9: Nematic activation in a twisted nematic configuration causes spontaneous _transverse_ curvature in a constrained LCE strip. Held fixed at both ends, the curvature cannot be attained along the length of the strip, so the minimum energy configuration attains transverse curvature along the width of the strip instead. This reproduces the experimental observation shown at the bottom of Figure 4. The color shows the transverse curvature (given by the opening angle), normalized to be between 0 and 1. Figure 10: The 1-d model captures the interactions between tape-spring instabilities due to transverse curvature and nematic activation, following the observations from [13] shown in (a). Here, we see results of an initially flat strip with \(\theta_{top}=0\) under different constrained loading conditions, where we have plotted the centerline position.
In (b) and (d), we have the purely mechanical behavior of the bent strip prior to nematic activation, with \(u_{1}(L)<0\) and \(u_{1}(0)=u_{2}(0)=u_{3}(0)=u_{2}(L)=u_{3}(L)=0\). In (c), the curvature in (b) localizes in two different locations upon nematic activation. In (e), the curvature in (d) localizes in only one location upon nematic activation. The only difference between these calculations is the value prescribed to \(u_{1}(L)\): in (b) we use \(u_{1}(L)=-0.775\), and in (d) we use \(u_{1}(L)=-0.7\). ## 7 Discussion We have presented a 1-d model for transversely-curved LCE strips. Our work was motivated by the experiments of [13] which show that the transverse curvature leads to a complex set of configurations upon nematic activation. We show that the model captures well a large range of observed behavior, including localization instabilities that are related to the classical tape-spring instability. The 1-d nature of the final dimensionally-reduced model enables rapid exploration of the design space, and the match to experimental results provides confidence in the model being suitable for this purpose. While the success of the 1-d model is promising and useful for realistically modeling experiments, the following directions for future effort appear worthwhile: 1. generalization to non-circular cross-sections: A simplifying assumption in this work was that the deformed strips have open circular cross-sections. It is straightforward to relax this assumption, at the expense of (significantly) more complex algebraic expressions. However, this can potentially significantly improve the accuracy in capturing the tape-spring localizations. 2. consistent dimensionally-reduced boundary conditions: As discussed in Section 4.B, the physical meaning of applying Neumann boundary conditions is not transparent, while the consistent dimensionally-reduced boundary conditions can be algebraically formidable and potentially nonlinearly couple different kinematic degrees of freedom. Applying these consistent boundary conditions as constraints may provide a feasible path forward, and could potentially be important to accurately capturing the tape-spring localization. 3. loops with overcurvature and self-contact: The experiments in [13] and analysis in [47, 26, 48] show that overcurvature of closed loops leads to complex and interesting configurations, including self-contact. This offers a rich design space that would be valuable to explore with an augmentation of our model that accounts for self-contact. 4. rigorous dimension reduction based on \(\Gamma\)-convergence: The ansatz-based approach in this work could be made more rigorous using the mathematical framework of \(\Gamma\)-convergence, following several prior results, including the seminal work of Friesecke, James, and Müller [49], as well as others in the context of LCE [20]. The ansatz proposed in this work provides a natural starting point to construct the recovery sequence for such a \(\Gamma\)-convergence analysis. 5. methods of dimension-reduction drawing from data science: Recent methods in data science aimed at reducing the dimensionality of datasets, such as diffusion maps [50], can potentially provide insights into the appropriate variables and ansatz for dimension reduction by treating the numerical solution of the 3-d problem as high-dimensional "data". 6. quantitative comparison to experiment:
the comparison to experiment in this work is largely qualitative; this is because quantitative experimental measurements of deformation and shape are very challenging and are a focus of current activity in several research groups. When richer experimental measurements become available in the future, it will be essential to test our model against these observations. ## Software Availability The symbolic algebra calculations (in Mathematica) and the numerical implementation (in Matlab) are available at this link: github.com/klogrande/LCE Figure 11: As opposed to the _wide_ ribbon shown in Figure 5(b) that exhibited a combination of bending and twisting, this _narrow_ ribbon exhibits pure twisting. The color corresponds to the lineic strain energy density, normalized to be between \(0\) and \(1\). ### Conflicts of interest There are no conflicts of interest to declare. ### Acknowledgments We thank Mahnoush Babaei and Noel Walkington for useful discussions; NSF XSEDE for computing resources provided by the Pittsburgh Supercomputing Center; and the DOD SMART Fellowship program, NSF (MOMS 1635407, DMREF 1921857, DMS 2108784), ARO (W911NF-17-1-0084), BSF (2018183), and AFOSR (MURI FA9550-18-1-0095) for financial support. Figures 4 and 10(a) are reprinted from [13] with permission from Elsevier.
2308.07208
Data-driven analysis for understanding ultrahigh energy cosmic ray source spectra
One of the most challenging open questions regarding the origin of ultrahigh energy cosmic rays (UHECRs) deals with the shape of the source emission spectra. A commonly-used simplifying assumption is that the source spectra of the highest energy cosmic rays trace a Peters cycle, in which the maximum cosmic-ray energy scales linearly with $Z$, i.e., with the charge of the UHECR in units of the proton charge. However, this would only be a natural assumption for models in which UHECRs escape the acceleration region without suffering significant energy losses. In most cases, however, UHECRs interact in the acceleration region and/or in the source environment changing the shape of the source emission spectra. Energy losses are typically parameterized in terms of $Z$ and the UHECR baryon number $A$, and therefore one would expect the source emission spectra to be a function of both $Z$ and $A$. Taking a pragmatic approach, we investigate whether existing data favor any region of the $(Z,A)$ parameter space. Using data from the Pierre Auger Observatory, we carry out a maximum likelihood analysis of the observed spectrum and nuclear composition to shape the source emission spectra for the various particle species. We also study the impact of possible systematic uncertainties driven by hadronic models describing interactions in the atmosphere.
Marco Stein Muzio, Luis A. Anchordoqui, Michael Unger
2023-08-14T15:25:00Z
http://arxiv.org/abs/2308.07208v1
# Data-driven analysis for understanding ultrahigh energy cosmic ray source spectra ###### Abstract: One of the most challenging open questions regarding the origin of ultrahigh energy cosmic rays (UHECRs) deals with the shape of the source emission spectra. A commonly-used simplifying assumption is that the source spectra of the highest energy cosmic rays trace a Peters cycle, in which the maximum cosmic-ray energy scales linearly with \(Z\), i.e., with the charge of the UHECR in units of the proton charge. However, this would only be a natural assumption for models in which UHECRs escape the acceleration region without suffering significant energy losses. In most cases, however, UHECRs interact in the acceleration region and/or in the source environment changing the shape of the source emission spectra. Energy losses are typically parameterized in terms of \(Z\) and the UHECR baryon number \(A\), and therefore one would expect the source emission spectra to be a function of both \(Z\) and \(A\). Taking a pragmatic approach, we investigate whether existing data favor any region of the \((Z,A)\) parameter space. Using data from the Pierre Auger Observatory, we carry out a maximum likelihood analysis of the observed spectrum and nuclear composition to shape the source emission spectra for the various particle species. We also study the impact of possible systematic uncertainties driven by hadronic models describing interactions in the atmosphere. ## 1 Introduction Among the many open questions in the study of ultrahigh energy cosmic rays (UHECRs, \(E\gtrsim 10^{18}\) eV = 1 EeV) is the dependence of the maximum energy of nuclei produced by sources on their mass \(A\) and charge \(Z\). To simplify modeling of the UHECR spectrum and composition, one often assumes nuclei are accelerated to a common maximum rigidity \(R_{\rm max}\), i.e. nuclei follow a _Peters cycle_ [1], so that \(E_{\rm max}^{A}\propto Z\). However, other possible scalings of the maximum energy with \((A,Z)\) exist depending on the details of the acceleration mechanism and the dominant energy loss processes to which UHECRs are subject. For example, synchrotron and curvature radiation loss rates scale as \(Z^{4}/A^{2}\) and \(Z^{2}\), respectively [2, 3, 4]. When UHECRs are diffusively accelerated, significant synchrotron losses lead to a maximum energy which scales as \(A^{4}/Z^{4}\) [3]. On the other hand, when UHECRs undergo a one-shot acceleration process, synchrotron losses lead to an \(A^{2}/Z^{3/2}\) scaling, whereas curvature radiation losses produce an \(A/Z^{1/4}\) scaling [2]. Photodisintegration processes, which have been explored extensively in [5, 6, 7], preserve the energy-per-nucleon of the primary CR, so that \(E_{\rm max}^{A}\propto A\). Finally, beyond the Standard Model scenarios may result in a universal maximum energy scale [8, 9, 10], which would predict that the maximum energy of nuclei is independent of their mass or charge. We aim to explore the degree to which the UHECR data favors or disfavors these alternative scenarios to the traditional Peters cycle assumption. We also discuss the observational signatures of alternative scenarios which might be used to distinguish them from a Peters cycle. ## 2 Model To study the degree to which current UHECR data can distinguish different scenarios for the dependence of the maximum energy on \((A,Z)\) we adopt a simple two-parameter model: \[E_{\rm max}^{A}=E_{0}Z^{\alpha}A^{\beta}\,, \tag{1}\] where \(E_{0}\) corresponds to the maximum proton energy.
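As a quick illustration of how the scalings reviewed in the Introduction map onto (1), the following Python sketch evaluates the maximum energy of each mass group for several \((\alpha,\beta)\) choices. The value of \(E_{0}\) and the representative \((A,Z)\) assigned to each group are illustrative assumptions, not fit results.

```python
import numpy as np

# Maximum energies from Eq. (1), E_max^A = E0 * Z**alpha * A**beta; E0 and the
# representative (A, Z) values for each mass group are assumptions.
E0 = 10**18.6  # eV, assumed maximum proton energy
groups = {"p": (1, 1), "He": (4, 2), "CNO": (14, 7), "Si": (28, 14), "Fe": (56, 26)}
scenarios = {                                  # (alpha, beta)
    "Peters cycle":                 (1.0, 0.0),
    "photodisintegration-limited":  (0.0, 1.0),
    "synchrotron, diffusive":       (-4.0, 4.0),   # A^4 / Z^4
    "synchrotron, one-shot":        (-1.5, 2.0),   # A^2 / Z^{3/2}
    "curvature, one-shot":          (-0.25, 1.0),  # A / Z^{1/4}
    "universal energy scale":       (0.0, 0.0),
}
for name, (alpha, beta) in scenarios.items():
    lgE = {g: np.log10(E0 * Z**alpha * A**beta) for g, (A, Z) in groups.items()}
    print(name, {g: round(v, 2) for g, v in lgE.items()})
```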
Within this model a Peters cycle would be given by \((\alpha,\beta)=(1,0)\). We use (1) to set the maximum energy scale of nuclei escaping a standard UHECR source, which we assume to follow a star-formation rate (SFR) evolution [11]. In particular, we adopt a simplified model with five mass groups escaping the source, representing \(p\), He, CNO, Si, and Fe, each following an exponentially cutoff single power-law spectrum: \(J_{A}\propto E^{\gamma}\exp(-E/E_{\rm max}^{A})\). The relative abundances of these mass groups are free parameters in the fit. Beyond these abundances, only two free parameters remain: the maximum proton energy, \(E_{0}\), and the spectral index of the escaping spectra, \(\gamma\), which we take to be common among all the mass groups. For a given set of parameters, UHECRs are propagated to Earth accounting for their interactions with the cosmic microwave background (CMB) and extragalactic background light (EBL) using propagation matrices built from CRPropa3 [12]. The predicted spectrum and composition at Earth are fit to data from the Pierre Auger Observatory (Auger) [13, 14]. In particular, we consider two hadronic interaction models, Sibyll2.3d [15] and EPOS-LHC [16], to interpret the depth of shower maximum, \(X_{\rm max}\), data in terms of \(\ln A\). Our model parameters are then tuned to minimize the total \(\chi^{2}\): \[\chi^{2}=\sum_{i}\frac{(J_{i}-J_{m,i})^{2}}{\sigma_{J,i}^{2}}+\sum_{i}\frac{(\mu_{i}-\mu_{m,i})^{2}}{\sigma_{\mu,i}^{2}}+\sum_{i}\frac{(V_{i}-V_{m,i})^{2}}{\sigma_{V,i}^{2}}\,, \tag{2}\] where \(J\) is the UHECR flux, \(\mu\) and \(V\) are the mean and variance of \(\ln A\), respectively, and the subscript \(m\) denotes the model prediction. We perform this fit to the Auger data above \(10^{18.8}\) eV, which given our free parameters leaves \(N_{\rm dof}=29\). In a forthcoming publication we also consider the sensitivity of our results to systematic shifts of the data, but here we focus on our benchmark set of data shifts, which provides the best fit to the Auger data overall: shifting the energy scale by \({\rm dlg}E=+0.1\) and shifting \(\langle X_{\rm max}\rangle\) by \(-1\sigma_{X}\). ## 3 Results To assess the degree to which an alternative scenario, \((\alpha,\beta)\), is favored or disfavored with respect to a Peters cycle, \((1,0)\), we use \[D\equiv{\rm sgn}\left(\chi_{\alpha,\beta}^{2}-\chi_{\rm Peters}^{2}\right)S^{-1}\sqrt{\left|\chi_{\alpha,\beta}^{2}-\chi_{\rm Peters}^{2}\right|} \tag{3}\] as a metric, where \(S=\sqrt{\min(\chi_{\alpha,\beta}^{2},\chi_{\rm Peters}^{2})/N_{\rm dof}}\). With this definition, \(D>0\) indicates a worse fit than a Peters cycle and \(D<0\) indicates the fit has improved compared to a Peters cycle. The statistical significance of the change in fit quality relative to a Peters cycle can be calculated from \(D\) using Wilks' theorem. Figure 1 shows \(D\) in the \(\alpha-\beta\) plane and highlights the Peters cycle (PC) and a number of alternative scenarios: a photodisintegration-limited spectrum (PD), a synchrotron-limited diffusively accelerated spectrum (SDA), a synchrotron-limited one-shot accelerated spectrum (S1A), a curvature radiation-limited one-shot accelerated spectrum (C1A), and a universal energy loss spectrum (UEL). Different values of \((\alpha,\beta)\) change the relative energies of the different mass groups escaping the source, which in turn changes the quality of fit.
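A minimal sketch of the metric (3) follows; the \(\chi^{2}\) values used in the example call are placeholders rather than fit results from this analysis.

```python
import numpy as np

# The comparison metric D of Eq. (3), with N_dof = 29 as in the text;
# the chi^2 arguments in the example below are placeholders.
def D_metric(chi2_ab, chi2_peters, n_dof=29):
    S = np.sqrt(min(chi2_ab, chi2_peters) / n_dof)
    return np.sign(chi2_ab - chi2_peters) * np.sqrt(abs(chi2_ab - chi2_peters)) / S

print(D_metric(40.0, 55.0))  # < 0: the (alpha, beta) scenario fits better than a Peters cycle
```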
We expect that directions in the \(\alpha-\beta\) plane along which the ratio of maximum energies of different nuclei is constant will produce similar quality fits to the UHECR data. In practice, this is realized due to the degeneracy between \(A\) and \(Z\): for stable nuclei \(A\simeq 2Z\), while for protons \(A=Z=1\). Combining this fact with (1) allows one to write the ratio of maximum energies between nuclei as \[\frac{E_{\rm max}^{A}}{E_{\rm max}^{A^{\prime}}}=\left(\frac{A}{A^{\prime}}\right)^{\alpha+\beta}\,, \tag{4}\] and between a nucleus and a proton as \[\frac{E_{\rm max}^{A}}{E_{\rm max}^{p}}=2^{-\alpha}A^{\alpha+\beta}. \tag{5}\] Therefore, we expect directions where \(\alpha+\beta\) is constant to produce similar quality fits. However, if a substantial proton component exists in the escaping spectrum, then directions along which \((1-\log_{A}2)\alpha+\beta\) is constant will produce similar quality fits. This expectation can be seen clearly in Fig. 1. Figure 1: Change in quality of fit to the UHECR spectrum and composition relative to a Peters cycle. We consider (a) Sibyll2.3d and (b) EPOS-LHC. The family of scenarios with \(\alpha+\beta=0\) is indicated by the white dashed line. The Peters cycle and a number of alternative scenarios are highlighted (green dots). Equations (4) and (5) have two important consequences. First, equation (4) implies that if no significant proton component exists in the escaping spectrum, only the value of \(\alpha+\beta\) impacts the fit. Therefore, in this circumstance different acceleration scenarios fall into families which share a common value of \(\alpha+\beta\). Second, in the circumstance that a significant proton component is present in the escaping spectrum, equation (5) indicates that this proton component can peak either below or above the peak energy of nuclei, depending on the value of \(\alpha\). In particular, this means that while for \(\alpha+\beta>0\) nuclei will have peak energies which are ordered according to their mass (i.e. He peaking at the lowest energies and Fe at the highest), the proton component need not obey this ordering. Large values of \(\alpha\) will, therefore, place the peak of the proton component towards the iron-end of the UHECR spectrum (as is illustrated in Fig. 2). Regardless of the hadronic interaction model considered, Fig. 1 shows that alternative scenarios to the Peters cycle can produce a better fit to the data. This result holds for all alternative data shifts explored as well. In some cases the improvement in fit can result in values of \(D<-5\). The region producing the largest improvement over a Peters cycle in the case of EPOS-LHC is one where \((1-\log_{A}2)\alpha+\beta\) is constant, rather than \(\alpha+\beta\) (as evidenced by this region not being parallel to the white dashed line representing \(\alpha+\beta=0\)). This implies that there is a significant proton component escaping the source in the scenarios producing the best fits to the UHECR spectrum and composition. In particular, we find this region roughly follows \(\beta\simeq 0.4-0.8\alpha\), indicating that it is the relation of the maximum proton energy to the maximum energy of \(A\simeq 32\) which is driving the fit. Figure 2: Best-fit escaping spectra for the best-fit (\(\alpha\simeq 6.75\), \(\beta\simeq-5\)) scenario (solid lines) compared to those for a Peters cycle (dashed lines) under EPOS-LHC. Spectra are broken down by mass group (colored lines). It is clear from Fig.
It is clear from Fig. 1 that the best fit lies outside of the plotted range (which was driven by the range of \(\alpha\) and \(\beta\) values among the reference set of alternative scenarios). To explore how far the best fit is outside this range we performed a 1-D scan along the line following the best-fit region for EPOS-LHC: \(\beta\simeq 0.4-0.8\alpha\). The results of this scan are shown in Fig. 3. The best fit occurs at \(\alpha\simeq 6.75\) and \(\beta\simeq-5\) for both hadronic interaction models considered. Currently, we are unaware of any processes, observed or theoretical, which could produce such a scaling for the UHECR maximum energy. For the reader's reference, in Fig. 2 we plot the escaping spectra for this best-fit (BF) model compared to those for a Peters cycle, assuming EPOS-LHC.

Figure 3: Reduced \(\chi^{2}\) along the line \(\beta=0.4-0.8\alpha\) for Sibyll2.3d and EPOS-LHC. In both cases the minimum appears around \(\alpha\simeq 6.5\) and \(\beta\simeq-5\).

While it is difficult to distinguish between the possibilities explored above using current UHECR data, there are some distinct observational signatures of alternative scenarios to a Peters cycle which could be used in the future. The most straightforward way to determine which scenario most observed UHECRs fall into would be to measure the peak energies of each mass group. This would directly probe \((\alpha,\beta)\), but is difficult to measure in practice. Here we consider two alternative signatures of alternative scenarios. First let us consider the case that there is a substantial proton flux escaping UHECR sources. In this case, if the value of \(\alpha\neq 1\) then this proton component will peak at an energy a factor of \(2^{\alpha-1}\) different from the Peters cycle expectation. In particular, for large values of \(\alpha\) this component will peak at energies higher than He. Therefore, measurement of the peak energy of the proton component of the UHECR spectrum will constrain the value of \(\alpha\). Moreover, if the value of \(\alpha\) is large enough, protons escaping the source will exceed the GZK threshold [17, 18] and will therefore produce a substantial flux of EeV cosmogenic neutrinos. Measurement of such neutrinos will also provide a probe of \(\alpha\). In the case where no substantial flux of protons escapes the source only families of scenarios (those with a common value of \(\alpha+\beta\)) can be distinguished from one another. In particular, the Peters cycle family of scenarios (\(\alpha+\beta=1\)) can be distinguished from alternative families by measuring the proton component of the UHECR spectrum. These protons, by assumption, are not produced by the source directly and instead are the product of heavier nuclei photodisintegrating off of the CMB and EBL. These interactions preserve the energy-per-nucleon of the primary CR so that protons produced through photodisintegration of a CR with mass \(A\) will have a peak energy of \[E^{p}_{A\,\mathrm{PD},\max}=2^{-\alpha}E_{0}A^{\alpha+\beta-1}. \tag{6}\] For the Peters cycle family of scenarios, all photodisintegrated protons will have the same peak energy irrespective of their parent CR's mass. However, for alternative scenarios this energy will depend on the mass of the primary CR, so that the ratio of their maximum energies is given by \[\frac{E^{p}_{A\,\mathrm{PD},\max}}{E^{p}_{A'\,\mathrm{PD},\max}}=\left(\frac{A}{A'}\right)^{\alpha+\beta-1}. \tag{7}\]
Equation (7) implies that the spectrum of protons may have multiple peaks and will be much more extended in energy than would be expected from a Peters cycle. Therefore, measurement of an extended proton component will be a smoking-gun signature of an alternative scenario to a Peters cycle. An important caveat to keep in mind is that fundamentally all of the signatures discussed rely on the measurement of an unexpectedly energetic proton component or their secondary neutrinos. This can be mimicked by a second population of UHECRs which produces a large flux of protons at higher energies than the population producing the bulk of observed UHECRs. The implications of this possibility have been explored in a number of studies including [19, 20, 21]. ## 4 Summary While a Peters cycle has been a convenient simplifying assumption in the study of UHECRs, there are a number of alternative scenarios motivated by both well-known and beyond-the-Standard-Model processes. The UHECR data today cannot firmly establish which of these scenarios is realized by Nature, but some alternative scenarios are able to describe current UHECR spectrum and composition data better than is possible using the classic Peters cycle assumption. In particular, exotic scenarios can improve fits to UHECR data at a high level of significance. There are a number of observational signatures for alternative scenarios, including a proton component extending across a large energy range and the production of cosmogenic neutrinos, which can be used to constrain these possibilities. Until then, we must keep in mind the possibility that Nature may provide a UHECR spectrum that is richer than a simple Peters cycle. ## Acknowledgments The work of L.A.A. is supported by the U.S. National Science Foundation (NSF Grant PHY-2112527). The research of M.S.M. is supported by the NSF MPS-Ascend Postdoctoral Award #2138121.
2304.11967
A Generalized Grand-Reaction Method for Modelling the Exchange of Weak (Polyprotic) Acids between a Solution and a Weak Polyelectrolyte Phase
We introduce a Monte-Carlo method that allows for the simulation of a polymeric phase containing a weak polyelectrolyte, which is coupled to a reservoir at a fixed pH, salt concentration and total concentration of a weak polyprotic acid. The method generalizes the established Grand-Reaction Method by Landsgesell et al. [Macromolecules 53, 3007-3020 (2020)] and thus allows for the simulation of polyelectrolyte systems coupled to reservoirs with a more complex chemical composition. In order to set the required input parameters that correspond to a desired reservoir composition, we propose a generalization of the recently published chemical potential tuning algorithm of Miles et al. [Phys. Rev. E 105, 045311 (2022)]. To test the proposed tuning procedure, we perform extensive numerical tests for both ideal and interacting systems. Finally, as a showcase, we apply the method to a simple test system which consists of a weak polybase solution that is coupled to a reservoir containing a small diprotic acid. The complex interplay of the ionization of the various species, the electrostatic interactions and the partitioning of small ions leads to a non-monotonic, stepwise swelling behaviour of the weak polybase chains.
David Beyer, Christian Holm
2023-04-24T10:02:05Z
http://arxiv.org/abs/2304.11967v2
A Generalized Grand-Reaction Method for Modelling the Exchange of Weak (Polyprotic) Acids between a Solution and a Weak Polyelectrolyte Phase ###### Abstract We introduce a Monte-Carlo method that allows for the simulation of a polymeric phase containing a weak polyelectrolyte, which is coupled to a reservoir at a fixed pH, salt concentration and total concentration of a weak polyprotic acid. The method generalizes the established Grand-Reaction Method by Landsgesell et al. [Macromolecules **53**, 3007-3020 (2020)] and thus allows for the simulation of polyelectrolyte systems coupled to reservoirs with a more complex chemical composition. In order to set the required input parameters that correspond to a desired reservoir composition, we propose a generalization of the recently published chemical potential tuning algorithm of Miles et al. [Phys. Rev. E **105**, 045311 (2022)]. To test the proposed tuning procedure, we perform extensive numerical tests for both ideal and interacting systems. Finally, as a showcase, we apply the method to a simple test system which consists of a weak polybase solution that is coupled to a reservoir containing a small diprotic acid. The complex interplay of the ionization of the various species, the electrostatic interactions and the partitioning of small ions leads to a non-monotonic, stepwise swelling behaviour of the weak polybase chains. ## I Introduction Electrically charged polymers, commonly called "polyelectrolytes", are a versatile class of materials with many applications and interesting properties. Simple polyelectrolyte chains can for instance be used as thickening agents in hygiene products such as shampoo or as flocculants in water treatment. More complex polyelectrolyte architectures, such as polyelectrolyte networks ("hydrogels") allow for even more sophisticated applications. Hydrogels are for instance used in areas as diverse as medicine, [1] agriculture, [2] desalination [3; 4; 5; 6; 7] and hygiene products. [8] Other polyelectrolyte architectures include polyelectrolyte brushes, [9; 10; 11] which can for instance be used for protein purification and for the stabilization of colloidal solutions, and polyelectrolyte coacervates. [12; 13; 14; 15; 16; 17] In addition to many applications, polyelectrolytes are also of fundamental interest to molecular biology and biochemistry, since many biological macromolecules, such as proteins, DNA and RNA are in fact polyelectrolytes. [18] The presence of long-range electrostatic interactions makes the modelling of polyelectrolytes challenging from the point of view of theoretical and computational soft matter physics. For instance, special techniques are needed to deal with these interactions in an efficient way in computer simulations. In many cases, for example in many proteins, [19] the modelling can be even further complicated by the presence of some kind of association-dissociation reaction which leads to a complicated coupling between the chemical equilibrium, electrostatic interactions and the conformational degrees of freedom. [20] A paradigmatic example for such "weak polyelectrolytes" is a weak polyacid, i.e. a polymer chain consisting of weak acid monomers HA, which can become charged by releasing a proton into solution: \[\text{HA}\rightleftharpoons\text{A}^{-}+\text{H}^{+}. \tag{1}\]
Ideal weak acid particles, for instance realized experimentally in dilute solutions of individual weak acid molecules (i.e. no chains), are well-described by the Henderson-Hasselbalch equation, \[\alpha=\frac{1}{1+10^{\text{p}K_{\text{A}}-\text{pH}}}, \tag{2}\] which relates the average degree of ionization \(\alpha\) with the pH of the solution and the \(\text{p}K_{\text{A}}\)-value of the considered molecules. In contrast to this simple case, the ionization behaviour of weak polyacids is strongly altered by the electrostatic interactions of ionized monomers, especially by the electrostatic repulsion of neighbouring monomers. These non-idealities lead to strongly shifted and deformed ionization curves as compared to the ideal theory. Because the ionization behaviour and the conformational degrees of freedom of weak polyelectrolytes are coupled in a complicated way, this effect, which has been termed the "polyelectrolyte effect" in the past, [21] is difficult to describe analytically. Consequently, a theoretical interest in weak polyelectrolytes has led to the development of several numerical methods [22; 23; 24; 25; 26; 27] over the decades in order to treat this problem. Some time ago, Landsgesell et al. [21] introduced the Grand-Reaction Monte-Carlo (G-RxMC) method. This algorithm allows for the simulation of two-phase systems consisting of a solution containing small ions and a polymeric phase containing a weak polyacid and/or polybase (in addition to the small ions). The method uses Monte-Carlo moves to model the exchange of small ions between the phases as well as the acid-base reactions in the polymeric phase. In contrast to earlier methods like the Reaction-Ensemble [23; 24] or constant-pH method, [22] this new method is applicable over the whole range of pH-values. Furthermore, it also correctly models the Donnan partitioning of ions between the polymeric phase and the solution, which, in addition to the aforementioned "polyelectrolyte effect", also influences the ionization behaviour. By accounting for both the charge regulation as well as the Donnan partitioning, this new method has, for the first time, made particle-based simulations of weak polyelectrolyte hydrogels, which are a natural realization of such a two-phase system, possible.[28] One of the current limitations of the G-RxMC method in its original form is that the composition of the considered reservoir is fairly simple, as it contains only monovalent ions. For instance, it does not allow for the exchange of weak ions (e.g. small weak acid molecules or even pH-responsive chain molecules such as polypeptides) between the two phases. In this publication, we show how the G-RxMC method can be extended to also model the exchange of small weak acid molecules. In particular, we also demonstrate how the chemical potentials can be dynamically tuned to achieve the desired reservoir composition. Since the reservoir composition is given in terms of concentrations, but the method takes as its input parameters chemical potentials, this problem is in fact non-trivial and could not be adequately addressed before. ## II Setup and Method In this paper, we will always assume a coarse-grained representation which explicitly models the polymer chains and ions, but treats the solvent only implicitly. We note that an extension of the presented method to atomistic simulation models would be non-trivial, since the pKa-values of the various groups cannot be an input to an atomistic simulation but should in fact be an output.
Furthermore, it is important to note that in our simulations, each ion type has a distinct label, even if they are physically (i.e. with regards to their interactions) indistinguishable. As shown recently by Curk et al.[27] in the context of the standard G-RxMC method, one could in principle also employ unified ion types, reducing the number of distinct chemical reactions. However, because the concept of unified ion types is incompatible with the \(\mu\)-tuning scheme proposed in the next section, we refrain from using it in this publication. Still, using the equilibrium constants obtained from the \(\mu\)-tuning method, one could in principle employ unified ions in the actual simulation of the system. To specify the setup, as shown schematically in Figure 1, we consider the total system to consist of two distinct phases: 1. A polymeric phase (e.g. a hydrogel or a coacervate), called in the following the "system", containing a weak polyacid characterized by pK\({}_{\text{A}}\) with monomers HA (neutral) and A\({}^{-}\) (ionized) as well as small ions (H\({}^{+}\), OH\({}^{-}\), Na\({}^{+}\), Cl\({}^{-}\)). Furthermore, there is a small weak \(n\)-protic acid H\({}_{n}\)a (neutral), characterized by \(n\) pK\({}_{\text{a}}\)-values pK\({}_{\text{a}}^{1}\), pK\({}_{\text{a}}^{2}\),..., pK\({}_{\text{a}}^{n}\), which can become ionized (H\({}_{n-1}\)a\({}^{-}\), H\({}_{n-2}\)a\({}^{2-}\),..., a\({}^{n-}\)). Both H\({}_{n}\)a and all of its ionized states can also be exchanged with the reservoir. 2. An aqueous solution ("reservoir") containing small ions (H\({}^{+}\), OH\({}^{-}\), Na\({}^{+}\), Cl\({}^{-}\)) at fixed values of pH and \(c_{\text{NaCl}}^{\text{res}}\) and the small weak \(n\)-protic acid H\({}_{n}\)a at a fixed total concentration \(c_{\text{H}_{n}\text{a}}^{\text{res},0}=\sum_{i=0}^{n}c_{\text{H}_{n-i}\text{a}^{i-}}^{\text{res}}\), i.e. the total dissolved amount of acid divided by the system volume. Note that while the total amount of acid is a free parameter that is needed to characterize the composition of the reservoir, the ratio of the different ionization states is fully determined by the chemical equilibrium. The phases are coupled grand-canonically, i.e. they have the same inverse temperature \(\beta\) and electrochemical potentials \(\hat{\mu}_{i}=\mu_{i}+z_{i}\psi^{\text{Don}}\), where \(\psi^{\text{Don}}\) is the Donnan potential and \(i=\text{H}^{+}\), OH\({}^{-}\), Na\({}^{+}\), Cl\({}^{-}\), H\({}_{n}\)a, H\({}_{n-1}\)a\({}^{-}\), H\({}_{n-2}\)a\({}^{2-}\),..., a\({}^{n-}\). The phases are in an electrochemical rather than a simple chemical equilibrium due to the macroscopic electroneutrality constraint imposed on both phases.[21] Although the considered systems are finite, the electroneutrality constraint still holds since both simulation boxes are supposed to represent typical subsystems of the macroscopic phases. This implies that the system and the reservoir can only exchange pairs of small ions (when only monovalent ions are involved) or more generally groups of \(z+1\) small ions (when a multivalent ion of valency \(z\) is involved), rather than individual ion particles, which would violate the electroneutrality. In this approach, the Donnan potential is a quantity that emerges automatically and does not need to be put in "by hand". As shown in our earlier publications,[28; 21; 29] the Donnan potential can be determined a-posteriori from the simulation. Formally, we represent the insertion and deletion moves by a set of virtual chemical reactions.
The following four reactions with the indicated equilibrium constants \(K_{i}\) are always present (compare Landsgesell et al.[21]): \[\emptyset\rightleftharpoons\text{H}^{+}+\text{OH}^{-},\qquad K_{\text{H}^{+},\text{OH}^{-}} \tag{3}\] \[\emptyset\rightleftharpoons\text{Na}^{+}+\text{Cl}^{-},\qquad K_{\text{Na}^{+},\text{Cl}^{-}} \tag{4}\] \[\emptyset\rightleftharpoons\text{Na}^{+}+\text{OH}^{-},\qquad K_{\text{Na}^{+},\text{OH}^{-}} \tag{5}\] \[\emptyset\rightleftharpoons\text{H}^{+}+\text{Cl}^{-},\qquad K_{\text{H}^{+},\text{Cl}^{-}}. \tag{6}\] Although the reactions are described by a total of four equilibrium constants, only two of them are independent parameters since \(K_{\text{H}^{+},\text{OH}^{-}}=10^{-14}\) is fixed as the ionic product of water and \[K_{\text{Na}^{+},\text{OH}^{-}}=\frac{K_{\text{Na}^{+},\text{Cl}^{-}}K_{\text{H}^{+},\text{OH}^{-}}}{K_{\text{H}^{+},\text{Cl}^{-}}}. \tag{7}\] The insertion and deletion moves involving H\({}_{n}\)a and its ionized forms can in the most general form be written as \[\emptyset\rightleftharpoons(z-l)\text{H}^{+}+l\,\text{Na}^{+}+\text{H}_{n-z}\text{a}^{z-} \tag{8}\] with the equilibrium constant \[K_{(z-l)\text{H}^{+},\text{Na}^{+},\text{H}_{n-z}\text{a}^{z-}}, \tag{9}\] where \(l=0,...,z\) counts the number of Na\({}^{+}\) ions involved in the reaction. Consequently, there are \(z+1\) insertion and deletion moves involving H\({}_{n-z}\)a\({}^{z-}\) and a total of \(4+\sum_{z=0}^{n}(z+1)=4+(n+1)(n+2)/2\) insertion and deletion moves (including Equation 3, Equation 4, Equation 5 and Equation 6). The equilibrium constant \(K_{\text{H}_{n}\text{a}}\) for the insertion of the neutral species is simply determined by the chemical potential \(\mu_{\text{H}_{n}\text{a}}\). For the insertion reactions involving charged species, the reaction constants can in general be written as \[K_{(z-l)\text{H}^{+},\text{Na}^{+},\text{H}_{n-z}\text{a}^{z-}}=\left(\prod_{i=1}^{z}K_{\text{a}}^{i}\right)\left(\frac{K_{\text{Na}^{+},\text{Cl}^{-}}}{K_{\text{H}^{+},\text{Cl}^{-}}}\right)^{l}K_{\text{H}_{n}\text{a}}, \tag{10}\] where \(K_{\rm a}^{i}\) is the equilibrium constant associated with the dissociation reaction of \({\rm H}_{n-(i-1)}{\rm a}^{(i-1)-}\). This means that the equilibrium constants \(K_{(z-l){\rm H}^{+},{\rm Na}^{+},{\rm H}_{n-z}{\rm a}^{z-}}\) are completely determined by the other equilibrium constants. In addition to the insertion reactions, each of the \(n\) not fully ionized species \({\rm H}_{n-z}{\rm a}^{z-}\) (\(z=0,...,n-1\)) can dissociate in a reaction of the form \[{\rm H}_{n-z}{\rm a}^{z-}\rightleftharpoons{\rm H}^{+}+{\rm H}_{n-(z+1)}{\rm a}^{(z+1)-}, \tag{11}\] where the equilibrium constant \[K_{\rm a}^{z+1}=10^{-{\rm p}K_{\rm a}^{z+1}} \tag{12}\] is determined by the specific chemistry of the small acid particles and is thus an input parameter in our coarse-grained simulation setup.
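Since Equation (10) fixes every insertion constant in terms of \(K_{\text{Na}^{+},\text{Cl}^{-}}\), \(K_{\text{H}^{+},\text{Cl}^{-}}\), \(K_{\text{H}_{n}\text{a}}\) and the p\(K_{\text{a}}\)-values, the bookkeeping reduces to a few products. A minimal sketch of this bookkeeping (our own helper, not part of any published code):

```python
import numpy as np

def insertion_constants(pKa, K_HnA, K_NaCl, K_HCl):
    """K for the moves 0 <-> (z-l) H+ + l Na+ + H_{n-z}a^{z-}, Eq. (10).

    pKa: list of the n pKa-values of the small acid (Eq. 12).
    Returns a dict keyed by (z, l), with z = 0..n and l = 0..z.
    """
    Ka = 10.0 ** (-np.asarray(pKa, dtype=float))  # dissociation constants, Eq. (12)
    K = {}
    for z in range(len(Ka) + 1):
        prod_Ka = np.prod(Ka[:z])          # prod_{i=1}^{z} K_a^i (empty product = 1)
        for l in range(z + 1):             # number of Na+ ions in the move
            K[(z, l)] = prod_Ka * (K_NaCl / K_HCl) ** l * K_HnA
    return K

# Hypothetical diprotic example (n = 2) with illustrative constants:
print(insertion_constants(pKa=[4.0, 7.0], K_HnA=1e-2, K_NaCl=1e-2, K_HCl=1e-2)[(2, 1)])
```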
To avoid sampling bottlenecks, we also include the following reformulations of the dissociation reaction:[21] \[{\rm H}_{n-z}{\rm a}^{z-}\rightleftharpoons{\rm Na}^{+}+{\rm H}_{n-(z+1)}{\rm a}^{(z+1)-} \tag{13}\] \[{\rm H}_{n-z}{\rm a}^{z-}+{\rm OH}^{-}\rightleftharpoons{\rm H}_{n-(z+1)}{\rm a}^{(z+1)-} \tag{14}\] \[{\rm H}_{n-z}{\rm a}^{z-}+{\rm Cl}^{-}\rightleftharpoons{\rm H}_{n-(z+1)}{\rm a}^{(z+1)-} \tag{15}\] with equilibrium constants \[K_{\rm a}^{\prime\,z+1}=K_{\rm a}^{z+1}\frac{K_{\rm Na^{+},Cl^{-}}}{K_{\rm H^{+},Cl^{-}}} \tag{16}\] \[K_{\rm a}^{\prime\prime\,z+1}=\frac{K_{\rm a}^{z+1}}{K_{\rm H^{+},OH^{-}}} \tag{17}\] \[K_{\rm a}^{\prime\prime\prime\,z+1}=\frac{K_{\rm a}^{z+1}}{K_{\rm H^{+},Cl^{-}}}. \tag{18}\]

Figure 1: Schematic representation of the setup considered in this publication. The two-phase setup consists of a polymeric phase ("system") and an aqueous solution ("reservoir"). The reservoir has a fixed pH value and contains salt and small \(n\)-protic acid molecules (shown here: diprotic H\({}_{2}\)a) at fixed concentrations. In addition to these constituents, the system also contains a weak polyelectrolyte, here shown as a weak polyacid with monomers HA. All species except the weak polyacid can be exchanged between the two phases. The exchange moves have to conserve the electroneutrality.

Finally, inside the polymeric phase there is also the dissociation reaction of the weak polyacid HA: \[\text{HA}\rightleftharpoons\text{A}^{-}+\text{H}^{+},\qquad K_{\text{A}} \tag{19}\] which is described by the equilibrium constant \(K_{\text{A}}\). As before, one should also include the following linear combinations in order to avoid sampling bottlenecks: \[\text{HA}\rightleftharpoons\text{A}^{-}+\text{Na}^{+} \tag{20}\] \[\text{HA}+\text{Cl}^{-}\rightleftharpoons\text{A}^{-} \tag{21}\] \[\text{HA}+\text{OH}^{-}\rightleftharpoons\text{A}^{-} \tag{22}\] with equilibrium constants \[K_{\text{A}}^{\prime}=K_{\text{A}}\frac{K_{\text{Na}^{+},\text{Cl}^{-}}}{K_{\text{H}^{+},\text{Cl}^{-}}} \tag{23}\] \[K_{\text{A}}^{\prime\prime}=\frac{K_{\text{A}}}{K_{\text{H}^{+},\text{OH}^{-}}} \tag{24}\] \[K_{\text{A}}^{\prime\prime\prime}=\frac{K_{\text{A}}}{K_{\text{H}^{+},\text{Cl}^{-}}}. \tag{25}\] Furthermore, one needs to include the dissociation reactions \[\text{H}_{n-z}\text{a}^{z-}+z\cdot\text{HA}\rightleftharpoons z\cdot\text{A}^{-} \tag{26}\] with the equilibrium constants \[\tilde{K}_{z}=\frac{(K_{\text{A}})^{z}}{K_{\text{H}_{n}\text{a}}\prod_{i=1}^{z}K_{\text{a}}^{i}}. \tag{27}\] These linear combinations become important when the system contains mostly multivalent ions. From a theoretical point of view, the presented set of reactions is redundant, however it guarantees a thorough sampling in a simulation. It would be straightforward to generalize the described approach to a reservoir containing different polyprotic acids and a system containing one (or multiple) polyprotic polyacids. Also, the whole framework is not only applicable to weak acids but also to weak bases. Since these generalizations only clutter the notation with even more indices but do not add any fundamentally new challenge, we will in the following focus on the case outlined above. Given the set of chemical reactions, it is straightforward to implement them using the well-established Reaction-Ensemble Monte-Carlo method (RxMC) [23; 24].
In brief, this method uses the Metropolis-Hastings algorithm [30; 31] to sample from a semi-grand-canonical distribution under the constraints enforced by the stoichiometry of the reactions. For each reaction step, one of the reactions and its direction is selected randomly with a uniform probability. Next, according to the stoichiometry, particles are added at a random position to the simulation box and/or randomly selected and deleted or have their identity changed. This proposed new configuration (n) is then accepted according to the criterion \[P_{\text{n},\text{o}}^{\text{RxMC}}=\min\Bigg\{1,\left(\prod_{i}\frac{N_{i}^{\text{o}}!\,(Vc^{\ominus})^{\nu_{i}\xi}}{(N_{i}^{\text{o}}+\nu_{i}\xi)!}\right)\exp\Bigg(\beta\left[\xi\sum_{i}\nu_{i}(\mu_{i}-\mu_{i}^{\ominus})-\Delta\mathcal{U}_{\text{n},\text{o}}\right]\Bigg)\Bigg\}, \tag{28}\] where \(\beta\) is the inverse temperature, \(V\) the simulation box volume, \(c^{\ominus}=1\,\text{M}\) is the reference concentration, \(\mu_{i}\) is the chemical potential of species \(i\), \(\mu_{i}^{\ominus}\) is the reference chemical potential of species \(i\), \(\Delta\mathcal{U}_{\text{n},\text{o}}=\mathcal{U}_{\text{n}}-\mathcal{U}_{\text{o}}\) is the change in potential energy, \(\nu_{i}\) is the stoichiometric coefficient of species \(i\) and \(\xi\) is the extent of reaction which takes the value \(\xi=1\) for the forward and \(\xi=-1\) for the reverse reaction. If the new configuration is rejected, the previous configuration (o) is kept.
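As a sketch of how Equation (28) translates into code, the following evaluates the acceptance probability for a single proposed reaction. It is a minimal illustration with names of our choosing, not the ESPResSo implementation used in the paper.

```python
import math

def rxmc_acceptance(nu, N_old, xi, dU, mu_tilde, V, beta=1.0, c_std=1.0):
    """Acceptance probability of Eq. (28) for one proposed reaction step.

    nu: stoichiometric coefficients nu_i (negative for reactants)
    N_old: particle numbers N_i^o before the move
    xi: +1 (forward) or -1 (reverse); dU: potential-energy change
    mu_tilde: mu_i - mu_i^standard for each species
    """
    log_factor = -beta * dU
    for nu_i, N_i, mu_i in zip(nu, N_old, mu_tilde):
        if N_i + nu_i * xi < 0:
            return 0.0  # cannot delete more particles than are present
        log_factor += xi * nu_i * beta * mu_i          # exp(beta xi sum nu_i mu_i)
        log_factor += xi * nu_i * math.log(V * c_std)  # (V c_std)^{nu_i xi}
        # N_i^o! / (N_i^o + nu_i xi)! as a log-ratio of factorials:
        log_factor += math.lgamma(N_i + 1) - math.lgamma(N_i + nu_i * xi + 1)
    return 1.0 if log_factor >= 0 else math.exp(log_factor)
```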
From Equation 28 it is obvious that the method takes as its input a set of chemical potentials \(\mu_{i}-\mu_{i}^{\ominus}\) or, equivalently, a set of equilibrium constants. Here an important question arises: How should one choose these input parameters in order to achieve a desired reservoir composition? This will be answered in the following section. In addition to sampling the different chemical compositions of the system, one must also sample different conformations of the polymer chains and particles. To do this one can use either MC techniques or molecular dynamics. In the following, we make use of both Metropolis-MC and Langevin MD. [32; 33] ## III Determining the reaction constants For the kind of setup considered here, the reservoir is fully characterized by three parameters, for instance \(K_{\text{H}^{+},\text{Cl}^{-}}\), \(K_{\text{Na}^{+},\text{Cl}^{-}}\) and \(K_{\text{H}_{n}\text{a}}\). (There are of course also the \(n\) p\(K_{\text{a}}\)-values p\(K_{\text{a}}^{1}\), p\(K_{\text{a}}^{2}\),..., p\(K_{\text{a}}^{n}\), however these are simply input parameters specific to the simulated acid.) In order to mimic experiments, we typically want to impose on the reservoir the pH-value, pH\({}^{\text{res}}\), the salt concentration, \(c_{\text{NaCl}}^{\text{res}}\), and the total acid concentration, \(c_{\text{H}_{n}\text{a}}^{\text{res},0}\). Because the input parameters for the simulations are the equilibrium constants (or equivalently the chemical potentials), finding the correct equilibrium constants to achieve a desired reservoir composition amounts to a so-called "inverse problem": we need to find the correct "cause" (the equilibrium constants) to achieve a desired "effect" (the reservoir composition). Landsgesell et al. [21] described two distinct ways to determine the required reaction constants, both of which rely on auxiliary simulations of the reservoir: 1. **Approach using Widom Particle Insertion**: In order to determine the reaction constants, we need to know the relationship between the activity coefficients and the concentrations of the various \(z\)-valent ions. The simplest way to determine this relation is to simulate a sufficiently large box of small ions at a range of different concentrations of the various ion types and to determine the excess chemical potential for different ion pairs, triplets, etc. using the method of Widom particle insertion. [34] (Of course one can in principle also use a semi-empirical formula like the Davies equation, [35] however the range of applicability of such an approach is inherently limited.) The \(n+1\) resulting chemical potentials are thus \(n+1\)-dimensional functions on a grid that can then be interpolated. In combination with the definition of the pH, the law of mass-action of the autoionization of water and the law of mass-action for the \(n\) various acid-dissociation reactions of H\({}_{n}\)a and its ionized forms, this results in a set of nonlinear equations that can be solved in a self-consistent loop to ultimately yield the desired equilibrium constants. While this approach is in principle as exact as desired, it quickly becomes unfeasible as \(n\) grows, since the number of auxiliary simulations that are required grows exponentially with \(n\). 2. **Calibration Method**: Alternatively, one may simply impose values for the equilibrium constants (for instance using the ideal gas, Debye-Hückel theory or a semi-empirical formula as a starting point) and then run a reservoir simulation with these values. Afterwards one can calculate the reservoir composition from the simulation and slightly adjust the equilibrium constants. This results in an iterative procedure that stops once the desired accuracy is achieved. Since this approach requires in general multiple simulations and manual adjustments to achieve a desired reservoir composition, it can become cumbersome or unfeasibly long. Because both of these methods have their difficulties, we here propose an alternative approach that can be viewed as a more sophisticated version of the calibration method. Our new approach generalizes a recently developed method to dynamically tune the chemical potential in a _single_ grand-canonical simulation to achieve a desired particle number.[36] For the reader unfamiliar with the original method, we shortly recap the essential points before describing how it can be applied to the current system. For convenience, we adopt the notation of Ref. [36]. The method makes use of the fact that the derivative of the mean particle number \(\langle N\rangle\) with respect to the chemical potential \(\mu\) ("compressibility" \(\kappa\)) can be expressed in terms of the variance \(\mathrm{Var}[N]\) of the particle number \(N\): \[\kappa=\frac{\mathrm{d}\left\langle N\right\rangle}{\mathrm{d}\mu}=\beta\,\mathrm{Var}[N]. \tag{29}\] Using \(\kappa\), in a grand-canonical simulation with desired particle number \(N^{*}\) one can make an initial guess \(\mu_{t=0}\) for the chemical potential and then periodically update \(\mu_{t}\) after a certain number of Monte-Carlo steps according to the linearized formula \[\mu_{t+1}=\overline{\mu}_{t}+\frac{N^{*}-\overline{N}_{t}}{\overline{\kappa}_{t}}, \tag{30}\] where \(\overline{x}_{t}=\overline{x}^{t}\) denotes a time average of the quantity \(x\).
In the following, we will use the average over the more recent half of the trajectory: \[\overline{x}_{t}=\frac{1}{L_{t}}\sum_{t^{\prime}=\lceil t/2\rceil}^{t}x_{t^{\prime}}, \tag{31}\] where \(L_{t}\) is the number of retained samples. Simply using Equation 29 to calculate \(\overline{\kappa}_{t}\) results in the fluctuation-based estimator \[\kappa_{t}^{\mathrm{fluc}}=\beta\,\overline{\mathrm{Var}}_{t}[N] \tag{32}\] that is calculated as the variance of \(N\) over the more recent half of the trajectory and which is only useful at later times. To guarantee reasonable values of \(\overline{\kappa}_{t}\) also at early times, one uses \[\overline{\kappa}_{t}=\max\left[\kappa_{t}^{\min},\min\left(\kappa_{t}^{\max},\kappa_{t}^{\mathrm{fluc}}\right)\right] \tag{33}\] with the bounds \[\kappa_{t}^{\min}=\frac{\alpha}{\sqrt{t+1}} \tag{34}\] \[\kappa_{t}^{\max}=\sqrt{\frac{\overline{\mathrm{Var}}_{t}[N]}{\overline{\mathrm{Var}}_{t}[\mu]}}, \tag{35}\] where \(\alpha\propto V/U\) with the system volume \(V\) and a characteristic energy scale \(U\). Equation 30 in combination with Equation 33 results in a robust update scheme for \(\mu_{t}\) that eventually converges to the correct value. To avoid fully recalculating \(\overline{x}_{t}\) and \(\overline{\mathrm{Var}}_{t}[x]\) at every time step, they can be updated incrementally using a modified Welford algorithm.[36; 37] For the kinds of systems under consideration here, we need to tune two chemical potentials: \(\mu_{\mathrm{NaCl}}\) to achieve a desired salt concentration, \(c_{\mathrm{NaCl}}^{\mathrm{res}}\), and \(\mu_{\mathrm{H}_{n}\mathrm{a}}\) to achieve a desired total concentration of the dissolved acid, \(c_{\mathrm{H}_{n}\mathrm{a}}^{\mathrm{res},0}\). Because the \(n\) p\(K_{\mathrm{a}}\)-values p\(K_{\mathrm{a}}^{1}\), p\(K_{\mathrm{a}}^{2}\),..., p\(K_{\mathrm{a}}^{n}\) are fixed, the \(n\) chemical potentials \(\mu_{\mathrm{H}_{n-z}\mathrm{a}^{z-}}\) for \(z=1,...,n\) are completely determined. In general, to apply Equation 30 to a multi-component system, \(\kappa\) needs to be promoted to a matrix \(\kappa_{ij}\). However, for the present application it turns out that neglecting the off-diagonal elements of \(\kappa_{ij}\) and applying the tuning procedure to each species independently is sufficient. Since the results can always be checked for consistency, this approximation is inherently safe. As we show below, the procedure reliably converges for a wide range of system parameters. For the salt NaCl we simply apply the original \(\mu\)-tuning method as described above. The instantaneous number of NaCl ion pairs can be measured as \[N_{\mathrm{NaCl}}^{t}=\min\left(N_{\mathrm{Na}^{+}}^{t},N_{\mathrm{Cl}^{-}}^{t}\right). \tag{36}\] Note that in general we have either \(N_{\mathrm{Na}^{+}}>N_{\mathrm{NaCl}}\) or \(N_{\mathrm{Cl}^{-}}>N_{\mathrm{NaCl}}\), because one needs to add NaOH or HCl to adjust the pH to the desired value. Applying the method thus ultimately results in a dynamically evolving equilibrium constant \(K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}^{t}=\exp(\beta\,\mu_{\mathrm{NaCl}}^{t})\).
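A compact sketch of the resulting update step for a single species (Equations 30 and 32-35), in reduced units with \(\beta=1\) and names of our choosing; the running averages and variances over the recent half of the trajectory (Equation 31) are assumed to be maintained elsewhere, e.g. with the incremental scheme sketched in Appendix A below.

```python
import numpy as np

def tuned_kappa(var_N, var_mu, t, alpha=0.1):
    """Bounded compressibility estimate, Eqs. (32)-(35), with beta = 1."""
    kappa_fluc = var_N                       # Eq. (32)
    kappa_min = alpha / np.sqrt(t + 1)       # Eq. (34)
    kappa_max = np.sqrt(var_N / var_mu) if var_mu > 0 else np.inf  # Eq. (35)
    return max(kappa_min, min(kappa_max, kappa_fluc))  # Eq. (33)

def mu_update(mu_bar, N_bar, N_target, kappa):
    """Linearized chemical-potential update, Eq. (30)."""
    return mu_bar + (N_target - N_bar) / kappa

# Example step: drive <N_NaCl> toward a target of 500 ion pairs.
kappa = tuned_kappa(var_N=120.0, var_mu=0.05, t=1000)
mu_next = mu_update(mu_bar=-4.2, N_bar=480.0, N_target=500.0, kappa=kappa)
```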
For the dissolved acid, we make use of the identity (see appendix for a detailed calculation) \[\tilde{\kappa}^{\mathrm{fluc}}=\frac{\partial}{\partial\mu_{\mathrm{H}_{n}\mathrm{a}}}\left\langle N_{\mathrm{H}_{n}\mathrm{a}}^{0}\right\rangle=\sum_{z=0}^{n}\frac{\partial}{\partial\mu_{\mathrm{H}_{n}\mathrm{a}}}\left\langle N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}}\right\rangle=\beta\sum_{z=0}^{n}\mathrm{Cov}[N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}},N_{\mathrm{H}_{n}\mathrm{a}}], \tag{37}\] where \(\mathrm{Cov}[x,y]\) denotes the covariance of \(x\) and \(y\) and \(N_{\mathrm{H}_{n}\mathrm{a}}^{0}\) is the total particle number of the acid summed over all of its ionization states, to arrive at the update rule \[\mu_{\mathrm{H}_{n}\mathrm{a}}^{t+1}=\overline{\mu_{\mathrm{H}_{n}\mathrm{a}}}^{t}+\frac{N_{\mathrm{H}_{n}\mathrm{a}}^{0}-\sum_{z=0}^{n}\overline{N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}}}^{t}}{\overline{\tilde{\kappa}}_{t}}. \tag{38}\] Here, \(\overline{\tilde{\kappa}}_{t}\) is defined by \[\overline{\tilde{\kappa}}_{t}=\max\left[\tilde{\kappa}_{t}^{\min},\min\left(\tilde{\kappa}_{t}^{\max},\tilde{\kappa}_{t}^{\mathrm{fluc}}\right)\right] \tag{39}\] with \[\tilde{\kappa}_{t}^{\mathrm{fluc}}=\beta\sum_{z=0}^{n}\overline{\mathrm{Cov}}_{t}[N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}},N_{\mathrm{H}_{n}\mathrm{a}}] \tag{40}\] \[\tilde{\kappa}_{t}^{\min}=\frac{\alpha}{\sqrt{t+1}} \tag{41}\] \[\tilde{\kappa}_{t}^{\max}=\mathrm{sgn}\left(\sum_{z=0}^{n}\overline{\mathrm{Cov}}_{t}[N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}},N_{\mathrm{H}_{n}\mathrm{a}}]\right)\sqrt{\frac{\left|\sum_{z=0}^{n}\overline{\mathrm{Cov}}_{t}[N_{\mathrm{H}_{n-z}\mathrm{a}^{z-}},N_{\mathrm{H}_{n}\mathrm{a}}]\right|}{\overline{\mathrm{Var}}_{t}[\mu]}}, \tag{42}\] where we have again \(\alpha\propto V/U\). As we show in the appendix, there exists a simple formula to update \(\overline{\mathrm{Cov}}_{t}[x,y]\) incrementally. Analogous to the previous case, applying the method results in a dynamically evolving equilibrium constant \(K_{\mathrm{H}_{n}\mathrm{a}}^{t}\). In our reservoir simulations, we tune \(\mu_{\mathrm{NaCl}}\) and \(\mu_{\mathrm{H}_{n}\mathrm{a}}\) simultaneously, i.e. we periodically update both \(K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}^{t}\) and \(K_{\mathrm{H}_{n}\mathrm{a}}^{t}\) after a certain number of Monte-Carlo steps. As a consequence, the equilibrium constants for the insertion moves, \(K_{(z-l)\mathrm{H}^{+},\mathrm{Na}^{+},\mathrm{H}_{n-z}\mathrm{a}^{z-}}\), also become time-dependent and are updated in each loop as well.
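The generalization to the acid then only replaces the variance by the summed covariances of Equations (37)-(42); a minimal sketch under the same conventions (\(\beta=1\), names ours):

```python
import numpy as np

def tuned_kappa_acid(cov_z, var_mu, t, alpha=0.1):
    """Bounded covariance-based estimate for the acid, Eqs. (39)-(42), beta = 1.

    cov_z: running covariances Cov[N_{H_{n-z}a^{z-}}, N_{H_n a}] for z = 0..n
    """
    cov_sum = float(np.sum(cov_z))
    kappa_fluc = cov_sum                                        # Eq. (40)
    kappa_min = alpha / np.sqrt(t + 1)                          # Eq. (41)
    kappa_max = np.sign(cov_sum) * np.sqrt(abs(cov_sum) / var_mu)  # Eq. (42)
    return max(kappa_min, min(kappa_max, kappa_fluc))           # Eq. (39)

def mu_acid_update(mu_bar, N_z_bar, N_total_target, kappa):
    """Update rule of Eq. (38): drive the summed acid particle number to target."""
    return mu_bar + (N_total_target - float(np.sum(N_z_bar))) / kappa
```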
There is still a subtlety concerning the equilibrium constant \(K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}^{t}\) (and thus also \(K_{\mathrm{Na}^{+},\mathrm{OH}^{-}}^{t}\)), which is time-dependent as well: In the special case \(n=1\) one can use the charge neutrality condition \[c_{\mathrm{H}^{+}}+c_{\mathrm{Na}^{+}}=c_{\mathrm{OH}^{-}}+c_{\mathrm{Cl}^{-}}+c_{\mathrm{a}^{-}} \tag{43}\] and multiply by the mean activity coefficient \(\sqrt{\gamma}=\sqrt{\gamma_{+}\gamma_{-}}\) to get \[c_{\mathrm{Na}^{+}}\sqrt{\gamma}=c_{\mathrm{OH}^{-}}\sqrt{\gamma}+c_{\mathrm{Cl}^{-}}\sqrt{\gamma}-c_{\mathrm{H}^{+}}\sqrt{\gamma}+c_{\mathrm{a}^{-}}\sqrt{\gamma}. \tag{44}\] Using the law of mass-action and the definition of the pH, we arrive at \[c_{\mathrm{Na}^{+}}\sqrt{\gamma}=c^{\ominus}10^{-(14-\mathrm{pH})}+c_{\mathrm{Cl}^{-}}\sqrt{\gamma}-c^{\ominus}10^{-\mathrm{pH}}+\frac{K_{\mathrm{a}}K_{\mathrm{Ha}}c^{\ominus}}{10^{-\mathrm{pH}}}. \tag{45}\] Inserting this expression into the definition of \(K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}\) results in a quadratic equation, \[K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}(c^{\ominus})^{2}=\left(c_{\mathrm{Cl}^{-}}\sqrt{\gamma}\right)^{2}+c_{\mathrm{Cl}^{-}}\sqrt{\gamma}\,c^{\ominus}10^{-(14-\mathrm{pH})}-c_{\mathrm{Cl}^{-}}\sqrt{\gamma}\left(c^{\ominus}10^{-\mathrm{pH}}-\frac{K_{\mathrm{a}}K_{\mathrm{Ha}}c^{\ominus}}{10^{-\mathrm{pH}}}\right), \tag{46}\] which can be solved for \(c_{\mathrm{Cl}^{-}}\sqrt{\gamma}\): \[c_{\mathrm{Cl}^{-}}\sqrt{\gamma}=-\frac{1}{2}\left(c^{\ominus}10^{-(14-\mathrm{pH})}-c^{\ominus}10^{-\mathrm{pH}}+\frac{K_{\mathrm{a}}K_{\mathrm{Ha}}c^{\ominus}}{10^{-\mathrm{pH}}}\right)+\frac{1}{2}\left(\left(c^{\ominus}10^{-(14-\mathrm{pH})}-c^{\ominus}10^{-\mathrm{pH}}+\frac{K_{\mathrm{a}}K_{\mathrm{Ha}}c^{\ominus}}{10^{-\mathrm{pH}}}\right)^{2}+4K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}(c^{\ominus})^{2}\right)^{\frac{1}{2}}. \tag{47}\] This thus enables one to give the exact value of \(K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}\): \[K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}=\frac{c_{\mathrm{H}^{+}}c_{\mathrm{Cl}^{-}}}{\left(c^{\ominus}\right)^{2}}\gamma=10^{-\mathrm{pH}}\frac{c_{\mathrm{Cl}^{-}}}{c^{\ominus}}\sqrt{\gamma}. \tag{48}\] For \(n>1\) this is not possible due to complications that arise because of the multivalent terms. An approach that works in this case is as follows: \(K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}\) is given by Equation 48, where \(\gamma\) is the activity coefficient of a monovalent ion pair. To approximately calculate \(\gamma\) in the simulation, we can use \[\gamma^{\prime}=\frac{K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}^{t}(c^{\ominus})^{2}}{\overline{c_{\mathrm{Na}^{+}}}^{t}\,\overline{c_{\mathrm{Cl}^{-}}}^{t}} \tag{49}\] and thus get \[K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}^{t}=10^{-\mathrm{pH}}\sqrt{\frac{\overline{c_{\mathrm{Cl}^{-}}}^{t}}{\overline{c_{\mathrm{Na}^{+}}}^{t}}\,K_{\mathrm{Na}^{+},\mathrm{Cl}^{-}}^{t}}. \tag{50}\] While this expression is not exact, it converges to the correct value for large times as we show in the appendix. In the following, we always use the approximate scheme, if not stated otherwise.
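For reference, the two routes to \(K_{\mathrm{H}^{+},\mathrm{Cl}^{-}}\) can be written compactly; a minimal sketch of Equations (47), (48) and (50) in reduced units (function and variable names are ours):

```python
import numpy as np

def K_HCl_exact(pH, K_a, K_Ha, K_NaCl, c_std=1.0):
    """Exact K_{H+,Cl-} for n = 1 via the quadratic solution, Eqs. (47)-(48)."""
    b = (c_std * 10.0 ** (-(14.0 - pH))          # hydroxide term
         - c_std * 10.0 ** (-pH)                 # proton term
         + K_a * K_Ha * c_std / 10.0 ** (-pH))   # ionized-acid term
    cCl_sqrt_gamma = 0.5 * (-b + np.sqrt(b**2 + 4.0 * K_NaCl * c_std**2))  # Eq. (47)
    return 10.0 ** (-pH) * cCl_sqrt_gamma / c_std                          # Eq. (48)

def K_HCl_approx(pH, cCl_bar, cNa_bar, K_NaCl_t):
    """Approximate running estimate valid for general n, Eq. (50)."""
    return 10.0 ** (-pH) * np.sqrt(cCl_bar / cNa_bar * K_NaCl_t)
```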
## IV Numerical test: determining the reaction constants ### Setup To showcase our method, we ran a number of test simulations for systems which can exchange a small acid with a reservoir. In a first step, we show the validity of the dynamical tuning algorithm for the chemical potential to arrive at the desired reservoir composition. Here, we focus on the case of a monoprotic acid. The analogous results for a diprotic acid are reported in the appendix. For all of the tests described below we used the following simulation setup and protocol: We carried out bulk simulations in a box with periodic boundary conditions. The size of the cubic simulation box is set in such a way that according to an ideal gas estimate the number of particles of the most numerous species is \(N=500\). We carry out tests both for an ideal system and an interacting system. In the interacting case, as our simulation model we employ the restricted primitive model (RPM).[38] In the RPM, ions are represented by spheres with an excluded volume interaction, in our case modeled by a WCA potential [39] \[V_{\mathrm{WCA}}(r)=\begin{cases}4\varepsilon\left(\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right)+\varepsilon&\text{if }r\leq 2^{\frac{1}{6}}\sigma\\ 0&\text{if }r>2^{\frac{1}{6}}\sigma\end{cases} \tag{51}\] with diameter \(\sigma=0.355\,\mathrm{nm}\) and an energy of \(\varepsilon=k_{\mathrm{B}}T\). These explicit ions are embedded into an implicit solvent and also interact via the Coulomb potential \[V_{\mathrm{Coulomb}}^{ij}(r)=\frac{\lambda_{\mathrm{B}}k_{\mathrm{B}}Tz_{i}z_{j}}{r}, \tag{52}\] where \(z_{i}\) is the valency of species \(i\) and the Bjerrum length \(\lambda_{\text{B}}=e^{2}/(4\pi\varepsilon k_{\text{B}}T)\) is set to a value of \(\lambda_{\text{B}}=2\sigma=7.1\,\text{\AA}\), which accounts for the dielectric properties of water at room temperature (\(T\approx 300\,\text{K}\)). In our simulations, we use the P\({}^{3}\)M algorithm [40] with a relative error [41; 42] of \(10^{-3}\) to sum up the electrostatic energies and forces. For the ideal system, we start the simulations at random initial particle numbers. In the interacting case, we use an extended Debye-Hückel formula (Davies equation) estimate as our starting point. In both cases, we perform a total of \(10^{6}\) loops which consist of 10 reaction steps each. For the interacting system, we also include 10 MC single-particle displacement moves in each loop. These help to decorrelate the system faster than mere reaction moves. After each loop, we update the chemical potentials and thus the equilibrium constants according to the method introduced above. We set \(\alpha=0.1\). We ran tests for \(\text{p}K_{\text{a}}=4.0\), \(c_{\text{NaCl}}^{\text{res}}\in\{0.1\,\text{M},0.03\,\text{M},0.01\,\text{M}\}\), \(c_{\text{Ha}}^{\text{res},0}\in\{0.1\,\text{M},0.03\,\text{M},0.01\,\text{M}\}\) and \(\text{pH}^{\text{res}}\in[1.0,13.0]\). A real acid with \(\text{p}K_{\text{a}}\approx 4.0\) is for instance acrylic acid. All simulations were carried out using the simulation software package ESPResSo. [43] ### Ideal System The ideal limit provides an important test case, as it is accessible to an analytical solution (details are reported in the appendix). In Figure 2 (a) we show the evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}}^{0}}^{t}\) for all simulated parameter combinations. To make the different simulations comparable, all concentrations are normalized by the respective desired concentration. As the plot shows, the mean concentrations converge to the desired values for all simulated parameters. Because in the ideal case we can calculate the concentrations of all species analytically, we also compare these to the simulation results. The plot in Figure 2 (b) shows the mean concentration \(\overline{c_{i}^{\text{res}}}\) at the end of the simulation for the different species as a function of the respective ideal result. As the plot shows, all data points collapse (almost) perfectly onto the bisecting line. To suppress finite-size effects, only data points with a mean number of particles larger than 10 were included.
We include an analogous comparison with the ideal theory for the equilibrium constants in the appendix (Figure 4), where we observe again a very good agreement.

Figure 2: (a): Evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}}^{0}}^{t}\) (normalized by the respective desired concentration) for an ideal system for all simulated parameter combinations. (b): Mean concentrations of the various species measured from the simulation for an ideal system (at the very end) vs. the concentrations predicted by the ideal theory. (c): Plot of the degree of ionization of the small acid particles for an interacting system as a function of pH for \(c_{\text{NaCl}}^{\text{res}}=0.1\,\text{M}\) and \(c_{\text{Ha}}^{\text{res},0}=0.1\,\text{M}\). The plot compares the ideal result given by the Henderson-Hasselbalch equation (HH) with the result from the generalized G-RxMC method and the result obtained from a semi-analytical calculation using data from Widom insertion simulations.

### Interacting System We perform analogous tests for an interacting system with the interactions specified above. As a test, we compare the simulation results for this interacting system with a calculation that uses data from Widom insertion [34] simulations for a monovalent salt solution (compare the appendix for an explanation). In Figure 2 (c), we show the resulting ionization curve obtained using this procedure for the most non-ideal case considered here (\(c_{\text{NaCl}}^{\text{res}}=0.1\,\text{M}\) and \(c_{\text{Ha}}^{\text{res},0}=0.1\,\text{M}\)), which is in excellent agreement with the simulation results, while there are deviations from the ideal prediction. The enhanced ionization as compared to the ideal result is expected, as the excess chemical potential of a salt solution is negative in the range of concentrations considered here. Some more tests for correctness and internal consistency of the results are included in the appendix. ## V An example system: simulation of a weak polybase solution coupled to a reservoir containing a weak diprotic acid ### Setup Having shown that the proposed tuning procedure for the reservoir composition works correctly, we can now apply the method to a simple test system. Here we consider a solution of polybase molecules which is coupled to a reservoir at a given pH, salt concentration and concentration of a weak diprotic acid. All particles except the polybase chains can be exchanged between the two phases. Experimentally, such a setup could be realized by coupling the polybase solution to an aqueous solution via a semi-permeable membrane which does not allow the polybase chains (but all other particles) to pass. In the following, we denote the conjugate acid of the basic monomers by \(\text{BH}^{+}\) (i.e. the ionized/protonated state) and the neutral monomers by \(\text{B}\). This implies that we can write the reaction in the form \[\text{BH}^{+}\rightleftharpoons\text{B}+\text{H}^{+}.\] ### Results As before, we first carried out tests for an ideal system which can be compared to the analytical solution. These results can be found in the appendix. For the interacting system, we observe strong deviations from the ideal prediction for the mean charge per acid particle, as shown in Figure 3 (a). In essence, the charge of the diprotic acid is enhanced as compared to the ideal prediction. Part of this enhancement is caused by the Donnan effect, which effectively increases the pH inside the system as compared to pH\({}^{\mathrm{res}}\). Furthermore, the ionization is also increased by the electrostatic interactions, which lead to a negative excess chemical potential of the ionized species. The increase of the shift we observe with increasing monomer concentration in the system is thus also of two-fold origin: On the one hand, an increasing monomer concentration leads to a stronger Donnan effect. On the other hand, the increasing concentration of the monomers (which are themselves positively charged in the considered pH-regime) further lowers the excess chemical potential of the ions, leading to an increased ionization.
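For panel (a) of Figure 3, the ideal reference curve follows directly from the two dissociation equilibria; a minimal Henderson-Hasselbalch-type sketch (our own illustration, with illustrative p\(K_{\text{a}}\)-values matching the tests above):

```python
import numpy as np

def ideal_diprotic_charge(pH, pKa1=4.0, pKa2=7.0):
    """Ideal mean charge magnitude per diprotic acid particle vs. pH.

    Speciation weights follow from the two dissociation equilibria:
    H2a : Ha- : a2- = 1 : K1/cH : K1*K2/cH^2 (concentrations in units of c_std).
    """
    cH = 10.0 ** (-np.asarray(pH, dtype=float))
    K1, K2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    w0, w1, w2 = 1.0, K1 / cH, K1 * K2 / cH**2
    return (w1 + 2.0 * w2) / (w0 + w1 + w2)

print(ideal_diprotic_charge([3.0, 5.5, 9.0]))  # approaches 0, ~1 and ~2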
For the weak polybase, we observe an ionization behaviour that is (as expected) suppressed as compared to the Henderson-Hasselbalch equation (Figure 3 (b)). As in the case of the weak diprotic acid, the shift is caused by a combination of the Donnan effect and the electrostatic interactions. Interestingly, the observed shift is only minor as compared to the rather large shifts typically observed in simulations of weak polyacids or -bases in monovalent salt solutions.[20] This effect is a consequence of a partial cancellation of the strong repulsion of the ionized basic monomers and the strong attraction of the monomers and the divalent ions.[46] Finally, in Figure 3 (c) we show the mean radius of gyration \(R_{\mathrm{g}}\) of the weak polybase chains as a function of pH\({}^{\mathrm{res}}\). The swelling behaviour of the chains is much more complex than for instance in the simpler case of chains which are coupled to a reservoir containing a strong, monovalent salt.[21] The most obvious feature of the swelling curves is that the swelling proceeds in a two-step manner when going from a high value of pH\({}^{\mathrm{res}}\) to a low value. This can be explained in the following way: at very high values of pH\({}^{\mathrm{res}}\) (pH\({}^{\mathrm{res}}>11\)) the polybase chains are essentially neutral and thus attain a fairly compact conformation. Once the pH-value is lowered, the chains become ionized and begin to swell. However, this swelling is only weak, because the chains stay fairly collapsed due to the presence of the doubly ionized acid particles, which act as divalent counterions. Interestingly, at this point the swelling is almost the same for both investigated values of the monomer concentration. This behaviour is in agreement with the previous observation that beyond a certain threshold the addition of more multivalent counterions does not influence the conformational behaviour of a weak polyelectrolyte anymore.[46] The second swelling step, beginning around pH\({}^{\mathrm{res}}\approx 6\), is triggered by the doubly ionized weak diprotic acid particles first becoming monovalent and finally neutral. Here we observe that the swelling is more pronounced at a lower monomer concentration. This is because the Donnan effect is smaller in this case and thus the ionic strength inside the system is lower, leading to less screening of the electrostatic interactions.
Finally, at very low values of pH\({}^{\mathrm{res}}\), the swelling decreases again. This effect, similar to what is for instance also observed in weak polyelectrolyte hydrogels,[28] is caused by the increasing ionic strength in the system due to the addition of HCl, which increases the screening between the ionized base monomers.

Figure 3: Various plots for the interacting test system. The shown plots correspond to \(c_{\mathrm{NaCl}}^{\mathrm{res}}=0.1\,\mathrm{M}\) and \(c_{\mathrm{H_{2}a}}^{\mathrm{res},0}=0.1\,\mathrm{M}\) and different monomer concentrations \(c_{\mathrm{mon}}\). (a): Plot of the absolute value of the mean charge per diprotic acid particle vs. the pH in the reservoir. As a comparison, the prediction by the ideal theory (Henderson-Hasselbalch equation) is also shown. (b): Plot of the degree of ionization of the weak polybase as a function of pH\({}^{\mathrm{res}}\). The prediction according to the ideal theory (Henderson-Hasselbalch equation) is shown as well. (c): Plot of the mean radius of gyration of the chains as a function of pH\({}^{\mathrm{res}}\).

## VI Summary and outlook To summarize, we introduced a generalized grand-reaction method to model the exchange of weak (polyprotic) acids and bases between a reservoir and a polyelectrolyte phase. To the best of our knowledge, this new method makes it possible for the first time to investigate the partitioning of weak polyprotic acids between a solution and a polymeric phase such as a hydrogel. Because the resulting reservoir is now itself a much more complex system, the existing approaches to extract the required equilibrium constants that correspond to a desired reservoir composition are not feasible anymore. In order to solve this problem, we generalized the \(\mu\)-tuning algorithm by Miles et al.[36]. We performed extensive numerical tests in order to validate our proposed tuning method for the chemical potentials. Finally, we investigated a simple test system consisting of a weak polybase coupled to a solution containing a weak diprotic acid. As a consequence of the interplay of ion partitioning and charge regulation effects (both of the weak polybase and the weak diprotic acid) we observed a two-step swelling behaviour of the chains. Through our generalization, the method can now be applied to study the partitioning of weak polyprotic acid molecules between a solution and a weak polyelectrolyte hydrogel. The study of the same effects in related systems such as coacervates would also be feasible. Furthermore, it should in principle also be possible to apply the described method to the partitioning of more complex particles such as short polypeptides. In this case, however, one would need to combine it with a biased Monte-Carlo scheme in order to ensure a high enough acceptance probability for insertion moves.[47] ###### Acknowledgements. CH acknowledges funds by the German Research Foundation (DFG) - grants No. 451980436 and No. 268449726. Parts of this work were also performed within the collaborative framework of the research unit _Adaptive Polymer Gels with Controlled Network Structure (FOR2811)_, funded by the German Research Foundation under No. 397384169. DB acknowledges helpful discussions with Mariano Brito and Jonas Landsgesell. ## Data Availability Statement The simulation scripts and data that support the findings of this study are available upon reasonable request. ## Conflict of interest The authors have no conflicts to disclose.
## Author contributions DB: Conceptualization, Formal Analysis, Methodology, Software, Writing (Original Draft) CH: Conceptualization, Supervision, Writing (Review and Editing) ## Appendix A Additional Calculations The identity \[\frac{\partial}{\partial\mu_{y}}\langle N_{x}\rangle=\beta\,\text{Cov}[N_{x},N_{y}], \tag{A1}\] which is used in the main text, can be shown using a straightforward calculation in the grand-canonical ensemble. As our starting point, we take the grand-canonical partition function for a system with \(s\) different species (the same calculation also holds for a semi-grand-canonical ensemble), which can be expressed using the trace operator: \[Z^{\text{G}}\left(\left\{\mu_{i}\right\},V,T\right)=\sum_{N_{1}=0}^{\infty}...\sum_{N_{s}=0}^{\infty}Z\left(\left\{N_{i}\right\},V,T\right)\exp\left(\beta\sum_{i=1}^{s}\mu_{i}N_{i}\right)=\text{Tr}\left(\exp\left(-\beta\left(U-\sum_{i=1}^{s}\mu_{i}N_{i}\right)\right)\right). \tag{A2}\] As is well known, the mean particle number can be expressed as \[\langle N_{x}\rangle=\frac{1}{\beta}\frac{\partial}{\partial\mu_{x}}\log Z^{\text{G}}=\frac{1}{Z^{\text{G}}}\text{Tr}\left(N_{x}\exp\left(-\beta\left(U-\sum_{i=1}^{s}\mu_{i}N_{i}\right)\right)\right). \tag{A3}\] Performing the derivative with respect to \(\mu_{y}\), we arrive at \[\frac{1}{\beta}\frac{\partial}{\partial\mu_{y}}\langle N_{x}\rangle=\frac{1}{Z^{\text{G}}}\text{Tr}\left(N_{x}N_{y}\exp\left(-\beta\left(U-\sum_{i=1}^{s}\mu_{i}N_{i}\right)\right)\right)-\frac{1}{Z^{\text{G}}}\text{Tr}\left(N_{x}\exp\left(-\beta\left(U-\sum_{i=1}^{s}\mu_{i}N_{i}\right)\right)\right)\frac{1}{Z^{\text{G}}}\text{Tr}\left(N_{y}\exp\left(-\beta\left(U-\sum_{i=1}^{s}\mu_{i}N_{i}\right)\right)\right)=\langle N_{x}N_{y}\rangle-\langle N_{x}\rangle\langle N_{y}\rangle=\text{Cov}[N_{x},N_{y}], \tag{A4}\] which proves the identity.
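For completeness, the running means, variances and covariances used by the tuning scheme can be accumulated in a single pass. The following is a minimal sketch of the standard Welford-type update; the windowing over the more recent half of the trajectory (Equation 31) additionally requires removing old samples, which the modified algorithm of Refs. [36; 37] provides and which we omit here.

```python
class RunningCov:
    """Single-pass (Welford-style) running mean, variance and covariance."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.M2_x = 0.0   # accumulated squared deviations of x
        self.C_xy = 0.0   # accumulated co-deviations of x and y

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x            # deviation w.r.t. the old mean of x
        self.mean_x += dx / self.n
        dy = y - self.mean_y
        self.mean_y += dy / self.n
        self.M2_x += dx * (x - self.mean_x)   # uses the updated mean of x
        self.C_xy += dx * (y - self.mean_y)   # uses the updated mean of y

    @property
    def var_x(self):
        return self.M2_x / self.n if self.n else 0.0

    @property
    def cov_xy(self):
        return self.C_xy / self.n if self.n else 0.0
```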
## Appendix B Additional Material for "Numerical Test: Determining the Reaction Constants" In this first appendix, we include additional information and plots on the numerical tests we performed for the dynamical \(\mu\)-tuning procedure (section IV). #### B.1 Monoprotic Acid #### B.1.1 Ideal System As mentioned in the main text, the ideal case is accessible to an analytical solution. In particular, for a given pH\({}^{\text{res}}\), salt concentration \(c_{\text{NaCl}}^{\text{res}}\) and total concentration of the acid \(c_{\text{Ha}}^{\text{res},0}\) one has the following concentrations: \[c_{\text{H}^{+}}^{\text{res}}=c^{\ominus}10^{-\text{pH}^{\text{res}}} \tag{B1}\] \[c_{\text{OH}^{-}}^{\text{res}}=c^{\ominus}10^{-(14-\text{pH}^{\text{res}})} \tag{B2}\] \[c_{\text{Ha}}^{\text{res}}=\frac{c_{\text{Ha}}^{\text{res},0}}{1+\frac{c^{\ominus}K_{\text{a}}}{c_{\text{H}^{+}}^{\text{res}}}} \tag{B3}\] \[c_{\text{a}^{-}}^{\text{res}}=\frac{c_{\text{Ha}}^{\text{res},0}}{1+\frac{c_{\text{H}^{+}}^{\text{res}}}{c^{\ominus}K_{\text{a}}}} \tag{B4}\] \[c_{\text{Na}^{+}}^{\text{res}}=\max\left(c_{\text{NaCl}}^{\text{res}},\,c_{\text{NaCl}}^{\text{res}}+c_{\text{OH}^{-}}^{\text{res}}-c_{\text{H}^{+}}^{\text{res}}+c_{\text{a}^{-}}^{\text{res}}\right) \tag{B5}\] \[c_{\text{Cl}^{-}}^{\text{res}}=\max\left(c_{\text{NaCl}}^{\text{res}},\,c_{\text{NaCl}}^{\text{res}}-c_{\text{OH}^{-}}^{\text{res}}+c_{\text{H}^{+}}^{\text{res}}-c_{\text{a}^{-}}^{\text{res}}\right). \tag{B6}\] In addition, the equilibrium constants \(K_{\text{Na}^{+},\text{Cl}^{-}}\), \(K_{\text{H}^{+},\text{Cl}^{-}}\) and \(K_{\text{Ha}}\) can be simply calculated from these concentrations.
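These closed-form expressions are easily evaluated; a minimal sketch of Equations (B1)-(B6) (our own helper, concentrations in M):

```python
def ideal_reservoir(pH, c_NaCl, c_Ha_total, pKa=4.0, c_std=1.0, pKw=14.0):
    """Ideal reservoir composition for a monoprotic acid, Eqs. (B1)-(B6)."""
    cH = c_std * 10.0 ** (-pH)
    cOH = c_std * 10.0 ** (-(pKw - pH))
    x = c_std * 10.0 ** (-pKa) / cH            # K_a c_std / c_H+
    cHa = c_Ha_total / (1.0 + x)               # neutral acid
    ca = c_Ha_total / (1.0 + 1.0 / x)          # ionized acid
    cNa = max(c_NaCl, c_NaCl + cOH - cH + ca)  # extra NaOH when the pH is high
    cCl = max(c_NaCl, c_NaCl - cOH + cH - ca)  # extra HCl when the pH is low
    return dict(H=cH, OH=cOH, Ha=cHa, a=ca, Na=cNa, Cl=cCl)

# Example: one of the parameter combinations simulated above.
print(ideal_reservoir(pH=5.0, c_NaCl=0.1, c_Ha_total=0.1))
```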
#### B.1.2 Interacting System

For the interacting case of a monoprotic acid, we show in Figure 4 (b) the time evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}}^{0}}^{t}\) for all simulated parameter combinations. Again (as in Figure 2), we normalize the concentrations by the respective desired concentrations in order to show all curves in the same plot. We also show the mean concentrations of the various species measured from the simulation vs. the concentrations imposed in the dynamical \(\mu\)-tuning algorithm in Figure 4 (c). As the plots show, convergence is reached for all simulations. As an additional test, we compared the simulation results for this interacting system with a semi-analytical calculation that uses data from Widom insertion [34] simulations for a monovalent salt solution (compare Figure 2 (c) in the main text). The semi-analytical calculations were performed in the following way: Neglecting the excluded volume effect of the neutral acid particle Ha, which is a good approximation in the considered concentration regime, the composition of the reservoir for fixed input values of pH\({}^{\text{res}}\), \(c_{\text{NaCl}}^{\text{res}}\) and \(c_{\text{Ha}}^{\text{res,0}}\) can be determined by solving the following system of non-linear equations:
\[\text{pH}^{\text{res}}=-\log_{10}\left(\frac{c_{\text{H}^{+}}^{\text{res}}\sqrt{\gamma^{\text{res}}}}{c^{\ominus}}\right) \tag{17}\]
\[\gamma^{\text{res}}=f\left(I^{\text{res}}\left(c_{\text{H}^{+}}^{\text{res}},c_{\text{OH}^{-}}^{\text{res}},c_{\text{NaCl}}^{\text{res}},c_{\text{a}^{-}}^{\text{res}}\right)\right) \tag{18}\]
\[c_{\text{OH}^{-}}^{\text{res}}=\frac{K_{\text{H}^{+},\text{OH}^{-}}(c^{\ominus})^{2}}{c_{\text{H}^{+}}^{\text{res}}\gamma^{\text{res}}} \tag{19}\]
\[c_{\text{a}^{-}}^{\text{res}}=\frac{c_{\text{Ha}}^{\text{res,0}}}{1+\frac{c_{\text{H}^{+}}^{\text{res}}\gamma^{\text{res}}}{c^{\ominus}K_{\text{Ha}}}}. \tag{20}\]
In our calculation, we use linearly interpolated data from Widom insertion calculations for the activity coefficient \(\gamma^{\text{res}}=f(I)\). The above system of equations can then be solved numerically, which we do in a self-consistent manner.
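A minimal sketch of this self-consistent solution of Equations (17)-(20) is shown below. Note that the paper interpolates Widom-insertion data for \(\gamma^{\text{res}}=f(I)\); here a Davies-type expression is substituted purely as an illustrative stand-in, and the simplified ionic-strength estimate is our assumption.

```python
import numpy as np

# Sketch of the self-consistent solution of Eqs. (17)-(20); concentrations in
# units of c_std = 1 M. gamma_pair stands in for the Widom-insertion data.
def gamma_pair(I):
    sqI = np.sqrt(I)
    log10_gamma_single = -0.509 * (sqI / (1.0 + sqI) - 0.3 * I)  # Davies-type, illustrative
    return 10.0 ** (2.0 * log10_gamma_single)   # product of two monovalent ionic coefficients

def reservoir_interacting(pH_res, c_NaCl, c_Ha0, pKa, pKw=14.0, tol=1e-12):
    K_Ha, K_w = 10.0 ** -pKa, 10.0 ** -pKw
    gamma = 1.0
    for _ in range(200):                           # fixed-point iteration
        c_H = 10.0 ** -pH_res / np.sqrt(gamma)     # Eq. (17) inverted
        c_OH = K_w / (c_H * gamma)                 # Eq. (19)
        c_A = c_Ha0 / (1.0 + c_H * gamma / K_Ha)   # Eq. (20)
        # crude ionic strength (Eq. (18)); the neutralizing counter-ion of a-
        # is counted, small excesses of Na+/Cl- are neglected
        I = 0.5 * (c_H + c_OH + 2.0 * c_NaCl + 2.0 * c_A)
        gamma_new = gamma_pair(I)
        if abs(gamma_new - gamma) < tol:
            break
        gamma = gamma_new
    return dict(H=c_H, OH=c_OH, A=c_A, gamma=gamma)

print(reservoir_interacting(pH_res=5.0, c_NaCl=0.1, c_Ha0=0.01, pKa=4.0))
```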
To further validate our method, we also performed some additional checks. In Figure 4 (d), we plot the concentrations \(c_{i}\) of all considered species as obtained from the simulations using the approximate scheme described in section III vs. the concentrations obtained using the exact method (which only works for the special case of a monoprotic acid, i.e. \(n=1\)). As the plot shows, they are in agreement. In Figure 4 (e), we show for the approximate method the activity coefficient of an ion pair as calculated from the equilibrium constant \(K_{\text{NaCl}}\) vs. the same activity coefficient as calculated from the pH. The fact that they agree shows that the results are internally consistent. Finally, in Figure 4 (f), we show a plot of all four possible combinations of \(\gamma^{\text{res}}\) as calculated from the approximate and exact methods using either \(K_{\text{NaCl}}\) or the pH. The plot demonstrates that the two approaches yield the same results.

Figure 4: (a): Plot of the various equilibrium constants obtained from the simulation of an ideal system vs. the values predicted by the ideal gas theory. (b): Evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}}^{0}}^{t}\) for an interacting system for all simulated parameter combinations. To make the different simulations comparable, all concentrations are normalized by the respective desired concentration. (c): Mean concentrations of the various species measured from the simulation for an interacting system (at the very end) vs. the concentrations imposed in the dynamical \(\mu\)-tuning algorithm. (d): Plot of the concentrations \(c_{i}\) of the various species as obtained from the simulations using the approximate method vs. the concentrations obtained from the simulations using the exact method. (e): Plot of the activity coefficient \(\gamma\) of an ion pair as calculated from \(K_{\text{NaCl}}\) vs. \(\gamma\) as calculated from the pH. The shown results were obtained using the approximate method. (f): Plot of all four possible combinations of \(\gamma\) as calculated from either \(K_{\text{NaCl}}\) or the pH for the approximate and exact methods.

#### B.2 Diprotic Acid

In addition to the test for a monoprotic acid presented in the main text, we also ran tests for a diprotic acid. We retained the same simulation model and setup as above and ran tests for \(\text{p}K_{\text{a}}^{1}=4.0\), \(\text{p}K_{\text{a}}^{2}=7.0\), \(c_{\text{NaCl}}^{\text{res}}\in\{0.1\,\text{M},0.03\,\text{M},0.01\,\text{M}\}\), \(c_{\text{Ha}_{2}}^{\text{res,0}}\in\{0.1\,\text{M},0.03\,\text{M},0.01\,\text{M}\}\) and \(\text{pH}^{\text{res}}\in[1.0,13.0]\).

#### B.2.1 Ideal System

The first test case is again the ideal gas, which yields the following analytical solution:
\[c_{\text{H}^{+}}^{\text{res}}=c^{\ominus}10^{-\text{pH}^{\text{res}}} \tag{21}\]
\[c_{\text{OH}^{-}}^{\text{res}}=c^{\ominus}10^{-(14-\text{pH}^{\text{res}})} \tag{22}\]
\[c_{\text{Ha}_{2}}^{\text{res}}=\frac{c_{\text{Ha}_{2}}^{\text{res,0}}}{1+\frac{c^{\ominus}K_{\text{a}}^{1}}{c_{\text{H}^{+}}^{\text{res}}}+\frac{(c^{\ominus})^{2}K_{\text{a}}^{1}K_{\text{a}}^{2}}{\left(c_{\text{H}^{+}}^{\text{res}}\right)^{2}}} \tag{23}\]
\[c_{\text{Ha}^{-}}^{\text{res}}=c_{\text{Ha}_{2}}^{\text{res}}\,\frac{c^{\ominus}K_{\text{a}}^{1}}{c_{\text{H}^{+}}^{\text{res}}} \tag{24}\]
\[c_{\text{a}^{2-}}^{\text{res}}=c_{\text{Ha}^{-}}^{\text{res}}\,\frac{c^{\ominus}K_{\text{a}}^{2}}{c_{\text{H}^{+}}^{\text{res}}} \tag{25}\]
\[c_{\text{Na}^{+}}^{\text{res}}=\max\left(c_{\text{NaCl}}^{\text{res}},\,c_{\text{NaCl}}^{\text{res}}+c_{\text{OH}^{-}}^{\text{res}}-c_{\text{H}^{+}}^{\text{res}}+c_{\text{Ha}^{-}}^{\text{res}}+2c_{\text{a}^{2-}}^{\text{res}}\right) \tag{26}\]
\[c_{\text{Cl}^{-}}^{\text{res}}=\max\left(c_{\text{NaCl}}^{\text{res}},\,c_{\text{NaCl}}^{\text{res}}-c_{\text{OH}^{-}}^{\text{res}}+c_{\text{H}^{+}}^{\text{res}}-c_{\text{Ha}^{-}}^{\text{res}}-2c_{\text{a}^{2-}}^{\text{res}}\right). \tag{27}\]
As a first plot, we show in Figure 5 (a) the evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}_{2}}^{0}}^{t}\) for an ideal system containing a diprotic acid and for all simulated parameter combinations. As before, to make the different simulations comparable, all concentrations are normalized by the respective desired concentration, showing the desired convergence. In Figure 5 (b), we show again a plot of the mean concentrations of the various species measured from the simulation (at the very end) vs. the concentrations predicted from the ideal gas theory, which are also in excellent agreement. We also observe a good agreement for the various equilibrium constants (Figure 5 (c)).

#### B.2.2 Interacting System

For the interacting version of this system, we show in Figure 5 (d) the evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}_{2}}^{0}}^{t}\) and observe again convergence. In Figure 5 (e), we show again a plot of the mean concentrations of the various species measured from the simulation (at the very end) vs. the imposed concentrations, which are also in agreement. As a further consistency check, Figure 5 (f) shows the activity coefficient \(\gamma\) of a monovalent ion pair as calculated from \(K_{\text{NaCl}}\) vs. \(\gamma\) as calculated from the pH. Finally, in Figure 6 (a) we show the absolute value of the mean charge per acid particle as a function of the pH for the case \(c_{\text{NaCl}}^{\text{res}}=0.1\,\text{M}\) and \(c_{\text{Ha}_{2}}^{\text{res,0}}=0.1\,\text{M}\). As the plot shows, the ionization of the acid is enhanced as compared to the ideal result (Henderson-Hasselbalch equation) due to the interactions. This non-ideal effect is especially strong in the regime where the acid particles become ionized for the second time, i.e. when they become divalent ions.
## Appendix C Additional Material for "An Example System: Simulation of a Weak Polybase Solution Coupled to a Reservoir Containing a Weak Diprotic Acid"

As mentioned in the main text, we also performed tests for an ideal realization of the polybase system described previously. In order to compare the results of these simulations to a theoretical prediction, we need to extend the ideal Donnan theory to a system that can exchange both monovalent and divalent ions with a reservoir. The Donnan equilibrium is characterized by an equality of the chemical potentials of monovalent ion pairs \((+,-)\) and divalent ion triplets \((+,+,2-)\) between the two phases, i.e.
\[\mu_{+}^{\text{sys}}+\mu_{-}^{\text{sys}}=\mu_{+}^{\text{res}}+\mu_{-}^{\text{res}} \tag{28}\]
\[2\mu_{+}^{\text{sys}}+\mu_{2-}^{\text{sys}}=2\mu_{+}^{\text{res}}+\mu_{2-}^{\text{res}}. \tag{29}\]
For an ideal gas, these conditions lead to the following equations for the concentrations of the different ions:
\[c_{+}^{\text{sys}}c_{-}^{\text{sys}}=c_{+}^{\text{res}}c_{-}^{\text{res}} \tag{30}\]
\[\left(c_{+}^{\text{sys}}\right)^{2}c_{2-}^{\text{sys}}=\left(c_{+}^{\text{res}}\right)^{2}c_{2-}^{\text{res}}. \tag{31}\]
From these conditions, it is straightforward to see that the partition coefficients \(\xi_{i}\equiv c_{i}^{\text{sys}}/c_{i}^{\text{res}}\) have to obey the following relation:
\[\xi_{+}=\frac{1}{\xi_{-}}=\frac{1}{\sqrt{\xi_{2-}}}. \tag{32}\]
Combining this relation with the electroneutrality condition for the system,
\[c_{\text{BH}^{+}}^{\text{sys}}+c_{\text{H}^{+}}^{\text{sys}}+c_{\text{Na}^{+}}^{\text{sys}}=c_{\text{OH}^{-}}^{\text{sys}}+c_{\text{Cl}^{-}}^{\text{sys}}+c_{\text{Ha}^{-}}^{\text{sys}}+2c_{\text{a}^{2-}}^{\text{sys}}, \tag{33}\]
results in the non-linear equation
\[c_{\text{BH}^{+}}^{\text{sys}}+\xi_{+}\left(c_{\text{H}^{+}}^{\text{res}}+c_{\text{Na}^{+}}^{\text{res}}\right)=\frac{1}{\xi_{+}}\left(c_{\text{OH}^{-}}^{\text{res}}+c_{\text{Cl}^{-}}^{\text{res}}+c_{\text{Ha}^{-}}^{\text{res}}\right)+\frac{2}{\xi_{+}^{2}}c_{\text{a}^{2-}}^{\text{res}} \tag{34}\]
for the partition coefficient \(\xi_{+}\), which can be brought into the form of a third-degree polynomial equation:
\[\xi_{+}^{3}\left(c_{\text{H}^{+}}^{\text{res}}+c_{\text{Na}^{+}}^{\text{res}}\right)+\xi_{+}^{2}c_{\text{BH}^{+}}^{\text{sys}}-\xi_{+}\left(c_{\text{OH}^{-}}^{\text{res}}+c_{\text{Cl}^{-}}^{\text{res}}+c_{\text{Ha}^{-}}^{\text{res}}\right)-2c_{\text{a}^{2-}}^{\text{res}}=0. \tag{35}\]
In the case of a weak polybase, \(c_{\text{BH}^{+}}^{\text{sys}}\) is not an independent parameter, but is determined by the Henderson-Hasselbalch equation according to
\[c_{\text{BH}^{+}}^{\text{sys}}=\frac{c_{\text{B}}^{\text{sys,0}}}{1+10^{\text{pH}^{\text{sys}}-\text{p}K_{\text{a}}}}, \tag{36}\]
where it is important to realize that \(\text{pH}^{\text{sys}}\neq\text{pH}^{\text{res}}\) because of the Donnan potential, and thus
\[\text{pH}^{\text{sys}}=-\log_{10}\left(\frac{c_{\text{H}^{+}}^{\text{sys}}}{c^{\ominus}}\right)=-\log_{10}\left(\frac{c_{\text{H}^{+}}^{\text{res}}}{c^{\ominus}}\right)-\log_{10}\left(\frac{c_{\text{H}^{+}}^{\text{sys}}}{c_{\text{H}^{+}}^{\text{res}}}\right)=\text{pH}^{\text{res}}-\log_{10}\left(\xi_{+}\right). \tag{37}\]
This means that the Donnan partitioning and the ionization equilibrium of the weak polybase are mutually coupled. One might think that this coupling also has to be explicitly taken into account for the weak diprotic acid; however, an inspection of Equation 32 and Equation 37 reveals that the partitioning in combination with the shift in pH "automatically" leads to the correct behaviour. Thus, one simply has to (numerically) solve Equation 35 together with Equations 36 and 37 in order to arrive at the correct value of \(\xi_{+}\). In Figure 6 we show the results for this analytical approach in comparison to simulation results for an ideal gas, using values of \(\text{pH}^{\text{res}}\), \(c_{\text{NaCl}}^{\text{res}}\) and \(c_{\text{Ha}_{2}}^{\text{res,0}}\) in the same ranges as above. The deviation of the system from the reservoir that is visible in the plot is a consequence of the Donnan effect, which effectively increases the pH inside the system as compared to \(\text{pH}^{\text{res}}\). As the plot shows, the simulation and the theoretical prediction are in quantitative agreement.
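A minimal sketch of how Equation 35 together with Equations 36 and 37 can be solved self-consistently is given below; all input values are illustrative, and the reservoir composition is assumed to be a known input.

```python
import numpy as np

# Sketch of the coupled solve of Eqs. (35), (36) and (37): Donnan partition
# coefficient xi_+ for a weak polybase coupled to a diprotic-acid reservoir.
# 'res' holds the reservoir concentrations; keys HA and A2 stand for Ha^- and a^2-.
def donnan_xi(res, c_B0, pKa_BHp, pH_res, xi=1.0, tol=1e-12):
    for _ in range(200):
        pH_sys = pH_res - np.log10(xi)                      # Eq. (37)
        c_BHp = c_B0 / (1.0 + 10.0 ** (pH_sys - pKa_BHp))   # Eq. (36)
        # cubic Eq. (35), coefficients from highest to lowest power of xi_+
        coeffs = [res["H"] + res["Na"], c_BHp,
                  -(res["OH"] + res["Cl"] + res["HA"]), -2.0 * res["A2"]]
        roots = np.roots(coeffs)
        # the sign pattern (+, +, -, -) guarantees exactly one positive real root
        xi_new = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
        if abs(xi_new - xi) < tol:
            return xi_new
        xi = xi_new
    return xi

res = dict(H=1e-5, Na=0.1, OH=1e-9, Cl=0.1, HA=5e-3, A2=1e-4)  # illustrative values
print(donnan_xi(res, c_B0=0.05, pKa_BHp=9.0, pH_res=5.0))
```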
Similarly, we observe a perfect agreement for the degree of ionization of the weak polybase monomers, shown in Figure 6 (c). In contrast to the acid, for the polybase the ionization is suppressed due to the Donnan effect.

Figure 5: (a): Evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}_{2}}^{0}}^{t}\) for an ideal system containing a diprotic acid and for all simulated parameter combinations. To make the different simulations comparable, all concentrations are normalized by the respective desired concentration. (b): Mean concentrations of the various species measured from the simulation (at the very end) for an ideal system containing a diprotic acid vs. the concentrations predicted from the ideal gas theory. (c): Plot of the various equilibrium constants obtained from the simulation of an ideal system containing a diprotic acid vs. the values predicted by the ideal gas theory. (d): Evolution of the mean concentrations \(\overline{c_{\text{NaCl}}}^{t}\) and \(\overline{c_{\text{Ha}_{2}}^{0}}^{t}\) for an interacting system containing a weak diprotic acid for all simulated parameter combinations. To make the different simulations comparable, all concentrations are normalized by the respective desired concentration. (e): Mean concentrations of the various species measured from the simulation (at the very end) for an interacting system containing a diprotic acid vs. the concentrations imposed in the dynamical \(\mu\)-tuning algorithm. (f): Plot of the activity coefficient \(\gamma\) of a monovalent ion pair as calculated from \(K_{\text{NaCl}}\) vs. \(\gamma\) as calculated from the pH.

Figure 6: (a): Plot of the absolute value of the mean charge per diprotic acid particle for the interacting system vs. the pH (data points). The shown plot corresponds to \(c_{\text{NaCl}}^{\text{res}}=0.1\,\text{M}\) and \(c_{\text{Ha}_{2}}^{\text{res,0}}=0.1\,\text{M}\). As a comparison, the prediction by the ideal theory (Henderson-Hasselbalch equation) is also shown. (b): Plot of the absolute value of the mean charge per acid particle as a function of \(\text{pH}^{\text{res}}\). The predictions according to the ideal theory (Henderson-Hasselbalch equation), both with and without the Donnan effect, are shown as well. (c): Plot of the degree of ionization of the weak polybase as a function of \(\text{pH}^{\text{res}}\). The predictions according to the ideal theory (Henderson-Hasselbalch equation), both with and without the Donnan effect, are shown as well.

## Appendix D Welford Method for the Covariance

Miles et al. [36] showed how the Welford algorithm [37] for incrementally updating the variance can be adapted to the case where one wants to incrementally update the variance of only the more recent half of a sample. Because in our method we need not only the variance but also covariances, we show how this result generalizes to the covariance. Similar to the case of the variance, let us write the covariance in the form
\[\overline{\text{Cov}_{t}}[x,y]=C_{t}/L_{t} \tag{38}\]
with
\[C_{t}=\sum_{t^{\prime}=\lceil ct\rceil}^{t}(x_{t^{\prime}}-\overline{x}_{t})(y_{t^{\prime}}-\overline{y}_{t}), \tag{39}\]
where \(c\in(0,1)\) is the retained fraction of the samples, \(L_{t}=t-\lceil ct\rceil+1\) is the number of samples in the window, and \(\overline{x}_{t}\), \(\overline{y}_{t}\) denote the means over this window. As in the case of the variance, there are two cases to consider:
\[\text{Case A:}\quad\lceil c(t+1)\rceil=\lceil ct\rceil \tag{40}\]
\[\text{Case B:}\quad\lceil c(t+1)\rceil=\lceil ct\rceil+1 \tag{41}\]
A straightforward calculation (see below) shows that
\[C_{t+1}=\begin{cases}C_{t}+(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t+1})&\text{for Case A}\\ C_{t}+y_{t+1}\left(x_{t+1}-\overline{x}_{t+1}\right)+\overline{y}_{t}\left(x_{\lceil ct\rceil}-x_{t+1}\right)+y_{\lceil ct\rceil}\left(\overline{x}_{t+1}-x_{\lceil ct\rceil}\right)&\text{for Case B}\end{cases} \tag{42}\]
It is easy to see that this result reduces to the expression found for the variance by Miles et al. [36] in the special case \(x=y\). In the following, we also provide the full calculation of this result. For Case A, we have
\[\begin{split} C_{t+1}-C_{t}&=\sum_{t^{\prime}=\lceil c(t+1)\rceil}^{t+1}(x_{t^{\prime}}-\overline{x}_{t+1})(y_{t^{\prime}}-\overline{y}_{t+1})-\sum_{t^{\prime}=\lceil ct\rceil}^{t}(x_{t^{\prime}}-\overline{x}_{t})(y_{t^{\prime}}-\overline{y}_{t})\\ &=(x_{t+1}-\overline{x}_{t+1})(y_{t+1}-\overline{y}_{t+1})+\sum_{t^{\prime}=\lceil ct\rceil}^{t}\left\{(x_{t^{\prime}}-\overline{x}_{t+1})(y_{t^{\prime}}-\overline{y}_{t+1})-(x_{t^{\prime}}-\overline{x}_{t})(y_{t^{\prime}}-\overline{y}_{t})\right\}\\ &=(x_{t+1}-\overline{x}_{t+1})(y_{t+1}-\overline{y}_{t+1})+\sum_{t^{\prime}=\lceil ct\rceil}^{t}\left\{-\overline{x}_{t+1}y_{t^{\prime}}-x_{t^{\prime}}\overline{y}_{t+1}+\overline{x}_{t+1}\overline{y}_{t+1}+x_{t^{\prime}}\overline{y}_{t}+\overline{x}_{t}y_{t^{\prime}}-\overline{x}_{t}\overline{y}_{t}\right\}\\ &=(x_{t+1}-\overline{x}_{t+1})(y_{t+1}-\overline{y}_{t+1})+\left(\overline{x}_{t+1}\overline{y}_{t+1}-\overline{x}_{t}\overline{y}_{t}\right)L_{t}+\left(\overline{x}_{t}-\overline{x}_{t+1}\right)L_{t}\overline{y}_{t}+\left(\overline{y}_{t}-\overline{y}_{t+1}\right)L_{t}\overline{x}_{t}\\ &=(x_{t+1}-\overline{x}_{t+1})(y_{t+1}-\overline{y}_{t+1})+L_{t}\left(\overline{x}_{t+1}-\overline{x}_{t}\right)\left(\overline{y}_{t+1}-\overline{y}_{t}\right)\\ &=(x_{t+1}-\overline{x}_{t+1})(y_{t+1}-\overline{y}_{t+1})+(\overline{x}_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t+1})\\ &=(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t+1}),\end{split}\]
where we used \(\lceil c(t+1)\rceil=\lceil ct\rceil\), \(\sum_{t^{\prime}=\lceil ct\rceil}^{t}x_{t^{\prime}}=L_{t}\overline{x}_{t}\) (and analogously for \(y\)), and \(L_{t}\left(\overline{y}_{t+1}-\overline{y}_{t}\right)=y_{t+1}-\overline{y}_{t+1}\), which holds because the window grows by one sample in Case A. For Case B, we have
\[\begin{split} C_{t+1}-C_{t}&=\sum_{t^{\prime}=\lceil c(t+1)\rceil}^{t+1}(x_{t^{\prime}}-\overline{x}_{t+1})(y_{t^{\prime}}-\overline{y}_{t+1})-\sum_{t^{\prime}=\lceil ct\rceil}^{t}(x_{t^{\prime}}-\overline{x}_{t})(y_{t^{\prime}}-\overline{y}_{t})\\ &=(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t})-(x_{\lceil ct\rceil}-\overline{x}_{t})(y_{\lceil ct\rceil}-\overline{y}_{t})\\ &\qquad+\sum_{t^{\prime}=\lceil ct\rceil+1}^{t+1}\left\{(x_{t^{\prime}}-\overline{x}_{t+1})(y_{t^{\prime}}-\overline{y}_{t+1})-(x_{t^{\prime}}-\overline{x}_{t})(y_{t^{\prime}}-\overline{y}_{t})\right\}\\ &=(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t})-(x_{\lceil ct\rceil}-\overline{x}_{t})(y_{\lceil ct\rceil}-\overline{y}_{t})\\ &\qquad+\left(\overline{x}_{t+1}\overline{y}_{t+1}-\overline{x}_{t}\overline{y}_{t}\right)L_{t+1}+\left(\overline{x}_{t}-\overline{x}_{t+1}\right)L_{t+1}\overline{y}_{t+1}+\left(\overline{y}_{t}-\overline{y}_{t+1}\right)L_{t+1}\overline{x}_{t+1}\\ &=(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t})-(x_{\lceil ct\rceil}-\overline{x}_{t})(y_{\lceil ct\rceil}-\overline{y}_{t})-L_{t+1}\left(\overline{x}_{t+1}-\overline{x}_{t}\right)\left(\overline{y}_{t+1}-\overline{y}_{t}\right)\\ &=(x_{t+1}-\overline{x}_{t})(y_{t+1}-\overline{y}_{t})-(x_{\lceil ct\rceil}-\overline{x}_{t})(y_{\lceil ct\rceil}-\overline{y}_{t})-\left(x_{t+1}-x_{\lceil ct\rceil}\right)\left(\overline{y}_{t+1}-\overline{y}_{t}\right),\end{split}\]
where we used \(\lceil c(t+1)\rceil=\lceil ct\rceil+1\), \(\sum_{t^{\prime}=\lceil ct\rceil+1}^{t+1}x_{t^{\prime}}=L_{t+1}\overline{x}_{t+1}\) (and analogously for \(y\)), and \(L_{t+1}\left(\overline{x}_{t+1}-\overline{x}_{t}\right)=x_{t+1}-x_{\lceil ct\rceil}\), which holds because the window length does not change in Case B. Expanding the products and using the analogous relation \(L_{t+1}\left(\overline{y}_{t+1}-\overline{y}_{t}\right)=y_{t+1}-y_{\lceil ct\rceil}\), this last expression can be rearranged into the Case B update quoted in Equation (42).
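The update rules of Equation (42) translate directly into an incremental algorithm. The following minimal sketch (class and variable names are ours, purely for illustration) implements both cases and cross-checks them against a brute-force evaluation of Equation (39) over the current window:

```python
import math
import random
from collections import deque

# Incremental covariance over the most recent fraction c of a time series,
# using the Case A / Case B updates of Eq. (42); c = 0.5 keeps the more
# recent half of the samples, as in Miles et al.
class WindowedCovariance:
    def __init__(self, c=0.5):
        self.c, self.t = c, 0
        self.start = 1            # current window start, ceil(c * t)
        self.buf = deque()        # samples (x_t', y_t') inside the window
        self.mx = self.my = self.C = 0.0

    def update(self, x, y):
        self.t += 1
        new_start = max(1, math.ceil(self.c * self.t))
        L_old = len(self.buf)
        self.buf.append((x, y))
        if new_start == self.start:                # Case A: window grows by one
            L = L_old + 1
            mx_new = self.mx + (x - self.mx) / L
            my_new = self.my + (y - self.my) / L
            self.C += (x - self.mx) * (y - my_new)
        else:                                      # Case B: oldest sample drops out
            xo, yo = self.buf.popleft()
            L = L_old                              # window length unchanged
            mx_new = self.mx + (x - xo) / L
            my_new = self.my + (y - yo) / L
            self.C += y * (x - mx_new) + self.my * (xo - x) + yo * (mx_new - xo)
        self.start, self.mx, self.my = new_start, mx_new, my_new
        return self.C / len(self.buf)              # covariance over the current window

# brute-force cross-check of Eq. (39) on random data
w = WindowedCovariance(c=0.5)
for t in range(1, 200):
    cov = w.update(random.gauss(0, 1), random.gauss(0, 1))
    xs, ys = zip(*w.buf)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    ref = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    assert abs(cov - ref) < 1e-9
```

Note that, unlike the plain Welford update, the windowed variant must retain the samples currently inside the window, since Case B needs the values \(x_{\lceil ct\rceil}\) and \(y_{\lceil ct\rceil}\) of the sample that drops out.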
2308.09225
Insights and guidelines on the Cauchy horizon theorems
Recently there has been progress in resolving the issue regarding the non-existence of the Cauchy horizon inside static, charged, and spherically symmetric black holes. However, when we extend black hole spacetimes generically, they are not just static but can be dynamical; thus, once we take the dynamical evolution of black holes into account, their interior does not remain the same as in the static case. Our aim in this paper is to provide a few constructive insights and guidelines regarding this issue by revisiting a few examples of the gravitational collapse of spherically symmetric charged black holes using the double-null formalism. Our numerical results demonstrate that the inside of the outer horizon is no longer static even at late times, and the inner apparent horizon exists but is not regular. The inner apparent horizon can be clearly distinguished from the Cauchy horizon. The spherical symmetry of black holes allows the inner horizon to be defined in two directions, i.e., the derivative of the areal radius vanishes along either the out-going or the in-going null direction. Moreover, the Cauchy horizon can be generated from a singularity. Still, the notion of the singularity can be subtle: it can have a vanishing or non-vanishing areal radius, and the corresponding curvature quantities can be finite or divergent, although the curvatures can be greater than the Planck scale. Finally, we show some examples in which the "hair" associated with the matter field on the inner horizon is not important for determining the existence of the Cauchy horizon; rather, the hair on the outer horizon might play the important role. Therefore, the dynamical properties of the interior of charged black holes could shed light on a deeper understanding of the Cauchy horizon for extensions of the no-Cauchy-horizon theorems.
Xiao Yan Chew, Dong-han Yeom
2023-08-18T01:00:10Z
http://arxiv.org/abs/2308.09225v1
# Insights and guidelines on the Cauchy horizon theorems

###### Abstract

Recently there has been progress in resolving the issue regarding the non-existence of the Cauchy horizon inside static, charged, and spherically symmetric black holes. However, when we extend black hole spacetimes generically, they are not just static but can be dynamical; thus, once we take the dynamical evolution of black holes into account, their interior does not remain the same as in the static case. Hence, the properties of the Cauchy horizon could behave differently in the dynamical case. Our aim in this paper is to provide a few constructive insights and guidelines regarding this issue by revisiting a few examples of the gravitational collapse of spherically symmetric charged black holes using the double-null formalism. Our numerical results demonstrate that the inside of the outer horizon is no longer static even at late times, and the inner apparent horizon exists but is not regular. The inner apparent horizon can be clearly distinguished from the Cauchy horizon. The spherical symmetry of black holes allows the inner horizon to be defined in two directions, i.e., the derivative of the areal radius vanishes along either the out-going or the in-going null direction. Moreover, the Cauchy horizon can be generated from a singularity. Still, the notion of the singularity can be subtle: it can have a vanishing or non-vanishing areal radius, and the corresponding curvature quantities can be finite or divergent, although the curvatures can be greater than the Planck scale. Finally, we show some examples in which the "hair" associated with the matter field on the inner horizon is not important for determining the existence of the Cauchy horizon; rather, the hair on the outer horizon might play the important role. Therefore, the dynamical properties of the interior of charged black holes could shed light on a deeper understanding of the Cauchy horizon for extensions of the no-Cauchy-horizon theorems.

###### Contents

* I Introduction
* II Preliminaries of inner horizons
* II.1 Static charged black holes
* II.2 Instability of inner horizons
* II.3 No-Cauchy-horizon theorem
* III Lessons from numerical investigations
* III.1 Brief summary of the double-null formalism
* III.2 Subtleties to define inner horizon and singularity
* III.3 Formation of the inner horizon in Einstein-Maxwell theory
* III.4 Do hairs remove the Cauchy horizon?: A case of the Brans-Dicke theory
* IV Conclusion
* IV.1 Acknowledgment
* Appendix A Charged black holes in Einstein gravity
* Appendix B Charged black holes in Brans-Dicke gravity
* Appendix C Boundary conditions

## I Introduction

The investigation of the interior of a black hole is a very interesting and important issue in general relativity. According to the singularity theorem [1], under very reasonable and natural assumptions, a singularity must form as a result of gravitational collapse. One may then wonder about the properties of such singularities. A typical example is the Reissner-Nordstrom black hole [2; 3], which contains a time-like singularity and two horizons: the outer horizon and the inner horizon. The outer horizon corresponds to the usual black hole horizon, while the inner horizon corresponds to the Cauchy horizon, the boundary up to which predictability in general relativity is preserved, i.e., up to which a system can be evolved in time from initial data imposed in the past.
When a time-like singularity exists and an observer can see an event emanating from it, the so-called _cosmic censorship conjecture_ is violated [4]. Will the cosmic censorship conjecture hold in generic situations? If the electric charge of the Reissner-Nordstrom black hole were greater than its mass, its time-like singularity would become a naked singularity. However, it is not easy to construct such an overcharged black hole through gravitational collapse [5; 6]. This justifies the weak version of the cosmic censorship conjecture. If one imposes a stronger version of the conjecture, namely strong cosmic censorship, one may insist that there must be no observer who can see the effects from the time-like singularity; in other words, no observer can cross the Cauchy horizon, inside of which the evolution can no longer be predicted from the imposed initial conditions. The violation of strong cosmic censorship would also imply the breakdown of predictability in general relativity. Will the strong cosmic censorship conjecture hold, too? Although we do not have a definite answer yet, we have some evidence supporting that the strong version of the cosmic censorship conjecture is still reasonable. In general, there should exist an inner horizon before an observer can reach the time-like singularity. Here, the inner horizon is generically unstable, because the inner horizon is associated with an infinite blue-shift instability [7], while the outer horizon is associated with an infinite red-shift. Owing to these instabilities, if there is a matter fluctuation and an observer measures the corresponding fluctuation near the inner horizon, the observed energy density is exponentially amplified. This effect is known as _mass inflation_ [8]. As long as mass inflation exists, we cannot trust the interior structure of the Reissner-Nordstrom spacetime anymore, and we have to rely on fully dynamical simulations [9]. In this context, there have recently been interesting papers in the literature that discuss the existence or non-existence of the Cauchy horizons [10; 11; 12; 13; 14; 15]. In these works, some authors proved the non-existence of the Cauchy horizons by considering matter fields which support the black holes, i.e., the existence of _hairs_, which refers to extra global charges (primary hair) associated with the matter fields or to the matter fields themselves (secondary hair). However, in order to prove a theorem on the non-existence of the Cauchy horizons, one needs to assume some conditions to define the inner horizons or Cauchy horizons. In order for the corresponding mathematical theorems to be general, the underlying assumptions must be true, but one may wonder whether _these assumptions will still be true in fully dynamical situations._ The aim of this paper is not to criticize the literature but to provide useful insights into some interesting properties of the Cauchy horizons of black holes under dynamical evolution. Indeed, the Cauchy horizon and the inner apparent horizon are distinguishable in dynamic cases, and the situation becomes more complicated if we take quantum effects into account. We will report the detailed results that we can learn from numerical computations. We also hope that our work can shed light on the development of more advanced theorems about the Cauchy horizons of black holes in the future. This paper is organized as follows. In Sec.
II, we discuss some preliminary topics about the interior structure of the Reissner-Nordstrom black hole, the instability of inner horizons, and the recent progress in the literature that reformulates the no-Cauchy-horizon theorems. In Sec. III, we demonstrate the dynamical properties of the inner horizon by revisiting several models of charged black holes under gravitational collapse using the double-null formalism, which can follow the full dynamical process of black holes from their formation to their evaporation. Finally, in Sec. IV, we provide a few implications and guidelines related to the Cauchy horizon based on our numerical results and comment on possible future research directions.

## II Preliminaries of inner horizons

In this section, we maximally extend the Reissner-Nordstrom black hole, which is a typical example of a static and charged black hole, to understand its basic interior structure, particularly the Cauchy horizon inside the outer horizon. We then discuss the instability of the inner horizon. Moreover, we discuss the recent progress that reformulates the non-existence of the Cauchy horizon by providing several references.

### Static charged black holes

Here we begin with the Reissner-Nordstrom black hole, which is a static, stationary, and charged black hole, with the metric
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega_{2}^{2}, \tag{1}\]
where
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}, \tag{2}\]
and where \(M\) is the ADM mass and \(Q\) is the asymptotically defined electric charge [2; 3]. Since \(f(r)=0\) can yield two solutions, one can notice that, if \(M>Q\), this metric has two apparent horizons, denoted \(r_{+}\) and \(r_{-}\) with \(r_{+}>r_{-}\), where \(r_{+}\) is the outer horizon and \(r_{-}\) is the inner horizon. Let us define \(u\) as the in-going null direction and \(v\) as the out-going null direction.

Figure 1: The Penrose diagram of a static charged black hole. There exists a time-like singularity (\(r=0\)). The inner and outer apparent horizons correspond to the Cauchy and event horizons. \(r_{,v}=0\) horizons (red color) are always parallel to the out-going null direction, while \(r_{,u}=0\) horizons (blue color) are always parallel to the in-going null direction.

After we maximally extend the spacetime in Kruskal-Szekeres coordinates [16], i.e.,
\[ds^{2}=-\frac{4r_{+}^{4}\left(r-r_{-}\right)^{1+r_{-}^{2}/r_{+}^{2}}}{r^{2}\left(r_{+}-r_{-}\right)^{2}}e^{\frac{\left(r_{-}-r_{+}\right)r}{r_{+}^{2}}}dudv+r^{2}d\Omega^{2} \tag{3}\]
with \(r=r(u,v)\), one can notice that there are two directions of the horizons, where \(r_{,v}=0\) horizons (red color) are always parallel to the out-going null direction, while \(r_{,u}=0\) horizons (blue color) are always parallel to the in-going null direction, as depicted in Fig. 1. In addition, there exists a time-like singularity at \(r=0\); the inner horizon is the boundary beyond which we cannot determine the geometry from past data, and hence, this horizon is known as the Cauchy horizon [17]. The outer horizon is the boundary between the interior and the asymptotic infinity, and hence, this horizon is known as the event horizon. So, in the static and charged black hole case, the inner apparent horizon is the Cauchy horizon and the outer apparent horizon is the event horizon.
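For concreteness, the standard algebra of the two roots of \(f(r)=0\) and their surface gravities reads (the numerical values \(M=1\), \(Q=0.8\) are chosen purely for illustration):
\[f(r)=\frac{(r-r_{+})(r-r_{-})}{r^{2}},\qquad r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}},\qquad\kappa_{\pm}=\frac{1}{2}\left|f^{\prime}(r_{\pm})\right|=\frac{r_{+}-r_{-}}{2r_{\pm}^{2}}.\]
For \(M=1\) and \(Q=0.8\), one finds \(r_{+}=1.6\), \(r_{-}=0.4\), \(\kappa_{+}=1.2/(2\times 1.6^{2})\approx 0.23/M\) and \(\kappa_{-}=1.2/(2\times 0.4^{2})=3.75/M\); the surface gravity of the inner horizon is more than an order of magnitude larger than that of the outer horizon, which foreshadows the blue-shift instability discussed in the next subsection.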
Here, we list several pieces of _common sense_ about charged black holes:

* A \(r_{,v}=0\) apparent horizon is parallel to the out-going null direction; a \(r_{,u}=0\) apparent horizon is parallel to the in-going null direction.
* The \(r=0\) singularity is time-like.
* The inner apparent horizon is the Cauchy horizon; the outer apparent horizon is the event horizon.

However, this common sense is valid only if the metric is static. In dynamic cases, none of the three assertions holds in general. On the other hand, if one applies such common sense to a proof about the non-existence of inner horizons, the corresponding results might not be rigorous, or might even be wrong, because in a realistic situation the metric should be dynamic.

### Instability of inner horizons

One of the most important issues about the inner horizon is its instability. When there exists an energy flux along the in-going null direction, we can consider an observer who moves along the out-going null direction and measures the energy flux. Near the inner horizon, the observed energy flux is approximately
\[\rho\propto e^{+\kappa v}, \tag{4}\]
where \(\kappa\) is the surface gravity of the inner horizon [8]. Therefore, the observed energy must be exponentially amplified near the inner horizon. This effect is known as mass inflation. Theoretically, we can have a static, stationary, and charged black hole solution that is independent of time. However, in reality, there is no such eternal object in the universe; every black hole must be generated from a gravitational collapse. Therefore, energy fluctuations necessarily exist inside a charged black hole. The consequence of mass inflation is that, in realistic black holes generated from gravitational collapses, there must be a strong back-reaction due to the exponentially amplified energy flux [9]. Hence, the interior structure of a charged black hole can hardly be studied precisely by perturbative analysis on the background of the static solution. The only way to understand the interior structure precisely is to numerically evolve the corresponding equations of motion. In this paper, we apply the double-null formalism to study the formation of an inner horizon in the gravitational collapse of charged black holes.
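To make the strength of this instability concrete, we can insert the illustrative numbers from the worked example above (\(M=1\), \(Q=0.8\), hence \(\kappa=\kappa_{-}=3.75/M\) for the static background, which is only a rough estimate since the dynamical surface gravity differs) into Eq. (4):
\[\frac{\rho(v+\Delta v)}{\rho(v)}\simeq e^{\kappa_{-}\Delta v}=e^{75}\approx 10^{33}\qquad\text{for}\qquad\Delta v=20M.\]
Even a tiny in-going flux is therefore amplified beyond any classical scale within a few tens of \(M\) in advanced time, which illustrates why perturbative treatments of the static interior cannot be trusted.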
### No-Cauchy-horizon theorem

In this context, there have been some mathematical investigations and proofs regarding the inner horizon of black holes. In the literature, these investigations mainly aim to formulate the _no-Cauchy-horizon theorems_. For example, we specify several mathematical assertions from the literature as follows:

* Ref. [10] has shown numerically that the Cauchy horizon does not exist in the interior of scalar-charged hairy black holes with spherical and planar symmetries in the Born-Infeld theory.
* Ref. [11] has shown numerically that the Cauchy horizon does not exist in the interior of scalar-charged hairy black holes with spherical and planar symmetries in the Einstein-Maxwell-scalar theory of a complex scalar field with an arbitrary form of the scalar potential.
* Ref. [12] has studied the non-existence of the Cauchy horizon analytically for some specific types of black holes in the Einstein-Maxwell-Horndeski theories.
* Ref. [13] has studied numerically the non-existence of the Cauchy horizon for scalar-charged hairy black holes in five-dimensional Einstein-Gauss-Bonnet theory with a massive complex scalar field.
* Ref. [14] has studied the possibility of constraining the number of horizons of static black holes, which can possess at most one non-degenerate inner Killing horizon inside the event horizon when they satisfy the strong or null energy condition.
* Ref. [15] has studied numerically the non-existence of the Cauchy horizon for charged black holes with planar configuration in the Einstein-Maxwell-Gauss-Bonnet-scalar theory.

Here we list some common features of these works:

* Some authors assumed that the black hole geometry follows a _regular and static solution_. If a gravitational collapse occurs, then after a sufficiently long time the black hole solution will approach a static solution, but this is from the perspective of an _outside_ observer. One can ask whether this assumption is still true for an inside observer.
* Some authors assumed hairs in order to prove the theorems for some specific cases, i.e., for the dynamics of matter fields near the inner horizon. If any inconsistency arises from the corresponding dynamics, they can prove the non-existence of the Cauchy horizon. However, the no-hair theorem has not been established near the inner apparent horizon, and thus there is no systematic way to evade the no-hair theorem inside the outer horizon; hence, it would be risky to assume the existence of such a conserved quantity associated with the symmetry of the theory for the inner horizon.

In this paper, we will show some examples that satisfy the following conditions. (1) The inner horizon is formed and approaches a (weak) curvature singularity. Hence, the interior geometry is no longer static or regular. (2) Near the inner horizon, there can be non-trivial field dynamics. This is not surprising because the interior geometry is no longer static. In the next section, we will discuss more issues that go beyond this common sense.

## III Lessons from numerical investigations

In this section, we revisit several models of charged black holes under gravitational collapse using double-null simulations, demonstrating the properties of the inner horizon during the collapse. For the sake of the reader, we first briefly introduce the double-null formalism. Then we briefly define the properties of the inner horizon and the singularity in double-null coordinates, since they can behave differently during a gravitational collapse. Lastly, we present and discuss the properties of the inner horizon in the gravitational collapse.

### Brief summary of the double-null formalism

Let us consider the most generic double-null coordinates with spherical symmetry:
\[ds^{2}=-\alpha^{2}(u,v)dudv+r^{2}(u,v)d\Omega_{2}^{2}. \tag{5}\]
In _the double-null formalism_, one can obtain the equations of motion and solve them numerically based on this metric with appropriate boundary conditions (see Appendix C) [18]. So far, plenty of numerical simulations have been carried out to investigate the dynamics of the formation of inner horizons during the gravitational collapse of charged black holes using the double-null formalism [19; 20; 21; 22; 23; 24; 25; 26; 28]. In this section, we summarize some important results relevant to the issue of the inner horizon by revisiting some of our previous works.
* In Refs. [19; 20], the authors investigated the properties of the Cauchy horizon in the formation of a charged black hole during a gravitational collapse caused by an input pulse of a complex scalar field in the Einstein-Maxwell theory. The numerical results show that the inner horizon is formed after the gravitational collapse. If we turn off Hawking radiation, the inner horizon becomes a Cauchy horizon.
* The authors extended the above investigation to study the semi-classical description of the emission of Hawking radiation from a charged black hole [21; 22]. The inclusion of Hawking radiation allows the inner horizon and the Cauchy horizon to be distinguished.
* The authors generalized the above two investigations from Einstein gravity to the Brans-Dicke theory, which can cover a diverse range of theories, including string-inspired models [23; 24; 25; 26], \(f(R)\) gravity [27; 28], etc. These results show that the inner and outer horizons can vary dynamically due to the Brans-Dicke field. Also, we can see the existence or non-existence of inner horizons in relation to couplings between the Maxwell field and the Brans-Dicke field.

Before we proceed to present the above results, we need to clearly define the inner horizon and the singularity in the next subsection, since their appearance is considerably more intricate in the double-null simulations.

### Subtleties to define inner horizon and singularity

The Reissner-Nordstrom black hole in double-null coordinates contains two kinds of inner horizons: \(r_{,v}=0\) and \(r_{,u}=0\), where the former is parallel to a constant-\(u\) line and the latter is parallel to a constant-\(v\) line. The two horizons appear with the same areal radius, and they coincide with the causally defined Cauchy horizon. However, we need to distinguish three kinds of horizons in the dynamical cases:

* H1. \(r_{,v}=0\) inner apparent horizon.
* H2. \(r_{,u}=0\) inner apparent horizon.
* H3. Cauchy horizon (the boundary of the region that can be causally determined from past data).

In order to define the third type of horizon (H3) precisely, one needs to take into account the subtleties in defining the singularity and its properties. Thus, let us fix several basic notions of singularities as follows.

* Type 1: it lies at \(r=0\) and the corresponding curvature quantities diverge; this kind of singularity is in the classical sense.
* Type 2: it lies at \(r>0\) and the corresponding curvature quantities diverge; this kind of singularity is also in the classical sense and is the so-called weak singularity [29].
* Type 3: the radius of the singularity is non-vanishing but at the Planck scale, \(r\simeq\ell_{\rm P}\), and the corresponding curvature quantities are also at the Planck scale.
* Type 4: the radius of the singularity is greater than the Planck scale, \(r>\ell_{\rm P}\), but the curvature quantities are at the Planck scale. This kind of singularity is known as the quantum version of the weak singularity.

Therefore, we can classify Type 1 and 2 as classical singularities, while Type 3 and 4 are defined when there exists the Planck constant \(\hbar\), i.e., in the quantum regime. Type 1, 2, and 3 singularities will dynamically generate a Cauchy horizon because one cannot extend computations inside the singularity; however, the property of Type 4 is less known, because one may extend general relativistic computations inside the singularity.
The only problem is that such an extension is inconsistent in terms of quantum gravity except for some extreme cases (e.g., introducing the large \(N\)-rescaling [30; 31]).

### Formation of the inner horizon in Einstein-Maxwell theory

In this subsection, we discuss the formation and evaporation process of charged black holes based on Ref. [22]. We begin with the typical model where a complex scalar field is coupled to a \(U(1)\) gauge field:
\[S=\int\sqrt{-g}dx^{4}\left[\frac{\mathcal{R}}{16\pi}-(\phi_{,\mu}+ieA_{\mu}\phi)\left(\bar{\phi}^{,\mu}-ieA^{\mu}\bar{\phi}\right)-\frac{1}{8\pi}F_{\mu\nu}F^{\mu\nu}\right], \tag{6}\]
where \(\mathcal{R}\) is the Ricci scalar, \(\phi\) is a complex scalar field, \(A_{\mu}\) is the \(U(1)\) gauge field, \(e\) is the gauge coupling, and \(F_{\mu\nu}=A_{\nu;\mu}-A_{\mu;\nu}\). In the double-null formalism, one can observe a charged black hole that is formed in this model due to an input pulse \(\phi(u,v)\) at \(u=0\), which causes a gravitational collapse in the spacetime,
\[\phi(0,v)=\frac{A}{\sqrt{4\pi}}\sin^{2}\left(\pi\frac{v}{v_{\rm f}}\right)\exp\left(2\pi i\frac{v}{v_{\rm f}}\right), \tag{7}\]
where the pulse is defined in \(0\leq v\leq v_{\rm f}\), \(A=0.25\), and \(e=0.1\). In order to turn on the Hawking radiation, we introduce the renormalized energy-momentum tensor \(\langle\hat{T}_{\mu\nu}\rangle\) in an approximate form [32], where the tensor is proportional to \(P\propto N\ell_{P}^{2}\) (the quantity \(P\) is precisely defined in Appendix A) and \(N\) is the number of scalar fields that contribute to the Hawking radiation; this is the natural cutoff if \(N\) fields contribute to Hawking radiation [33]. If \(P=0\), we turn off Hawking radiation; we choose \(P=0.1\) for the evaporating case.
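For reference, a minimal sketch of the pulse profile of Eq. (7) follows; the pulse width \(v_{\rm f}\) is an illustrative choice, since the text fixes only \(A\) and \(e\).

```python
import numpy as np

# Sketch of the in-going pulse of Eq. (7) placed on the u = 0 surface.
A, v_f = 0.25, 20.0                       # v_f is illustrative
v = np.linspace(0.0, v_f, 2001)
phi_0v = (A / np.sqrt(4.0 * np.pi)
          * np.sin(np.pi * v / v_f)**2       # vanishes smoothly at v = 0 and v = v_f
          * np.exp(2.0j * np.pi * v / v_f))  # complex phase winds once over the pulse
```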
Solving the equations of motion derived from Eq. (6) numerically using the double-null formalism (see Appendices A and C for more details), our numerical results demonstrate that a charged black hole is formed and evaporates during the process of gravitational collapse.

Figure 2: Left: Dynamical formation of a charged black hole, where \(P=0\) [22]. There exists a Type 1 singularity \(r=0\) (thick black curve) and an outer apparent horizon \(r_{,v}=0\) (red curve). As \(v\) goes to infinity, there appears an H1 horizon (\(r_{,v}=0\)) which is parallel to the in-going null direction; this will approach a Type 2 curvature singularity (\(r>0\)). Here, the grid spacing is 1 for black contours and 0.1 for green contours. Right: The Penrose diagram for the dynamical formation of a charged black hole.

Figure 3: Left: The evaporating charged black hole, where \(P=0.1\) [22]. There exists a Type 3 singularity with \(r\simeq\ell_{P}\) and an outer apparent horizon at \(r_{,v}=0\). At the inner apparent horizon, an H1 horizon (\(r_{,v}=0\)) and an H2 horizon (\(r_{,u}=0\)) are overlapped. Between the Type 3 singularity and the H1/H2 horizon, there appears an H3 Cauchy horizon; we cannot decide beyond this null surface. Beyond the H1/H2 inner apparent horizon, the curvature rapidly increases, and hence, the gray-colored region is the super-Planckian region which goes beyond the scope of the semi-classical description. Therefore, the gray-colored region corresponds to a Type 4 singularity (\(r>\ell_{P}\)). Right: The Penrose diagram for an evaporating charged black hole.

Regarding this, we address several important remarks from the numerical results in [22]:

* 1. There exist two kinds of singularities, Type 1 and Type 2. On the left of Fig. 2, one can see a Type 1 singularity at \(r=0\) (thick black curve) and an outer apparent horizon \(r_{,v}=0\) (red curve). In the late-time limit, i.e., as \(v\) goes to infinity, there appears the H1 horizon (\(r_{,v}=0\)), which is parallel to the in-going null direction. This will eventually approach a Type 2 curvature singularity (\(r>0\)) due to the mass inflation effect. Therefore, the Type 2 singularity coincides with the horizon H1, although it is parallel to constant-\(v\) lines. This is somewhat counter-intuitive with respect to the _common sense_ mentioned in Sec. II. However, it helps to preserve the strong cosmic censorship conjecture, because no observer can penetrate the Cauchy horizon due to the weak singularity.
* 2. If we turn on the Hawking radiation as well as the in-coming negative energy flux, the situation becomes more complicated. On the left of Fig. 3, there exists a Type 3 singularity \(r\simeq\ell_{P}\) and an outer apparent horizon \(r_{,v}=0\). At the inner apparent horizon, the H1 horizon (\(r_{,v}=0\)) and the H2 horizon (\(r_{,u}=0\)) are overlapped. Hence, as an observer penetrates inside the inner apparent horizon, the observer will experience that the areal radius increases, which is analogous to a wormhole structure (see more details in [5]). Due to the existence of the Type 3 singularity, there appears the H3 Cauchy horizon, which bounds the region in the causal future of the singularity. Beyond the H1/H2 inner apparent horizon, the curvature rapidly increases, approximately \(m\sim e^{+M^{2}}\), where \(m\) is the Misner-Sharp mass [22]; hence, the gray-colored region represents the super-Planckian region which goes beyond the scope of the semi-classical description. Therefore, the gray-colored region (inside the H1 and H2 inner horizons) corresponds to a Type 4 singularity (\(r>\ell_{P}\)).

To summarize, both a Type 3 singularity and a Type 4 singularity appear. The Type 3 singularity generates the Cauchy horizon, while it is hidden by the Type 4 singularity. On the other hand, if one can trust the computation beyond the Type 4 singularity, e.g., assuming a large number of matter fields (\(\sim e^{M^{2}}\), see [30; 31]), the strong cosmic censorship might be violated even with the semi-classical effects [34]. The Type 4 singularity coincides very well with an inner apparent horizon, although there is an ambiguity in defining the exact location of the singularity because it depends on the choice of the cutoff. Therefore, if we define the apparent horizon only through H1, we may overlook important and interesting behaviors related to the inner horizon and the Cauchy horizon. We can conclude that in dynamical situations, the H1, H2, and H3 horizons are distinguishable; also, Type 1, Type 2, Type 3, and Type 4 singularities all appear in different situations. As we have mentioned, Fig. 2 confirms that the interior of a black hole does not remain the interior of the static solution. The inner horizon becomes the \(r_{,v}=0\) horizon and is parallel to the in-going null direction, which is inconsistent with the static solution. Moreover, in the late-time (\(v\rightarrow\infty\)) limit, the inner horizon becomes a curvature singularity, and hence, the regularity condition is also not satisfied.
### Do hairs remove the Cauchy horizon?: A case of the Brans-Dicke theory

Some works in the literature rely heavily on the assumption that there exist hairs associated with the matter fields in order to establish theorems about the non-existence of Cauchy horizons. In this subsection, we focus on cases in which some hairs may help to remove the Cauchy horizon, but, surprisingly, there _exists_ a counter-example in which a Cauchy horizon coexists with a scalar hair. Here, the crucial factor behind the counter-example is the existence of scalar hair at the outer horizon, even though non-trivial scalar hair can also exist on the inner horizon. In order to investigate this issue, we consider the following model with a non-minimally coupled scalar field [23; 24; 25; 26]:
\[S=\int\sqrt{-g}dx^{4}\left[\frac{1}{16\pi}\left(\Phi\mathcal{R}-\frac{\omega}{\Phi}\Phi_{;\mu}\Phi^{;\mu}-V(\Phi)\right)+\Phi^{\beta}\left(-\frac{1}{2}\left(\phi_{;\mu}+ieA_{\mu}\phi\right)\left(\bar{\phi}^{;\mu}-ieA^{\mu}\bar{\phi}\right)-\frac{1}{16\pi}F_{\mu\nu}F^{\mu\nu}\right)\right], \tag{8}\]
where \(\Phi\) is the Brans-Dicke field, \(\omega\) is the Brans-Dicke parameter, \(V(\Phi)\) is the potential of the Brans-Dicke field, and \(\beta\) is a constant. Similarly, to observe the formation of the inner horizon of a charged black hole during the gravitational collapse using the double-null formalism, we send an input pulse of a charged scalar field (\(0\leq v\leq v_{\rm f}\)) at \(u=0\), given by
\[\phi(0,v)=\frac{A}{\sqrt{4\pi}}\sin^{4}\left(\pi\frac{v}{v_{\rm f}}\right)\exp\left(2\pi i\frac{v}{v_{\rm f}}\right). \tag{9}\]
For simplicity, we fix \(e=0.3\) and \(\beta=0\) or \(\beta=1\). In this model, the Brans-Dicke field can be non-minimally coupled to the Maxwell field. Solving the equations of motion derived from Eq. (8) numerically in the double-null formalism (see Appendices B and C for more details), we demonstrate our numerical results from [23; 24; 25; 26] in the following.

* 1. In Fig. 4, we considered \(\beta=0\) and \(\omega=1000\). The large value of \(\omega\) implies that the dynamics of the Brans-Dicke field is effectively switched off, and this result is consistent with Einstein gravity. Interestingly, the dynamics of the real and imaginary parts of the complex scalar field do not disappear in the large-\(v\) limit. We find that the derivative of the complex scalar field with respect to \(u\) might vanish, but not its derivative with respect to \(v\). Therefore, the dynamics of a non-trivial complex scalar field can exist on the inner horizon. This directly shows that the scalar field has non-trivial dynamics in the Cauchy horizon limit. If we turn on the Hawking radiation, one can still see the dynamics of the scalar fields that penetrate beyond the inner apparent horizon: see Fig. 15 of [22].
* 2. We considered \(\beta=0\) on the left of Fig. 5. The minimal coupling between the Maxwell field and the Brans-Dicke field in the Einstein frame does not form Brans-Dicke hair on the outer horizon.

Figure 4: The real part (left) and the imaginary part (right) of the complex scalar field when \(\omega=1000\) and \(\beta=0\). Hence, the dynamics of the Brans-Dicke field is negligible and this corresponds to Einstein gravity [25; 26]. Here, we used \(A=0.25/\sqrt{2}\). The thin white contours are the contours of the fields and the red curves are apparent horizons (\(r_{,v}=0\)). The blue-colored regions are where the contours are space-like; the red-colored regions are where the contours are time-like.
Figure 5: The Brans-Dicke field \(\Phi\) and horizons of charged black holes with \(\Phi\) when \(A=0.15\) and \(\omega=-1.4\) [23; 24]. Left: \(\beta=0\), and hence there is no Brans-Dicke hair at the outer horizon. Inside the outer apparent horizon, \(r_{,v}=0\) (red curve) and \(r_{,u}=0\) (light blue curve) horizons appear in a highly intricate manner because of the Jordan frame. In the \(v\rightarrow\infty\) limit, there exists a Type 2 singularity, but there exists a gradient of the Brans-Dicke field along the \(u\) direction. Right: \(\beta=1\), and there exists Brans-Dicke hair along the outer apparent horizon. The corresponding hair causes neither an inner apparent horizon nor a Type 2 singularity to exist inside the outer horizon.

Inside the outer apparent horizon, the appearance of the \(r_{,v}=0\) (red curve) and \(r_{,u}=0\) (light blue curve) horizons is highly intricate because we perform the calculations in the Jordan frame. Hence, both H1 and H2 can appear in various places in the Jordan frame. In the \(v\rightarrow\infty\) limit, there exists a Type 2 singularity, but there exists a gradient of the Brans-Dicke field along the \(u\) direction. Therefore, one can interpret that there exists a Cauchy horizon in the \(v\rightarrow\infty\) limit and, at the same time, non-trivial scalar field dynamics along the Cauchy horizon.

* 3. On the right of Fig. 5, we considered the \(\beta=1\) case. The non-minimal coupling between the Brans-Dicke field and the Maxwell field gives rise to the existence of Brans-Dicke hair along the outer apparent horizon. One interesting observation is that there is no inner apparent horizon or Type 2 singularity inside the outer horizon. This implies that _if there exists a hair at the outer horizon, the inner Cauchy horizon structure disappears_, which is very different from pure Einstein gravity. Of course, at this moment, this is just a phenomenological observation, and we need further mathematical investigations.

Therefore, the scalar hair at the outer apparent horizon plays a more important role than the gradient of the (scalar or vector) field along the inner apparent horizon in determining the existence of the Cauchy horizon.

## IV Conclusion

In this paper, we investigated the properties of inner horizons. First, we distinguished two notions of horizons: one is the quasi-local notion (e.g., apparent horizons) and the other is the global notion (the Cauchy horizon). For the quasi-local notion, we can distinguish two horizons, i.e., \(r_{,v}=0\) or \(r_{,u}=0\). For the global notion, we can distinguish the horizons by whether they were generated from a classical singularity or from a quantum (Planckian) singularity. In fully dynamical cases, one can construct several models in which all of these notions can be distinguished. Our results provide a few important implications for the cosmic censorship conjecture. How can we define the singularity inside a black hole, for example, in the classical or quantum sense? What are the conditions needed to rescue the cosmic censorship conjecture, for example, in the classical or quantum sense? If there is no Cauchy horizon, is this related to the quasi-local or the global notion of the horizons? To preserve the cosmic censorship conjecture, should the horizon disappear entirely, or is the existence of the inner horizon itself still acceptable?
These are the questions that we need to ask before we proceed with further proofs of the non-existence of the Cauchy horizon. We leave these fascinating topics for future investigation and hope that our paper can shed light on future advances on this issue. In addition, we can provide some guidelines for the Cauchy horizon theorems. Our simulations demonstrate that when we consider black holes in dynamical settings, their interior structure is no longer described by a static and regular metric. Also, the scalar hair on the outer horizon is more important than the scalar hair on the inner horizon in determining the existence of the Cauchy horizon. We need to investigate these issues using not only numerical but also analytic approaches. Whether this behavior is generically true in most cases is an interesting question that we leave for future investigation.

## Acknowledgment

We would like to thank Mu-In Park for his valuable comments. DY is supported by the National Research Foundation of Korea (Grant no.: 2021R1C1C1008622, 2021R1A4A5031460). XYC acknowledges the support from the starting grant of Jiangsu University of Science and Technology (JUST).

## Appendix A Charged black holes in Einstein gravity

We start from the model with a complex massless scalar field \(\phi\) and a gauge field \(A_{\mu}\) [22]:
\[S=\int\sqrt{-g}dx^{4}\left[\frac{\mathcal{R}}{16\pi}-\left(\phi_{;\mu}+ieA_{\mu}\phi\right)\left(\bar{\phi}^{;\mu}-ieA^{\mu}\bar{\phi}\right)-\frac{1}{8\pi}F_{\mu\nu}F^{\mu\nu}\right], \tag{10}\]
where \(F_{ab}=A_{b;a}-A_{a;b}\) and \(e\) is the gauge coupling. We introduce the double-null coordinates
\[ds^{2}=-\alpha^{2}(u,v)dudv+r^{2}(u,v)d\Omega_{2}^{2}, \tag{11}\]
and define the variables
\[h\equiv\frac{\alpha_{,u}}{\alpha},\quad d\equiv\frac{\alpha_{,v}}{\alpha},\quad f\equiv r_{,u},\quad g\equiv r_{,v},\quad w\equiv\sqrt{4\pi}\phi_{,u},\quad z\equiv\sqrt{4\pi}\phi_{,v}. \tag{12}\]
In addition, we assume the Coulomb gauge \(A_{\mu}=(a(u,v),0,0,0)\), which allows us to define the charge function \(q(u,v)\equiv 2r^{2}a_{,v}/\alpha^{2}\). The semi-classical Einstein equation is defined as
\[G_{\mu\nu}=8\pi\left(T_{\mu\nu}^{\rm C}+\langle\hat{T}_{\mu\nu}^{\rm H}\rangle\right), \tag{13}\]
where \(T_{\mu\nu}^{\rm C}\) is the classical energy-momentum tensor and \(\langle\hat{T}_{\mu\nu}^{\rm H}\rangle\) is the renormalized energy-momentum tensor. We introduce the \(S\)-wave approximation of the renormalized energy-momentum tensor:
\[\langle\hat{T}_{uu}^{\rm H}\rangle=\frac{P}{4\pi r^{2}}\left(h_{,u}-h^{2}\right), \tag{14}\]
\[\langle\hat{T}_{uv}^{\rm H}\rangle=\langle\hat{T}_{vu}^{\rm H}\rangle=-\frac{P}{4\pi r^{2}}d_{,u}, \tag{15}\]
\[\langle\hat{T}_{vv}^{\rm H}\rangle=\frac{P}{4\pi r^{2}}\left(d_{,v}-d^{2}\right), \tag{16}\]
with \(P\equiv N\ell_{\rm P}^{2}/12\pi\), where \(N\) is the number of scalar fields and \(\ell_{\rm P}\) is the Planck length.
The semi-classical Einstein equations, as well as the scalar field equation (\(s\equiv\sqrt{4\pi}\phi\)) and the Maxwell equation, are summarized as follows: \[d_{,u}=h_{,v} = \frac{1}{1-P/r^{2}}\left[\frac{fg}{r^{2}}+\frac{\alpha^{2}}{4r^{2}}- \frac{\alpha^{2}q^{2}}{2r^{4}}-\frac{1}{2}(w\overline{z}+\overline{w}z)-\frac{ iea}{2}(s\overline{z}-\overline{s}z)\right], \tag{17}\] \[g_{,v} = 2dg-rz\overline{z}-\frac{P}{r}\left(d_{,v}-d^{2}\right),\] (18) \[f_{,u} = 2fh-rw\overline{w}-iear\left(\overline{w}s-w\overline{s}\right) -e^{2}a^{2}rs\overline{s}-\frac{P}{r}\left(h_{,u}-h^{2}\right),\] (19) \[f_{,v}=g_{,u} = -\frac{fg}{r}-\frac{\alpha^{2}}{4r}+\frac{\alpha^{2}q^{2}}{4r^{3} }-\frac{P}{r}d_{,u},\] (20) \[a_{,v} = \frac{\alpha^{2}q}{2r^{2}},\] (21) \[q_{,v} = -\frac{ier^{2}}{2}\left(\overline{s}z-s\overline{z}\right),\] (22) \[z_{,u}=w_{,v} = -\frac{fz}{r}-\frac{gw}{r}-\frac{iearz}{r}-\frac{ieags}{r}-\frac{ ie}{4r^{2}}\alpha^{2}qs. \tag{23}\] ## Appendix B Charged black holes in Brans-Dicke gravity We consider the model for the charged black holes in Brans-Dicke gravity [23; 24] \[S=\int\sqrt{-g}dx^{4}\left[\frac{1}{16\pi}\left(\Phi\mathcal{R}-\frac{\omega} {\Phi}\Phi_{;\mu}\Phi^{;\mu}-V(\Phi)\right)+\Phi^{\beta}\left(-\frac{1}{2} \left(\phi_{;\mu}+ieA_{\mu}\phi\right)\left(\bar{\phi}^{;\mu}-ieA^{\mu}\bar{ \phi}\right)-\frac{1}{16\pi}F_{\mu\nu}F^{\mu\nu}\right)\right], \tag{24}\] where \(\Phi\) is the Brans-Dicke field, \(\omega\) is the Brans-Dicke parameter, \(V(\Phi)\) is the potential of the Brans-Dicke field, and \(\beta\) is a constant. We define the additional variables: \[W\equiv\Phi_{,u},\quad Z\equiv\Phi_{,v}. \tag{25}\] The Einstein equations, as well as the Brans-Dicke field equation, the complex scalar field equation, and the Maxwell equation, are given by the following: \[d_{,u}=h_{,v} = \mathfrak{A}-\frac{\mathfrak{B}}{r}-\frac{\mathfrak{C}}{2r\Phi}, \tag{26}\] \[f_{,v}=g_{,u} = \mathfrak{B}-\frac{\mathfrak{C}}{2\Phi},\] (27) \[Z_{,u}=W_{,v} = \frac{\mathfrak{C}}{r},\] (28) \[r_{,uu} = 2fh-\frac{r}{2\Phi}\left(W_{,u}-2hW\right)-\frac{r\omega}{2 \Phi^{2}}W^{2}-4\pi r\Phi^{\beta-1}T_{uu}^{\mathrm{C}},\] (29) \[r_{,vv} = 2gd-\frac{r}{2\Phi}\left(Z_{,v}-2dZ\right)-\frac{r\omega}{2\Phi ^{2}}Z^{2}-4\pi r\Phi^{\beta-1}T_{vv}^{\mathrm{C}},\] (30) \[a_{,v} = \frac{\alpha^{2}q}{2r^{2}},\] (31) \[q_{,v} = -\frac{ier^{2}}{2}\left(\overline{s}z-s\overline{z}\right)-\beta q \frac{Z}{\Phi},\] (32) \[a_{,vv} = \frac{\alpha^{2}}{r^{2}}\left(d-\frac{g}{r}\right)q-\frac{ie \alpha^{2}}{4}\left(z\overline{s}-s\overline{z}\right)-\beta q\frac{\alpha^{ 2}Z}{2r^{2}\Phi},\] (33) \[q_{,u} = \frac{ier^{2}}{2}\left(\overline{s}w-s\overline{w}\right)-r^{2} e^{2}as\overline{s}-\beta q\frac{W}{\Phi},\] (34) \[a_{,uv} = \frac{\alpha^{2}}{r^{2}}\left(h-\frac{f}{r}\right)q+\frac{ie \alpha^{2}}{4}\left(w\overline{s}-s\overline{w}\right)-\frac{\alpha^{2}}{2}e^ {2}as\overline{s}-\beta q\frac{\alpha^{2}W}{2r^{2}\Phi},\] (35) \[s_{,uv} = -\frac{fz}{r}-\frac{gw}{r}-\frac{iearz}{r}-\frac{ieags}{r}-\frac{ ie}{4r^{2}}\alpha^{2}qs-\frac{\beta}{2\Phi}\left(Wz+Zw+iesaZ\right), \tag{36}\] where \[\mathfrak{A} \equiv -\frac{2\pi\alpha^{2}}{r^{2}}\Phi^{\beta-1}T^{\rm C}_{\theta\theta}- \frac{1}{2r\Phi}\left(gW+fZ\right)-\frac{\omega}{2\Phi^{2}}WZ, \tag{37}\] \[\mathfrak{B} \equiv -\frac{\alpha^{2}}{4r}-\frac{fg}{r}+4\pi r\Phi^{\beta-1}T^{\rm C }_{uv}-\frac{1}{\Phi}(gW+fZ),\] (38) \[\mathfrak{C} \equiv -fZ-gW-\frac{2\pi r\alpha^{2}\Phi^{\beta}}{3+2\omega}\left(T^{\rm C }-2\beta\mathcal{L}^{\rm EM}\right),\] (39) \[T^{\rm 
C}_{uu} \equiv \frac{1}{4\pi}\left[w\overline{w}+iea(\overline{w}s-w\overline{s})+e^{2}a^{2}s\overline{s}\right],\] (40) \[T^{\rm C}_{uv} \equiv \frac{\left(a_{,v}\right)^{2}}{4\pi\alpha^{2}},\] (41) \[T^{\rm C}_{vv} \equiv \frac{1}{4\pi}z\overline{z},\] (42) \[T^{\rm C}_{\theta\theta} \equiv \frac{r^{2}}{4\pi\alpha^{2}}\left[\left(w\overline{z}+z\overline{w}\right)+iea(\overline{z}s-z\overline{s})+\frac{2\left(a_{,v}\right)^{2}}{\alpha^{2}}\right],\] (43) \[T^{\rm C} \equiv -\frac{4}{\alpha^{2}}T^{\rm C}_{uv}+\frac{2}{r^{2}}T^{\rm C}_{\theta\theta},\] (44) \[\mathcal{L}^{\rm EM} \equiv \frac{1}{4\pi\alpha^{2}}\left(w\bar{z}+z\bar{w}\right)+\frac{iea}{4\pi\alpha^{2}}\left(\bar{z}s-z\bar{s}\right)+\frac{\left(a_{,v}\right)^{2}}{2\pi\alpha^{4}}. \tag{45}\] ## Appendix C Boundary conditions We need initial conditions for all functions \((\alpha,r,g,f,h,d,s,w,z,a,q,\Phi,W,Z)\) on the initial \(u=0\) and \(v=0\) surfaces. Using the gauge freedom to choose the initial \(r\) function, we define \(r(0,v)=r_{0}+0.5v\) and \(r(u,0)=r_{0}-0.5u\), where \(r_{0}\) is an arbitrary constant. We used \(r_{0}=10\) in the Einstein gravity case and \(r_{0}=20\) in the Brans-Dicke gravity case. This provides \(g(0,v)=0.5\) and \(f(u,0)=-0.5\). We have specified the form of the scalar field profile \(s(0,v)\) along the \(u=0\) surface, while it vanishes along the \(v=0\) surface. In addition, we provide \(\Phi(u,0)=\Phi(0,v)=1\) in the Brans-Dicke model. This automatically provides \(z(0,v)\), \(w(u,0)\), \(Z(0,v)\), and \(W(u,0)\). We further set \(q(u,0)=a(u,0)=0\) and \(\alpha(u,0)=1\). The remaining functions are then determined by the equations involving \(uu\)- or \(vv\)-derivatives. The generic procedure for assigning the boundary conditions is discussed in [18].
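The initial data just described can be encoded directly. Below is a minimal sketch (not the authors' code); the \(\sin^{2}\)-type pulse shape and its support \([v_{1},v_{2}]\) are assumptions, since the paper only states that a profile \(s(0,v)\) is prescribed on \(u=0\) and vanishes on \(v=0\); the amplitude \(A=0.15\) echoes the value quoted in Fig. 5.

```python
import numpy as np

# Illustrative sketch of the Appendix C initial data on a double-null grid.
Nu, Nv = 200, 400
du, dv = 0.1, 0.1
u = du * np.arange(Nu)
v = dv * np.arange(Nv)

r0 = 20.0                        # r0 = 10 (Einstein), r0 = 20 (Brans-Dicke)
r = np.zeros((Nu, Nv))
r[0, :] = r0 + 0.5 * v           # r(0, v) = r0 + 0.5 v
r[:, 0] = r0 - 0.5 * u           # r(u, 0) = r0 - 0.5 u
g_init = 0.5                     # g(0, v) = r_{,v}(0, v) = 0.5
f_init = -0.5                    # f(u, 0) = r_{,u}(u, 0) = -0.5

# Assumed ingoing scalar pulse on the u = 0 surface (shape is illustrative).
A, v1, v2 = 0.15, 10.0, 20.0
s = np.zeros((Nu, Nv), dtype=complex)
mask = (v >= v1) & (v <= v2)
s[0, mask] = A * np.sin(np.pi * (v[mask] - v1) / (v2 - v1)) ** 2
s[:, 0] = 0.0                    # s vanishes along v = 0

Phi = np.ones((Nu, Nv))          # Phi(u, 0) = Phi(0, v) = 1 (Brans-Dicke case)
q_init = a_init = 0.0            # q(u, 0) = a(u, 0) = 0
alpha_init = 1.0                 # alpha(u, 0) = 1

# Since s = sqrt(4 pi) phi, we have z = s_{,v}; differentiating the initial
# profile gives z(0, v), and Phi = 1 implies W(u, 0) = Z(0, v) = 0.
z_init = np.gradient(s[0, :], dv)
```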
2310.13037
Agri-GNN: A Novel Genotypic-Topological Graph Neural Network Framework Built on GraphSAGE for Optimized Yield Prediction
Agriculture, as the cornerstone of human civilization, constantly seeks to integrate technology for enhanced productivity and sustainability. This paper introduces $\textit{Agri-GNN}$, a novel Genotypic-Topological Graph Neural Network Framework tailored to capture the intricate spatial and genotypic interactions of crops, paving the way for optimized predictions of harvest yields. $\textit{Agri-GNN}$ constructs a Graph $\mathcal{G}$ that considers farming plots as nodes, and then methodically constructs edges between nodes based on spatial and genotypic similarity, allowing for the aggregation of node information through a genotypic-topological filter. Graph Neural Networks (GNN), by design, consider the relationships between data points, enabling them to efficiently model the interconnected agricultural ecosystem. By harnessing the power of GNNs, $\textit{Agri-GNN}$ encapsulates both local and global information from plants, considering their inherent connections based on spatial proximity and shared genotypes, allowing stronger predictions to be made than traditional Machine Learning architectures. $\textit{Agri-GNN}$ is built from the GraphSAGE architecture, because of its optimal calibration with large graphs, like those of farming plots and breeding experiments. $\textit{Agri-GNN}$ experiments, conducted on a comprehensive dataset of vegetation indices, time, genotype information, and location data, demonstrate that $\textit{Agri-GNN}$ achieves an $R^2 = .876$ in yield predictions for farming fields in Iowa. The results show significant improvement over the baselines and other work in the field. $\textit{Agri-GNN}$ represents a blueprint for using advanced graph-based neural architectures to predict crop yield, providing significant improvements over baselines in the field.
Aditya Gupta, Asheesh Singh
2023-10-19T14:49:35Z
http://arxiv.org/abs/2310.13037v1
Agri-GNN: A Novel Genotypic-Topological Graph Neural Network Framework Built on GraphSAGE for Optimized Yield Prediction ###### Abstract Agriculture, as the cornerstone of human civilization, constantly seeks to integrate technology for enhanced productivity and sustainability. This paper introduces _Agri-GNN_, a novel Genotypic-Topological Graph Neural Network Framework tailored to capture the intricate spatial and genotypic interactions of crops, paving the way for optimized predictions of harvest yields. _Agri-GNN_ constructs a Graph \(\mathcal{G}\) that considers farming plots as nodes, and then methodically constructs edges between nodes based on spatial and genotypic similarity, allowing for the aggregation of node information through a genotypic-topological filter. Graph Neural Networks (GNN), by design, consider the relationships between data points, enabling them to efficiently model the interconnected agricultural ecosystem. By harnessing the power of GNNs, _Agri-GNN_ encapsulates both local and global information from plants, considering their inherent connections based on spatial proximity and shared genotypes, allowing stronger predictions to be made than traditional Machine Learning architectures. _Agri-GNN_ is built from the GraphSAGE architecture, because of its optimal calibration with large graphs, like those of farming plots and breeding experiments. _Agri-GNN_ experiments, conducted on a comprehensive dataset of vegetation indices, time, genotype information, and location data, demonstrate that _Agri-GNN_ achieves an \(R^{2}=.876\) in yield predictions for farming fields in Iowa. The results show significant improvement over the baselines and other work in the field. _Agri-GNN_ represents a blueprint for using advanced graph-based neural architectures to predict crop yield, providing significant improvements over baselines in the field. _The ultimate goal of farming is not the growing of the crops, but the cultivation and perfection of human beings. --Masanobu Fukuoka_ **Key Words:** Graph Neural Networks (GNNs), Agricultural Data Integration, Multimodal Data Fusion, Structured Data Modeling, Adaptive and Modular Design, Precision Agriculture Enhancement, Complex Interdependencies Modeling ## 1 Introduction In an era characterized by escalating climate change, which is resulting in unpredictable weather patterns and increasing environmental stresses, the agricultural sector faces significant challenges (Anwar et al., 2013). Unforeseen climatic events such as droughts, floods, and extreme temperatures are impacting crop yields, highlighting the imperative for advanced, precise, and resilient crop yield prediction models (Kuwayama et al., 2019). Amidst this backdrop of climatic uncertainties (Shrestha et al., 2012), the necessity for accurate and comprehensive crop yield predictions is more acute than ever. A robust system that can efficiently integrate diverse data types and provide a detailed and holistic understanding of the agricultural ecosystem is crucial for mitigating the impacts of climate change on agriculture. The agricultural ecosystem is inherently complex and interconnected, with numerous factors playing a pivotal role in determining crop yields. Traditional Machine Learning models, while powerful, often fall short in effectively capturing these intricate relationships, as they generally treat data points as independent entities (Liu et al., 2020).
The limited capacity of these models to handle relational data and their inability to seamlessly integrate diverse data types such as spatial, temporal, and genetic information hamper their effectiveness in providing comprehensive and accurate crop yield predictions. Moreover, these models tend to be data-hungry, requiring substantial labeled data for training, which is often a significant challenge in the agricultural context (Majumdar et al., 2017). By contrast, Graph Neural Networks (GNNs) stand out as a more apt choice for this scenario. GNNs, by design, consider the relationships between data points, enabling them to efficiently model the interconnected agricultural ecosystem. They can effectively synthesize diverse data types into a unified framework, offering a more holistic and nuanced understanding of the factors influencing crop yields (Zhou et al., 2020). The ability of GNNs to work with limited labeled data and their flexibility in handling various data modalities make them a superior choice for developing robust and resilient crop yield prediction models in the face of climate change. In light of this, the present study introduces Agri-GNN, a pioneering approach employing GNNs to offer an inclusive representation of the agricultural ecosystem. _Agri-GNN_ considers farming plots as nodes, and then constructs edges between nodes based on spatial and genotypic similarity, allowing for the aggregation of node information from a refined selection of nodes. This allows the model to filter out the noise that exists in the dataset and to focus yield prediction efforts for each node on the most similar nodes in terms of genotypic and spatial similarity. _Agri-GNN_ stands out with its capacity to amalgamate diverse data modalities into a cohesive framework, adeptly capturing the complex interaction among genetic, environmental, and spatial factors (Meng et al., 2018). This innovative model transcends traditional methodologies by conceptualizing the agricultural ecosystem as a connected network, where each crop, viewed as an active node, is influenced by its immediate environment and genetic context. The employment of the GraphSAGE architecture (Hamilton et al., 2017) significantly bolsters the effectiveness of _Agri-GNN_. GraphSAGE is known for its inductive learning approach, where it leverages node attribute information to generate representations for data not seen during the training process. This approach is particularly beneficial for the extensive and heterogeneous datasets that are commonplace in the field of agriculture. Traditional machine learning models often struggle with such diverse and expansive data, leading to suboptimal predictions and insights. However, the GraphSAGE architecture, with its innovative inductive learning, excels in processing and learning from large datasets, thereby ensuring the robustness of _Agri-GNN_. In agriculture, where datasets encompass a wide range of information, including weather conditions, soil types, and genetic information, the ability to effectively handle and learn from such data is crucial for accurate yield predictions. The GraphSAGE architecture equips _Agri-GNN_ with this capability, allowing it to seamlessly integrate and process diverse data types to generate detailed and reliable yield predictions. This level of granular insight is important for making informed decisions in agricultural planning and management, ultimately contributing to enhanced productivity and sustainability.
By using the GraphSAGE architecture, _Agri-GNN_ is not just limited to data seen during training. It can generalize and adapt to new data, ensuring that the model remains relevant and useful as more agricultural data becomes available. This adaptability is essential in the dynamic field of agriculture, where new data and insights continuously emerge. The advanced architecture thereby not only enhances _Agri-GNN_'s predictive accuracy but also bolsters its longevity and relevance in the agricultural sector, making it a valuable tool for tackling the challenges of modern agriculture. _Agri-GNN_'s modular and scalable design ensures its adaptability to the fast-paced evolution of the agricultural sector. This flexibility allows for the effortless integration of emerging data sources and insights, ensuring the model remains relevant and effective in a changing landscape (Gandhi and Armstrong, 2016). _Agri-GNN_ embodies a transformative shift that is taking place in agricultural modeling, providing a novel perspective that comprehensively addresses the complexity and interconnectedness of farming systems. By offering a nuanced, data-driven lens, _Agri-GNN_ stands as a robust tool for navigating the multi-faceted challenges of modern agriculture, particularly in the context of a changing climate. ## 2 Literature Review Plant breeding specialists are focused on discovering high-quality genetic variations that fulfill the needs of farmers, the wider agricultural sector, and end consumers. One of the key traits often scrutinized is seed yield, particularly in row crops (Singh et al., 2021c). Traditional ways of assessing seed yield involve the laborious and time-restricted activity of machine-harvesting numerous plots when the growing season concludes. This data then informs decisions about which genetic lines to either advance or discontinue in breeding programs. This approach is highly resource-intensive, requiring the harvesting of thousands of test plots each year, thus presenting operational challenges. In response to these issues, advancements in technology are being harnessed to develop more efficient alternatives. An increasing number of scientists and plant breeders are adopting the use of remote sensing technology, integrated with machine learning methods. This enables more timely predictions of seed yield, substantially cutting down on labor and time requirements during the crucial harvest phase (Li et al., 2022; Yoosefzadeh-Najafabadi et al., 2021; Chiozza et al., 2021; Shook et al., 2021b; Riera et al., 2021; Guo et al., 2021; Singh et al., 2021a). The newly introduced Cyber-Agricultural System (CAS) takes advantage of cutting-edge continual sensing technology, artificial intelligence, and smart actuators for enhancing both breeding and production in agriculture (Sarkar et al., 2023). Integral to CAS is the concept of phenotyping, which employs sophisticated imaging and computational techniques to streamline the gathering and interpretation of data, thereby facilitating better yield forecasts (Singh et al., 2021b). Numerous studies have homed in on high-throughput phenotyping through the use of drones, investigating yield predictions in crops such as cotton, maize, soybean, and wheat (Herr et al., 2023). Beyond the 2D data collected by drones, research has demonstrated the value of canopy fingerprints, which offer unique insights into the 3D structure of soybean canopies via point cloud data (Young et al., 2023).
Despite these advances, there is still scope for refining models that amalgamate diverse datasets, including but not limited to soil features and hyperspectral reflectance, for a more holistic grasp of soybean yields. The soil's physical and chemical attributes play a crucial role in nutrient availability, thereby impacting plant health and growth. Incorporating these soil characteristics could potentially enhance the precision of yield prediction models. In recent years, the application of neural networks in crop yield prediction has moved beyond traditional architectures to more complex models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) (Dahikar and Rode, 2014; Gandhi et al., 2016). Much work in the field takes advantage of remote sensing data, such as satellite images or NDVI, for yield predictions (You et al., 2017; Nevavuori et al., 2019; Kim et al., 2019). While these methods have shown promise, they often struggle to capture the direct relationships between environmental factors and crop yields. Previous research has also focused on using environmental factors like temperature and rainfall directly as inputs for yield prediction models (Cakir et al., 2014; Khaki and Wang, 2019), but these models exhibit the same failure to accurately capture direct relationships. Against this backdrop, Graph Neural Networks (GNNs) offer a significant advancement by incorporating spatial relationships or neighborhood information into the prediction models. By adding the spatial context through GNNs, recent models provide a more nuanced understanding of how localized factors can impact yield, enhancing prediction accuracy (Fan et al., 2022; Park and Park, 2019; Sajitha et al., 2023). In particular, Fan et al. (2022) use a GNN-RNN-based approach that shows promising results but fails to generalize to graphs of large size. The GraphSAGE model (Hamilton et al., 2017) stands out as a notable innovation in leveraging spatial and neighborhood information for more accurate crop yield predictions. The incorporation of GraphSAGE allows us to effectively capture localized contextual information, thereby refining our understanding of how specific factors in localized areas can influence crop yields. This results in an enhanced level of accuracy in our yield predictions. Graph Neural Networks offer a promising avenue for enhancing the state-of-the-art in crop yield prediction. They facilitate the integration of various types of data and have the potential to significantly improve the accuracy of existing models. Future research should focus on leveraging these capabilities to build models that can generalize well across different conditions and scales. ## 3 Background In this section, we provide background information on Graph Neural Networks and GraphSAGE. We also provide a background on the data features used in the creation of Agri-GNN. This background information is necessary for understanding Agri-GNN. ### 3.1 Graph Neural Networks Graphs are a robust and versatile means of capturing relationships among entities, known as nodes, and their interconnections, termed edges. Graph Neural Networks elevate this representational power by extending classical neural network architectures to operate directly on graph-structured data. These networks are particularly effective at generating meaningful node-level or graph-level embeddings, representations that can be subsequently used in various downstream tasks such as node classification, link prediction, and community detection.
For an exhaustive review of the techniques and methods employed in GNNs, we direct the reader to seminal survey papers by Battaglia et al. (2018) and Chami et al. (2021). A GNN can be represented as \(f(A,X;W)\to y\), where \(y\) denotes the set of predicted outcomes (e.g., plot yield predictions), \(A\) is an \(n\times n\) adjacency matrix encapsulating the graph structure, \(X\) is an \(n\times p\) feature matrix detailing node attributes, and \(W\) represents the trainable weight parameters of the network. In this function, \(A\) and \(X\) serve as inputs, and the network \(f\) is parameterized by \(W\). One of the distinguishing characteristics of GNNs lies in their ability to accumulate and propagate information across nodes (Hamilton et al., 2017). A node's feature vector is updated iteratively based on the feature vectors of its neighboring nodes. The depth or number of layers, \(\ell\), in the GNN controls which neighbors are involved in these updates. Specifically, the final representation of a node only includes information from nodes that are at most \(\ell\) hops away. This scope of nodes and edges involved in the computation of a node's representation is termed its computational graph, formally defined as \(G_{v}=(A_{v},X_{v})\). Here, \(A_{v}\) is the adjacency matrix of the subgraph, and \(X_{v}\) is the feature matrix of the nodes within the \(\ell\)-hop neighborhood of the node \(v\). ### 3.2 GraphSAGE GraphSAGE (Graph Sample and Aggregation) is a pioneering extension of general Graph Neural Networks and was specifically designed to address challenges such as scalability and inductive learning on large graphs (Hamilton et al., 2017). Unlike traditional GNN architectures that require the entire graph to be loaded into memory for training, GraphSAGE leverages a sampling strategy to extract localized subgraphs, thereby allowing for mini-batch training on large-scale graphs. The key innovation in GraphSAGE is its novel aggregation mechanism, which uses parameterized functions to aggregate information from a node's neighbors. These functions can be as simple as taking an average or as complex as employing a neural network for the aggregation process. GraphSAGE is expressed as \(f_{\text{SAGE}}(A,X;W)\to y\), where \(f_{\text{SAGE}}\) represents the GraphSAGE model, \(y\) is the output (such as node embeddings or graph-level predictions), \(A\) is the adjacency matrix, \(X\) is the feature matrix, and \(W\) are the trainable parameters. Like generic GNNs, GraphSAGE accumulates and combines information from a node's neighborhood to update its feature representation. However, GraphSAGE can generalize to unseen nodes during inference by leveraging learned aggregation functions, making it particularly valuable for evolving graphs where the node set can change over time. It has been employed effectively in diverse applications such as social network analysis, recommendation systems, and even in specialized fields like computational biology and agronomy, showcasing its adaptability and efficiency (Xiao et al., 2019). In formal terms, the \(l\)-th layer of GraphSAGE is defined as follows.
The aggregated embedding from neighboring counties, denoted \(\mathbf{a}_{c,t}^{(l)}\), is calculated using the function \(g_{l}\) applied to the embeddings \(\mathbf{z}_{c^{\prime},t}^{(l-1)}\) for all neighboring counties \(c^{\prime}\) of county \(c\), represented as: \[\mathbf{a}_{c,t}^{(l)}=g_{l}(\{\mathbf{z}_{c^{\prime},t}^{(l-1)},\forall c^{\prime}\in\mathcal{N}(c)\})\] Here, \(\mathcal{N}(c)=\{c^{\prime},\forall A_{c,c^{\prime}}=1\}\) denotes the set of neighboring counties for \(c\). The embedding for the \(l\)-th layer, \(\mathbf{z}^{(l)}_{c,t}\), is then obtained by applying a non-linear function \(\sigma\) to the product of a weight matrix \(\mathbf{W}^{(l)}\) and the concatenation of the last layer's embedding \(\mathbf{z}^{(l-1)}_{c,t}\) and \(\mathbf{a}^{(l)}_{c,t}\): \[\mathbf{z}^{(l)}_{c,t}=\sigma(\mathbf{W}^{(l)}\cdot(\mathbf{z}^{(l-1)}_{c,t},\mathbf{a}^{(l)}_{c,t}))\] where \(\mathbf{z}^{(0)}_{c,t}=h_{c,t}\), as per a previous equation, and \(l\) belongs to the set \(\{0,1,...,L\}\). The aggregation function for the \(l\)-th layer, \(g_{l}(\cdot)\), can be a mean, pooling, or graph convolution (GCN) function. In this process, \(\mathbf{a}^{(l)}_{c,t}\) is first concatenated with \(\mathbf{z}^{(l-1)}_{c,t}\), and then transformed using the weight matrix \(\mathbf{W}^{(l)}\). The non-linear function \(\sigma(\cdot)\) is applied to this product to obtain the final embedding for the \(l\)-th layer. ### 3.3 Vegetation Indices Vegetation indices are essential metrics used in the field of remote sensing phenology to quantify vegetation cover, assess plant health, and (in our study) to estimate crop yields. These indices leverage spectral data gathered by sensors that measure the wavelengths of light absorbed and reflected by plants. A fundamental understanding of how these wavelengths interact with vegetation is crucial for interpreting these indices. Specifically, the pigments in plant leaves, such as chlorophyll, absorb wavelengths in the visible spectrum, particularly the red light. Conversely, leaves reflect a significant amount of near-infrared (NIR) light, which is not visible to the human eye. The indices used in the construction of Agri-GNN are available in Appendix A. One of the most commonly used vegetation indices is the Normalized Difference Vegetation Index (NDVI). It is calculated using the formula \(\text{NDVI}=\frac{(NIR-RED)}{(NIR+RED)}\), where \(NIR\) represents the near-infrared reflectance and \(RED\) is the reflectance in the red part of the spectrum. The NDVI value ranges between -1 and 1, with higher values typically indicating healthier vegetation and lower values signifying sparse or stressed vegetation. This index is invaluable for various applications, ranging from environmental monitoring to precision agriculture. For Agri-GNN, vegetation indices like NDVI can serve as informative node attributes in agronomic graphs, enhancing the model's ability to make accurate and meaningful predictions in agricultural settings (Bannari et al., 1995). ## 4 Methods Our proposed framework, _Agri-GNN_, aims to provide a comprehensive solution to crop yield prediction by leveraging the power of Graph Neural Networks (GNNs). This section covers the various stages involved in the design, construction, and validation of _Agri-GNN_.
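Before detailing these stages, it may help to make the GraphSAGE layer update of Section 3.2 concrete. The following is a minimal PyTorch sketch (an illustration, not the authors' implementation): neighbor embeddings are mean-aggregated and concatenated with the node's own embedding, then linearly transformed and passed through a non-linearity, mirroring the two displayed equations for \(\mathbf{a}^{(l)}\) and \(\mathbf{z}^{(l)}\). The dense adjacency format and toy tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """One GraphSAGE layer with mean aggregation (illustrative sketch).

    Implements z^(l) = sigma(W^(l) . [z^(l-1) ; a^(l)]), where a^(l) is
    the mean of the neighbors' previous-layer embeddings.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # z: (num_nodes, in_dim); adj: dense 0/1 adjacency (num_nodes, num_nodes).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid division by zero
        a = adj @ z / deg                                # mean over neighbors N(i)
        return torch.relu(self.lin(torch.cat([z, a], dim=1)))

# Usage on a toy graph of 4 nodes with 3 input features:
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
z0 = torch.randn(4, 3)
layer = MeanSAGELayer(in_dim=3, out_dim=8)
z1 = layer(z0, adj)   # shape (4, 8)
```

In _Agri-GNN_ itself, the analogous updates are provided by the GraphSAGE convolutional layers of the PyTorch Geometric library (Section 5.3), which additionally supports neighborhood sampling for large graphs.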
### 4.1 Graph Construction To effectively utilize GNNs for agricultural prediction, we first represent the agricultural data as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) denotes the set of nodes and \(\mathcal{E}\) denotes the set of edges. #### 4.1.1 Node Representation Each node \(v_{i}\in\mathcal{V}\) corresponds to a specific plot and is associated with a feature vector \(\mathbf{x}_{i}\). This feature vector encapsulates all given data information: **Vegetation Indices (r\({}_{i}\))**: The variable \(r_{i}\) is derived from remote sensing imagery and encapsulates various spectral indices, including but not limited to reflectance values, that are crucial for assessing vegetation health and vitality. These spectral indices often make use of different reflectance bands, such as near-infrared and red bands, to capture detailed information about plant life. These indices serve as a valuable source of information, particularly when integrated with other types of data like genotypic and soil information. The computational procedure to obtain \(r_{i}\) often involves sophisticated algorithms that account for atmospheric corrections and other sources of noise in the imagery. The specific formulas and methodologies used to determine \(r_{i}\) are detailed in Appendix A. **Genotypic Data (g\({}_{i}\))**: The variable \(g_{i}\) encapsulates the genetic information associated with a specific crop or plant. This genotypic data serves as a foundational element in the realm of Cyber Agricultural Systems. With advanced imaging techniques like hyperspectral imaging and computational methods such as machine learning algorithms, the acquisition and analysis of \(g_{i}\) have been significantly streamlined. These advancements not only ease the process of data collection but also enable more accurate and comprehensive genetic profiling. Such in-depth genotypic information is invaluable for understanding plant characteristics, disease resistance, and yield potential, thereby playing a crucial role in the development of precision agriculture strategies and sustainable farming practices (Singh et al., 2021). **Weather Data (w\({}_{i}\))**: The variable \(w_{i}\) encompasses an array of meteorological factors related to a specific agricultural plot, capturing elements such as temperature, precipitation, humidity, wind speed, and solar radiation. These weather conditions are collected through a variety of methods, including on-site weather stations, remote sensors, and even satellite data. The comprehensive nature of \(w_{i}\) allows it to serve as a vital input for crop health monitoring systems and predictive yield models. For instance, high-resolution temporal data on factors like soil moisture and air temperature can be instrumental in predicting potential stress events for crops, such as drought or frost risks. Furthermore, when integrated with other data types like genotypic data and vegetation indices, \(w_{i}\) contributes to creating a multifaceted, dynamic model of the agricultural environment (Mansfield et al., 2005). The final node feature vector is a concatenation of these features: \[\mathbf{x}_{i}=[\mathbf{r}_{i},\mathbf{g}_{i},\mathbf{w}_{i}] \tag{1}\] #### 4.1.2 Edge Representation in _Agri-GNN_ Edges play an indispensable role in graph-based neural network architectures, encoding vital relationships between nodes.
Within the _Agri-GNN_ architecture, the edge set \(\mathcal{E}\) is meticulously constructed to encapsulate the intricate relationships between agricultural plots. This is achieved by harnessing both spatial and genotypic attributes. Two edge constructions are undertaken. \(\mathcal{E}_{\text{spatial}}\) encompasses the edges that are created through spatial proximity. Given the geographical coordinates \(\mathbf{c}(v_{i})\) associated with a node \(v_{i}\), the pairwise distance to another node \(v_{j}\) having coordinates \(\mathbf{c}(v_{j})\) can be defined as: \[d(v_{i},v_{j})=\|\mathbf{c}(v_{i})-\mathbf{c}(v_{j})\| \tag{2}\] For every node \(v_{i}\), edges are constructed to nodes that fall within the bottom 3% of all pairwise distances, thereby ensuring that the model captures localized environmental intricacies and dependencies: \[e_{ij}=\begin{cases}1&\text{if }d(v_{i},v_{j})\in\text{bottom 3\% of distances for }v_{i}\\ 0&\text{otherwise}\end{cases} \tag{3}\] The second set of edges constructed is represented by \(\mathcal{E}_{\text{genotypic}}\). The genotypic data serves as a repository of the genetic characteristics of agricultural plots, offering a window into inherent traits and susceptibilities. Let \(g(v_{i})\) represent the genotypic data for node \(v_{i}\). An edge is formed between nodes \(v_{i}\) and \(v_{j}\) if their genotypic attributes coincide: \[e_{ij}=\begin{cases}1&\text{if }g(v_{i})=g(v_{j})\\ 0&\text{otherwise}\end{cases} \tag{4}\] The culmination of the edge formation process results in the edge set \(\mathcal{E}\), which is a fusion of edges derived from both spatial proximity and genotypic similarity: \[\mathcal{E}=\mathcal{E}_{\text{spatial}}\cup\mathcal{E}_{\text{genotypic}} \tag{5}\] By harmonizing spatial and genotypic data, _Agri-GNN_ crafts a robust and nuanced representation of the agricultural milieu, establishing itself as an efficacious tool for diverse applications in the realm of agriculture. ### 4.2 Graph Neural Network Architecture For our crop yield prediction task, we introduce the _Agri-GNN_ model, an adaptation of the GraphSAGE (Graph Sample and Aggregation) architecture discussed in Section 3. The model operates on graph \(\mathcal{G}\), designed for efficient aggregation and propagation of information across the graph. _Agri-GNN_ has four GraphSAGE convolutional layers to process and refine node features, ensuring an effective representation for downstream tasks. The model architecture is explained in detail in this section. The initial layer of the architecture transitions the input node features to an intermediate representation using hidden channels. The transformation for node \(i\) in this layer is shown in equation (6). \[\mathbf{h}_{i}^{(1)}=\sigma\left(\mathbf{W}_{\text{init}}\cdot\mathbf{x}_{i}+\mathbf{b}_{\text{init}}\right) \tag{6}\] In equation (6), \(\mathbf{x}_{i}\) stands for the initial input features of node \(i\), while \(\mathbf{W}_{\text{init}}\) and \(\mathbf{b}_{\text{init}}\) denote the weight matrix and bias vector of this layer, respectively. The function \(\sigma\) represents the activation function; the Rectified Linear Unit (ReLU) is used. A salient feature of _Agri-GNN_ is its intermediary layers, which not only collate features from neighboring nodes but also incorporate skip connections to preserve the essence of the original node's features. The aggregation of features from neighboring nodes in these layers is depicted in equation (7).
\[\mathbf{m}_{i}^{(I)}=\text{AGGREGATE}\left(\left\{\mathbf{h}_{j}^{(I-1)}\,\forall j\in\text{N}(i)\right\}\right) \tag{7}\] Subsequent to this aggregation, the features undergo a transformation, as expressed in equation (8). \[\mathbf{h}_{i}^{(I)}=\sigma\left(\mathbf{W}^{(I)}\cdot\text{CONCAT}(\mathbf{h}_{i}^{(I-1)},\mathbf{m}_{i}^{(I)})+\mathbf{b}^{(I)}\right) \tag{8}\] Here, \(\text{N}(i)\) represents the neighboring nodes of node \(i\), and \(\mathbf{W}^{(I)}\) and \(\mathbf{b}^{(I)}\) signify the weight matrix and bias vector for layer \(I\), respectively. The architecture culminates in the final layer, a pivotal component that produces the model's refined output. This layer mirrors the operations of the intermediary layers in aggregating neighboring node features, but distinguishes itself by excluding the addition of original node features. The aggregation of features in this layer is portrayed in equation (9). \[\mathbf{m}_{i}^{(4)}=\text{AGGREGATE}\left(\left\{\mathbf{h}_{j}^{(3)}\,\forall j\in\text{N}(i)\right\}\right) \tag{9}\] The subsequent transformation, harnessing the aggregated features to yield the final output, is described in equation (10). \[\mathbf{h}_{i}^{(4)}=\sigma\left(\mathbf{W}^{(4)}\cdot\text{CONCAT}(\mathbf{h}_{i}^{(3)},\mathbf{m}_{i}^{(4)})+\mathbf{b}^{(4)}\right) \tag{10}\] To ensure stability in convergence and enhance generalization, each hidden layer is succeeded by a batch normalization step. After normalization, dropout regularization with a rate of \(p=0.5\) is employed to combat overfitting, as described by equation (11): \[\mathbf{h}_{i}^{(I)}=\text{Dropout}(\mathbf{h}_{i}^{(I)},p=0.5) \tag{11}\] The final output of the model is a prediction of the yield of the given node(s). The final _Agri-GNN_ model is designed to take in initial node features \(\mathbf{x}_{i}\) and produce an output \(\mathbf{o}_{i}\) for each node, representing the predicted yield. The model is summarized as: \[\mathbf{o}_{i}=\text{{Agri-GNN}}(\mathbf{x}_{i};\mathcal{G},\Theta) \tag{12}\] Here, \(\Theta\) stands for the set of all learnable parameters within the model. The model's performance is evaluated using a Mean Squared Error (MSE) loss between the predicted crop yields \(\mathbf{o}_{i}\) and the actual yields. Optimization is carried out via the Adam optimizer, and hyperparameters such as learning rates are fine-tuned for optimal performance. _Agri-GNN_'s architecture is summarized in Figure 1. ## 5 Applications and Experimental Results The _Agri-GNN_ framework is now applied to plot fields in Ames, Iowa. _Agri-GNN_'s performance is compared to various baseline yield prediction models to accurately gauge its potential. ### 5.1 Data Collection and Processing Data on soil attributes, hyperspectral reflectance, seed yield, and weather were collected systematically over two consecutive years, 2020 and 2021. The data collection covered both Preliminary Yield Trials (PYT) and Advanced Yield Trials (AYT), and each year involved multiple trial locations. #### 5.1.1 Trial Design The Preliminary Yield Trials (PYT) were designed using a row-column configuration. In this layout, each plot had a width of 1.52m and a length of 2.13m. Plots within each row were interspaced by 0.91m. The Advanced Yield Trials (AYT) followed a similar row-column design, but with each plot measuring 1.52m in width and 5.18m in length. The interspacing between plots remained consistent at 0.91m for both trial types.
#### 5.1.2 Soil Data Collection Digital soil mapping techniques were used to identify ten specific soil attributes, supplementing the collection of hyperspectral data. Soil cores were extracted down to a depth of 15 cm following a 25m grid sampling pattern, using specialized soil probes. Digital soil maps were then generated with 3m x 3m pixel resolution using the Cubist regression machine learning algorithm (Khaledian and Miller, 2020). For each plot, boundaries were outlined using polygon shape files, and the average value of each soil feature was calculated. To improve data reliability, a 3x3 moving mean was computed for each soil attribute within the plot, and this smoothed value was then used for more detailed analyses. The assessed soil features encompassed Calcium (Ca), Cation Exchange Capacity (CEC), Potassium (K), Magnesium (Mg), Organic Matter (OM), Phosphorus (P1), Percent Hydrogen (pH), and proportions of Clay, Sand, and Silt. Figure 1: Summary of Model Architecture #### 5.1.3 Hyperspectral Reflectance Data Hyperspectral reflectance data for each plot was captured using a Thorlabs CCS200 spectrometer (Thorlabs, Newton, NJ). The methodology adheres to the system outlined in the study by Bai et al. (2016). Spectral data was collected annually at three distinct time points: T1, T2, and T3, captured sequentially, covering wavelengths from 200 nm to 1000 nm, as illustrated in Figure 2. Particular emphasis is placed on the T3 timepoint, which has been identified as having superior feature importance values according to preliminary data assessments. As vegetation nears physiological maturity, the correlation between hyperspectral reflectance data and crop yield becomes more significant. Fifty-two vegetation indices were calculated based on the collected hyperspectral reflectance values, following the methodology detailed in the study by Li et al. (2022). A comprehensive list of these indices is available in Appendix A. The distribution of the collected data across the four fields is summarized in Table 1. It is interesting to note that many of the vegetation indices show a strong correlation with each other, especially when considering their underlying mathematical formulations. As depicted in Figure 3, certain pairs of indices exhibit notably high correlation values, suggesting that they might be capturing similar information about the vegetation. This redundancy could be attributed to the fact that many vegetation indices are derived from the same spectral bands, primarily the red and near-infrared (NIR) regions, which are known to be indicative of plant health and vigor. However, while two indices might be highly correlated, they might still provide unique insights into different vegetation properties, and thus all of the vegetation indices were kept as features in the construction of the dataset. \begin{table} \begin{tabular}{c c} \hline \hline Field No. & Number of observations \\ \hline \hline 1 & 770 \\ 2 & 912 \\ 3 & 800 \\ 4 & 679 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of datapoints in each of the four fields. Figure 2: Reflectance plot showcasing wavelengths from 400 nm to 1000 nm. The depicted red, green, and blue bands are illustrative of the visible spectrum wavelengths. #### 5.1.4 Yield Data Seed yield data at the plot level was collected using an Almaco small plot combine (Almaco, Nevada, IA). For consistency, all yield data was normalized to a moisture content of 13% and then converted to kilograms per hectare (Kg/ha).
The data collection process was organized in blocks, mirroring the layout established by the breeding program. This arrangement grouped the plots based on their genetic lineage and corresponding maturity groups. Prior to the computation of vegetation indices, a rigorous data preprocessing phase was undertaken to omit anomalies and outliers. The steps included:

1. Omission of all observations for band values falling below 400 nm due to detected anomalies in these bands' readings.
2. Exclusion of datapoints with negative hyperspectral values within the 400 nm to 1000 nm band range.
3. Removal of datapoints showcasing negative seed yield values.

### 5.2 Graph Construction Once the dataset was pruned to retain only the relevant columns, we started the task of graph construction, as explained in detail in Section 4.1. The nodes of the graph represented individual data points from the dataset, while the edges encoded two primary relationships: spatial proximity and genotype similarity. The geospatial coordinates ('Latitude' and 'Longitude') of each data point facilitated the computation of pairwise distances between them. By harnessing this distance matrix, we established a threshold, specifically the 3rd percentile of non-zero distances, and used it as a criterion to draw edges between nodes. In essence, if the spatial distance between two nodes was less than this threshold, an edge was drawn between them. Beyond spatial relationships, our graph also recognized the significance of genetic similarities between data points. This was achieved by drawing edges between nodes that shared the same genotype, as denoted by the 'Population' column. By adopting this strategy, our graph was enriched with edges that encapsulated intrinsic genetic relationships, which have an influence on agricultural yield (Shook et al., 2021a). Figure 3: Correlation of selected vegetation indices is shown. To facilitate subsequent graph-based deep learning, we represented our graph using the _PyTorch Geometric_ framework. This involved a series of transformations. Firstly, the combined spatial and genotype edges were aggregated. Secondly, the dataset was processed to handle missing values, by imputing them with column-wise means, and categorical columns were one-hot encoded. The final graph representation incorporated node features (derived from the processed dataset), the aggregated edges, and the target label ('Yield'). The resulting graph, thus constructed, served as the foundation for our _Agri-GNN_ experiments. ### 5.3 Neural Architecture The model was developed using the _PyTorch_ framework (Paszke et al., 2017). For the specific needs of graph-based neural networks, we turned to the _PyTorch Geometric (PyG)_ library (Fey and Lenssen, 2019). This library is a comprehensive toolkit, providing a range of architectures and tools tailored for graph neural networks. Our model, _Agri-GNN_, is an augmentation of the conventional GraphSAGE architecture (Hamilton et al., 2017). It features four GraphSAGE convolutional layers. The initial layer transitions input features to hidden channels. The two intermediary layers, enhanced with skip connections, amplify the model's capacity to discern both rudimentary and advanced patterns in the data. The final layer is designed to yield the model's output. To ensure the stability and efficiency of training, batch normalization is applied following each convolutional layer.
Furthermore, to mitigate the risk of overfitting, dropout regularization is integrated after each batch normalization, with a rate of 0.5. The dimensionality of the input features was determined dynamically based on the dataset. The model was trained for 500 epochs and was monitored to gauge its performance accurately and to take measures against potential overfitting. An \(80-20\) split was utilized, where 80% of the farm nodes were randomly chosen to be part of the training dataset and the remaining 20% were held out for evaluation. ### 5.4 Hyperparameter Tuning Hyperparameter tuning was critical to ensuring an optimal model. We conducted an exhaustive exploration of various hyperparameter combinations to pinpoint the most conducive setting for our model. The hyperparameters that we varied are summarized in Table 2. The best hyperparameters can be seen in Table 3. \begin{table} \begin{tabular}{|c|c|} \hline **Hyperparameter** & **Values Explored** \\ \hline Learning Rates & \(0.001,0.005,0.01,0.02\) \\ \hline Hidden Channels & \(32,64,128\) \\ \hline Dropout Rates & \(0.3,0.5,0.7\) \\ \hline \end{tabular} \end{table} Table 2: Hyperparameters and their explored values \begin{table} \begin{tabular}{|c|c|} \hline **Hyperparameter** & **Best Value** \\ \hline Learning Rate & \(0.02\) \\ \hline Hidden Channels & \(32\) \\ \hline Dropout Rate & \(0.3\) \\ \hline \end{tabular} \end{table} Table 3: Optimal hyperparameters after tuning ## 6 Model Performance and Results We now show the results of _Agri-GNN_ on the described dataset. The results are compared with two baselines on the same dataset: a K-Nearest Neighbors model and an Ensemble Machine Learning model (Chattopadhyay et al., 2023). We first visualize the embeddings derived from our graph neural network model. To better understand the spatial distribution of these embeddings, we employed the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm for dimensionality reduction (Hinton and Roweis, 2002). The embeddings, initially generated by the forward method of our model, were reduced to two dimensions using t-SNE. As shown in Figure 4, each point represents a node in the graph, and the spatial arrangement captures the similarity between node embeddings. Distinct clusters and patterns in the visualization indicate nodes with similar features within the graph. From Figure 4, we see that multiple distinct graphs may possibly be formed from the data, particularly when subsets of the data are not related in any spatial or genotypic way. _Agri-GNN_ is designed to be able to learn the inherent patterns in each of these subgraphs, while further increasing the accuracy of the main base model. The performance of _Agri-GNN_ was assessed using standard regression metrics, as presented in Table 4: _Agri-GNN_ achieves a Root Mean Squared Error (RMSE) of 4.565 on this dataset. The Mean Absolute Error (MAE) is 3.590. The model has an \(R^{2}\) value of 0.876. These results show significant improvement over the baselines used. The K-Nearest Neighbor (K-NN) algorithm is used as the first baseline model. The K-NN model predicts the yield of a node based on the average yield of its \(k\) nearest neighbors based on latitude and longitude. This model is optimized by performing a grid search to find the best value of \(k\) from a range of 1 to 20. The optimized K-NN model used \(k=18\) as the number of neighbors for making predictions.
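For reference, a minimal sketch of this K-NN baseline (an illustration, not the authors' code), assuming scikit-learn and synthetic stand-in data; the real inputs would be the plots' latitude/longitude coordinates and their yields:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

# Illustrative K-NN baseline: predict a plot's yield from the mean yield
# of its k nearest plots in (latitude, longitude), with k tuned by grid
# search over 1..20. The synthetic arrays below are placeholders.
rng = np.random.default_rng(0)
coords = rng.uniform(size=(500, 2))    # placeholder (latitude, longitude)
y = rng.uniform(40, 100, size=500)     # placeholder yields

search = GridSearchCV(
    KNeighborsRegressor(),                      # averages neighbors' yields
    param_grid={"n_neighbors": range(1, 21)},   # k = 1, ..., 20
    scoring="neg_root_mean_squared_error",
    cv=5,
)
search.fit(coords, y)
print("best k:", search.best_params_["n_neighbors"])
```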
\begin{table} \begin{tabular}{|c|c|} \hline **Metric** & _Agri-GNN_ **Value** \\ \hline Root Mean Squared Error (RMSE) & 4.565 \\ \hline Mean Absolute Error (MAE) & 3.590 \\ \hline Coefficient of Determination (\(R^{2}\)) & 0.876 \\ \hline \end{tabular} \end{table} Table 4: Performance metrics of the _Agri-GNN_ model. Figure 4: t-SNE visualization of graph node embeddings. The performance metrics of the optimized K-NN model are presented in Table 5. The model achieved a Root Mean Squared Error (RMSE) of 12.93, a Mean Absolute Error (MAE) of 10.33, and a Coefficient of Determination (\(R^{2}\)) of 0.026. The _Agri-GNN_ model shows superior performance to this baseline. The metrics signify a substantial enhancement over the baseline K-NN model, demonstrating the effectiveness of _Agri-GNN_ in yield prediction. _Agri-GNN_'s prediction error and \(R^{2}\) show significant potential when compared to recent work on a similar dataset (Chattopadhyay et al., 2023), where the highest \(R^{2}\) value achieved was less than 0.8. Such an enhancement underscores the potential benefits of graph-based neural networks, especially when handling datasets with inherent relational structures. In Figure 5, we present a comparison between the Actual and Predicted Yields. This graphical representation provides a comprehensive insight into the accuracy and precision of our model's predictions. An interesting observation from Figure 5 is the remarkable accuracy of the model's predictions for yields ranging between 50 and 90. The data points in this range cluster closely around the line of perfect agreement, suggesting that the model is particularly adept at predicting yields within this interval. This is noteworthy as it indicates that our model is not only reliable in general but also exhibits enhanced performance when predicting yields in this specific range. Figure 5: The Actual vs. Predicted Yield of our predictions. The red line signifies a perfect model. The model achieves an \(R^{2}\) value of 0.876. \begin{table} \begin{tabular}{|c|c|} \hline **Metric** & _Baseline K-NN Value_ \\ \hline Root Mean Squared Error (RMSE) & 12.93 \\ \hline Mean Absolute Error (MAE) & 10.33 \\ \hline Coefficient of Determination (\(R^{2}\)) & 0.026 \\ \hline \end{tabular} \end{table} Table 5: Performance metrics of the K-NN baseline model (\(k=18\)). ### 6.1 Scalability and Robustness In the previous section, an application of _Agri-GNN_ was outlined, focusing on agricultural yield prediction in farms located in Ames, Iowa. The results demonstrate the model's capability to efficiently and accurately predict yields based on geographical coordinates. This section delves into the scalability and robustness of the _Agri-GNN_ framework, discussing its potential application on a broader scale, both nationally and internationally. _Agri-GNN_ is designed with scalability in mind, allowing for seamless integration and deployment in diverse farming contexts across the world. The model's adaptability to various geographical and environmental conditions ensures consistent and reliable yield predictions irrespective of the location. The use of cloud computing resources and distributed computing frameworks enhances the scalability of _Agri-GNN_, enabling real-time processing and analysis of large datasets encompassing numerous farms across different geographical locations. Robustness is at the core of the _Agri-GNN_ framework.
The model employs advanced machine learning algorithms and techniques to ensure stability and reliability in yield predictions, even in the face of data variability and uncertainties. The robust nature of _Agri-GNN_ ensures uninterrupted and consistent performance, bolstering the confidence of farmers and stakeholders in the accuracy and reliability of the yield predictions provided by the model. Moreover, the continuous learning and adaptation capabilities of _Agri-GNN_ further enhance its robustness, ensuring it remains at the forefront of agricultural yield prediction technology. Unfortunately, _Agri-GNN_ is optimized to work best with large datasets where ample information from previous experiments is available. In many farming contexts, there is not enough information to be able to train _Agri-GNN_ to have reliable performance (Wiseman et al., 2019). Further research should explore how _Agri-GNN_ can be optimized to perform well in instances where ample farming data is lacking. ## 7 Conclusion In an ever-evolving agricultural landscape marked by climatic uncertainties, the pressing need for accurate and holistic crop yield predictions has never been greater. This study introduced Agri-GNN, a pioneering approach that harnesses the power of Graph Neural Networks to provide a comprehensive representation of the agricultural ecosystem. Unlike traditional models that often operate in silos, Agri-GNN's strength lies in its ability to synthesize diverse data modalities into a unified framework, capturing the intricate interplay between genetic, environmental, and spatial factors. Agri-GNN transcends conventional methodologies by viewing the agricultural ecosystem as an interconnected network, where crops are not just passive entities but active nodes influenced by their immediate surroundings and broader contexts. This perspective, combined with GNN's superior data processing capabilities, enables Agri-GNN to deliver predictions that are both granular and holistic. Furthermore, Agri-GNN's modular design ensures its relevance in a rapidly changing agricultural sector, allowing for seamless integration of new data sources and insights. Its precision agriculture approach not only aids in enhancing productivity but also paves the way for sustainable practices that respect both economic and environmental considerations. Our applications of the Agri-GNN framework's capabilities to the plot fields of Ames, Iowa show promising results, even obtaining better performance than the results obtained in Chattopadhyay et al. (2023). Agri-GNN's performance metrics, including \(RMSE\), \(MAE\), and \(R^{2}\), highlighted its proficiency in yield prediction. Notably, the model showcased significant improvements over existing models, reaffirming the potential of graph-based neural networks in agricultural applications. The t-SNE visualizations further provided insights into the model's embeddings, reinforcing the cohesive and interconnected nature of the data. In summary, Agri-GNN represents a paradigm shift in agricultural modeling. By capturing the complexity and interconnectedness of farming systems, it offers a fresh lens through which to view and address the multifaceted challenges of modern agriculture. As we stand at the crossroads of traditional practices and technological innovation, Agri-GNN serves as a beacon, guiding the way towards informed, sustainable, and resilient agricultural futures. ## 8 Acknowledgments The authors thank Joseif Raigne, Dr. Baskar Ganapathysubramanian, and Dr.
Soumik Sarkar for their invaluable feedback on the draft manuscript. The authors thank Joseif Raigne for his help in the construction of spatial data for the field experiments. The authors thank Shannon Denna and Christopher Grattoni for their support. The authors thank staff and student members of SinghSoybean group at Iowa State University, particularly Brian Scott, Will Doepke, Jennifer Hicks, Ryan Dunn, and Sam Blair for their assistance with field experiments and phenotyping. This work was supported by the Iowa Soybean Association, North Central Soybean Research Program, USDA CRIS project IOW04714, AI Institute for Resilient Agriculture (USDA-NIFA #2021-647021-35329), COALESCE: COntext Aware LEarning for Sustainable CyBr-Agricultural Systems (CPS Frontier #1954556), Smart Integrated Farm Network for Rural Agricultural Communities (SIRAC) (NSF S & CC #1952045), RF Baker Center for Plant Breeding, and Plant Sciences Institute. ## 9 Author Contributions Aditya Gupta conceptualized and designed the proposed model, developed the methodology, conducted the data analysis, constructed and implemented the Agri-GNN, provided visualization of the results, and took primary responsibility for writing, revising, and finalizing this paper. All authors have read, reviewed, and agreed to the published version of the manuscript. ## 10 Conflict of Interest The authors declare no conflict of interest.
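As a practical addendum to Sections 4.1.2 and 5.2, the following is a minimal sketch (not the authors' released code) of the edge construction: spatial edges from the global 3rd-percentile threshold on non-zero pairwise distances described in Section 5.2, and genotypic edges from equality of the population label, returning a PyTorch Geometric style edge_index. The function and variable names, and the use of SciPy, are choices made purely for illustration.

```python
import numpy as np
import torch
from scipy.spatial.distance import cdist

def build_edges(coords: np.ndarray, population: np.ndarray) -> torch.Tensor:
    """Illustrative sketch of Agri-GNN edge construction (Secs. 4.1.2, 5.2).

    coords: (n, 2) array of (latitude, longitude); population: (n,)
    genotype labels. Returns a (2, num_edges) edge_index tensor.
    """
    dist = cdist(coords, coords)                 # pairwise distances, Eq. (2)
    thresh = np.percentile(dist[dist > 0], 3)    # 3rd percentile of non-zero distances
    spatial = dist < thresh                      # E_spatial: close plots
    genotypic = population[:, None] == population[None, :]  # E_genotypic: same genotype
    adj = spatial | genotypic                    # union, Eq. (5)
    np.fill_diagonal(adj, False)                 # no self-loops
    src, dst = np.nonzero(adj)
    return torch.tensor(np.stack([src, dst]), dtype=torch.long)

# Toy usage with 100 hypothetical plots and 5 genotype groups:
rng = np.random.default_rng(0)
edge_index = build_edges(rng.uniform(size=(100, 2)),
                         rng.integers(0, 5, size=100))
```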
2308.06116
The Stochastic Steepest Descent Method for Robust Optimization in Banach Spaces
Stochastic gradient methods have been a popular and powerful choice of optimization methods, aimed at minimizing functions. Their advantage lies in the fact that one approximates the gradient as opposed to using the full Jacobian matrix. One research direction, related to this, has been on the application to infinite-dimensional problems, where one may naturally have a Hilbert space framework. However, there has been limited work done on considering this in a more general setup, such as where the natural framework is that of a Banach space. This article aims to address this by the introduction of a novel stochastic method, the stochastic steepest descent method (SSD). The SSD will follow the spirit of stochastic gradient descent, which utilizes Riesz representation to identify gradients and derivatives. Our motivation for using such a method is that it naturally allows one to adopt a Banach space setting, whose benefits recent applications have exploited, such as in PDE-constrained shape optimization. We provide a convergence theory related to this under mild assumptions. Furthermore, we demonstrate the performance of this method on a couple of numerical applications, namely a $p$-Laplacian and an optimal control problem. Our assumptions are verified in these applications.
Neil K. Chada, Philip J. Herbert
2023-08-11T13:10:46Z
http://arxiv.org/abs/2308.06116v1
# The stochastic steepest descent method for robust optimization in Banach spaces ###### Abstract. Stochastic gradient methods have been a popular and powerful choice of optimization methods, aimed at minimizing functions. Their advantage lies in the fact that one approximates the gradient as opposed to using the full Jacobian matrix. One research direction, related to this, has been on the application to infinite-dimensional problems, where one may naturally have a Hilbert space framework. However, there has been limited work done on considering this in a more general setup, such as where the natural framework is that of a Banach space. This article aims to address this by the introduction of a novel stochastic method, the stochastic steepest descent method (SSD). The SSD will follow the spirit of stochastic gradient descent, which utilises Riesz representation to identify gradients and derivatives. Our motivation for using such a method is that it naturally allows one to adopt a Banach space setting, whose benefits recent applications have exploited, such as in PDE-constrained shape optimization. We provide a convergence theory related to this under mild assumptions. Furthermore, we demonstrate the performance of this method on a couple of numerical applications, namely a \(p\)-Laplacian and an optimal control problem. Our assumptions are verified in these applications. Key words and phrases:steepest descent, robust optimization, Banach spaces, stochastic optimization 2020 Mathematics Subject Classification: 49M41, 65K15, 65C05 ## 1. Introduction Let us consider a Banach space \(\mathcal{X}\), on which we will have minimal assumptions, \((\Omega,\mathcal{F},\mathbb{P})\) a probability space, a function \(j\colon\mathcal{X}\times\Omega\to\mathbf{R}\) and a functional \(\mathcal{J}\colon\mathcal{X}\to\mathbf{R}\), which takes the form \[\mathcal{J}(u):=\int_{\Omega}j(u,\xi)\ \mathrm{d}\mathbb{P}(\xi)=\mathbb{E} \left[j(u,\cdot)\right]. \tag{1.1}\] To minimise \(\mathcal{J}\) is to minimise the expected value of the random map \(u\mapsto j(u,\cdot)\). This kind of problem is known as a robust minimisation problem, which allows for the 'best outcome on average', as opposed to the deterministic case of, for a given \(\xi\), minimising \(u\mapsto j(u,\xi)\), which provides the best case for that specific choice of \(\xi\). Beyond being mathematically interesting, this is clearly of interest in many practical settings. One such example is PDE-constrained optimization problems [19], or specifically PDE-constrained shape optimization [9, 28]. Let us state the problem of interest: \[\text{Find }u^{*}\in\text{argmin}\left\{\mathbb{E}\left[j(u,\cdot)\right]:u\in \mathcal{X}\right\}. \tag{1.2}\] Given (1.2), we will not necessarily find a solution to this problem, although our proposed algorithm will tend to a stationary point of \(\mathcal{J}\). The typical approach to find solutions to problems of this type is to use stochastic gradient descent (SGD) methods. This has received a lot of attention in recent years, particularly from the machine learning community, as instead of requiring full gradient information to solve (1.2), one uses a stochastic approximation of the gradient based on a random sample. However, we are not necessarily able to proceed in this manner due to being in a Banach space. As a result, we require a different stochastic method for the problem (1.2). Our proposed method, which we aim to understand, is the stochastic steepest descent method (SSD). 
To the best of our knowledge, there has been no formal work which has analyzed the SSD in a Banach space setting. As a result, this provides the motivation for this work, aiming to introduce some important elementary results which can be verified numerically. Before stating our main result and other contributions, we provide a brief literature review of related work. ### Literature review The SGD has been a popular choice for a range of stochastic optimization problems arising in various applied mathematical disciplines. Since its formulation [22, 26], it has been heavily used in the fields of machine learning, uncertainty quantification and imaging [3, 16, 31]. In the infinite-dimensional setting, the SGD operates in a Hilbert space, as it uses a Riesz representation. Some important areas in which this is exploited can be found in the aforementioned references, particularly related to learning in a least-squares formulation [4, 11, 23]. However, the computation of the gradient does not make sense in a Banach space \(\mathcal{X}\) setting, as the elements of the method are within the dual space \(\mathcal{X}^{*}\). This can be an issue, as for some examples a Banach space is necessary. A natural choice of application is shape optimization, particularly PDE-constrained shape optimization [9]. Typically, Hilbert spaces have been used in practical settings, which remain valid in the computational setting; however, recent work [8, 18, 24] has utilized Banach spaces, where one makes use of a steepest descent-type method. In the random shape optimization setting, there has been recent work closely related to the Hilbertian setting [13, 14]. In the context of inverse problems, a recent work by Jin et al. [21] considers the application of the SGD to inverse problems in Banach spaces. Specifically, they consider a linear setting which assumes the use of duality maps, with randomness induced by mini-batching. ### Main Theorem We state our main theorem below; it is a convergence theorem for the SSD method in a Banach space setting. **Theorem**.: _Suppose \(j\) is a.s. in \(C^{1,1}_{\mathrm{loc}}(\mathcal{X})\) and the sequence \(\{u_{n}\}_{n\geq 1}\) generated by Algorithm 1 is \(\mathcal{F}_{n}\)-measurable and bounded. Then the sequence \(\mathcal{J}(u_{n})\) converges a.s. and \(\liminf_{n\to\infty}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}=0\). Furthermore, if the sequence \(t_{n}\) satisfies_ \[\sum_{j=1}^{\infty}\frac{t_{j}}{\sum_{n=1}^{j}t_{n}}=\infty,\] _then, almost surely_ \[\min_{n=1,\dots,j}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}=\mathcal{O}\bigg{(}\Big{(}\sum_{n=1}^{j}t_{n}\Big{)}^{-1}\bigg{)}.\] To complement our main theorem, we will provide numerical experiments for two model problems. For each numerical experiment, we will verify the assumptions needed for the theorem to hold. The first is a \(p\)-Laplace-type problem with random data, and the second is an optimal control problem with \(L^{p}\) cost, for \(p>1\). Our assumptions are proven in the infinite-dimensional case, and therefore translate to the finite-dimensional setting. ### Outline The outline of this paper is as follows: in Section 2 we provide some necessary background material. This includes a discussion on the stochastic gradient descent method and the stochastic steepest descent method, with various assumptions. 
These lead on to Section 3, where we provide our main theorem and analysis related to the SSD in Banach spaces, specifically a convergence result. We will then discuss two particular applications of interest, which are presented in Section 4. Finally, numerical experiments on the two applications are provided in Section 5, before we conclude in Section 6. ## 2. Background material In this section we will provide an overview of the relevant background material. This will include a discussion on both the SSD and the SGD and their respective differences. As well as this, we will cover traditional assumptions related to both, and discuss some of their properties in a function space setting. This will include both generic Hilbert and Banach spaces. Consider \(\mathcal{X}\) a Banach space and \((\Omega,\mathcal{F},\mathbb{P})\) a probability space with \(\sigma\)-algebra \(\mathcal{F}\). We recall firstly some definitions, before discussing our algorithm of interest. **Definition 2.1** (Stochastic process).: _Given a Banach space \(\mathcal{X}\), a discrete-time stochastic process is a collection of \(\mathcal{X}\)-valued random variables indexed by \(n\), i.e. \(\{\vartheta_{n}:\Omega\to\mathcal{X}:n\in\mathbb{N}\}\)._ **Definition 2.2** (Filtration).: _A filtration is a sequence \(\{\mathcal{F}_{n}\}\) of sub \(\sigma\)-algebras of \(\mathcal{F}\), such that \(\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset\ldots\subset\mathcal{F}\). We further state that a stochastic process is said to be adapted to a filtration \(\{\mathcal{F}_{n}\}\) if and only if the sequence \(\{\vartheta_{n}\}\) is \(\mathcal{F}_{n}\)-measurable for all \(n\in\mathbb{N}\)._ Let \(j\colon\Omega\times\mathcal{X}\to\overline{\mathbf{R}}\) be a proper functional; by this we mean that there exists a point at which the functional is finite. The aim is to find \(u\in\mathcal{X}\) such that the following quantity \[\int_{\Omega}j(\omega,u)\ \mathrm{d}\mathbb{P}(\omega),\] is minimized. In the setting of a Hilbert space \(H\), a typical strategy is (deterministic) gradient descent based methods. In such a case, one seeks a \(v^{*}\in H\) which represents the negative gradient defined as \[\langle v^{*},\eta\rangle_{H}=\langle-\nabla_{H}\mathcal{J}(u),\eta\rangle_{H} :=-\mathcal{J}^{\prime}(u)[\eta], \tag{2.1}\] for all \(\eta\in H\), or equivalently, to compute the operation \[v^{*}\in\operatorname*{argmin}_{v\in H}\Big{\{}\frac{1}{2}\|v\|_{H}^{2}+ \mathcal{J}^{\prime}(u)[v]\Big{\}}. \tag{2.2}\] Alternatively, one can use the (deterministic) steepest descent method, which also works in Banach spaces and considers the following problem \[v^{*}\in\operatorname*{argmin}\Big{\{}\mathcal{J}^{\prime}(u)[v]:v\in \mathcal{X},\|v\|_{\mathcal{X}}\leq 1\Big{\}}.\] The approach which we introduce is the stochastic steepest descent method (SSD), which is defined, and functions, very similarly to the stochastic gradient descent method (SGD). Before we discuss the SSD, let us recall the SGD and various properties and common assumptions. We describe the SSD below in Algorithm 1, where \(j_{u}\) denotes the derivative of \(j\) w.r.t. the second variable \(u\). Similarly we present the SGD in Algorithm 2. Notice the _only_ difference between Algorithms 1 and 2 is line 5. ``` 1:Input: * sequence of step sizes \((t_{n},n\in\mathbb{N})\). * initial \(u_{0}\in\mathcal{X}\) such that \(\mathcal{J}(\cdot,u_{0})<\infty\) a.s. 
2:Output: 3:for\(n=1,\ldots,N\)do 4: generate \(\xi_{n}\in\Omega\) independent of previous draws. 5: find \(v_{n}\in\operatorname*{argmin}\left\{j_{u}(\xi_{n},u_{n})[v]:\ v\in\mathcal{X},\ \|v\|_{\mathcal{X}}\leq 1\right\}\). 6: iterate \(u_{n+1}=u_{n}+t_{n}v_{n}\). 7:endfor ``` **Algorithm 1** Stochastic Steepest Descent Method (SSD) In terms of the SGD related to PDE-constrained optimization, there has been recent extensive work, both numerical and analytical. For the latter, this has been primarily concerned with deriving convergence results related to the functional (1.1) as well as the iterates. In particular, a relevant result is Theorem 4.7 from [15], which states that, given a functional \(\mathcal{J}\) and the optimal state \(u\), under various convexity assumptions one has \[\mathbb{E}\big{[}\|u^{n}-u\|_{H}\big{]}\leq\mathcal{O}\Big{(}\frac{1}{\sqrt{n+ \nu}}\Big{)},\] where \(\nu\) is some particular constant related to the strong convexity. Furthermore, they also show \[\mathbb{E}\big{[}\mathcal{J}(u^{n})-\mathcal{J}(u)\big{]}\leq\mathcal{O}\Big{(} \frac{1}{n+\nu}\Big{)}.\] The above results are specific to the use of the SGD in Hilbert spaces. The latter result is the well-known Monte Carlo rate of convergence. The notion of strong convexity is difficult to use in a general Banach space setting, therefore attaining the equivalent may prove difficult. Given also that it is not essential to this work, this is left as potential future work. ``` 1:Input: * sequence of step sizes \((t_{n},n\in\mathbb{N})\). * initial \(u_{0}\in\mathcal{X}\) such that \(\mathcal{J}(\cdot,u_{0})<\infty\) a.s. 2:Output: 3:for\(n=1,\ldots,N\)do 4: generate \(\xi_{n}\in\Omega\) independent of previous draws. 5: find \(v_{n}\in\operatorname*{argmin}_{v\in H}\left\{j_{u}(\xi_{n},u_{n})[v]+\frac{1 }{2}\|v\|_{H}^{2}\right\}\). 6: iterate \(u_{n+1}=u_{n}+t_{n}v_{n}\). 7:endfor ``` **Algorithm 2** Stochastic Gradient Descent Method (SGD) ## 3. Theory In this section we provide a number of results which will be helpful in the understanding of the SSD in Banach spaces. This corresponds to our main theorem, which is a convergence result that additionally provides an asymptotic almost sure convergence rate for the minimal values of the norm of the derivative. We provide a number of key assumptions and definitions on which our analysis relies. The main assumption we have on our Banach space \(\mathcal{X}\) is that it should be the dual of another space, i.e. there is some space \(Y\) such that \(\mathcal{X}=Y^{*}\). This will be used to be able to apply the theorem of Banach-Alaoglu. A consequence of this is that we may _not_ generally take \(\mathcal{X}\) as \(L^{1}(U)\). ### Analytical results Let us begin by verifying that there exists a solution to the problem we wish to solve. This requires a couple of notions from the calculus of variations. 
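Before turning to those notions, the following minimal finite-dimensional sketch may help fix ideas about how Algorithms 1 and 2 differ. It is an illustrative toy of our own making, not the paper's DUNE-based implementation: the quadratic integrand \(j\), the choice \(\mathcal{X}=\mathbf{R}^{d}\) with the \(\ell_{p}\) norm, and the closed-form dual-norm direction standing in for the argmin in line 5 of Algorithm 1 are all assumptions made here purely for concreteness.

```python
import numpy as np

# Toy SSD vs. SGD on X = R^d with the l_p norm; j(xi, u) = 0.5*||u - xi||_2^2,
# so j_u(xi, u) = u - xi and J(u) = E[j(u, .)] is minimized at u = E[xi].
rng = np.random.default_rng(0)
d, p, N = 5, 4.0, 5000
q = p / (p - 1.0)                  # dual exponent: 1/p + 1/q = 1
target = rng.normal(size=d)        # draws are xi ~ N(target, I)
u_ssd = np.ones(d)
u_sgd = np.ones(d)

def steepest_direction(g, p, q):
    # Closed-form minimizer of v -> g.v over the l_p unit ball (line 5, Alg. 1):
    # v_i = -sign(g_i)|g_i|^(q-1) / ||g||_q^(q/p), which attains g.v = -||g||_q.
    nq = np.linalg.norm(g, ord=q)
    if nq == 0.0:
        return np.zeros_like(g)
    return -np.sign(g) * np.abs(g) ** (q - 1.0) / nq ** (q / p)

for n in range(1, N + 1):
    t_n = 1.0 / n                          # step sizes as used in Section 5
    xi = target + rng.normal(size=d)       # line 4: independent draw
    u_ssd = u_ssd + t_n * steepest_direction(u_ssd - xi, p, q)  # Algorithm 1
    u_sgd = u_sgd - t_n * (u_sgd - xi)                          # Algorithm 2

print("SSD error:", np.linalg.norm(u_ssd - target))
print("SGD error:", np.linalg.norm(u_sgd - target))
```

The only difference between the two updates is the choice of direction, mirroring the remark above that Algorithms 1 and 2 differ only in line 5; in a genuine Banach space the closed-form direction is replaced by the constrained minimisation, as in the applications of Section 4.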
**Definition 3.1**.: _Let \(\mathcal{G}\colon\mathcal{X}\to\overline{\mathbf{R}}\)._ * _We say_ \(\mathcal{G}\) _is weak-_\(*\) _coercive if for all_ \(\Lambda\in\mathbf{R}\)_, the sublevel set_ \[\{u\in\mathcal{X}:\mathcal{G}(u)\leq\Lambda\},\] _is sequentially weak-_\(*\) _relatively compact._ * _We say_ \(\mathcal{G}\) _is weak-_\(*\)_-lower-semi-continuous if for all sequences_ \(\{u_{j}\}_{j\in\mathbb{N}}\subset\mathcal{X}\) _with_ \(u_{j}\stackrel{{*}}{{\rightharpoonup}}u\) _(weak-_\(*\) _convergent), it holds_ \[\mathcal{G}(u)\leq\liminf_{j\to\infty}\mathcal{G}(u_{j}).\] **Proposition 3.2**.: _Suppose that \(j\) is Caratheodory, that is, measurable in the first component and continuous in the second component. If \(u\mapsto j(\cdot,u)\) is weak-\(*\)-lower-semi-continuous a.s., then so is \(u\mapsto\mathcal{J}(u)\)._ Proof.: Weak-\(*\)-lower-semi-continuity follows directly from the Dominated Convergence Theorem: \[\mathcal{J}(u)=\int_{\Omega}j(\cdot,u)\ \mathrm{d}\mathbb{P} \leq\int_{\Omega}\liminf_{j\to\infty}j(\cdot,u_{j})\ \mathrm{d}\mathbb{P}\] \[\leq\liminf_{j\to\infty}\int_{\Omega}j(\cdot,u_{j})\ \mathrm{d}\mathbb{P}=\liminf_{j\to\infty}\mathcal{J}(u_{j}).\] **Theorem 3.3**.: _Suppose \(j\) is Caratheodory, \(u\mapsto j(\cdot,u)\) is weak-\(*\)-lower-semi-continuous a.s., and \(\mathcal{J}\) is weak-\(*\) coercive. Then there is \(u^{*}\in\mathcal{X}\) such that_ \[u^{*}\in\operatorname*{argmin}_{u\in\mathcal{X}}\mathcal{J}(u).\] This result follows from the Direct Method of the calculus of variations. The next result is necessary for the proposed stochastic steepest descent method. **Proposition 3.4**.: _Suppose \(u\mapsto j(\cdot,u)\) is differentiable a.s. and denote this \(j_{u}(\cdot,u)\in\mathcal{X}^{*}\). Let us assume that \(v\mapsto j_{u}(\cdot,u)[v]\) is a.s. weak-*-lower-semi-continuous. Then, there exists \(v\in\mathcal{X}\) such that_ \[v\in\operatorname*{argmin}\{j_{u}(\cdot,u)[\tilde{v}]:\tilde{v}\in\mathcal{X },\,\|\tilde{v}\|_{\mathcal{X}}\leq 1\},\] _a direction of steepest descent._ Proof.: By the Banach-Alaoglu Theorem, which requires that \(\mathcal{X}\) is the dual space to some normed space, we know that for an infimising sequence, \(\{v_{n}\}_{n\in\mathbb{N}}\subset\mathcal{X}\) such that \(\|v_{n}\|_{\mathcal{X}}\leq 1\) and \(j_{u}(\cdot,u)[v_{n}]\to\inf\{j_{u}(\cdot,u)[\tilde{v}]:\tilde{v}\in\mathcal{X },\,\|\tilde{v}\|_{\mathcal{X}}\leq 1\}\), there is a subsequence \(\{n_{j}\}_{j\in\mathbb{N}}\) and \(v^{*}\in\mathcal{X}\) such that \(\|v^{*}\|_{\mathcal{X}}\leq 1\) and \(v_{n_{j}}\stackrel{{*}}{{\rightharpoonup}}v^{*}\). By the assumed weak-*-lower-semi-continuity, it holds that \[j_{u}(\cdot,u)[v^{*}]\leq\inf\{j_{u}(\cdot,u)[\tilde{v}]:\tilde{v}\in \mathcal{X},\,\|\tilde{v}\|_{\mathcal{X}}\leq 1\},\] which completes the proof. Notice that, here, the condition that \(v\mapsto j_{u}(\cdot,u)[v]\) is weak-*-lower-semi-continuous, is a technical condition which arises from the fact we have not assumed that \(\mathcal{X}\) is reflexive. This assumption has implicitly appeared in shape optimisation works, see [25, equation (12)] for example. ### Algorithmic results In order to show convergence of our method, we require a few technical results. The first is a classical and important result of stochastic algorithms, which we will not prove, but refer the reader to [27]. **Lemma 3.5** (Robbins-Siegmund).: _Let \(\{\mathcal{F}_{n}\}_{n\geq 1}\) be a filtration. 
Let \(\nu_{n}\), \(a_{n}\), \(b_{n}\), and \(c_{n}\) be non-negative random variables which are adapted to \(\mathcal{F}_{n}\) for \(n\geq 1\). If_ \[\begin{split}\mathbb{E}\left[\nu_{n+1}|\mathcal{F}_{n}\right] \leq\nu_{n}\left(1+a_{n}\right)+b_{n}-c_{n},\quad\text{and}\\ \sum_{n=1}^{\infty}a_{n}<\infty,\quad\sum_{n=1}^{\infty}b_{n}< \infty,\quad\text{ a.s.},\end{split} \tag{3.1}\] _then almost surely \(\nu_{n}\) is convergent and \(\sum_{n=1}^{\infty}c_{n}<\infty\)._ We will say a functional \(\mathcal{G}\colon\mathcal{X}\to\bar{\mathbf{R}}\) is in \(C^{1,1}_{\mathrm{loc}}(\mathcal{X})\) if it is differentiable and its derivative is locally Lipschitz, i.e. for any bounded set \(B\subset\mathcal{X}\) there exists \(L_{B}>0\) such that \(\|\mathcal{G}^{\prime}(u)-\mathcal{G}^{\prime}(v)\|_{\mathcal{X}^{*}}\leq L_{B}\|u-v\|_{\mathcal{X}}\) for any \(u\), \(v\in B\). **Lemma 3.6**.: _Suppose that \(\mathcal{G}\colon\mathcal{X}\to\bar{\mathbf{R}}\) is in \(C^{1,1}_{\mathrm{loc}}(\mathcal{X})\). Then for all \(u,\,v\in\mathcal{X}\) there is \(L>0\), which depends on \(u\) and \(v\), such that_ \[\mathcal{G}(u)-\mathcal{G}(v)\leq\mathcal{G}^{\prime}(v)[u-v]+L\|u-v\|_{ \mathcal{X}}^{2}.\] Proof.: Let \(\phi\colon\mathbf{R}\ni t\mapsto\mathcal{G}(u+t(v-u))\), then \(\phi^{\prime}(t)=\mathcal{G}^{\prime}(u+t(v-u))[v-u]\). It also holds that \[\phi(1)-\phi(0)=\phi^{\prime}(0)+\int_{0}^{1}(\phi^{\prime}(t)-\phi^{\prime}( 0))\mathrm{d}t.\] As such, we have \[\mathcal{G}(v)-\mathcal{G}(u)=\mathcal{G}^{\prime}(u)[v-u]+\int_{0}^{1}(\mathcal{G}^{\prime}(u+t(v-u))[v-u]-\mathcal{G}^{\prime}(u)[v-u])\mathrm{d}t\leq\mathcal{G}^{\prime}(u)[v-u]+\int_{0}^{1}L\|t(v-u)\|_{\mathcal{X}}\|v-u\|_{\mathcal{X}}\mathrm{d}t,\] which completes the proof. We are now able to provide our main result. This result states that, under appropriate conditions, the functional along the iterations generated by Algorithm 1 converges and that the derivative of the functional has vanishing \(\liminf\). **Theorem 3.7**.: _Suppose \(j\) is a.s. in \(C^{1,1}_{\mathrm{loc}}(\mathcal{X})\) and the sequence \(\{u_{n}\}_{n\geq 1}\) generated by Algorithm 1 is \(\mathcal{F}_{n}\)-measurable and bounded. Then the sequence \(\mathcal{J}(u_{n})\) converges a.s. and \(\liminf_{n\to\infty}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}=0\). Furthermore, if the sequence \(t_{n}\) satisfies_ \[\sum_{j=1}^{\infty}\frac{t_{j}}{\sum_{n=1}^{j}t_{n}}=\infty, \tag{3.2}\] _then, almost surely_ \[\min_{n=1,\ldots,j}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}= \mathcal{O}\bigg{(}\Big{(}\sum_{n=1}^{j}t_{n}\Big{)}^{-1}\bigg{)}. \tag{3.3}\] Proof.: Since \(\{u_{n}\}_{n\geq 1}\) is bounded, there is \(L>0\) such that \(u\mapsto j_{u}(\cdot,u)\) is Lipschitz with constant \(L\) on the bounded set \(B:=\bigcup_{n\geq 1}\{u_{n}\}\subset\mathcal{X}\). By applying Lemma 3.6 to \(u\mapsto j(\cdot,u)\) and taking expectation, it holds that \[\mathcal{J}(u_{n+1})-\mathcal{J}(u_{n})\leq t_{n}\mathcal{J}^{\prime}(u_{n})[v _{n}]+Lt_{n}^{2}\|v_{n}\|_{\mathcal{X}}^{2}.\] Recall that the quantities in the above are random, depending on the draw at each step of the algorithm; also, by definition, \(\|v_{n}\|_{\mathcal{X}}\leq 1\). By taking expectation conditional on \(\mathcal{F}_{n}\), one has that \[\mathbb{E}[\mathcal{J}(u_{n+1})|\mathcal{F}_{n}]-\mathcal{J}(u_{n})\leq t_{n} \mathbb{E}[\mathcal{J}^{\prime}(u_{n})[v_{n}]|\mathcal{F}_{n}]+Lt_{n}^{2}\|v_{ n}\|_{\mathcal{X}}^{2}. 
\tag{3.4}\] where we have used that \(u_{n}\) is measurable with respect to \(\mathcal{F}_{n}\). Since we are choosing the draws at each step of the algorithm independent of the previous draws, it holds that \[\mathbb{E}[\mathcal{J}^{\prime}(u_{n})[v_{n}]|\mathcal{F}_{n}]=-\|\mathcal{J} ^{\prime}(u_{n})\|_{\mathcal{X}^{*}}.\] Therefore, we find ourselves in the setting of Lemma 3.5 with \(a_{n}=0\), \(b_{n}=t_{n}^{2}L\), \(c_{n}=t_{n}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}\), and \(\nu_{n}=\mathcal{J}(u_{n})\), therefore it holds that \(\mathcal{J}(u_{n})\) converges a.s., and \(\sum_{n=1}^{\infty}t_{n}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}\) is finite, hence \(\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}\) has vanishing \(\liminf\). We now turn to the proof of (3.3). We begin by defining for all \(n\in\mathbb{N}\), \[\eta_{n}:=\frac{2t_{n}}{\sum_{j=1}^{n}t_{j}},\quad T_{1}:=\|\mathcal{J}^{ \prime}(u_{1})\|_{\mathcal{X}^{*}},\quad T_{n+1}:=(1-\eta_{n})T_{n}+\eta_{n} \|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}},\] where we note that \(\eta_{1}=2\) and, since \(t_{n}\) is decreasing, \(\eta_{n}\in[0,1]\) for \(n>1\), as \[0\leq\eta_{n}=\frac{2t_{n}}{\sum_{j=1}^{n}t_{j}}\leq\frac{2t_{n}}{nt_{n}}\leq 1,\quad\text{for }n>1.\] Moreover, \[2t_{n}\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}=\sum_{j=1}^{n}t_{ j}T_{n+1}+t_{n}T_{n}-\sum_{j=1}^{n-1}t_{j}T_{n}.\] Using this in equation (3.4), we have \[\mathbb{E}\Big{[}\mathcal{J}(u_{n+1})+\frac{1}{2}\sum_{j=1}^{n}t _{j}T_{n+1}|\mathcal{F}_{n}\Big{]} =\mathbb{E}[\mathcal{J}(u_{n+1})|\mathcal{F}_{n}]+\frac{1}{2}\sum _{j=1}^{n}t_{j}T_{n+1}\] \[\leq\mathcal{J}(u_{n})+\frac{1}{2}\sum_{j=1}^{n-1}t_{j}T_{n}-\frac {1}{2}t_{n}T_{n}+Lt_{n}^{2}.\] Therefore, by the Robbins-Siegmund Lemma, i.e. Lemma 3.5 with \(a_{n}=0\), \(b_{n}=Lt_{n}^{2}\), \(c_{n}=\frac{1}{2}t_{n}T_{n}\), and \(\nu_{n}=\mathcal{J}(u_{n})+\frac{1}{2}\sum_{j=1}^{n-1}t_{j}T_{n}\), it holds that \[\Big{\{}\mathcal{J}(u_{n})+\frac{1}{2}\sum_{j=1}^{n-1}t_{j}T_{n}\Big{\}}\text { converges a.s. and }\sum_{n=1}^{\infty}t_{n}T_{n}<\infty.\] Since \(\{\mathcal{J}(u_{n})\}\) converges almost surely, as demonstrated above, the sequence \(\{\sum_{j=1}^{n-1}t_{j}T_{n}\}\) converges as well. Then the fact \(\sum_{n=1}^{\infty}t_{n}T_{n}<\infty\) yields \[\lim_{n\to\infty}\frac{t_{n}}{\sum_{j=1}^{n-1}t_{j}}\sum_{j=1}^{n-1}t_{j}T_{n} =\lim_{n\to\infty}t_{n}T_{n}=0.\] Using (3.2) and the above limit, it holds that \(T_{n}=\mathcal{O}\bigg{(}\Big{(}\sum_{j=1}^{n-1}t_{j}\Big{)}^{-1}\bigg{)}\) almost surely. Using the definition of \(T_{n}\) and that \(\eta_{n}\in[0,1]\), one can deduce that for each \(n>1\) there exists a sequence \(\tilde{\eta}_{j}\in[0,1]\) for \(j=1,\ldots,n-1\) such that \[\sum_{j=1}^{n-1}\tilde{\eta}_{j}=1,\quad T_{n}=\sum_{j=1}^{n-1}\tilde{\eta}_{j} \|\mathcal{J}^{\prime}(u_{j})\|_{\mathcal{X}^{*}}.\] Moreover, \[T_{2} =\|\mathcal{J}^{\prime}(u_{1})\|_{\mathcal{X}^{*}},\] \[T_{n} \geq\sum_{j=1}^{n-1}\tilde{\eta}_{j}\min_{k=1,\ldots,n-1}\| \mathcal{J}^{\prime}(u_{k})\|_{\mathcal{X}^{*}}=\min_{k=1,\ldots,n-1}\|\mathcal{ J}^{\prime}(u_{k})\|_{\mathcal{X}^{*}}\geq 0,\quad\text{ for }n>2.\] Therefore the above statement for \(T_{n}\) holds for any \(n>1\), which yields the result and concludes the proof. **Remark 3.8**.: _The assumed regularity, that \(u\mapsto j(\cdot,u)\) is in \(C^{1,1}_{\mathrm{loc}}(\mathcal{X})\), is in line with the work of Geiersbach et al. [13]. 
Let us note that one may relax the assumption of boundedness of \(\{u_{n}\}_{n\geq 1}\) if one takes the stronger assumption that \(u\mapsto j(\cdot,u)\) is in \(C^{1,1}(\mathcal{X})\)._ ## 4. Applications In this section we introduce two applications of interest, on which we will test our SSD in Section 5. These comprise an elliptic partial differential equation (PDE) control problem, as well as a \(p\)-Laplacian PDE. We will state each of these applications, verify the necessary assumptions, and describe formally how to obtain the direction of steepest descent. ### Application 1 The first application we consider is the \(p\)-Laplacian PDE. Given \(p\in(1,\infty)\), let \(\mathcal{X}=W^{1,p}_{0}(U)\), with norm \(\|u\|_{\mathcal{X}}:=\left(\int_{U}|\nabla u|^{p}\right)^{1/p}\). We let \(g\colon\Omega\times U\to\mathbf{R}\) satisfy \(\xi\mapsto g(\xi,\cdot)\in W^{1,p}(U)\) a.s. The function \(j\) is given by \[j(\xi,u)=\frac{1}{p}\int_{U}|\nabla(u+g(\xi,\cdot))|^{p}\mathrm{d}x.\] The derivative of this is given by \[j_{u}(\xi,u)[v]=\int_{U}|\nabla(u+g(\xi,\cdot))|^{p-2}\nabla(u+g(\xi,\cdot)) \cdot\nabla v\mathrm{d}x. \tag{4.1}\] One may formally demonstrate that the direction of steepest descent \(v(u)\in\mathcal{X}\) satisfies \(\|v(u)\|_{\mathcal{X}}^{p}\leq 1\), \[\lambda\int_{U}|\nabla v(u)|^{p-2}\nabla v(u)\cdot\nabla\eta\ \mathrm{d}x=-j_{u}(\xi,u)[\eta], \tag{4.2}\] for all \(\eta\in\mathcal{X}\), where \(\lambda\in\mathbf{R}\) is a Lagrange multiplier and satisfies \(\lambda\geq 0\), \(\lambda(\|v(u)\|_{\mathcal{X}}^{p}-1)=0\). That is to say, by rescaling the solution of a \(p\)-Laplace problem, one may find the direction of descent without having to solve a non-linear convex problem (this rescaling is verified in a short calculation below). #### 4.1.1. Verification of conditions For the results we have shown, we have made assumptions on \(j\), \(\mathcal{J}\), and \(\mathcal{X}\). These are namely that (i) \(\mathcal{X}\) is the dual of a normed space, (ii) \(j\) is appropriately smooth, (iii) \(j\) is weak-\(*\)-lower-semi-continuous in the second variable, (iv) \(v\mapsto j_{u}(\cdot,u)[v]\) is weak-\(*\)-lower-semi-continuous, and that (v) \(\mathcal{J}\) is weak-\(*\)-coercive. For simplicity, let us assume that \(p\geq 2\). 1. Since \(\mathcal{X}=W^{1,p}_{0}(U)\) is reflexive, it is certainly the dual of a space. 2. It is clear that \(j\) is differentiable in the second variable; the form of the derivative is given in (4.1). One then calculates \[|j_{u}(\cdot,u_{1})[v]-j_{u}(\cdot,u_{2})[v]|\leq\|\nabla(u_{1}-g)\|_{L^{p}}\|\nabla(u_{1}-u_{2})\|_{L^{p}}\|\nabla v\|_{L^{p}}+(p-2)\|\nabla(u_{2}-g)\|_{L^{p}}\|\nabla(u_{1}-u_{2})\|_{L^{p}}\|\nabla v\|_{L^{p}},\] for example, which yields the relevant local Lipschitz condition. 3. Since we consider the \(p\) power of the norm, it is weak-\(*\)-lower-semi-continuous. 4. By reflexivity, this holds. 5. One may show the stronger coercivity assumption by seeing that \[j(\cdot,u)\geq\frac{1}{p^{2}}\int_{U}|\nabla u|^{p}\mathrm{d}x-p^{p-2}\int_{U}| \nabla g|^{p}\mathrm{d}x.\] After integrating in probability, \[\mathcal{J}(u)\geq\frac{1}{p^{2}}\int_{U}|\nabla u|^{p}\mathrm{d}x-p^{p-2} \mathbb{E}\left(\int_{U}|\nabla g|^{p}\mathrm{d}x\right),\] which, assuming appropriate \(p\)-integrability of \(g\), yields the desired coercivity. ### Application 2 Our second application is based on a modified 2-dimensional elliptic Poisson equation. 
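The rescaling claim referred to above admits a one-step check; the following verification is our own sketch (stated for Application 1, it applies verbatim to the analogous construction for Application 2 described next), assuming the unconstrained solve returns some nonzero \(w\in\mathcal{X}\) with \(\int_{U}|\nabla w|^{p-2}\nabla w\cdot\nabla\eta\ \mathrm{d}x=-j_{u}(\xi,u)[\eta]\) for all \(\eta\in\mathcal{X}\). Setting \(v=w/\|w\|_{\mathcal{X}}\) and \(\lambda=\|w\|_{\mathcal{X}}^{p-1}\), the \((p-1)\)-homogeneity of \(z\mapsto|\nabla z|^{p-2}\nabla z\) gives \[\lambda\int_{U}|\nabla v|^{p-2}\nabla v\cdot\nabla\eta\ \mathrm{d}x=\|w\|_{\mathcal{X}}^{p-1}\,\|w\|_{\mathcal{X}}^{-(p-1)}\int_{U}|\nabla w|^{p-2}\nabla w\cdot\nabla\eta\ \mathrm{d}x=-j_{u}(\xi,u)[\eta],\] while \(\|v\|_{\mathcal{X}}=1\), \(\lambda\geq 0\) and \(\lambda(\|v\|_{\mathcal{X}}^{p}-1)=0\), so the pair \((v,\lambda)\) satisfies (4.2) together with the stated complementarity conditions.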
Given \(p\in(1,\infty)\), let \(\mathcal{X}=L^{p}(U)\) with the standard norm \(\|u\|_{\mathcal{X}}:=\left(\int_{U}|u|^{p}\right)^{1/p}\). Fix \(y_{d}\in L^{2}(U)\) and \(\beta>0\). The function \(j\) is given by \[j(\xi,u)=\frac{1}{2}\int_{U}|y(\xi,\cdot)-y_{d}|^{2}\mathrm{d}x+\frac{\beta}{ p}\int_{U}|u|^{p}\mathrm{d}x, \tag{4.3}\] where for a random diffusivity coefficient \(D\), \(y(\xi,\cdot)\in H^{1}(U)\) satisfies \[-\mathrm{div}\left(D(\xi)\nabla y\right)+y+y^{5}= F(\xi)+u\quad\text{ in }U,\] \[\nu\cdot\nabla y= 0\quad\text{ on }\partial U. \tag{4.4}\] The derivative of \(j\) is given by \[j_{u}(\xi,u)[v]=\int_{U}\left(\beta|u|^{p-2}uv-qv\right)\mathrm{d}x, \tag{4.5}\] where \(q\in H^{1}(U)\) is the adjoint variable and satisfies \[\int_{U}\left(D(\xi)\nabla q\cdot\nabla\eta+q\eta+5y^{4}\eta q\right) \mathrm{d}x=-\int_{U}(y(\xi,\cdot)-y_{d})\eta\ \mathrm{d}x,\] for all \(\eta\in H^{1}(U)\). Along the same lines as in the previous application, one may formally demonstrate that the direction of steepest descent \(v(u)\in\mathcal{X}\) satisfies \(\|v(u)\|_{\mathcal{X}}^{p}\leq 1\), \[\lambda\int_{U}|v(u)|^{p-2}v(u)\eta\ \mathrm{d}x=-j_{u}(\xi,u)[\eta],\] for all \(\eta\in\mathcal{X}\), where \(\lambda\in\mathbf{R}\) is a Lagrange multiplier and satisfies \(\lambda\geq 0\) and \(\lambda(\|v(u)\|_{\mathcal{X}}^{p}-1)=0\). #### 4.2.1. Verification of conditions As mentioned previously, we have made assumptions on \(j\), \(\mathcal{J}\), and \(\mathcal{X}\) which require verification. For convenience we repeat them here: (i) \(\mathcal{X}\) is the dual of a normed space, (ii) \(j\) is appropriately smooth, (iii) \(j\) is weak-\(*\)-lower-semi-continuous in the second variable, (iv) \(v\mapsto j_{u}(\cdot,u)[v]\) is weak-\(*\)-lower-semi-continuous, and that (v) \(\mathcal{J}\) is weak-\(*\)-coercive. For simplicity, let us assume that \(p\geq 2\). * Since \(\mathcal{X}=L^{p}(U)\) is reflexive, it is certainly the dual of a space. * \(j\) is differentiable in the second variable; the form of the derivative is given in (4.5). The fact that the first derivative is locally Lipschitz follows by seeing that the maps \(u\mapsto y(u)\) and \(y\mapsto\frac{1}{2}\int_{U}(y-y_{d})^{2}\mathrm{d}x\) are smooth, hence locally Lipschitz in each derivative, and using the same argument as in the preceding section to handle the \(p\) power terms. * For the term \(\frac{1}{2}\int_{U}(y-y_{d})^{2}\ \mathrm{d}x\), we have that this is a smooth map composed with a compact map, \(u\mapsto y(u)\), which yields continuity. For the remaining term, \(\frac{\beta}{p}\int_{U}|u|^{p}\mathrm{d}x\), this is the \(p\) power of the norm, hence weak-\(*\)-lower-semi-continuity holds. * By reflexivity, this holds. * One may show the stronger coercivity assumption by seeing that \[j(\cdot,u)\geq\frac{\beta}{p}\int_{U}|u|^{p}\mathrm{d}x.\] After integrating in probability, \[\mathcal{J}(u)\geq\frac{\beta}{p}\int_{U}|u|^{p}\mathrm{d}x,\] which yields the desired coercivity. ## 5. Numerical examples This section is devoted to providing numerical simulations for each of the applications presented in Section 4. We will test the SSD on each application, before directly comparing with the SGD. Furthermore, we will also verify the result related to Theorem 3.7, which provides a rate of convergence. Throughout this section, we will consider the domain \(U=(0,1)^{2}\). Our experiments will use standard finite elements. For the finite elements, we use DUNE [1], in particular the Python bindings [10]. 
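The random coefficients used in these experiments are built from a truncated Karhunen-Loeve expansion, which is described in detail next (Section 5.0.1). As a concrete reference point, the following NumPy sketch is a hypothetical stand-alone sampler of our own (the paper's fields are assembled within the DUNE pipeline, and the function name `sample_kle` is ours); it assumes the tensor-product form of the Neumann cosine basis on \(U\), i.e. \(\varphi_{k}(x)=\varphi_{k_{1}}(x_{1})\varphi_{k_{2}}(x_{2})\) with \(\varphi_{0}\equiv 1\) and \(\varphi_{m}(s)=\sqrt{2}\cos(m\pi s)\) for \(m\geq 1\).

```python
import numpy as np

# Sketch of a truncated KLE sampler on U = (0,1)^2 (tensor-product Neumann
# cosine basis assumed); eigenvalues lambda_k = (|k|^2 pi^2 + tau^2)^(-alpha).
def sample_kle(n_grid=64, k_max=10, tau=1.0, alpha=3.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    s = np.linspace(0.0, 1.0, n_grid)
    # 1-d basis evaluated on the grid: row m is phi_m(s).
    phi = np.ones((k_max + 1, n_grid))
    for m in range(1, k_max + 1):
        phi[m] = np.sqrt(2.0) * np.cos(m * np.pi * s)
    theta = np.zeros((n_grid, n_grid))
    for k1 in range(k_max + 1):
        for k2 in range(k_max + 1):
            lam = ((k1**2 + k2**2) * np.pi**2 + tau**2) ** (-alpha)  # lambda_k
            xi = rng.standard_normal()                                # xi_k ~ N(0,1)
            theta += np.sqrt(lam) * xi * np.outer(phi[k1], phi[k2])
    return theta  # Theta(x) evaluated on the n_grid x n_grid tensor grid

theta = sample_kle(tau=1.0, alpha=3.0, rng=np.random.default_rng(1))
print(theta.shape, float(theta.min()), float(theta.max()))
```

With \(\tau=1\), \(\alpha=3\) and the cut-off \(k_{1},k_{2}\leq 10\), this matches the hyperparameters used for Application 1 below; larger \(\alpha\) yields visibly smoother draws, consistent with the Hölder regularity discussed next.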
The sequence of step sizes will be given by \(t_{n}:=n^{-1}\) for \(n\geq 1\) and we will take \(U=(0,1)^{2}\). #### 5.0.1. Random coefficient Before conducting our experiments, we describe how we simulate the random coefficient used in our numerical experiments. We take the particular example of defining it on a two-dimensional domain for simplicity. We let \(-\triangle\) denote the Laplacian, which on the computational domain \(U\) is subject to homogeneous Neumann boundary conditions. From this we can define the covariance operator of our random coefficient \(\xi\), \[C_{0}=\left(-\triangle+\tau^{2}\right)^{-\alpha},\] where \(\tau\in\mathbb{R}^{+}\) denotes the inverse lengthscale of the random field and \(\alpha\in\mathbb{R}^{+}\) determines the regularity; specifically draws from the random field are Hölder with exponent up to \(\alpha-1\) (since spatial dimension \(d=2\)). From this we note that the eigenvalue problem \[C_{0}\varphi_{k}=\lambda_{k}\varphi_{k},\] has solutions, with \(\mathbb{N}=\{0,1,2,\cdots\}\), \[\varphi_{k}(x)=\sqrt{2}\cos(k\pi x),\quad\lambda_{k}=\left(|k|^{2}\pi^{2}+\tau ^{2}\right)^{-\alpha},\quad k\in\mathbb{N}^{2}.\] Here \(X=L^{2}(U,\mathbb{R})\) and the \(\varphi_{k}\) are orthonormal in \(X\) with respect to the standard inner-product. Draws from the measure \(N(0,C_{0})\) are given by the Karhunen-Loeve expansion (KLE) \[\Theta=\sum_{k\in\mathbb{N}^{2}}\sqrt{\lambda_{k}}\xi_{k}\varphi_{k}(x), \quad\xi_{k}\sim N(0,1)\quad\text{i.i.d.}\,. \tag{5.1}\] This random function will be almost surely in \(X\) and in \(C(U,\mathbb{R})\) provided that \(\alpha>d/2\), therefore we impose this condition. As we are working in a \(2d\) setting we require \(\alpha>1\). Random draws from the KLE (5.1) are provided in Figure 1 for varying lengthscale and regularity. It is typical in the computational setting to cut off the sum in the KLE (5.1), known as a truncation. ### Application 1 For the first experiment, we choose the random data \(g=\Theta+\cos(\pi x_{1})^{2}\cos(\pi x_{2})^{2}\), where \(\Theta\) is the KLE with prescribed hyperparameters \(\tau=1\) and \(\alpha=3\); for computational simplicity, we cut off so that \(k_{1},k_{2}\leq 10\). We also choose \(p=4\). Figure 2 shows the negative of the directional derivative over the iterations, along with the theoretical rate shown in (3.3). The numerical simulations appear to abide by the rate attained in Theorem 3.7. ### Application 2 For the second application, we choose our random coefficient \(D=I_{h}(1+\exp(\Theta))\), where \(I_{h}\) is the standard Lagrange interpolation and again \(\Theta\) is the KLE with prescribed hyperparameters \(\tau=1\) and \(\alpha=2\), where we again cut off so that \(k_{1},k_{2}\leq 10\). Also \(p=4\) is taken. We choose \(y_{d}(x)=1+256((x_{1}-1)x_{1}(x_{2}-1)x_{2})^{2}\), \(\beta=10^{-2}\), and \(F(\xi)=1+5\Theta\), where this \(\Theta\) is the same KLE as above, but drawn independently. Figure 3 shows the negative of the directional derivative over the iterations, along with the rate shown in (3.3). Similarly to the \(p\)-Laplace example, the numerical simulations appear to abide by the rate attained in Theorem 3.7. ### Application 2': A comparison of SGD and SSD Here we consider the exact setting of the previous application but instead set \(p=2\) so that we are in a Hilbertian setting. In the Hilbertian setting, one may apply the classical stochastic gradient descent method. 
It is easy to verify that in this setting, if one were to consider stochastic steepest descent with step length \(S_{n}:=\|\mathcal{J}^{\prime}(u_{n})\|_{\mathcal{X}^{*}}\,t_{n}\), one recovers SGD with step length \(t_{n}\). Heuristically, one might then expect that the gradient descent method will work well far away from the minimum, when the derivative has a large magnitude, whereas the SSD should work faster when the derivative has a small magnitude. [Figure 1. Random draws from a Karhunen-Loeve expansion on the square domain \(U=[0,1]^{2}\).] [Figure 2. Derivatives for the \(p\)-Laplace-type experiment in Section 5.1.] For comparison, we consider the energy; since this is a random problem, it is more realistic to take a Monte-Carlo sample, given as \[\frac{\beta}{p}\int_{U}|u_{n}|^{p}\mathrm{d}x+\frac{1}{2N}\sum_{i=1}^{N}\int_{U }(y(u_{n},\xi_{n_{i}})-y_{d})^{2}\mathrm{d}x,\] where we have \(p=2\), and choose \(N=20\) samples. We see for this particular application that SSD appears to be outperforming SGD, both in terms of the estimated energy decrease and the size of the steepest descent. [Figure 3. Derivative for the optimal control experiment in Section 5.2.] [Figure 4. Comparison of the energies for the optimal control experiments in Section 5.3.] ## 6. Conclusion The purpose of this paper was to provide a first understanding of the stochastic steepest descent method in a Banach space setting. Commonly, gradient methods are exploited which naturally induce a Hilbertian setting. In this work we provided a first simple understanding of a convergence analysis in a general Banach space setting, where one does not require the assumption of reflexivity. Our main result also includes a rate of convergence which is similar to what is achieved for the stochastic gradient method. Numerical simulations were conducted comparing the stochastic steepest descent method to the gradient method, tested on two problems: a random 2D elliptic Poisson problem and a \(p\)-Laplacian problem. Our results demonstrate the improvements of our methodology, related to both relative errors and the energy. There are numerous natural extensions one can consider from this work. We provide a short summary of some of these below. * One direction would be shape optimization, which provided the initial motivation behind this work. In particular, one could aim to understand recent Banach space algorithms, which include the \(W^{1,\infty}\)-approach discussed in the various works [8, 18]. New convergence analysis is required here. * Our numerical experiments were conducted on PDE-constrained optimization problems, which induce randomness. We assumed the randomness to be independent; however, in various scenarios one instead has correlated data. Extending this to that setting is of particular interest. * Finally, related to the first point, one may consider the use of such a method in parameter estimation problems, or inverse problems. Such problems assume a unique minimizer exists in a Hilbert space, or use gradient methodologies which relate to that. Such a work would go beyond what has been done in [21], and consider the SSD fully. Other potential connections could lie within bilevel optimization [7], and in the context of non-Gaussian reconstruction [5, 32]. ## Acknowledgements NKC is supported by an EPSRC-UKRI AI for Net Zero Grant: "Enabling CO2 Capture And Storage Projects Using AI" (grant EP/Y006143/1). PJH is funded by EPSRC (grant EP/W005840/1).
2308.01679
Equivariant Movability of Topological Groups
The equivariant movability of topological spaces with an action of a given topological group $G$ is considered. In particular, the equivariant movability of topological groups is studied. It is proved that a second countable compact group $G$ is a Lie group if and only if it is equivariantly movable.
Pavel S. Gevorgyan
2023-08-03T10:41:36Z
http://arxiv.org/abs/2308.01679v1
# Equivariant movability of topological groups ###### Abstract. The equivariant movability of topological spaces with an action of a given topological group \(G\) is considered. In particular, the equivariant movability of topological groups is studied. It is proved that a second countable compact group \(G\) is a Lie group if and only if it is equivariantly movable. Key words and phrases:Equivariant shape, equivariant movability, \(G\)-space, Lie group 2020 Mathematics Subject Classification: 55P91; 55P55 ## 1. Introduction The first results on the equivariant movability of \(G\)-spaces were obtained in [4] and [5]. If \(X\) is a \(p\)-paracompact space and \(H\) is a closed subgroup of a topological group \(G\), then the \(G\)-movability of \(X\) implies its \(H\)-movability [5, Theorem 3.3]. If \(X\) is a metrizable \(G\)-movable space and \(H\) is a closed normal subgroup of the topological group \(G\), then the \(H\)-orbit space \(X|_{H}\) with the natural action of the group \(G\) is \(G\)-movable as well [5, Theorem 6.1]. In the case of \(H=G\), the equivariant movability of a metrizable \(G\)-space implies the movability of the orbit space \(X|_{G}\)[5, Corollary 6.2]. The converse is generally false [5, Example 6.3]. However, if \(X\) is metrizable, \(G\) is a compact Lie group, and the action of \(G\) on \(X\) is free, then the equivariant movability of the \(G\)-space \(X\) is equivalent to the movability of the orbit space \(X|_{G}\)[5, Theorem 7.2]. The \(G\)-movability of \(X\) implies also the movability of the \(H\)-fixed point space \(X[H]\)[5, Theorem 4.1]. In particular, the equivariant movability of a \(G\)-space \(X\) implies the movability of the topological space \(X\)[5, Corollary 3.5]. The converse is not true even for the cyclic group \(Z_{2}\). In [5], an example of a \(Z_{2}\)-space \(X\) which is movable but not \(Z_{2}\)-movable was constructed [5, Example 5.1]. The movability of topological groups in classical shape theory was studied by Keesling [9, 7] and by Kozlovskii and Segal [10]. In particular, Keesling [7] proved that, for compact connected Abelian groups, movability is equivalent to local connectedness. In this paper, we study the equivariant movability of topological groups; in particular, we prove that a second countable compact topological group \(G\) is a Lie group if and only if it is equivariantly movable (Theorem 7). This theorem provides new examples of spaces which are movable but not equivariantly movable. ## 2. Preliminaries on Equivariant Topology and Equivariant Shapes Let \(G\) be a topological group. A topological space \(X\) is called a \(G\)-space if there is a continuous map \(\theta:G\times X\to X\) of the direct product \(G\times X\) to \(X\), for which we use the notation \(\theta(g,x)=gx\), such that \[(1)\quad g(hx)=(gh)x\qquad\text{and}\qquad(2)\quad ex=x\] for \(g,h\in G\) and \(x\in X\), where \(e\) denotes the identity element of \(G\). The (continuous) map \(\theta:G\times X\to X\) is called a (continuous) action of the group \(G\) on the topological space \(X\). An evident example is the so-called trivial action of \(G\) on \(X\) defined by \(gx=x\) for all \(g\in G\) and \(x\in X\). Another example is the action of the group \(G\) on itself defined by \((g,x)\to gx\) for all \(g\in G\) and \(x\in G\). Let \(H\) be a closed subgroup of a group \(G\). There exists a natural action of the group \(G\) on the space \(G|H\), which is defined by \(g(g^{\prime}H)=(gg^{\prime})H\). 
A subset \(A\) of a \(G\)-space \(X\) is said to be invariant if \(ga\in A\) for any \(g\in G\) and \(a\in A\). Obviously, an invariant subset of a \(G\)-space is itself a \(G\)-space. If \(A\) is an invariant subset of a \(G\)-space \(X\), then every neighborhood of \(A\) contains an open invariant neighborhood of \(A\) (see [12, Proposition 1.1.14]). Let \(X\) and \(Y\) be \(G\)-spaces. A (continuous) map \(f:X\to Y\) is called a \(G\)-map, or an equivariant map, if \(f(gx)=gf(x)\) for any \(g\in G\) and \(x\in X\). Note that the identity map \(i:X\to X\) is equivariant, and the composition of any equivariant maps is equivariant. Therefore, all \(G\)-spaces and equivariant maps form a category, which we denote by \(\mathtt{Top}^{G}\). Let \(Z\) be a \(G\)-space, and let \(Y\subseteq Z\) be an invariant subset. A \(G\)-retraction of \(Z\) to \(Y\) is a \(G\)-map \(r:Z\to Y\) such that \(r|_{Y}=1_{Y}\). Let \(K_{G}\) be a class of \(G\)-spaces. A \(G\)-space \(Y\) is called a \(G\)-absolute neighborhood retract for the class \(K_{G}\), or a \(G\)-ANR\((K_{G})\) (a \(G\)-absolute retract for the class \(K_{G}\), or a \(G\)-AR\((K_{G})\)), if \(Y\in K_{G}\) and, whenever \(Y\) is a closed invariant subset of a \(G\)-space \(Z\in K_{G}\), there exists an invariant neighborhood \(U\) of \(Y\) and a \(G\)-retraction \(r:U\to Y\) (there exists a \(G\)-retraction \(r:Z\to Y\)). Let \(X\) and \(Y\) be \(G\)-spaces. We say that two equivariant maps, or \(G\)-maps, \(f_{0},f_{1}:X\to Y\) are \(G\)-homotopic, or equivariantly homotopic, and write \(f_{0}\simeq_{G}f_{1}\) if there exists a \(G\)-map \(F:X\times I\to Y\) such that \(F(x,0)=f_{0}(x)\) and \(F(x,1)=f_{1}(x)\) for all \(x\in X\) (we assume that \(G\) acts trivially on \(I\)). The relation \(\simeq_{G}\) is an equivalence relation; we denote the \(G\)-homotopy class of a \(G\)-map \(f\) by \([f]\). In this way, we obtain the category \(H\)-\(\mathtt{Top}^{G}\), whose objects are \(G\)-spaces and whose morphisms are classes of \(G\)-homotopic \(G\)-maps. **Definition 1**.: An inverse \(G\)-ANR system \(\underline{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) in \(\mathtt{Top}^{G}\) (where all \(X_{\alpha}\) are \(G\)-ANRs) is said to be associated with a \(G\)-space \(X\) if there exist \(G\)-maps \(p_{\alpha}:X\to X_{\alpha}\) such that \(p_{\alpha}\simeq_{G}p_{\alpha\alpha^{\prime}}\circ p_{\alpha^{\prime}}\) and the following two conditions hold for every \(G\)-ANR \(P\): (1) for every \(G\)-map \(f:X\to P\), there is an \(\alpha\in A\) and a \(G\)-map \(h_{\alpha}:X_{\alpha}\to P\) such that \(h_{\alpha}\circ p_{\alpha}\simeq_{G}f\) (i.e., each \(f\) factors through some \(X_{\alpha}\)); (2) if \(\varphi\circ p_{\alpha}\simeq_{G}\psi\circ p_{\alpha}\) for some \(G\)-maps \(\varphi,\psi:X_{\alpha}\to P\), then there is an \(\alpha^{\prime}\geqslant\alpha\) such that \(\varphi\circ p_{\alpha\alpha^{\prime}}\simeq_{G}\psi\circ p_{\alpha\alpha^{ \prime}}\). For every \(G\)-space \(X\), there exists an inverse \(G\)-ANR system \(\underline{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) associated with \(X\); i.e., the category \(H\)-ANR\({}^{G}\) is dense in \(H\)-\(\mathtt{Top}^{G}\)[1]. 
**Definition 2**.: An inverse \(G\)-ANR system \(\underline{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) is said to be equivariantly movable, or \(G\)-movable, if, for every \(\alpha\in A\), there exists an \(\alpha^{\prime}\in A\) such that \(\alpha^{\prime}\geqslant\alpha\) and, for any \(\alpha^{\prime\prime}\geqslant\alpha\) (\(\alpha^{\prime\prime}\in A\)), there exists a \(G\)-homotopy class \(r^{\alpha^{\prime}\alpha^{\prime\prime}}:X_{\alpha^{\prime}}\to X_{\alpha^{ \prime\prime}}\) for which \[p_{\alpha\alpha^{\prime\prime}}\circ r^{\alpha^{\prime}\alpha^{\prime\prime}}=p_ {\alpha\alpha^{\prime}}.\] **Definition 3**.: A \(G\)-space \(X\) is said to be equivariantly movable, or \(G\)-movable, if there exists an equivariantly movable inverse \(G\)-ANR system \(\underline{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) associated with \(X\). Note that the last definition of equivariant movability coincides with that of ordinary movability for the trivial group \(G=\{e\}\). The reader is referred to the books by K. Borsuk [2] and by S. Mardesic and J. Segal [11] for general information about shape theory and to the book by G. Bredon [3] for an introduction to compact transformation groups. ## 3. Weakly Equivariantly Shape Comparable \(G\)-Spaces **Definition 4**.: We say that \(G\)-spaces \(X\) and \(Y\) are weakly equivariantly shape comparable if there exist \(G\)-shape morphisms both from \(X\) to \(Y\) and from \(Y\) to \(X\). Obviously, this relation is an equivalence. Therefore, the family of all \(G\)-spaces splits into disjoint classes of weakly equivariantly shape comparable \(G\)-spaces. We denote the class of spaces weakly equivariantly shape comparable with a \(G\)-space \(X\) by \(wes(X)\). Let \(wes(*)\) be the weak equivariant shape comparability class of the one-point \(G\)-space \(\{*\}\). The following proposition characterizes the class \(wes(*)\). **Proposition 1**.: _The family of all \(G\)-spaces with a fixed point is precisely the weak equivariant shape comparability class \(wes(*)\)._ Proof.: It is sufficient to prove that any \(G\)-space \(X\in wes(*)\) has a fixed point. Indeed, let \(F:\{*\}\to X\) be a \(G\)-shape morphism. We regard the \(G\)-space \(X\) as an invariant closed subset of some \(G\)-AR \(Y\). Since \(F:\{*\}\to X\) is a \(G\)-shape morphism, it follows that any invariant neighborhood of the space \(X\) in \(Y\) has a fixed point. Therefore, the \(G\)-space \(X\) has a fixed point as well, because the set of all fixed points of the \(G\)-space \(Y\) is closed. We denote the weak equivariant shape comparability class of a group \(G\) with its natural action on itself by \(wes(G)\). The following theorem characterizes the class \(wes(G)\) in the case of a second countable compact group. **Theorem 1**.: _Let \(G\) be a second countable compact group. Then the \(G\)-space \(X\) belongs to the class \(wes(G)\) if and only if \(X=X|G\times G\)._ The proof of this theorem is based on the following theorem of independent interest. **Theorem 2**.: _Suppose that \(G\) is a second countable compact group, \(H\subset G\) is a closed normal subgroup of \(G\), and \(X\) is any \(G\)-space. 
If there exists a \(G\)-shape morphism \(F:X\to G|H\), then \(X=G\times_{H}A\) and \(F\) is generated by the \(G\)-equivariant map \(h:G\times_{H}A\to G|H\) given by \(h([g,a])=gH\), where \(A\) is some \(H\)-space, \(G\times_{H}A\) is the twisted product, and \([g,a]\) is the \(H\)-orbit of the point \((g,a)\)._ Proof.: Since the group \(G\) is compact and second countable, it follows that so is the group \(G\left|H\right.\). By a well-known theorem of Pontryagin [13, Theorem 68], there exist closed normal subgroups \(K_{i}\) (\(i=1,2,\dots\)) of the group \(G\left|H\right.\) such that \(K_{i+1}\subset K_{i}\), \(\left(G\left|H\right.\right)\left|K_{i}\right.\) is a Lie group for any \(i=1,2,\dots\), and \[G|H=\lim_{\longleftarrow}\left\{(G|H)|K_{i};\ p_{i,i+1}\right\}, \tag{1}\] where the \(p_{i,i+1}:(G|H)|K_{i+1}\rightarrow(G|H)|K_{i}\) are the natural epimorphisms generated by the inclusions \(K_{i+1}\subset K_{i}\). Note that, for any \(i=1,2,\dots\), the group \((G|H)|K_{i}\) is isomorphic (topologically and algebraically) to the group \(G|\tilde{K}_{i}\), where \(\tilde{K}_{i}=p^{-1}\left(K_{i}\right)\) and \(p:G\to G\left|H\right.\) is the natural epimorphism. The groups \(\tilde{K}_{i}\) with \(i=1,2,\dots\), being continuous preimages of the closed normal subgroups \(K_{i}\subset G\left|H\right.\), are closed and normal. Thus, we have \[G|H=\lim_{\longleftarrow}\left\{G|\tilde{K}_{i};p_{i,i+1}\right\}, \tag{2}\] where all maps \(p_{i,i+1}\) are \(G\)-equivariant, provided that the space \(G|\tilde{K}_{i}\) is endowed with the natural action of the group \(G\). Now, suppose we are given a \(G\)-shape morphism \(F:X\to G|H\). Let us prove that \(F\) is generated by some equivariant map \(h:X\to G|H\). Consider the \(G\)-shape maps \(S_{G}\left(p_{i}\right)\circ F:X\to G|\tilde{K}_{i}\), where \(S_{G}\) is the \(G\)-shape functor and the \(p_{i}:G|H\to G|\tilde{K}_{i}\) are the projections. Since all \(G|\tilde{K}_{i}\) are \(G\)-ANRs, it follows that there exist \(G\)-equivariant maps \(f_{i}:X\to G|\tilde{K}_{i}\) for which \[S_{G}\left(p_{i}\right)\circ F=S_{G}\left(f_{i}\right). \tag{3}\] On the other hand, we have \[S_{G}\left(f_{i}\right)=S_{G}\left(p_{i}\right)\circ F=S_{G}\left( p_{i,i+1}\right)\circ S_{G}\left(p_{i+1}\right)\circ F\\ =S_{G}\left(p_{i,i+1}\right)\circ S_{G}\left(f_{i+1}\right)=S_{G }\left(p_{i,i+1}\circ f_{i+1}\right).\] Thus, the \(G\)-maps \(f_{i}\) and \(p_{i,i+1}\circ f_{i+1}\) to the \(G\)-ANR \(G|\tilde{K}_{i}\) generate the same \(G\)-shape morphism. Therefore, they are \(G\)-homotopic, i.e., \[f_{i}\simeq_{G}p_{i,i+1}\circ f_{i+1}. \tag{4}\] Let \(h_{1}=f_{1}\). All maps \(p_{i,i+1}\) are \(G\)-fibrations. Hence there exists a \(G\)-map \(h_{2}:X\to G|\tilde{K}_{2}\) such that \(h_{2}\simeq_{G}f_{2}\) and \(h_{1}=p_{1,2}\circ h_{2}\). Continuing this construction by induction, we obtain \(G\)-maps \(h_{i}:X\to G|\tilde{K}_{i}\) such that \[h_{i}\simeq_{G}f_{i}\qquad\text{and}\qquad h_{i}=p_{i,i+1}\circ h_{i+1}. \tag{5}\] We set \(h=\lim\limits_{\longleftarrow}h_{i}\). Obviously, \(h\) is a \(G\)-map from \(X\) to \(G\left|H\right.\). Let us prove that the map \(h\) has the required property \[S_{G}\left(h\right)=F. \tag{6}\] Indeed, the continuity of the shape functor [6] implies \[S_{G}\left(h\right)=S_{G}\big{(}\lim_{\longleftarrow}h_{i}\big{)}=\lim_{\longleftarrow}S_{G}\left(h_{i}\right)=\lim_{\longleftarrow}S_{G}\left(f_{i}\right)=\lim_{\longleftarrow}S_{G}\left(p_{i}\right)\circ F=1\circ F=F.\] Let \(A=h^{-1}\left(eH\right)\). 
Then \(A\) is an \(H\)-invariant subset of the \(G\)-space \(X\), \(X=G\times_{H}A\), and \(h:G\times_{H}A\to G|H\) is defined by \(h([g,a])=gH\)[3, Proposition 3.2]. This completes the proof of the theorem. Proof of Theorem 1.: First, note that if \(X=A\times G\), where \(A\) is a trivial \(G\)-space, then the map \(f:A\times G\to G\) defined by \(f(a,g)=g\) is equivariant. Therefore, \(X=A\times G\in wes(G)\). Now, suppose that \(X\) is a \(G\)-space in the class \(wes(G)\). Consider a \(G\)-shape morphism \(F:X\to G\). Taking the trivial group \(\{e\}\) for the closed normal subgroup \(H\) in Theorem 2, we obtain \(X=X|G\times G\). **Corollary 1**.: _Let \(G\) be a second countable compact group, and let \(H\) and \(K\) be its closed normal subgroups. Then \(wes(G|H)=wes(G|K)\) if and only if the subgroup \(H\) is conjugate to the subgroup \(K\)._ Proof.: By Definition 4, the equality \(wes(G|H)=wes(G|K)\) means the existence of \(G\)-shape morphisms both from \(G|H\) to \(G|K\) and from \(G|K\) to \(G|H\). According to Theorem 2, there exist equivariant maps from \(G|H\) to \(G|K\) and from \(G|K\) to \(G|H\), which is possible if and only if \(H\) is conjugate to \(K\)[3, Corollary 4.4]. ## 4. Equivariantly Movable Groups **Definition 5**.: We say that an inverse \(G\)-ANR sequence \(\{X_{k},p_{k,l}\}\) is canonical if, for any \(k\in N\), there exists an equivariant map \(r^{k,k+1}:X_{k}\to X_{k+1}\) such that \[p_{j,k+1}\circ r^{k,k+1}\simeq_{G}p_{j,k}, \tag{7}\] where \(1\leqslant j<k\). The following proposition is easy to prove. **Proposition 2**.: _Any canonical \(G\)-ANR sequence is \(G\)-movable._ The following assertion is also valid. **Proposition 3**.: _Any \(G\)-movable inverse \(G\)-ANR sequence contains a canonical subsequence._ Proof.: Let \(\{X_{k},p_{k,l}\}\) be a \(G\)-movable \(G\)-ANR sequence. For \(k=1\), there exists a number \(k_{1}>1\) which witnesses the sequence \(\{X_{k},p_{k,l}\}\) being \(G\)-movable. By induction, given a number \(k=k_{i}\), we choose a number \(k_{i+1}>k_{i}\) such that, for any other number \(l>k_{i+1}\), there exists an equivariant map \(r^{k_{i+1},l}:X_{k_{i+1}}\to X_{l}\) for which \[p_{k_{i},l}\circ r^{k_{i+1},l}\simeq_{G}p_{k_{i},k_{i+1}}. \tag{8}\] It is easy to check that \(\big{\{}X_{k_{i}},p_{k_{i},k_{i+1}},i\geqslant 1\big{\}}\) is the required canonical subsequence. The following theorem follows directly from Propositions 2 and 3. **Theorem 3**.: _A compact metrizable \(G\)-space \(X\) is \(G\)-movable if and only if there exists a canonical inverse \(G\)-ANR sequence \(G\)-associated with \(X\)._ **Lemma 1**.: _Let \(X\) be a compact metrizable \(G\)-movable space. Then there exists an inverse \(G\)-ANR sequence \(\{X_{k},p_{k,l}\}\)\(G\)-associated with \(X\) such that \(X_{k}\in wes(X)\) for any \(k\in N\)._ Proof.: Let \(X\) be a compact metrizable \(G\)-movable space. By Theorem 3, there exists a canonical \(G\)-ANR sequence \(\{X_{k},p_{k,l}\}\)\(G\)-associated with \(X\). Let us prove that this sequence is as required, i.e., \(X_{k}\in wes(X)\) for any \(k\in N\). Since the maps \(p_{k}:X\to X_{k}\) are equivariant, it suffices to prove the existence of a \(G\)-shape morphism from \(X_{k}\) to \(X\). Consider the morphism \(F=\{f_{i},\varphi\}:X_{k}\to\{X_{i},p_{i,r},i\geqslant k\}\) defined by \[\varphi(i)=k,\quad f_{i}=p_{i,i+1}\circ r^{k,i+1}, \tag{9}\] where \(r^{k,i+1}=r^{i,i+1}\circ\ldots\circ r^{k,k+1}\) and the \(r^{j,j+1}\) are the equivariant maps mentioned in Definition 5. 
The morphism \(F=\{f_{i},\varphi\}\) is a map of \(G\)-ANR sequences. Indeed, \[p_{i,i+1}\circ f_{i+1}=p_{i,i+1}\circ p_{i+1,i+2}\circ r^{k,i+2}= p_{i,i+2}\circ r^{i+1,i+2}\circ r^{k,i+1}\\ \simeq_{G}p_{i,i+1}\circ r^{k,i+1}=f_{i}.\] Lemma 1 implies directly the following assertion. **Theorem 4**.: _Any compact metrizable \(G\)-movable space is weakly equivariantly shape comparable with some \(G\)-ANR._ However, as shown below (see Corollary 2), there exist compact metrizable \(G\)-spaces which are not weakly equivariantly shape comparable with any \(G\)-ANR (\(G\)-movable space). **Lemma 2**.: _Suppose that \(ass_{G}X=\{X_{k},p_{k,k+1}\}\), \(ass_{G}Y=\{Y_{l},q_{l,l+1}\}\), \(wes(X)=wes(Y)\), and \(X_{k}\in K\) for any \(k\in N\), where \(K\) is a class of weak equivariant shape comparability. Then the \(G\)-ANR sequence \(\{Y_{l},q_{l,l+1}\}\) has a subsequence \(\big{\{}Y_{l_{i}},q_{l_{i},l_{i+1}}\big{\}}\) such that \(Y_{l_{i}}\in K\) for any \(i=1,2,\ldots\)._ Proof.: The condition \(wes(X)=wes(Y)\) means the existence of \(G\)-shape morphisms \(F:Y\to X\) and \(\Phi:X\to Y\). Suppose that \(F=\{f_{k},\varphi\}:\{Y_{l},q_{l,l+1}\}\to\{X_{k},p_{k,k+1}\}\) and \(\Phi=\{g_{l},\psi\}:\{X_{k},p_{k,k+1}\}\to\{Y_{l},q_{l,l+1}\}\). Let us prove that \(\big{\{}Y_{\varphi(k)},q_{\varphi(k),\varphi(k+1)}\big{\}}\) is the required subsequence. For this purpose, we show that the \(Y_{\varphi(k)}\) and the \(X_{k}\) are weakly equivariantly shape comparable and belong to the class \(K\) for any \(k=1,2,\ldots\). Indeed, the maps \(f_{k}:Y_{\varphi(k)}\to X_{k}\) are equivariant. On the other hand, the \(X_{k}\) and the \(X_{\psi(\varphi(k))}\) belong to the class \(K\), and the maps \(g_{\varphi(k)}:X_{\psi(\varphi(k))}\to Y_{\varphi(k)}\) are equivariant. Therefore, \(Y_{\varphi(k)}\in K\), as required. **Theorem 5**.: _If a weak equivariant shape comparability class \(K\) contains a \(G\)-movable metrizable compact space, then, for any compact metrizable \(G\)-space \(X\in K\), there exists a \(G\)-ANR sequence \(\{X_{k},p_{k,k+1}\}\)\(G\)-associated with \(X\) such that \(X_{k}\in K\) for any \(k\in N\)._ Proof.: Let \(Y\in K\) be a \(G\)-movable metrizable compact space. According to Lemma 1, there exists an inverse \(G\)-ANR sequence \(\{Y_{l},q_{l,l+1}\}\)\(G\)-associated with \(Y\) such that \(Y_{l}\in K\) for any \(l\in N\). Consider any compact metrizable \(G\)-space \(X\in K\). By Lemma 2, any \(G\)-ANR sequence \(G\)-associated with \(X\) has a subsequence of spaces belonging to the class \(K\). This completes the proof of the theorem. **Theorem 6**.: _A second countable compact group \(G\) is Lie if and only if the class \(wes(G)\) contains a second countable \(G\)-movable compact space._ Proof.: Necessity is obvious, because any compact Lie group is a \(G\)-ANR [12] and, therefore, a \(G\)-movable space. Let us prove sufficiency. Suppose that the class \(wes(G)\) contains a second countable \(G\)-movable compact space. By a theorem of Pontryagin [13, p. 332], the identity element \(e\in G\) is surrounded by decreasing closed normal subgroups \(K_{1}\supset K_{2}\supset\dotsb\) such that \(G\,|K_{i}\) is a Lie group for any \(i=1,2,\dotsb\) and \(G=\lim\limits_{\longleftarrow}\{G\,|K_{i},p_{i,i+1}\}\), where the \(p_{i,i+1}:G|K_{i+1}\to G|K_{i}\) are the natural epimorphisms generated by the embeddings \(K_{i+1}\subset K_{i}\). 
The group \(G\) acts naturally on each \(G/K_{i}\); all maps \(p_{i,i+1}\) are \(G\)-equivariant with respect to these actions, and the \(G/K_{i}\) themselves are \(G\)-ANRs [12]. Thus, the inverse sequence \(\{G/K_{i},p_{i,i+1}\}\) is \(G\)-associated with the group \(G\). By virtue of Theorem 5, the sequence \(\{G/K_{i},p_{i,i+1}\}\) has a subsequence in which all spaces are weakly equivariantly shape comparable with the group \(G\). This is possible only if \(G/K_{i}=G\) starting with some \(i\). Since \(G/K_{i}\) is a Lie group, the required assertion follows. The last theorem implies directly the following criterion for a second countable compact group to be a Lie group. **Theorem 7**.: _A second countable compact group is a Lie group if and only if it is \(G\)-movable._ Theorem 7 gives new examples of movable but not \(G\)-movable spaces. Indeed, as was shown by Keesling [9], there exist compact connected Abelian groups which are movable but not uniformly movable and, therefore, not Lie. Later, Kozlovskii and Segal [7] constructed examples of such groups. These groups are not \(G\)-movable by Theorem 7. **Corollary 2**.: _There exists a \(G\)-space which is not weakly equivariantly shape comparable with any second countable compact \(G\)-movable space._ Proof.: By virtue of Theorem 6, any second countable compact group not being a Lie group has the required property.
2303.14660
Heavy-Flavour Jets in High-Energy Nuclear Collisions
Reconstructed jets initiated from heavy quarks provide a powerful tool to probe the properties of the quark-gluon plasma (QGP) and to explore the mass hierarchy of jet quenching. In this article, we review the recent theoretical progress on heavy-flavour jets in high-energy nuclear collisions at the RHIC and LHC. We focus on the yields and substructures of charm and bottom quark jets with the jet quenching effect, such as the nuclear modification factors, transverse momentum imbalance, angular correlation, radial profiles, fragmentation functions, the "dead-cone" effect, etc.
Sa Wang, Wei Dai, Enke Wang, Xin-Nian Wang, Ben-Wei Zhang
2023-03-26T09:06:00Z
http://arxiv.org/abs/2303.14660v2
# Heavy-Flavour Jets in High-Energy Nuclear Collisions ###### Abstract Reconstructed jets initiated from heavy quarks provide a powerful tool to probe the properties of the quark-gluon plasma (QGP) and to explore the mass hierarchy of jet quenching. In this article, we review the recent theoretical progress on heavy-flavour jets in high-energy nuclear collisions at the RHIC and LHC. We focus on the yields and substructures of charm and bottom quark jets with the jet quenching effect, such as the nuclear modification factors, transverse momentum imbalance, angular correlation, radial profiles, fragmentation functions, the "dead-cone" effect, etc. quark-gluon plasma; jet quenching; high-energy nuclear collisions; heavy-flavour jet 15 March 2023 ## 1 Introduction High-energy nuclear collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) have opened up new avenues for the search for strongly interacting nuclear matter, the quark-gluon plasma (QGP) [1; 2; 3; 4]. Investigating the formation of the QGP deepens our understanding of quantum chromodynamics (QCD) under extreme conditions of high temperature and density [5; 6], and of the evolution of the Universe in its first microsecond [7]. The jet-quenching phenomena, the energy attenuation of fast partons due to their strong interactions with the QCD medium, provide an array of powerful tools to study the properties of the QGP, such as the yield suppression of high-\(p_{T}\) hadrons/jets, the \(p_{T}\) asymmetry of dijets and \(\gamma/Z^{0}\)+ jets, as well as jet substructures [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. In elementary proton-proton reactions, the productions of charm and bottom quarks are perturbatively calculable, since their large masses (\(M_{c}\sim 1.5\) GeV, \(M_{b}\sim 4.8\) GeV) act as a natural cut-off well above \(\Lambda_{\rm QCD}\) [22]. Owing to their large masses, heavy quarks are produced in the initial hard scattering at a very early stage, and therefore witness the whole QGP evolution. Meanwhile, since their thermal production is almost negligible for the initial conditions so far accessible in heavy-ion programs at the RHIC and LHC [23], charm and bottom hadrons/jets make very promising hard probes of the transport properties of the hot and dense quark matter. During the past decade, the experimental measurements, including the nuclear modification factor \(R_{AA}\) [24; 25; 26; 27; 28; 29; 30] and the collective flow (the directed flow \(v_{1}\) [31; 32] and elliptic flow \(v_{2}\) [33; 34; 35; 36]) of heavy-flavour hadrons both at the RHIC and LHC, have attracted much attention from the community of high-energy nuclear physics. A lot of theoretical studies have been performed to confront the experimental data obtained in high-energy heavy-ion collisions, which greatly improve our understanding of the in-medium evolution [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58] and hadronization mechanisms [59; 60; 61] of heavy quarks (for detailed reviews see [62; 63; 64; 65; 66; 67; 68; 69; 70]).
Specifically, the current models treat the elastic and inelastic interactions between heavy quarks and the QGP medium with multiple methods, consisting mainly of perturbative or non-perturbative analytic calculations (SCET [41; 71], CUJET [46; 72], DREENA [73; 74; 75; 76; 77; 78], WHDG [79; 80], AdS/CFT (HG) [81; 82]), and the Monte Carlo transport approaches based on the Boltzmann (BAMPS [83; 84; 85; 86], MC@sHQ [87; 88; 89], (Q)LBT [47; 57], LIDO [49; 90], Catania-pQCD/QPM [91; 92; 93; 94]), the Langevin (POWLANG [45; 95; 96], Duke [44; 97], UrQMD [98; 99; 100], TAMU [40; 101; 102], SHELL [103; 104; 105]) and the Kadanoff-Baym (PHSD [106; 107; 108]) equations. These phenomenological studies reveal the fact that the elastic scattering of heavy quarks in the hot/dense nuclear matter is important, especially in the lower \(p_{T}\) region (\(p_{T}^{Q}<5m_{Q}\)), in contrast to our experience with light quarks and gluons. One of the central issues in investigating heavy-flavour production in the heavy-ion program is extracting the diffusion coefficient \(D_{s}\), which is directly related to the transport properties of the hot QCD matter. Additionally, different from the fragmentation hadronization of heavy quarks in a vacuum, within the hot and dense nuclear matter the heavy-flavour hadrons can be produced by the combination of heavy quarks with thermal partons. Such a coalescence hadronization mechanism plays an important role in the collective flow [33; 34] and baryon-to-meson ratio [109; 110] of charmed hadrons in nucleus-nucleus collisions at the RHIC and LHC. In recent years, the experimental measurements on heavy-flavour jets (a reconstructed jet containing a heavy quark or a heavy-flavour hadron) have made great strides in p+p [111; 112; 113; 114; 115; 116; 117; 118], p+A [111; 119; 120] and A+A collisions [121; 122; 123; 124; 125; 126; 127; 128]. The wealth of information carried by heavy-flavour jets not only opens a new topic in jet physics and in applications of perturbative QCD; their medium modifications in heavy-ion collisions are also of great significance for revealing the in-medium energy loss mechanism of heavy quarks, addressing the mass effect of jet quenching, and extracting the transport properties of the QGP. ## 2 Recent Advances of Heavy-Flavour Phenomenology in Heavy-Ion Collisions Generally speaking, as we discussed in the last section, the reason for treating the heavy flavours as powerful hard probes of the transport properties of the QGP rests on at least three aspects. Firstly, the large mass (\(M_{Q}\gg\Lambda_{QCD}\)) makes it possible to compute the differential cross-section of heavy quarks in binary nucleon-nucleon collisions based on the perturbative QCD (pQCD) scheme at next-to-next-to-leading-order (NNLO) precision [129]. Secondly, due to the large mass (\(M_{Q}\gg T_{\rm med}\)), the total yield of heavy quarks in nucleus-nucleus collisions only depends on their initial production at hard scattering. Since the momentum transfer of the in-medium collisions \(q^{2}\sim g^{2}T^{2}\) (\(T\sim\) 0.4-0.5 GeV) is much smaller than the creation energy of heavy quark pairs at the current collision energies, both at the RHIC and LHC, the subsequent contribution from thermal creation during the QGP evolution is negligible [23].
Thirdly, according to the Heisenberg uncertainty principle, the formation time of heavy quarks (\(\tau_{0}\sim\frac{1}{2m_{Q}}<0.1\) fm/c) is shorter than the formation time of the quark-gluon plasma (\(\tau_{f}\sim\) 0.6 fm/c). Therefore, heavy quarks witness the entire evolution of the hot/dense nuclear matter until the freeze-out. In this section, we will briefly introduce the recent theoretical advances that help us understand heavy-flavour production in heavy-ion collisions, mainly including the following aspects: the initial production, the transport approaches, the hadronization mechanisms, and the extraction of the diffusion coefficient. ### Production of Heavy Quarks in p+p Collisions The production of heavy quarks in proton-proton collisions establishes a baseline to investigate the nuclear modification in high-energy nuclear collisions both at the RHIC and LHC. The yield of heavy flavours in nucleus-nucleus collisions is generally viewed as the sum of that in \(N_{\rm coll}\) binary nucleon-nucleon collisions while taking into account the initial cold nuclear matter effect (usually considered by using the nuclear-modified parton distribution function [130; 131; 132]). In the fixed-flavour-number scheme (FFNS) [22], the cross-section of heavy quarks in p+p collisions can be expressed based on the factorization theorem, \[d\sigma_{Q}[s,p_{T},y,m_{Q}]\simeq\sum_{i,j}\int_{0}^{1}dx_{i}\int_{0}^{1}dx_{j} f_{i}^{A}(x_{i},\mu_{F})f_{j}^{B}(x_{j},\mu_{F})d\tilde{\sigma}_{ij\to Q+X}[x_{i},x_{j},s,p_{T},y,m_{Q},\mu_{F},\mu_{R}] \tag{1}\] where \(s\) is the square of the centre-of-mass energy of the incoming protons, \(p_{T}\) is the transverse momentum of the produced heavy quark, and \(y\) is the rapidity. \(f_{i}^{A}\) (\(f_{j}^{B}\)) is the parton distribution function (PDF) quantifying the probability to find a parton with flavour \(i(j)\) carrying momentum fraction \(x_{i(j)}\) in the colliding proton \(A(B)\), which depends on the factorization scale \(\mu_{F}\). \(d\tilde{\sigma}_{ij\to Q+X}\) represents the cross-section of the partonic hard process \(i+j\to Q+X\), which can be calculated in pQCD. The partonic cross-section \(d\tilde{\sigma}_{ij\to Q+X}\) also depends on the strong coupling constant \(\alpha_{s}\) evaluated at the renormalization scale \(\mu_{R}\). Note that Equation (1) sums all partonic hard processes \(i+j\to Q+X\), where \(i,j\) are the active flavours including \((u,\bar{u},d,\bar{d},s,\bar{s},g)\) but not heavy quarks. Only for factorization scales \(\mu_{F}>m_{c}\) can charm be viewed as an active flavour, a choice often used for beauty production. The differential cross-section \(d\sigma_{Q}\) can be convolved with a scale-independent fragmentation function \(D_{Q}^{H}(z)\), such as the Peterson [133] or Lund [134] forms, to obtain the cross-section of the heavy-flavour hadron, \[d\sigma_{H}=d\sigma_{Q}\otimes D_{Q}^{H}(z) \tag{2}\] where \(H\) denotes the heavy-flavour hadron and \(z\) the momentum fraction carried by \(H\). Since the FFNS is usually applicable in the low \(p_{T}\) region (\(0<p_{T}<5m_{Q}\)), in the higher kinematic region (\(p_{T}\gg m_{Q}\)) the logarithmic terms (\(\frac{\alpha_{s}}{2\pi}\ln(p_{T}^{2}/m_{Q}^{2})\)) in the perturbative expansion of the cross-section become large and should be resummed to all orders. To implement such a resummation, one has to absorb the large logarithmic terms into the parton distribution function and fragmentation function.
This treatment requires that heavy quarks are active flavours when the factorization scale is \(\mu_{F}>m_{Q}\). In other words, such a scheme has a variable number of active flavours when \(\mu_{F}\) crosses the heavy quark mass, hence the name variable-flavour-number scheme (VFNS). In particular, when the heavy quark mass can be neglected in the evaluation of the short-distance cross-section, the VFNS is called the zero-mass VFNS (ZM-VFNS). In the ZM-VFNS, the differential cross-section of a heavy-flavour hadron based on the factorization theorem can be expressed as: \[d\sigma_{H+X}\simeq\sum_{i,j}\int_{0}^{1}dx_{i}\int_{0}^{1}dx_{j}f_{i}^{A}(x_{i},\mu_{F})f_{j}^{B}(x_{j},\mu_{F})d\tilde{\sigma}_{ij\to k+X}\otimes D_{k}^{H}(z,\mu_{F}^{\prime}) \tag{3}\] where \(D_{k}^{H}(z,\mu_{F}^{\prime})\) is given by the convolution of a perturbative-fragmentation function (PFF) \(D_{k}^{\rm Q}(z,\mu_{F}^{\prime})\), describing the fragmentation of a parton \(k\) into a heavy quark \(Q\), with a scale-independent one \(D_{Q}^{H}(z)\) for the hadronization of the heavy quark. Note that in Equation (3) the sum covers all possible partonic hard processes (\(i+j\to k+X\)), where \(i,j,k\) can be light quarks, gluons, or heavy quarks [135]. Since the heavy quark mass is neglected in the computation of the cross-section, the ZM-VFNS is expected to be reliable only at very high \(p_{T}\). To find a unified theoretical framework that combines the advantages of the FFNS at low \(p_{T}\) and the ZM-VFNS at high \(p_{T}\), interpolation schemes have been established in recent years, such as the general-mass VFNS (GM-VFNS) [136; 137] and the fixed-order plus next-to-leading logarithms (FONLL) [129; 138]. For instance, by using an interpolating function \(G(m_{Q},p_{T})=p_{T}^{2}/(p_{T}^{2}+c^{2}m_{Q}^{2})\), where \(c\) is set to \(c=5\), the FONLL scheme can describe heavy-flavour production well in the entire kinematic region. For more details of the interpolation schemes see [22] and the references therein. Compared to the analytic calculation schemes discussed above, the general-purpose Monte Carlo event generators, such as PYTHIA [139], HERWIG [140], POWHEG [141] and SHERPA [142], can provide a more complete description of all the final-state particles at the parton or hadron level. Especially for studies of jet physics, the Monte Carlo event generators can give more precise descriptions of observables relating to the jet substructure than analytic calculations. ### Transport of Heavy Quarks in the QGP Transport approaches are widely used in the current theoretical studies of heavy-flavour production in high-energy nuclear collisions. In the lower \(p_{T}\) region, the elastic scattering of heavy quarks with thermal partons (light quarks or gluons) has been proven to be the dominant mechanism of energy loss. Generally, the kinetic theory based on the Boltzmann transport equation is a popular treatment for in-medium heavy quark evolution. The Boltzmann equation for the distribution function of heavy quarks can be written in a compact form, \[p^{\mu}\partial_{\mu}f_{Q}(x,p)=C[f_{q},f_{\bar{q}},f_{g},f_{Q}](x,p) \tag{4}\] where \(f_{Q}(x,p)\) is the phase-space distribution of heavy quarks. In the QGP, the phase-space distributions of light quarks \(f_{q}\) and gluons \(f_{g}\) can be solved by the Boltzmann equation [143; 144].
Subsequently, the relativistic Boltzmann-like collision integral \(C[f_{Q}](x,p)\) has a simplified form [42; 145], \[C[f_{Q}]=\int d^{3}q[\omega(\mathbf{p}+\mathbf{q},\mathbf{q})f_{Q}(\mathbf{x},\mathbf{p}+\mathbf{q},t)-\omega(\mathbf{p},\mathbf{q})f_{Q}(\mathbf{x},\mathbf{p},t)] \tag{5}\] where \(\omega(\mathbf{p}+\mathbf{q},\mathbf{q})\) represents the transition rate of a heavy quark from the momentum \(\mathbf{p}+\mathbf{q}\) to \(\mathbf{p}\) by collisions with quasiparticles. This rate is usually determined by the matrix elements of the \(2\to 2\) QCD scattering. With the assumption that the momentum transfer \(|\mathbf{q}|\) is small compared to the momentum of the heavy quark, we can expand \(\omega(\mathbf{p}+\mathbf{q},\mathbf{q})f_{Q}(\mathbf{x},\mathbf{p}+\mathbf{q},t)\) in powers of \(\mathbf{q}\) using a Taylor expansion to obtain the Fokker-Planck equation, \[\frac{\partial f_{Q}}{\partial t}=\frac{\partial}{\partial p_{i}}\left[A_{i}(\mathbf{p})f_{Q}+\frac{\partial}{\partial p_{j}}[B_{ij}(\mathbf{p})f_{Q}]\right] \tag{6}\] where the two coefficients \(A_{i}(\mathbf{p})=\int d^{3}q\,\omega(\mathbf{p},\mathbf{q})q_{i}\) and \(B_{ij}(\mathbf{p})=\int d^{3}q\,\omega(\mathbf{p},\mathbf{q})q_{i}q_{j}\) are directly related to the drag coefficient (\(\eta_{D}\)) and the momentum diffusion coefficient (\(\kappa\)), which control the rates of energy loss and momentum broadening of heavy quarks in the hot medium, respectively. Indeed, the Fokker-Planck equation is equivalent to another, more well-known equation, the Langevin equation, \[\frac{d\vec{x}}{dt} = \frac{\vec{p}}{E} \tag{7}\] \[\frac{d\vec{p}}{dt} = -\eta_{D}(p)\vec{p}+\vec{\xi}(t) \tag{8}\] where the stochastic term \(\vec{\xi}(t)\) describes the random kicks suffered by heavy quarks from the medium constituents, which obeys a Gaussian distribution with mean value 0 and variance \(\kappa\). The drag coefficient \(\eta_{D}\) and the diffusion coefficient \(\kappa\) are related by the fluctuation-dissipation theorem (FDT), \(\kappa=2\eta_{D}ET\). Note that in the higher kinematic regions (\(p_{T}^{Q}>5m_{Q}\)), medium-induced gluon radiation plays an increasingly important role in the energy loss of heavy quarks. The radiative energy loss of heavy quarks is treated with various formalisms and at different approximations [53; 146; 147; 148; 149; 150], which usually provide the radiated gluon spectra as a function of the momentum fraction \(x\) and transverse momentum \(k_{\perp}\). In the Langevin equation, the radiative energy loss of heavy quarks can be coupled with the collisional one by adding a recoil term \(-\vec{p}_{g}\) caused by the radiated gluon [44]. The four-momentum of the radiated gluon can be easily sampled based on the radiation spectra \(dN_{g}/dxdk_{\perp}^{2}\). In many of the recently developed theoretical frameworks modelling the production of heavy flavour in heavy-ion collisions, the Boltzmann and Langevin equations are the two most popular choices, especially for Monte Carlo simulations. Concerning the performance of these two approaches, detailed comparisons have been discussed in [151; 152]. In general, the implementation of the Boltzmann equation implies that the medium consists of well-defined quasiparticles, while the Fokker-Planck (Langevin) equation is realized in a more general way without the quasiparticle assumption.
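As a concrete illustration of Eqs. (7) and (8), the following minimal Python sketch (not the production code of any of the models quoted above; the temperature, drag value and time step are illustrative assumptions) propagates a single charm quark through a static thermal medium, fixing the noise variance through the fluctuation-dissipation relation \(\kappa=2\eta_{D}ET\):

```python
import numpy as np

M_C = 1.5      # charm-quark mass in GeV (as quoted in the Introduction)
T_MED = 0.30   # medium temperature in GeV (illustrative)
ETA_D = 0.2    # drag coefficient eta_D in (fm/c)^-1 (illustrative)
DT = 0.02      # time step in fm/c

def langevin_step(p, rng):
    """One pre-point Ito update of Eq. (8): dp = -eta_D * p * dt + xi * dt."""
    energy = np.sqrt(M_C**2 + p @ p)          # on-shell energy in GeV
    kappa = 2.0 * ETA_D * energy * T_MED      # FDT: kappa = 2 eta_D E T
    # Gaussian kicks with <xi_i xi_j> = kappa * delta_ij / dt
    xi = rng.normal(0.0, np.sqrt(kappa / DT), size=3)
    return p + (-ETA_D * p + xi) * DT

rng = np.random.default_rng(42)
p = np.array([10.0, 0.0, 0.0])                # initial 10 GeV charm quark
for _ in range(int(4.0 / DT)):                # traverse ~4 fm/c of medium
    p = langevin_step(p, rng)
print("momentum after 4 fm/c (GeV):", p)
```

In a full simulation, radiative energy loss would enter this update as the extra recoil term \(-\vec{p}_{g}\) mentioned above, sampled from the gluon spectrum \(dN_{g}/dxdk_{\perp}^{2}\).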
The advantage of the Boltzmann equation, on the other hand, is that it can naturally describe the heavy quark evolution even under off-equilibrium conditions, which may be the case in the early pre-equilibrium stage of heavy-ion collisions [153]. ### Hadronization: Fragmentation and Coalescence Studying the yield suppression and collective flow of heavy-flavour hadrons also deepens our understanding of heavy quark hadronization in nucleus-nucleus collisions, which proceeds through different mechanisms than in a vacuum. As discussed in Section 2.1, fragmentation functions describe the non-perturbative hadronization of heavy quarks into heavy-flavour hadrons in a vacuum. The most commonly used fragmentation function is the Peterson form [133], \[D_{H/Q}(z)=\frac{N}{z\left[1-\frac{1}{z}-\frac{\epsilon_{Q}}{1-z}\right]^{2}} \tag{9}\] where \(z\) denotes the momentum fraction carried by the heavy hadron from the heavy quark in the fragmentation process (\(0<z<1\)), which implies that the heavy hadron must have smaller energy than the heavy quark. The only tunable parameter in Equation (9) is \(\epsilon_{Q}\), which can be determined by fitting the measured spectra of heavy-flavour hadrons. \(N\) is the normalization factor guaranteeing \(\int_{0}^{1}dzD_{H/Q}(z)=1\). Measurements of the collective flow [33; 34] and baryon-to-meson ratio [109; 110] of charmed hadrons in A+A collisions suggest the existence of a new hadronization mechanism, the coalescence of heavy quarks. The basic idea behind the coalescence mechanism is that a heavy quark can combine with a light anti-quark from the medium when they are close enough in coordinate-momentum space. This means that the heavy-flavour meson can have larger energy than the parent heavy quark, differing from the fragmentation mechanism. The distribution function of the formed heavy-flavour meson can usually be obtained by a convolution of the following schematic form. \[f_{M}\sim g_{M}f_{Q(\bar{Q})}\otimes f_{\bar{q}(q)}\otimes\phi_{M} \tag{10}\] where \(g_{M}\) denotes the degeneracy of the heavy-flavour meson in spin and isospin, and \(f_{Q(\bar{Q})}\) and \(f_{\bar{q}(q)}\) are the distribution functions of the heavy and light quarks in coordinate-momentum space, respectively. \(\phi_{M}\) represents the Wigner transform of the wave function of the heavy-flavour meson, commonly approximated by the ground state of a simple harmonic oscillator [47]. In the realistic implementation of heavy quark hadronization in nuclear collisions, the first step is to determine the probability of coalescence by integrating the distribution function of Equation (10). If coalescence occurs, one samples a light anti-quark based on the thermal equilibrium distribution; otherwise, Equation (9) is used to fragment the heavy quark into a hadron. At least in the lower \(p_{T}\) region, the experimental results favour the coalescence mechanism [94]. The coalescence of heavy quarks seems to decrease the suppression factor and enhance the collective flow of heavy-flavour hadrons, especially at \(p_{T}<6\) GeV. The recent studies [59; 61] show that the coalescence mechanism is important in the description of the \(\Lambda_{c}/D^{0}\) ratio measured by the STAR [109] and ALICE [110] collaborations. Additionally, the hadronic scattering between the D meson and light-flavour hadrons (\(D-\pi\), \(D-\rho\)) has also been studied in [154], but its influence on the D meson \(R_{AA}\) was found to be very limited [155].
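To make Eq. (9) concrete, a minimal sketch is given below; it normalizes the Peterson form numerically and draws hadron momentum fractions by rejection sampling. The value \(\epsilon_{Q}=0.05\) is a typical textbook choice for charm, not a parameter fitted in the works reviewed here.

```python
import numpy as np
from scipy.integrate import quad

EPS_Q = 0.05   # Peterson parameter for charm (illustrative typical value)

def peterson_unnorm(z, eps=EPS_Q):
    """Unnormalized Peterson form of Eq. (9): 1 / ( z * (1 - 1/z - eps/(1-z))^2 )."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

# Normalization factor N such that int_0^1 dz D_{H/Q}(z) = 1
integral, _ = quad(peterson_unnorm, 1e-6, 1.0 - 1e-6)
N = 1.0 / integral

def sample_z(n, rng):
    """Rejection-sample momentum fractions z from the Peterson distribution."""
    z_grid = np.linspace(1e-3, 1.0 - 1e-3, 2000)
    f_max = peterson_unnorm(z_grid).max()
    samples = []
    while len(samples) < n:
        z = rng.uniform(1e-3, 1.0 - 1e-3)
        if rng.uniform(0.0, f_max) < peterson_unnorm(z):
            samples.append(z)
    return np.array(samples)

rng = np.random.default_rng(1)
zs = sample_z(20000, rng)
print(f"N = {N:.4f},  <z> = {zs.mean():.3f}")   # hard spectrum, peaked near z ~ 0.8
```

In a transport simulation, such a sampler would be called only when the coalescence test based on Eq. (10) fails, as described above.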
### Extraction of the Diffusion Coefficient of Heavy Quarks One of the most important goals of the heavy-ion collision experiments is to investigate the transport properties of QCD matter under extremely hot and dense conditions. As discussed above, due to the large mass (\(m_{Q}\gg T_{\rm med}\)), heavy quarks are believed to be powerful tools for exploring the transport properties of the QGP. Phenomenological studies of heavy-flavour production in high-energy nuclear collisions provide a unique opportunity to extract the transport coefficients of the QGP, such as the momentum diffusion coefficient \(\kappa\) of heavy quarks, whose longitudinal and transverse components can conveniently be defined as, \[\kappa_{||}\equiv\frac{d\big{\langle}(\Delta p_{||})^{2}\big{\rangle}}{dt} \tag{11}\] \[\kappa_{\perp}\equiv\frac{1}{2}\frac{d\big{\langle}(\Delta p_{\perp})^{2}\big{\rangle}}{dt} \tag{12}\] where \(\Delta p_{||}\) and \(\Delta p_{\perp}\) are the momentum changes parallel and perpendicular to the direction of motion of the heavy quark. By definition, \(\kappa_{\perp}\) can be directly related to the jet transport coefficient \(\hat{q}\), which quantifies the transverse momentum broadening of hard partons traversing the QGP medium. Assuming that \(\kappa\) is isotropic, namely \(\kappa_{\perp}=\kappa_{||}=\kappa\), one obtains the simplified relation \(\hat{q}=2\kappa\). This relation has been employed in the modified Langevin equation to balance the two contributions from the collisional and radiative energy loss of heavy quarks [44; 156]. Here we only overview the recent advances in the extraction of \(\kappa\) by different model calculations. A more detailed and profound discussion of this topic can be found in [64]. The momentum diffusion coefficient \(\kappa\) can easily be converted to the spatial one \(D_{s}\) with the relation \(\kappa=2T^{2}/D_{s}\). In recent years, the temperature dependence of the dimensionless quantity \(2\pi TD_{s}\) has been estimated within many theoretical frameworks, such as the lattice QCD (lQCD) [157; 158; 159], LO pQCD [43; 160], QPM calculations [92], \(T\)-matrix [40], PHSD [161], MC@sHQ [22], AdS/CFT [162], Duke (Bayesian analysis) [163], and hadronic matter [102; 164], as shown in Figure 1. The estimates by the lQCD from first principles provide a valuable reference for the model extractions of \(2\pi TD_{s}\). As one can see, with relatively large uncertainties, the lQCD calculations in the quenched approximation give \(2\pi TD_{s}\sim 3.7\)-\(7.0\) [159] over the temperature range from \(T_{pc}\) to \(2T_{pc}\). However, it is difficult to extract meaningful information about the temperature dependence of \(2\pi TD_{s}\) from the current lQCD results. Furthermore, except for the pQCD calculations at leading order, which show obviously larger values than the others, these extractions of \(2\pi TD_{s}\) based on the recently developed models are consistent with the lQCD data, as well as with previous studies presented in [64], which give \(2\pi TD_{s}\sim 2\)-\(4\) near the critical temperature. Although these calculations give different values of \(2\pi TD_{s}\) versus \(T/T_{pc}\), most estimations show that \(D_{s}\) slightly increases with \(T\). This implies that the interactions between a charm quark and the QCD medium are strongest near the critical temperature. Figure 1: Spatial diffusion coefficient (\(2\pi TD_{s}\)) of charm quark in the quark-gluon plasma calculated by different approaches versus the reduced temperature (\(T/T_{pc}\)). The lattice QCD calculations in the quenched approximation [157; 158; 159] are compared with the estimations based on different models [22; 40; 43; 92; 102; 160; 161; 162; 163; 164]. The figure is from [66].
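The conversions used in this subsection are one-liners in natural units; the sketch below (the temperature and the value of \(2\pi TD_{s}\) are illustrative choices, roughly in the range quoted above) turns a dimensionless \(2\pi TD_{s}\) into \(D_{s}\) and \(\kappa\) via \(\kappa=2T^{2}/D_{s}\), and into the isotropic estimate \(\hat{q}=2\kappa\):

```python
import numpy as np

HBARC = 0.1973          # GeV * fm, used to express D_s as a length
T = 0.30                # temperature in GeV (illustrative)
two_pi_T_Ds = 4.0       # dimensionless 2*pi*T*D_s, a typical value near T_pc

D_s = two_pi_T_Ds / (2.0 * np.pi * T)   # spatial diffusion coefficient in GeV^-1
kappa = 2.0 * T**2 / D_s                # momentum diffusion coefficient in GeV^3
q_hat = 2.0 * kappa                     # isotropic estimate q_hat = 2 kappa, GeV^3

print(f"D_s   = {D_s:.3f} GeV^-1 = {D_s * HBARC:.3f} fm")
print(f"kappa = {kappa:.4f} GeV^3,  q_hat = {q_hat:.4f} GeV^3")
```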
No direct experimental evidence has been found so far to verify this upward trend of \(D_{s}\), however, because it is hard to find an observable sensitive only to the in-medium interactions at the late stage of the QGP evolution. Fortunately, data-driven analyses utilizing Bayesian inference seem to shed new light on this issue. The temperature and momentum dependence of \(D_{s}\) has been extracted from the available experimental data (\(R_{AA}\) and \(v_{2}\) of the D meson both at the RHIC and LHC) [163] based on the Duke-Langevin transport model, which indeed shows an upward trend of \(2\pi TD_{s}\). More recently, this approach of Bayesian inference has been improved with the help of information field theory [165; 166] in [167]. Therefore, one can now extract model parameters without relying on an explicit form of parametrization, leading to a robust determination by such a model-data fit. ## 3 Production of Heavy-Flavour Jets in Heavy-Ion Collisions ### Nuclear Modification Factors of Production Yields To address the nuclear effects in relativistic heavy-ion collisions, the nuclear modification factor \(R_{AA}\) is conventionally utilized to quantify the yield suppression of hadrons/jets in A+A collisions per binary nucleon-nucleon collision relative to p+p [168], \[R_{AA}=\frac{1}{\left<N_{\rm bin}^{\rm AA}\right>}\frac{d\sigma^{\rm AA}/{\rm dydp_{T}}}{d\sigma^{\rm pp}/{\rm dydp_{T}}} \tag{13}\] where the scaling factor \(\left<N_{\rm bin}^{\rm AA}\right>\) denotes the number of binary nucleon-nucleon collisions in A+A [169]. It has been observed that the values of \(R_{AA}\) of hadrons and jets are smaller than one in nucleus-nucleus collisions both at the RHIC [170; 171; 172] and LHC [173; 174], and these measurements can be explained by the mechanism of partonic energy loss, which in turn serves as convincing evidence for the formation of the QGP in such extremely hot and dense conditions. Meanwhile, the jet transport parameter \(\hat{q}\equiv d\langle p_{\perp}^{2}\rangle/dL\) [175], representing the strength of in-medium partonic interactions, can be extracted from the available \(R_{AA}\) data by various theoretical models [167; 176; 177; 178; 179]. Additionally, to test the mass dependence of jet quenching, the \(R_{AA}\) has also been used in comparisons of the yield suppression between heavy-flavour jets and inclusive jets. Benefiting from the fact that heavy-flavour jets are produced abundantly as the centre-of-mass energy increases in hadronic collisions at the LHC, the exploration of heavy quark-tagged jets produced in heavy-ion collisions has gradually attracted much attention. The first experimental effort focused on the production of b-jets was implemented by the CMS collaboration [121] in 2013, as shown in the left plot of Figure 2, where a b-jet is defined as a jet containing at least one B hadron inside the jet-cone. The red points are the CMS data and the coloured bands are the theoretical calculations. This measurement accounts for the b-jet samples in minimum bias collisions (0-100%).
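As a side note on the bookkeeping behind Eq. (13), the toy sketch below (the power-law spectra and \(\langle N_{\rm bin}\rangle\) are invented numbers, not the measured inputs of any of the analyses discussed here) shows how \(R_{AA}\) is formed bin by bin from a per-event A+A yield and a p+p reference:

```python
import numpy as np

pt = np.linspace(60.0, 200.0, 8)   # jet pT bin centres in GeV (toy binning)
n_bin = 1600.0                     # toy <N_bin> for a central A+A class

dN_pp = pt ** -5.0                 # toy per-event p+p spectrum dN/dpT
dN_aa = 0.5 * n_bin * dN_pp        # toy A+A spectrum: binary-scaled, quenched by 0.5

# Eq. (13): R_AA = (1 / <N_bin>) * (dN^AA/dpT) / (dN^pp/dpT)
R_AA = dN_aa / (n_bin * dN_pp)
print(R_AA)                        # flat 0.5 by construction of this toy input
```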
We note that even with large experimental uncertainties, the b-jet \(R_{AA}\) slightly increases with jet \(p_{T}\) and varies from 0.4 to 0.8. A significant suppression of the b-jet yield in Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 2.76 TeV relative to the p+p baseline was observed for the first time, which indicates that bottom quarks strongly interact with the hot/dense nuclear matter. Furthermore, within the experimental uncertainties, the results were found to be consistent with the pQCD-based calculations conducted in [180] when the coupling factor \(g^{med}\) varied from 1.8 to 2.2. To address the difference in yield suppression between the b-jet and the inclusive jet (mainly initiated by a massless light quark or gluon), a direct comparison of their \(R_{AA}\) in the right plot of Figure 2 was presented by the SHELL approach, which applies a Langevin transport model to describe heavy quark propagation in the QGP [104], in central Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 2.76 TeV, as well as next-to-leading order pQCD calculations matched with the parton shower effect for the p+p baseline [142; 181]. In the model, the jet transport parameter \(\hat{q}\) was extracted from the production of identified hadrons in A+A collisions [182], and the spatial diffusion coefficient \(D_{s}\) of heavy quarks was then determined by the D meson \(R_{AA}\) data [26; 127]. The measured \(R_{AA}\) of the inclusive jet with the centrality of 0-5% [183] and of the b-jet with 0-10% [121] are also illustrated in the plot of Figure 2. Figure 2: **Left**: the measured nuclear modification factor \(R_{AA}\) of the inclusive b-jet versus b-jet \(p_{T}\) by the CMS collaboration in Pb+Pb at \(\sqrt{s_{NN}}\) = 2.76 TeV for minimum bias collisions [121]. **Right**: the comparison of \(R_{AA}\) between the inclusive jets and b-jets versus jet \(p_{T}\) in central Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 2.76 TeV [104]. The figures are from [104; 121]. Although the \(R_{AA}\) of the b-jet seems to be slightly smaller than that of the inclusive jet, the CMS collaboration claims that no clear difference of \(R_{AA}\) between the inclusive jet and the b-jet was found, because the current uncertainties of the b-jet data are too large. However, the theoretical calculations in [104] suggest that the b-jet \(R_{AA}\) may be larger than the inclusive jet \(R_{AA}\), due to the "dead-cone" effect of the bottom quarks, which suppresses the medium-induced gluon radiation of massive heavy quarks within a cone \(\theta\sim M/E\) [184]. A more precise measurement is necessary to resolve the tension between the experimental data and the theoretical calculations. It is very exciting that, recently, the ATLAS collaboration reported preliminary results simultaneously measuring the \(R_{AA}\) of the inclusive jet and the b-jet in 0-20% Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV [124], which show a clearly weaker suppression of the b-jet; these features can be described by the theoretical calculations [49; 104]. Although the mass hierarchy of jet quenching at the particle level has been confirmed by a lot of experimental data [185; 186], it is indisputable that the ATLAS measurement makes a crucial step towards finding the mass effect at the jet level. A comparison of the c- and b-jet \(R_{AA}\) has been presented in [71] with the SCET model [187; 188], which shows no significant difference at \(p_{T}>\) 50 GeV.
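Since the "dead-cone" argument invoked above is simply \(\theta\sim M/E\), a two-line estimate (the radiator energies are arbitrary illustrative choices; the quark masses are those quoted in the Introduction) makes the charm/bottom hierarchy explicit:

```python
# Dead-cone opening angle theta ~ M/E for charm and bottom quarks
for name, mass in [("charm", 1.5), ("bottom", 4.8)]:        # masses in GeV
    for energy in (20.0, 50.0, 100.0):                      # radiator energy in GeV
        print(f"{name:6s} E = {energy:5.1f} GeV -> theta ~ {mass / energy:.3f} rad")
```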
More recently, some exploratory estimates indicate that the suppression of the c-jet \(R_{AA}\) may be stronger than that of the inclusive jet at higher jet \(p_{T}\) due to their different constituents [189; 190], an interesting finding to be investigated further in detail. Beyond \(R_{AA}\), another observable, \(I_{AA}\) [16], has also been utilized to study the yield suppression of b-jets tagged by \(Z^{0}\) bosons in high-energy nuclear collisions [191]. Similar to \(R_{AA}\), \(I_{AA}\) is defined as the ratio of the tagged-jet yields per \(Z^{0}\) trigger in A+A and p+p collisions, \[I_{AA}=\frac{\frac{1}{N_{Z}^{\rm AA}}\frac{dN_{\rm jet}^{\rm AA}}{dp_{T}^{\rm jet}}}{\frac{1}{N_{Z}^{\rm pp}}\frac{dN_{\rm jet}^{\rm pp}}{dp_{T}^{\rm jet}}} \tag{14}\] Figure 3: Nuclear modification factor \(I_{AA}\) as a function of the transverse momentum of the tagged jet within three \(p_{T}^{Z}\) windows: 40-60, 60-80, 80-120 GeV in central 0-10% Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV [191]. ### Transverse Momentum Imbalance The transverse momentum imbalance (\(x_{J}=p_{\text{T,2}}/p_{\text{T,1}}\)) is another useful observable, describing the momentum asymmetry of the dijet system in the transverse plane, where \(p_{\text{T,1}}\) and \(p_{\text{T,2}}\) denote the leading and sub-leading jet \(p_{T}\). It is noted that in leading-order QCD calculations the two outgoing hard partons should be strictly back-to-back in the transverse plane (\(x_{J}=1\)), but higher-order corrections and the vacuum shower may break this symmetry, leading to \(x_{J}<1\). In heavy-ion collisions, smaller \(x_{J}\) of the \(\gamma\)+jet [193] and \(Z^{0}\)+jet [194] systems have been observed in Pb+Pb collisions compared to p+p, which results from the energy loss of the tagged jet. The CMS collaboration reported the measurement of \(x_{J}\) of the inclusive and \(b\bar{b}\) dijets in Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV [122]. In their measurements, the biggest challenge was how to select the \(b\bar{b}\) dijet events initiated by hard heavy-quark pairs, because this is crucial for addressing the mass effect by directly comparing such heavy-quark dijets with inclusive dijets. On the theoretical side, the production mechanisms of heavy quarks can be categorized into three classes: flavour creation (FCR), flavour excitation (FEX), and gluon splitting (GSP) [195; 196; 197; 198], of which only FCR represents the dijets initiated by heavy-quark pairs originating from the hard process. The CMS collaboration suggests a strategy to separate the FCR processes by selecting \(b\bar{b}\) dijets that have a large opening angle (\(|\Delta\phi|>2\pi/3\)) in azimuth, which can significantly suppress the contributions of the other two. This method has also been used in theoretical studies [104; 199; 200].
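The selection logic just described reduces to a wrapped azimuthal difference and a ratio of transverse momenta; the sketch below (the toy dijet sample and its smearing width are assumptions for illustration, not simulation output from the cited works) applies the \(|\Delta\phi|>2\pi/3\) cut and computes \(x_{J}=p_{T,2}/p_{T,1}\):

```python
import numpy as np

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi]."""
    dphi = np.abs(phi1 - phi2) % (2.0 * np.pi)
    return np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

rng = np.random.default_rng(7)
n = 100_000
pt1 = rng.uniform(60.0, 120.0, n)               # leading-jet pT in GeV (toy)
pt2 = rng.uniform(0.3, 1.0, n) * pt1            # sub-leading jet pT (toy)
phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
phi2 = phi1 + np.pi + rng.normal(0.0, 0.5, n)   # roughly back-to-back topology

keep = delta_phi(phi1, phi2) > 2.0 * np.pi / 3.0   # FCR-enhancing azimuthal cut
x_j = pt2[keep] / pt1[keep]                        # x_J = pT,2 / pT,1
print(f"selected fraction = {keep.mean():.2f},  <x_J> = {x_j.mean():.3f}")
```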
Figure 4 shows a comparison of the averaged \(x_{J}\) of the inclusive and \(b\bar{b}\) dijets in both p+p and Pb+Pb collisions with different centrality bins at \(\sqrt{s_{NN}}\) = 5.02 TeV, as well as the experimental data [122], where \(\left<x_{J}\right>\) was estimated as follows. \[\left<x_{J}\right>=\frac{1}{\sigma}\int_{0}^{1}x_{J}\frac{d\sigma}{dx_{J}}dx_{J} \tag{15}\] Figure 4: Averaged \(x_{J}\) value as a function of the number of participants calculated in p+p and Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV within different centrality bins compared with the experimental data, both for inclusive (**left**) and \(b\bar{b}\) (**right**) dijets. Figures are from [104]. The black triangle points are the CMS data in Pb+Pb collisions, and the black star points are the p+p reference used in their measurements. The blue and red rhombus points are the theoretical calculations, with the p+p reference provided by the Monte Carlo event generator SHERPA [142], which matches the next-to-leading order QCD matrix elements and the parton shower effect in a vacuum [201; 202]. The \(\left<x_{J}\right>\) points of the inclusive (left panel) and \(b\bar{b}\) (right panel) dijets are listed within three centrality bins, which correspond to different numbers of participants in Pb+Pb collisions. In Figure 4, the theoretical calculations based on the SHELL model [104] show an overall decrease in \(\left<x_{J}\right>\) in Pb+Pb collisions relative to the p+p
In particular, to guarantee that the \(Z^{0}\) bosons and the tagged jets are back-to-back in the transverse plane, the \(Z^{0}+\text{jet}\) or \(Z^{0}+\text{b-jet}\) pairs are required to have a large opening angle in azimuth, \(\Delta\phi_{jZ}(\Delta\phi_{bZ})>7\pi/8\). The differences of \(x_{jZ}\) (\(x_{bZ}\)) distributions in p+p and Pb+Pb collisions are also shown in the lower panels. Due to the jet energy loss, the \(x_{jZ}\) and \(x_{bZ}\) distributions shift towards smaller \(x_{J}\) values in Pb+Pb collisions relative to p+p. Furthermore, one can find in the lower panel that the variations of \(x_{bZ}\) are slightly smaller than that of \(x_{jZ}\). More intuitive comparisons between the averaged \(x_{jZ}\) and \(x_{bZ}\) are listed in Table 1. Within the statistical errors, the results show that \(\Delta\langle x_{j_{z}}\rangle\sim 0.136\) is considerably larger than \(\Delta\langle x_{bZ}\rangle\sim 0.092\), consistent with the expectation that bottom jets lose less energy than light-quark jets. ### Angular Correlation Jet angular correlations, such as \(\Delta\phi\) distribution of dijets [203, 204] and \(\gamma/Z^{0}\) + jet [205, 206], are useful observable to address the medium-induced transverse momentum effect. In this context, estimating the medium modification on the angular distribution of heavy quark dijets in nucleus-nucleus collisions may also be of interest from the theoretical point of view. As shown in the left plot of Figure 6, medium modification of the azimuthal angular correlations (\(\Delta\phi=|\phi_{b1}-\phi_{b2}|\)) of the \(b\bar{b}\) dijet system in Pb+Pb collisions with different centralities at \(\sqrt{s_{NN}}\) = 5.02 TeV are calculated [207]. One can observe suppression at \(\Delta\phi\sim\)0 and enhancement at \(\Delta\phi\sim\pi\) in Pb+Pb collisions compared to the p+p, and the modifications are centrality dependent. Since the distributions are self-normalized, it implies that \(b\bar{b}\) dijets with a larger opening angle (back-to-back) suffer relatively weaker yield suppression compared to that with a smaller one (collinear). It can be noted that the main contribution of \(b\bar{b}\) dijet production at smaller \(\Delta\phi\) is from the GSP process while larger \(\Delta\phi_{bb}\) from the FCR process. The two b-jets from the former process share the energy of the gluon and then usually have lower \(p_{T}\) than that from the latter process. As a result, the yield at the smaller \(\Delta\phi\) region is more sensitive to the selection cut \(p_{T}^{\rm jet}>20\) GeV. Actually, in another study on the angular correlations of \(Z^{0}+\) b-jet [191], it's found that initial average b-jet \(p_{T}\) distribution versus \(\Delta\phi\) \begin{table} \begin{tabular}{c c c} \hline & \(Z^{0}\) + jet & \(Z^{0}\) + b-jet \\ \hline \(\left\langle x_{I}\right\rangle_{pp}\) & 0.987 \(\pm\) 0.0047 & 0.941 \(\pm\) 0.0056 \\ \hline \(\left\langle x_{I}\right\rangle_{PbPb}\) & 0.851 \(\pm\) 0.0061 & 0.849 \(\pm\) 0.0064 \\ \hline \(\Delta\left\langle x_{I}\right\rangle\) & 0.136 \(\pm\) 0.0108 & 0.092 \(\pm\) 0.012 \\ \hline \end{tabular} \end{table} Table 1: The averaged \(x_{I}\) of \(Z^{0}\) + jet and \(Z^{0}\) + b-jet both in p+p and Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV, as well as their variations \(\Delta x_{I}=\left\langle x_{I}\right\rangle_{\rm pp}-\left\langle x_{I}\right\rangle _{\rm PbPb}\). The statistical errors of \(x_{I}\) in the simulations are also presented. Table is from Ref. [191]. 
plays a critical role, as shown in the right plot of Figure 6. We see that the ratio PbPb/pp in the middle panel is flat, and the average b-jet \(p_{T}\) distribution is also flat. It is reasonable to infer that in Pb+Pb the azimuthal angle between the b-jet and the \(Z^{0}\) has not been modified compared to p+p, and that the overall suppression occurs over the whole \(\Delta\phi_{\rm bZ}\) region. Of course, we can imagine that it is more difficult for high-\(p_{T}\) (\(>\)30 GeV) jets to be significantly deflected by the scattering with thermal partons. To probe the angular deflection caused by the in-medium \(p_{T}\)-broadening, observables accessible in the lower \(p_{T}\) region are needed. For this reason, it is proposed in Ref. [105] that the heavy-flavour meson tagged by a direct photon (\(\gamma\)+HF) may provide a promising channel, with several advantages: (1) the transverse momentum of the \(D^{0}\) meson can be measured down to \(\sim\)1 GeV [127], where the angular deflection is significant; (2) the photon gauges the initial momentum of the heavy quark, so it is easy to quantify the change of direction; (3) the selection bias effect can be suppressed by constraining the photon energy [Cunqueiro:2021wls]. In this way, considerable angular de-correlations between the heavy quarks and photons are predicted both in central Au+Au collisions at the RHIC and Pb+Pb collisions at the LHC. Furthermore, by constructing the two-dimensional (\(\Delta\phi,x_{J}\)) correlation diagram of \(\gamma\)+HF, it is argued that the two aspects of jet quenching, energy loss and \(p_{T}\)-broadening, can be well displayed simultaneously. Additionally, it is noted that another measurement of the angular correlations of \(D^{0}\)+hadron in Au+Au collisions at \(\sqrt{s_{NN}}\) = 200 GeV may reflect the medium modification of the charm+jet correlation in the \(\eta-\phi\) plane [208], which awaits further detailed investigation. Figure 6: **Left**: normalized azimuthal angular distributions of the \(b\bar{b}\) dijet system in p+p and Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. Results for different centrality bins, 0-10%, 10-30%, 30-100%, are presented. **Right**: the azimuthal angular distribution of \(Z^{0}+\) b-jet in p+p and 0-10% Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV in the upper panel, while the ratio PbPb/pp (green solid line) is shown in the middle panel and the averaged b-jet \(p_{T}\) (blue band) in the lower panel. Figures are from Refs. [191; 207]. ### Radial Profile The radial profile of the heavy-flavour jet represents the distribution of the angular distance \(r=\sqrt{(\phi_{\rm Q}-\phi_{\rm jet})^{2}+(\eta_{\rm Q}-\eta_{\rm jet})^{2}}\) between the heavy-flavour meson and the jet-axis in the \(\eta-\phi\) plane. Systematic studies with a focus on the radial profiles of D-jets and B-jets in heavy-ion collisions are performed in Refs. [103; 209]. As shown in the left panel of Figure 7, the calculated radial profiles of D-jets both in p+p and 0-100% Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV are compared to the CMS measurements [123]. The black and red triangle points represent the measured data.
The D-jets are reconstructed with the anti-\(k_{T}\) algorithm with R = 0.3 and \(|\eta^{\rm jet}|<1.6\). All selected D-jets must satisfy \(p_{T}^{\rm jet}>60\) GeV and contain at least one \(D^{0}\) meson in the jet-cone with \(4<p_{T}^{D}<20\) GeV. The blue solid line is the p+p baseline provided by SHERPA [142], and the red dashed line denotes the calculations based on the SHELL model. One can observe that the model calculations show that the radial profile of D-jets in Pb+Pb collisions shifts towards larger radii relative to that in p+p, which is consistent with the diffusion trend observed by the CMS collaboration. These results show a clear physical picture: charm quarks change their direction of motion when scattering with the thermal partons in the hot and dense QCD matter. The studies argue that the diffusion behaviour of the D meson is closely related to the \(p_{T}\)-broadening when charm quarks scatter with the thermal partons in the medium. It should be noted that in such an estimate the jets are required to have \(p_{T}>60\) GeV while the D meson has \(p_{T}<20\) GeV, so the higher-\(p_{T}\) jet can be viewed as a reference to probe the change of the moving direction of charm quarks. It is found that the angular deviation \(\Delta r=\sqrt{(\varphi_{c}^{f}-\varphi_{c}^{i})^{2}+(\eta_{c}^{f}-\eta_{c}^{i})^{2}}\) of charm quarks from their initial position in the \(\eta-\phi\) plane is \(p_{T}\) dependent, as shown in the right plot of Figure 7. The charm quarks with lower \(p_{T}\) are more likely to change their travelling direction via the in-medium scattering, and this feature also explains why no visible modification is observed in the CMS data for \(p_{T}^{D}>20\) GeV [123]. The angular deviation at lower \(p_{T}\) (\(<\)5 GeV) is dominated by elastic scattering, whereas at higher \(p_{T}\) it is dominated by inelastic reactions. These investigations may cast light on the in-medium energy loss mechanisms and constrain the transport coefficients of heavy quarks from a new perspective. We notice that a preliminary result on the D-jet radial profile in Au+Au collisions at \(\sqrt{s_{NN}}=200\) GeV has been reported by the STAR collaboration in Ref. [125]. This result shows a similar diffusion effect of charm quarks in jets in mid-central 10-40% collisions. To test the mass effect reflected in the radial profile, an additional comparison of the medium modification between D-jets and B-jets has been presented in Refs. [210; 211], where an inverse modification pattern of the radial profile of B-jets compared to D-jets is observed. The jet quenching effect seems to narrow the jet radial profiles of B-jets while broadening those of D-jets. It is demonstrated that the selection bias effect [212] in A+A collisions may play a pivotal role: heavy quark jets with higher \(p_{T}\) have narrower initial radial distributions, and would naturally lead to narrower distributions when they fall into the lower \(p_{T}\) domain due to jet energy loss. This reveals the fact that the final-state modification of a jet is influenced not only by the pure medium effect, but also by other factors, such as the initial spectra and the selection bias [5]. ### Fragmentation Function The jet fragmentation function \(D(z)=(1/N_{\rm jet})dN_{\rm ch}(z)/dz\) is one of the most well-explored jet substructure observables [213; 214; 215], which usually refers to the longitudinal momentum distribution of charged hadrons inside the jet-cone [216; 217; 218; 219]. For heavy-flavour jets, the corresponding observable is the longitudinal momentum distribution of heavy-flavour mesons in jets, defined as in [112]. \[D(z_{||})=\frac{1}{N_{\rm jet}}\frac{dN_{\rm HQ}(z_{||})}{dz_{||}},\quad\text{where }z_{||}=\frac{\vec{p}_{\rm HQ}\cdot\vec{p}_{\rm jet}}{\vec{p}_{\rm jet}\cdot\vec{p}_{\rm jet}}.
\tag{16}\] Figure 7: **Left**: radial profile of the D-jet in p+p and Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. **Right**: angular deviation of the charm quark as a function of the initial \(p_{T}\). Figures are from Refs. [103; 209]. On the one hand, the \(D(z_{||})\) distribution may provide useful information revealing the production mechanisms and substructure of heavy quark jets [220]. On the other hand, since \(z_{||}\) denotes the momentum projection of the heavy-flavour hadron on the jet axis, the medium modification of the \(D(z_{||})\) distribution in nucleus-nucleus collisions is closely related to the interplay of the partonic energy loss between the massive heavy quarks and the massless light partons [221]. Figure 8 shows the first theoretical investigation of the medium modification of the \(D(z_{||})\) distributions of both D-jets and B-jets in Pb+Pb collisions at \(\sqrt{s_{NN}}\) = 5.02 TeV. In these calculations, two jet \(p_{T}\) ranges are chosen, \(5<p_{T}^{\rm jet}<15\) GeV and \(15<p_{T}^{\rm jet}<50\) GeV, and the selected \(D^{0}\) (\(B^{0}\)) mesons are required to have \(p_{T,D^{0}(B^{0})}>2\) GeV and \(p_{T,D^{0}(B^{0})}>5\) GeV, respectively. The black solid lines represent the p+p baselines of the \(D(z_{||})\) distributions calculated by the POWHEG+PYTHIA8 event generator [141; 222; 223; 224], and the orange dashed lines are the theoretical calculations in Pb+Pb collisions based on the SHELL model. The upper and middle panels correspond to the \(D(z_{||})\) distributions of D-jets and B-jets, while the lower panels are their nuclear modifications \(D(z_{||})_{PbPb}/D(z_{||})_{pp}\) (green for the D-jet and yellow for the B-jet). One can observe that the initial \(D(z_{||})\) distributions in p+p are sensitive to the kinematic region of the jet and the heavy-flavour hadron, especially for D-jets. Moreover, even within the same kinematic region, a B-jet has an evidently harder fragmentation pattern compared to a D-jet. The difference could be related to the stronger "dead-cone" effect suffered by the heavier bottom quarks; in other words, the bottom quarks radiate fewer gluons and carry a larger energy fraction of the jet than charm quarks. Besides, the contribution of the GSP process may also play different roles in the production of B-jets and D-jets, which may lead to additional differences in their \(z_{||}\) distributions [221]. In nuclear collisions, the main finding is that the jet quenching effect results in softer fragmentation patterns of heavy-flavour jets in the QGP compared to those in a vacuum. This differs from what one could naively argue, namely, that the energy fraction of heavy quarks in jets may increase because heavy quarks lose less energy than light partons. The modification of \(D(z_{||})\) reveals the different energy loss mechanisms between the single parton and the full jet. Critically, the energy lost by the jet constituents may be partially brought back into the jet energy by the reconstruction procedures. This is an essential difference in energy loss mechanisms between the full jet and the single parton, and it leads to less energy loss of the full jet compared to the heavy quark itself. Therefore, stronger medium modification of \(D(z_{||})\) can be obtained with larger R, which may be related to the R-dependence of jet energy loss [225; 226]. Furthermore, stronger medium modification of \(D(z_{||})\) is observed for B-jets compared to D-jets, due to their different initial spectra. Figure 8: \(D(z_{||})\) distributions of the D-jet and B-jet within two \(p_{T}\) windows both in p+p and 0-10% Pb+Pb collisions, as well as the medium modifications (PbPb/pp). Figure is from Ref. [221].
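Since \(z_{||}\) in Eq. (16) is just the projection of the heavy-hadron momentum onto the jet axis, it is a one-line computation once the three-momenta are known; the vectors below are invented purely for illustration:

```python
import numpy as np

def z_parallel(p_hq, p_jet):
    """Longitudinal momentum fraction of Eq. (16):
    z_par = (p_HQ . p_jet) / (p_jet . p_jet)."""
    return np.dot(p_hq, p_jet) / np.dot(p_jet, p_jet)

# Toy three-momenta in GeV: a D meson roughly collinear with its jet axis
p_jet = np.array([40.0, 5.0, 3.0])
p_d = np.array([18.0, 2.5, 1.2])
print(f"z_par = {z_parallel(p_d, p_jet):.3f}")   # ~0.45 for these toy vectors
```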
### The "Dead-Cone" Effect and Other Observables Until now, there have been a few other heavy-flavour jet observables accessible in the current experimental measurements at the LHC, which have also attracted attention from the high-energy nuclear physics community. We briefly discuss them in the following. * The Cambridge-Aachen (CA) declustering techniques [227], which help to obtain the angular-ordered pairwise tree of subjets [228], and the Soft Drop condition (discussed below) enable us to expose the most basic heavy quark splitting structure by measuring the splitting-angle distributions in D\({}^{0}\) meson jets in p+p collisions at \(\sqrt{s}=13\) TeV [229]. These have been measured in three different energy intervals of the radiators, \(5\leq E_{\rm Radiator}\leq 10\) GeV, \(10\leq E_{\rm Radiator}\leq 20\) GeV and \(20\leq E_{\rm Radiator}\leq 30\) GeV, with the transverse momentum of the D\({}^{0}\) meson in the jet constrained to \(2<p_{\rm T}^{\rm D^{0}}<36\) GeV/c. The ALICE collaboration directly observed for the first time a clear suppression of the splitting distribution at angles smaller than the ratio of the quark mass to the energy of the quark radiator, \(\theta\leq M_{\rm charm}/E_{\rm radiator}\), known as the "dead-cone" effect [230; 231]. Such a measurement of a heavy quark jet and its substructure reveals and confirms this most basic property, described by QCD theory, of a fast quark interacting with the vacuum. A subsequent phenomenology study exposed the "dead-cone" effect in the medium-induced gluon radiation of jet quenching [53; 150; 184], by calculating the emission angle distribution of the heavy-flavour quark initiated splittings in a D\({}^{0}\) meson tagged jet and that of the light parton initiated splittings in the presence of the QGP in Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV [232], as demonstrated in Figure 9. Very interestingly, they find that the collisional energy loss mechanism will not obscure the observation of the "dead-cone" effect in the medium-induced radiation. Such a proposal has also been supported by an analytical study that proposes a new jet substructure grooming procedure that selects the most collinear splitting in a QCD jet above a certain transverse momentum cutoff [233]. It is also found in another study that the "dead-cone" domain would be partially filled by the medium-induced emission as heavy quarks traverse the QGP [150]. Figure 9: The splitting-angle distributions for D\({}^{0}\) meson tagged jets, inclusive jets and also light-quark jets normalized to the number of jets in Pb+Pb collisions at \(\sqrt{s}=5.02\) TeV (upper plots) and also the \(D^{0}\) meson tagged jets/inclusive jets (light-quark jets/inclusive jets) ratios (bottom plots) calculated for three energy intervals of the radiators: \(5<E_{\rm Radiator}<10\) GeV (left panel), \(10<E_{\rm Radiator}<20\) GeV (middle panel) and \(20<E_{\rm Radiator}<30\) GeV (right panel). The shaded areas correspond to the angles at which the radiation is suppressed due to the "dead-cone" effect. Figure is from Ref. [232]. * The jet shape \(\rho(r)\) describes the transverse energy profile of charged hadrons as a function of the angular distance from the jet axis. This observable has been well-studied for light-flavour jets [205; 234] in searches for the medium response effect as energetic partons dissipate energy into the medium [6]. The measurement of the medium modification of the b-jet shape has been reported in Refs. [118; 128] by the CMS collaboration. On the one hand, the comparison of the jet shapes of b-jets in Pb+Pb and p+p collisions shows that the presence of the QGP modifies the energy distribution around the jet axis of b-jets. On the other hand, their measurements indicate a stronger jet energy redistribution of b-jets at larger radii compared to that of inclusive jets. Generally speaking, the bottom quarks are expected to dissipate less energy in nuclear collisions compared to light quarks and gluons due to the "dead-cone" effect.
However, at larger jet radii, the medium response effect plays the dominant role in the enhancement of the jet energy distribution in Pb+Pb collisions compared to the p+p baseline. Therefore, these interesting results may suggest that a heavier quark, like the bottom, may drive a stronger medium response than a massless parton. In this context, heavy-flavour jets can serve as promising sensitive probes of the quasi-particle excitations of the quark soup. * The Soft Drop (SD) grooming procedures reveal the two-prong structure of a jet, described by the momentum sharing \(z_{g}\) and opening angle \(R_{g}\) [228], which establish a connection between final-state observables and the parton splitting function. The splitting history can be helpful to identify the production mechanisms of heavy-flavour jets [235; 236]. Heavy quark jets from the gluon splitting process usually tend to have more balanced \(z_{g}\) and larger \(R_{g}\) compared to those from the FEX and FCR processes. The first measurement of the D-jet splitting function was performed by ALICE [237], and some theoretical efforts focusing on the medium modifications of \(z_{g}\) and \(R_{g}\) of c- and b-jets are presented in Refs. [238; 239]. The medium effects result in a more imbalanced \(z_{g}\) distribution and larger opening angles between the two subjets in heavy quark jets, similar to the medium modification of inclusive jets observed by the CMS [240] and ALICE [241] collaborations. ## 4 Summary and Conclusions This review covers the current development of theoretical studies on heavy-flavour jets in ultra-relativistic heavy-ion collisions. We introduce the recent theoretical advances of heavy-flavour production in heavy-ion collisions and then give a comprehensive discussion of several recent investigations relating to heavy-flavour jet observables. * We briefly overview the recent theoretical advances that help us understand heavy-flavour production in heavy-ion collisions, mainly focusing on the initial production, transport approaches, hadronization mechanisms, and diffusion coefficient extraction. These phenomenological studies based on the transport models reveal the fact that elastic scattering of heavy quarks is dominant in the lower \(p_{T}\) region (\(p_{T}^{Q}<5m_{Q}\)), while inelastic processes dominate the high \(p_{T}\) region. Besides, different from the fragmentation hadronization of heavy quarks in a vacuum, within the hot and dense nuclear matter the coalescence mechanism plays an important role in explaining the large collective flow and the enhanced baryon-to-meson ratio of charmed hadrons in nucleus-nucleus collisions at the RHIC and the LHC. The diffusion coefficient of heavy quarks in the QGP has been extracted by various theoretical frameworks, which imply that \(2\pi TD_{s}\) slightly increases with temperature.
The newly developed Bayesian inference approach may be promising for a robust determination of the transport coefficient of heavy quarks via model-to-data fits.

* The studies on yield suppression and momentum imbalance of heavy-flavour jets are dedicated to addressing the mass effect of jet energy loss. Theoretical investigations predict stronger yield suppression of light quark jets compared to heavy-flavour jets, which is preliminarily supported by the recent ATLAS measurement of the b-jet \(R_{AA}\). However, the dijet asymmetry shows a reduced sensitivity to the jet quenching effect; therefore, the difference in the medium modification of \(x_{J}\) between inclusive and \(b\bar{b}\) dijets seems to be moderate. Nevertheless, the nuclear modification factor is still an effective and powerful observable to test the mass effect of energy loss in the QGP. On the other hand, the strategy to isolate the jets initiated by heavy quarks is also crucial to address the mass effect, since GSP processes indeed contribute substantially to the production of heavy quark jets but suffer stronger suppression in nucleus-nucleus collisions.
* Observables related to angular correlations aim at the deflection of the jet axis caused by the medium-induced \(p_{T}\)-broadening of jet quenching. It is found that the angular deviation caused by in-medium scattering is hard to observe for high-\(p_{T}\) jets, both for \(b\bar{b}\) dijets and \(Z^{0}+\) (b-)jet. This makes sense because higher-\(p_{T}\) jets are more difficult to deflect by in-medium scattering with the thermal partons of the QGP. Meanwhile, the medium modification of the radial profiles of jets containing lower-\(p_{T}\) D mesons can well capture the angular de-correlation of the charm quark and the jet axis. This suggests that heavy flavours may be better suited to address the medium-induced \(p_{T}\)-broadening of jet quenching, since they are experimentally accessible in the low-\(p_{T}\) domain where the angular deviation is visible.
* The substructure observables can reveal a wealth of information about the inner configuration of heavy-flavour jets. In the vacuum case, declustering techniques provide an inventive way to re-establish the splitting history of hard partons, which helped unlock the "dead-cone" effect of the charm quark in experiment. For heavy-flavour jets, the substructure observables also provide a unique opportunity to identify their production mechanisms. Furthermore, jet substructure, such as the jet shape, seems more sensitive to the induced medium excitation in nucleus-nucleus collisions than full-jet observables. Much theoretical effort is still needed to address the interplay of the "dead-cone" effect in medium-induced radiation and the medium response of heavy quarks. From the current perspective, studies of the substructure of heavy-flavour jets could play an increasingly important role in high-energy nuclear physics.
* The initial jet spectra and the "selection bias" play important roles in the medium modifications of jet substructure in nuclear collisions. When focusing on the mass effect in the yield or substructure modification of heavy quark jets, it is natural a priori to believe that bottom jets should show a weaker medium modification in heavy-ion collisions compared to charm jets under the same conditions.
However, in studies of the radial profile and fragmentation function of heavy-flavour jets, it is found that b-jets have a very different initial substructure compared to c-jets, even within the same kinematic constraints, which eventually leads to stronger medium modification of b-jets in the final state compared to c-jets. On the other hand, the "selection bias" poses a challenge to theoretical studies that aim at the nuclear modification mechanism of heavy-flavour jets in the hot and dense QCD medium. It brings additional "modifications" to the PbPb/pp ratio of jet substructure distributions; nevertheless, these "modifications" do not exactly reflect a change of the jet substructure, but only the decrease of jet energy from higher kinematic regions in Pb+Pb collisions.

Conceptualization, S.W. and B.-W.Z.; methodology, W.D. and B.-W.Z.; investigation, S.W. and W.D.; writing--original draft preparation, S.W. and W.D.; writing--review and editing, B.-W.Z., E.W. and X.-N.W.; supervision, B.-W.Z., E.W. and X.-N.W. All authors have read and agreed to the published version of the manuscript. This research is supported by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, and the Natural Science Foundation of China with Project Nos. 11935007, 12035007, 12247127. S. Wang is also supported by China Postdoctoral Science Foundation under project No. 2021M701279. The authors declare no conflict of interest.
2306.09789
Dynamic Decision Tree Ensembles for Energy-Efficient Inference on IoT Edge Nodes
With the increasing popularity of Internet of Things (IoT) devices, there is a growing need for energy-efficient Machine Learning (ML) models that can run on constrained edge nodes. Decision tree ensembles, such as Random Forests (RFs) and Gradient Boosting (GBTs), are particularly suited for this task, given their relatively low complexity compared to other alternatives. However, their inference time and energy costs are still significant for edge hardware. Given that said costs grow linearly with the ensemble size, this paper proposes the use of dynamic ensembles, that adjust the number of executed trees based both on a latency/energy target and on the complexity of the processed input, to trade off computational cost and accuracy. We focus on deploying these algorithms on multi-core low-power IoT devices, designing a tool that automatically converts a Python ensemble into optimized C code, and exploring several optimizations that account for the available parallelism and memory hierarchy. We extensively benchmark both static and dynamic RFs and GBTs on three state-of-the-art IoT-relevant datasets, using an 8-core ultra-low-power System-on-Chip (SoC), GAP8, as the target platform. Thanks to the proposed early-stopping mechanisms, we achieve an energy reduction of up to 37.9% with respect to static GBTs (8.82 uJ vs 14.20 uJ per inference) and 41.7% with respect to static RFs (2.86 uJ vs 4.90 uJ per inference), without losing accuracy compared to the static model.
Francesco Daghero, Alessio Burrello, Enrico Macii, Paolo Montuschi, Massimo Poncino, Daniele Jahier Pagliari
2023-06-16T11:59:18Z
http://arxiv.org/abs/2306.09789v1
# Dynamic Decision Tree Ensembles for Energy-Efficient Inference on IoT Edge Nodes

###### Abstract

With the increasing popularity of Internet of Things (IoT) devices, there is a growing need for energy-efficient Machine Learning (ML) models that can run on constrained edge nodes. Decision tree ensembles, such as Random Forests (RFs) and Gradient Boosting (GBTs), are particularly suited for this task, given their relatively low complexity compared to other alternatives. However, their inference time and energy costs are still significant for edge hardware. Given that said costs grow linearly with the ensemble size, this paper proposes the use of _dynamic ensembles_, that adjust the number of executed trees based both on a latency/energy target and on the complexity of the processed input, to trade off computational cost and accuracy. We focus on deploying these algorithms on multi-core low-power IoT devices, designing a tool that automatically converts a Python ensemble into optimized C code, and exploring several optimizations that account for the available parallelism and memory hierarchy. We extensively benchmark both static and dynamic RFs and GBTs on three state-of-the-art IoT-relevant datasets, using an 8-core ultra-low-power System-on-Chip (SoC), GAP8, as the target platform. Thanks to the proposed early-stopping mechanisms, we achieve an energy reduction of up to 37.9% with respect to static GBTs (8.82 uJ vs 14.20 uJ per inference) and 41.7% with respect to static RFs (2.86 uJ vs 4.90 uJ per inference), without losing accuracy compared to the static model.

Energy Efficiency, Machine Learning, Random Forest, Gradient Boosting, Dynamic Inference

Footnote †: 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See [https://www.ieee.org/publications/rights/index.html](https://www.ieee.org/publications/rights/index.html) for more information.

## I Introduction

Machine Learning (ML) inference is increasingly present in multiple Internet of Things (IoT) applications, ranging from human activity recognition [1] to predictive maintenance [2] or seizure detection [3]. A cloud-centric paradigm is traditionally leveraged, with the IoT nodes collecting data and offloading almost all computations to high-end servers. This approach allows relying on robust and accurate models, independently of the IoT device's computing capabilities. Nevertheless, the need for a constant connection to remote servers, especially in unstable or insecure environments which are common for IoT systems, may lead to unpredictable response latencies or confidentiality concerns [4, 5]. Moreover, transmitting a constant stream of data to the cloud is an energy-hungry operation, which can severely affect the battery life of the device [6]. For these reasons, _extreme-edge_ (i.e., on-device) computing has grown as an increasingly popular alternative for simple ML-based tasks [5, 6]. Instead of a remote deployment on a high-end server, ML models are stored and executed directly on the device, eliminating or limiting the need to transmit the collected data. This reduces both privacy and latency concerns tied to the unreliability of the Internet connection, while also possibly leading to higher energy efficiency. Deploying ML at the edge is complicated by the tight resource budgets of IoT devices, which are mostly based on Microcontrollers (MCUs).
Therefore, simple _tree-based ensembles_ such as Random Forests (RFs) [7] and Gradient-Boosted Trees (GBTs) [8] are often regarded as a more lightweight alternative to state-of-the-art Deep Learning (DL) models in extreme-edge settings [9], since they can obtain comparable accuracy on simple tasks, with fewer parameters and operations per inference [10]. Despite these advantages, the energy costs linked with tree-ensemble inference can still be hard to sustain for battery-operated or energy-autonomous IoT nodes. Accurate ensembles often include hundreds of Decision Trees, resulting in thousands of clock cycles per inference. Several approaches have been introduced in the literature to optimize these models, generally consisting of pruning algorithms, which eliminate the least frequently used branches in each DT [11]. However, these solutions modify the ensemble structure _statically_, reducing its complexity once and for all in exchange for a possible drop in accuracy. Thus, they offer limited flexibility in tuning the model execution costs at runtime.

In this work, which extends [12], we consider the much less explored path of _runtime and input-dependent_ optimizations for tree-based ensembles, motivated by the fact that: i) a system's energy budget may vary over time (e.g., depending on battery state), and ii) not all inputs require the same computational effort to achieve an accurate classification. Indeed, most inputs are "easy", and a small subset of the DTs in the ensemble would be sufficient to classify them correctly, while saving energy. On the other hand, statically shrinking the model would cause complex inputs to be wrongly labelled, negatively affecting the accuracy. Accordingly, we study _early stopping_ policies that halt the execution of the ensemble after reaching a classification confidence target. We use those policies to dynamically adapt the amount of computation to the system's requirements and to the difficulty of the processed data (stopping early for easy inputs), saving energy compared to a static ensemble. While other works have studied dynamic inference for tree-based models [13, 14, 15], we are the first to thoroughly analyze the key issues and overheads associated with their deployment on a real-world, complex IoT platform. To this end, we design a tool that automatically generates optimized inference C code for both static and dynamic RFs and GBTs, starting from a Python model. The following are our main contributions:

* We introduce two novel early-stopping policies for dynamic inference of GBTs or RFs. Furthermore, we detail the deployment of these models on complex IoT devices, describing the required data structures and memory allocation techniques, while also exploring the effect of quantization on tree-based ensembles.
* We study the effectiveness of early-stopping on _multicore_ platforms, in which sets of DTs are evaluated in parallel, adapting our policies accordingly.
* We benchmark our dynamic models on three IoT-relevant datasets, reducing a hardware-unaware estimate of time complexity by 57% to 90% with respect to static ensembles, with less than 1% drop in accuracy on all three tasks. When deployed on GAP8, a multi-core RISC-V architecture, our dynamic ensembles reduce the energy consumption by up to 42% compared to a static RF/GBT, without losing accuracy.

The rest of the paper is structured as follows.
Section II provides the required background, and Section III reviews the state-of-the-art; in Section IV, we present the details of the proposed early-stopping policies and of our implementation of dynamic tree-based ensembles for multi-core low-power platforms; lastly, Section V reports the results of our experiments, and Section VI concludes the paper.

## II Background

### _Decision Trees_

Decision Trees (DTs) are shallow, non-parametric Machine Learning (ML) algorithms widely used for both classification and regression in supervised learning setups. At training time (also known as "growth"), these models learn a set of decision rules from the data, producing a piece-wise constant approximation of the target variable. Specifically, starting from the root, each node compares one feature (column) of the input with a learned threshold and assigns the input either to its left or right child based on the result of such comparison. This process is repeated recursively until a terminal (leaf) node is reached, which contains the output estimate. Since this work focuses on post-training and runtime optimizations of DTs, we omit a detailed description of the various fitting algorithms for DTs, referring readers to [16] for further information. Figure 1 depicts a trained DT for a classification task, showing non-terminal nodes as circles and leaf nodes as rectangles. Leaves can, in general, store either the class label or the entire array of class probabilities [9]. In case of regression, they contain the predicted scalar.

Fig. 1: Example of DT, where the root node performs a decision based on feature \(A\) and threshold \(\alpha_{A}\).

Algorithm 1 reports the inference pseudo-code. We denote as Root(\(t\)) and Leaves(\(t\)), respectively, the root and the leaves of tree \(t\). For each node \(n\), Feature(\(n\)) and \(\alpha(n)\) are the input feature used for the split and its threshold, while Right(\(n\)) and Left(\(n\)) are its descendants. Lastly, Prediction(\(n\)) extracts the output value from the reached leaf.

```
\(n=\mathrm{Root}(t)\)
while \(n\notin\mathrm{Leaves}(t)\):
    if \(\mathrm{Feature}(n)>\alpha(n)\):
        \(n=\mathrm{Right}(n)\)
    else:
        \(n=\mathrm{Left}(n)\)
\(P=\mathrm{Prediction}(n)\)
```
**Algorithm 1** Decision Tree Inference

The space complexity of a DT is \(O(2^{D})\), where \(D\) is the _depth_, i.e. the maximum-length path from the root to a leaf. The upper bound is a _perfect_ tree with \(2^{D}-1\) nodes. The time complexity is \(O(D+M)\), where \(M\) denotes the number of classes (with \(M=1\) for regression). Reaching a leaf implies, at worst, \(D\) branching operations, followed by an argmax over \(M\) elements to determine the largest output probability. Due to their lightweight branching operations and limited memory requirements, DTs represent an ideal candidate for embedded inference on constrained edge nodes [17]. Nonetheless, these methods also have some shortcomings. They are prone to overfitting and tend to introduce a bias towards the majority class in unbalanced datasets [16].
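As a concrete illustration, a minimal C sketch of Algorithm 1 might look as follows, assuming a simple pointer-based node layout (the field names are ours; the compact array-based layout actually used for deployment is described in Section IV-B1):

```c
#include <stdint.h>

/* Illustrative pointer-based node: the deployed library uses a more
   compact array layout instead (see Section IV-B1). */
typedef struct node {
    int16_t fidx;            /* index of the input feature tested here; -2 for leaves */
    float alpha;             /* learned decision threshold */
    struct node *left, *right;
    const float *leaf_probs; /* per-class probabilities, valid only for leaves */
} node_t;

/* Algorithm 1: descend from the root until a leaf is reached. */
static const float *tree_inference(const node_t *n, const float *input) {
    while (n->fidx != -2)    /* -2 marks a leaf node */
        n = (input[n->fidx] > n->alpha) ? n->right : n->left;
    return n->leaf_probs;    /* the reached leaf's class probabilities */
}
```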
### _Tree-based Ensembles_

In order to tackle these limitations, several DT ensembles have been introduced, in which multiple trees, referred to as "weak learners", perform an inference pass on the same input, before aggregating their output predictions. This leads to sharp increases in accuracy and resistance to overfitting and unbalancing issues, at the cost of increased time and memory complexity. We focus on the two most popular types of tree ensembles, i.e., RFs and GBTs.

#### Ii-B1 Random Forests

RFs [7] are sets of classification DTs, each trained on a randomly selected subset of the data (i.e., with _bagging_) and using only a random subset of the input features. This ensures diversity in the predictions and makes the RF less prone to overfitting. At inference time, each DT is applied to the input, and the output probabilities are accumulated. An argmax on the accumulated scores yields the final label. Algorithm 2 shows the corresponding pseudo-code, where TreeInference(\(t\)) denotes a DT inference (i.e., Algorithm 1).

```
\(P=\mathbf{0}_{M}\)  // array of 0s of size \(M\)
for \(t\in\mathrm{Forest}\):
    \(P=P+\mathrm{TreeInference}(t)\)
\(class=\mathrm{argmax}(P)\)
```
**Algorithm 2** Random Forest Inference

Notably, for DT implementations that only store the predicted class label in leaf nodes, the RF aggregation can only use a crisp "majority voting", rather than a more precise averaging of probability scores. This is usually detrimental to accuracy; therefore, in this work, we follow the trend of most modern libraries [18], using weak learners that predict a probability value per class.

#### Ii-B2 Gradient-Boosted Trees

The standard implementation of GBTs [8] groups DTs in sets of cardinality \(M\) called "estimators", conceptually executed in a sequence. Each DT within an estimator is a _regression_ model, trained to predict the _residual error_ obtained by all previous estimators on a specific class. At inference time, all the DTs' outputs are accumulated in a vector, which is then converted to probabilities with a formula that depends on the loss function used for fitting. As for RFs, the last step is an argmax to extract the label. Algorithm 3 shows the pseudo-code of a GBT inference, where \(t_{i}\) is the DT in charge of class \(i\) within estimator \(e\).

```
\(P=\mathbf{0}_{M}\)  // array of 0s of size \(M\)
for \(e\in\mathrm{Estimators}\):  // \(e\): array of \(M\) trees
    for \(t_{i}\in e\):
        \(P_{i}=P_{i}+\mathrm{TreeInference}(t_{i})\)
\(class=\mathrm{argmax}(\mathrm{compute\_probabilities}(P))\)
```
**Algorithm 3** Gradient Boosting Trees Inference

#### Ii-B3 Complexity Analysis

The space complexity of RFs and GBTs is \(O(N*2^{D})\) and \(O(N*M*2^{D})\), respectively, where \(N\) is the number of estimators. For RFs, each single DT is considered an estimator, while for GBTs, an estimator is a set of \(M\) trees, hence the additional multiplicative factor. Here, \(D\) denotes the maximum depth across all DTs, which is generally fixed during training. Similarly, the time complexity for inference, which is also linked with energy consumption, is \(O(N*D)\) for RFs and \(O(N*M*D)\) for GBTs.
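For illustration, a minimal C sketch of Algorithm 3 could look as follows; `tree_inference_scalar()` is an assumed helper standing in for a single regression-tree evaluation, and the flat `e * M + c` tree indexing is our own convention:

```c
/* Assumed helper: evaluates one regression tree, returning its leaf value. */
float tree_inference_scalar(int tree_id, const float *input);

/* Illustrative GBT inference (Algorithm 3): N estimators, each holding one
   regression tree per class, accumulated into the raw score vector P. */
int gbt_inference(const float *input, int N, int M, float *P) {
    for (int c = 0; c < M; c++) P[c] = 0.0f;
    for (int e = 0; e < N; e++)        /* estimators, conceptually in sequence */
        for (int c = 0; c < M; c++)    /* one regression tree per class */
            P[c] += tree_inference_scalar(e * M + c, input);
    /* The score-to-probability conversion is monotonically increasing, so
       taking the argmax directly on the raw scores selects the same class. */
    int best = 0;
    for (int c = 1; c < M; c++)
        if (P[c] > P[best]) best = c;
    return best;
}
```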
### _IoT End-node Target_

Microcontrollers (MCUs) are at the heart of most IoT end nodes, mainly due to their low production cost and high programmability. In fact, while Application-Specific Integrated Circuits (ASICs) are potentially more energy efficient, especially for ML applications, their huge Non-Recurring Engineering costs are unaffordable for most IoT solutions. In recent years, the RISC-V Instruction Set has emerged in this domain due to its versatility and licensing-cost-free open-source nature [19]. In this work, we focus on the Parallel Ultra-Low-Power Processing Platform (PULP) family of RISC-V processors [20], and specifically on the GAP8 System-on-Chip (SoC). This SoC features one I/O core paired with an 8-core cluster, all leveraging an extended RISC-V instruction set with support for common signal processing and ML operations. The cores access a two-level memory hierarchy, including a 64 kB L1 with single-clock access latency (private to the cluster's cores) and a 512 kB L2. An additional L3 off-chip memory can be equipped to extend the storage capacity further, but was not employed in this work. GAP8 also features a general-purpose Direct Memory Access (DMA) controller to transfer data between memory levels, reducing access bottlenecks and allowing the programmer to control data transfers.

### _Static and Dynamic ML Optimizations_

The problem of optimizing ML models to enable their execution on ultra-low-power edge nodes, trading off (small) accuracy drops for large latency, energy or memory savings, has been studied extensively in recent years, although with most focus being devoted to deep learning [1, 21, 22, 23, 24, 25]. One broad characterization distinguishes _static_ and _dynamic_ optimizations. The former optimize a model before deployment, either during training or post-training. Among the most well-known static approaches are quantization and pruning [22], particularly popular for DL, which respectively limit the precision of data and operations or eliminate them, to improve both memory occupation and efficiency. A fundamental limitation of static optimizations lies in their inability to adapt to _changes in external conditions_ during runtime, such as a low-battery state, or even more interestingly, to the processed input data. Naively, this could be solved by deploying multiple independent models (e.g., multiple RFs or multiple GBTs), each with a different trade-off in terms of accuracy vs energy/latency, and selecting among them at runtime. However, this approach would incur a large memory overhead, which is particularly critical for IoT end nodes. Dynamic (or _adaptive_) inference techniques, including this work, are designed to overcome these limitations. They allow the deployment of a single model able to adapt its complexity at runtime, while keeping the memory overhead under control [24, 25, 26]. In practice, a dynamic model can be _partially turned off_ when the external conditions require it, or when the processed input's difficulty allows it [22]. This partial shut-off can be realized in various ways, depending on the type of model considered [25, 22, 15]. Most dynamic optimizations are _orthogonal_ to static ones, i.e., it is possible to build a dynamic system on top of statically optimized (e.g., quantized, pruned, etc.) ML models. For dynamic ML systems that tune their complexity based on the input, a key component is a suitable _policy_, i.e., the logic that selects which parts of the model to activate for a given datum [27]. Good policies should be accurate but also incur low overheads. Section IV analyzes this aspect in detail.

## III Related Works

### _Dynamic inference_

While dynamic/adaptive approaches are increasingly popular in the literature, the great majority applies solely to DL models. Most dynamic DL works adopt an _iterative_ approach, where the same input is processed multiple times, each time activating a larger "portion" of a neural network. After each iteration, the _confidence_ of the prediction is evaluated. The process is stopped when confidence reaches a pre-defined threshold. This scheme assumes that easy inputs are the majority, thus most executions will stop at the initial iterations, reducing the average energy consumption. On the other hand, complex inputs will still be classified by the largest "version" of the model, thus avoiding accuracy drops.
Literature works differ mainly in how they decompose the model. For instance, the authors of [24, 25, 26, 28] obtain a single sub-model by selectively deactivating a subset of the layers or channels of a network, or truncating the bit-width used to represent parameters. Other works extend the approach to more than two sub-models [25, 29] or enhance the stopping criterion with class-aware thresholds [22]. Applications of adaptive inference to shallow ML classifiers are much less common. In [14], the authors propose an _early stopping_ criterion for tree-based ensembles, which models the prediction confidence after a binomial or multinomial distribution (depending on the number of classes), stopping the inference after a suitable subset of the trees has been executed. The authors benchmark their approach on seven small public datasets and a private one, showing a reduction of up to 63% in the average number of trees executed with respect to the entire ensemble. However, this approach requires the storage of large lookup tables on the order of \(O(N^{2})\), where \(N\) is the number of estimators, thus incurring a significant overhead for large ensembles. In [13], the authors leverage the partially aggregated probabilities of the already executed weak learners to determine the next tree to execute at runtime. This selection is performed according to multiple criteria: i) the current highest class probability and ii) the computational cost associated with each tree. Since weak learners within an ensemble process different features of the input datum, the inference cost is estimated taking into account not only the evaluation of the trees themselves, but also the extraction of any new feature that is not already available, i.e., that was not used by any of the previously executed weak learners. A Gaussian distribution is used to obtain a probabilistic "twin" of the classifier and determine when to trigger an early stop. The authors also introduce a dimensionality reduction technique to limit the computations required to select the best next DT. Nonetheless, the overhead of such a complex policy on an ultra-low-power device would be hard to sustain. Indeed, as stated by the authors themselves, this approach becomes convenient only in the case of complex feature extraction, which is rarely the case in IoT applications [13]. Lastly, the authors of [15] propose the closest work to ours, introducing an early stopping method named Quit When You Can (QWYC). In this approach, two probability thresholds (\(\epsilon_{-}\) and \(\epsilon_{+}\)) are extracted post-training, determining the boundaries to trigger an early stopping in binary classification tasks. At runtime, QWYC requires only two additional comparisons, introducing a minimal overhead. Additionally, the authors propose a static sorting of weak learners, in which DTs able to trigger an early stopping most frequently are executed first. However, QWYC is only evaluated on binary tasks, and no deployment results are provided.

### _Tree-Based Ensembles Libraries_

Tree-based ensembles are widely used in various machine learning applications, and several optimized implementations have been proposed. Some works focus on optimizing inference time for high-end hardware [30, 31, 32], while others specifically target IoT edge nodes [9, 33]. In the former category, the authors of [32] propose a C++-based implementation of RFs that supports both training and inference.
They utilize an object-oriented representation of the trees, storing node information and thresholds (\(\alpha\)) in separate classes. However, they do not store class logits or support quantization, making their library less compact than those designed for IoT edge nodes. The implementation in [31] mirrors the DT data structures of [18], storing information such as child indexes, class logits, alpha values, and feature indexes for each node. Quantization is not supported in this case either. [30] introduces a C++ implementation of RFs and GBTs. Single trees are implemented as classes, and nodes are represented as structures with pointers to left and right children, thresholds, and other fields. This implementation supports the integer representation of thresholds, but only applies it post-training and at 32 bits. Despite being optimized for fast inference, these approaches are not suitable for IoT node deployment as they do not prioritize memory minimization, a crucial constraint for this type of device. RF implementations tailored for RISC-V-based MCUs are presented in [9, 33]. The authors of [33] benchmark various RF implementations on a single-core RISC-V MCU called PULPissimo, testing fully unrolled trees, recursive and for-loop-based inferences. Data storage is done using arrays or structures, and compiler-level optimizations are explored, resulting in up to 4\(\times\) speed-up. In [9], the authors propose an array-based representation of trees, similar to our approach, specifically designed for the GAP8 SoC. However, our work addresses several important aspects that have been overlooked in previous implementations. First, we store the logit values instead of just the predicted class, as they are necessary for enabling dynamic inference. Second, we discuss the allocation of the tree ensemble on a multi-level memory hierarchy. Finally, we enable various memory minimization techniques such as quantization at multiple precisions and optimized storage of children indexes, as described in Section IV-B2. To the best of our knowledge, our library is the first to consider all these optimizations.

## IV Methodology

### _Early Stopping policies for Tree-Based Ensembles_

#### Iv-A1 Single-classifier Policies

So-called _iterative_ dynamic inference approaches [22], including ours, perform a sequence of classifications, either with different models or with different "versions" of the same model, deciding adaptively when to stop the process. For these methods, most early-stopping policies use the output probabilities of the \(t\)-th classifier in the sequence (\(P^{t}\)) to determine the confidence of its prediction [25, 26, 15, 27]. One of the most straightforward and computationally inexpensive approaches simply looks at the largest probability (i.e., the one associated with the most likely class). Intuitively, a large top probability indicates a confident prediction and vice versa. We denote this policy as _Max Score_ (\(s^{t}\)). While only requiring \(O(M)\) comparisons per input, with \(M\) being the number of classes, this approach does not allow for a measure of the _gap_ between the top probability and the others. For instance, a 4-class output \(P^{t}=[0.5,0.5,0,0]\) corresponds to a large value for the metric (\(s^{t}=0.5\)), far from the random guess, but the classification is clearly highly uncertain, since \(P^{t}_{0}=P^{t}_{1}\). In this case, using \(s^{t}\) might mislead the early stopping into triggering too early, negatively affecting the accuracy.
A second policy that tries to overcome this issue is the _Score Margin_ (\(sm^{t}\)) [25, 27], which also considers the second largest probability in \(P^{t}\) and is computed as follows:

\[sm^{t}=\max(P^{t})-\max_{\rm 2nd}(P^{t}) \tag{1}\]

While having the same \(O(M)\) theoretical complexity, \(sm^{t}\) requires approximately twice as many operations as \(s^{t}\). On the other hand, it is generally more robust. In the previous example, while \(s^{t}=0.5\) may lead to wrong results, \(sm^{t}=0\) clearly indicates that the classifier is not confident about its prediction, ensuring that the early stopping is not triggered. Accordingly, \(sm^{t}\) has become the most popular choice in recent literature [25, 26, 27]. At runtime, \(s^{t}\) or \(sm^{t}\) is computed after each iteration and compared with a user-defined threshold \(t_{h}\). Using \(sm^{t}\) as an example, the early stopping decision is formulated as:

\[P=\begin{cases}P^{0}\text{ if }sm^{0}\geq t_{h}\\ P^{1}\text{ if }sm^{0}<t_{h}\wedge sm^{1}\geq t_{h}\\ P^{2}\text{ if }sm^{0}<t_{h}\wedge sm^{1}<t_{h}\wedge sm^{2}\geq t_{h}\\ ...\\ P^{N-1}\text{ if }sm^{i}<t_{h},\,\forall i<N\end{cases} \tag{2}\]

where \(P\) is the final array of probabilities, which will be used to classify the input. The energy versus accuracy trade-off is controlled by \(t_{h}\), whose value alters the number of classifiers executed on average. Namely, a larger \(t_{h}\) results in a more conservative system (giving higher priority to accuracy), and vice versa. Therefore, the threshold can be tuned at runtime to select different operating points based on external conditions, e.g., on battery state. The main advantage of these confidence metrics is their low computational cost, while also being accurate as long as the classifiers are well-calibrated [34]. Notably, in the case of a binary classification, \(s^{t}\) and \(sm^{t}\) become equally informative, since the second largest probability is just the complement of the largest.
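As a reference for the aggregated variants introduced next, a minimal C sketch of the two confidence metrics and of the threshold test of Eqs. (1)-(2) could look as follows (function names are ours):

```c
/* Max Score s^t: the largest of the M class probabilities in P. */
float max_score(const float *P, int M) {
    float s = P[0];
    for (int c = 1; c < M; c++)
        if (P[c] > s) s = P[c];
    return s;
}

/* Score Margin sm^t, Eq. (1): gap between the two largest probabilities. */
float score_margin(const float *P, int M) {
    float top1 = -1.0f, top2 = -1.0f;   /* largest and second largest */
    for (int c = 0; c < M; c++) {
        if (P[c] > top1)      { top2 = top1; top1 = P[c]; }
        else if (P[c] > top2) { top2 = P[c]; }
    }
    return top1 - top2;
}

/* Eq. (2): stop the cascade once the margin clears the threshold t_h. */
int should_stop(const float *P, int M, float t_h) {
    return score_margin(P, M) >= t_h;
}
```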
#### Iv-A2 Aggregated Scores Policies

In their usual implementation, the metrics introduced in Section IV-A1 are evaluated using only the probabilities produced by the _last executed classifier_ \(t\), ignoring the outputs of previous models in the cascade [25, 27]. This approach makes sense under the assumption that each new model is significantly more accurate than the previous ones, i.e., that \(P^{t}\) is a much more reliable estimate of the true output probabilities with respect to \(P^{t-1}\). However, for ensemble models like RFs and GBTs, all weak learners (DTs) have comparable predictive power. It then becomes sub-optimal to decide for early stopping based only on the latest executed tree, ignoring the output of all previous ones. In light of this, we propose two extensions of the policies described in Section IV-A1, designed so that early stopping is triggered based on the _accumulated predictions_ of all weak learners already executed (\(P^{[1:t]}\)). In other words, we take a decision based on the aggregated prediction of the "smaller ensemble" composed of all already executed DTs. The effectiveness of our approach lies in the fact that, for easy inputs, the accumulated probabilities quickly skew toward a single class after executing a small number of weak learners. Then, it becomes highly unlikely or even mathematically impossible for the leftover models to overturn the prediction, making their execution pointless to improve accuracy.

Mathematically, for an RF ensemble, we define the partial output after executing \(t\) weak learners as:

\[P^{[1:t]}=\sum_{i=1}^{t}P^{i} \tag{3}\]

We then define the Aggregated Max Score (\(S^{t}\)) policy as:

\[S^{t}=\max(P^{[1:t]}) \tag{4}\]

and the Aggregated Score Margin (\(SM^{t}\)) as:

\[SM^{t}=\max(P^{[1:t]})-\max_{\rm 2nd}(P^{[1:t]}) \tag{5}\]

The corresponding early stopping policies are obtained by replacing, in Eq. 2, the array of probabilities of the last executed tree \(P^{t}\) with the one of _all_ executed trees \(P^{[1:t]}\), and the score \(sm^{t}\) with its aggregated versions \(S^{t}\) or \(SM^{t}\). For GBTs, the formulation is similar except for one key difference. As mentioned in Section II-B, each estimator in a GBT is a set of _regression_ trees, whose outputs are converted to probabilities with a computationally expensive operation that depends on the training loss. Incurring the associated overheads after evaluating each estimator in order to extract \(P^{[1:t]}\) could outweigh the benefits of early stopping. Thus, we leverage the fact that the conversion formula is _monotonically increasing_ [18], and prefer to estimate confidence directly on the raw predictions. Our results of Section V show that the proposed aggregated scores policies obtain superior energy versus accuracy trade-offs with respect to state-of-the-art solutions that only account for the last learner.

Figure 2 shows a high-level overview of the adaptive inference mechanism proposed in this work, applied to an RF with \(N=3\), \(M=3\), \(D=3\), and using \(SM^{t}\) as confidence metric. We also assume a batch \(B=1\) (more details on this in Section IV-C2). Orange nodes represent the decision path taken in each tree for a hypothetical input. After each weak learner, \(SM^{t}\) is computed on the accumulated probabilities (\(P^{[1:t]}\)) and compared with the user-defined threshold \(t_{h}\). As soon as \(SM^{t}>t_{h}\), the process is stopped, and \(P^{[1:t]}\) undergoes an argmax to extract the final predicted class \(C_{i}\).

Fig. 2: A dynamic RF with \(N=3\), \(M=3\) and \(D=3\). In case early stopping is not triggered, the obtained output is identical to a static RF.
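A minimal single-core C sketch of this aggregated mechanism for an RF might look as follows (our own illustration, reusing the `score_margin()` sketch above; `tree_probs()` is an assumed helper returning one weak learner's class probabilities, and the threshold is interpreted directly on the accumulated, unnormalized scores):

```c
/* Assumed helper: class-probability vector of the t-th weak learner. */
const float *tree_probs(int t, const float *input);
float score_margin(const float *P, int M);  /* sketch from Section IV-A1 */

/* Aggregated Score Margin early stopping (Eqs. (3) and (5)): the margin is
   evaluated on the running sum P^{[1:t]}, not on the last tree alone. */
int rf_dynamic_inference(const float *input, int N, int M, float t_h, float *P) {
    for (int c = 0; c < M; c++) P[c] = 0.0f;
    for (int t = 0; t < N; t++) {
        const float *pt = tree_probs(t, input);
        for (int c = 0; c < M; c++) P[c] += pt[c]; /* Eq. (3) */
        if (score_margin(P, M) >= t_h) break;      /* SM^t of Eq. (5) vs. t_h */
    }
    int best = 0;                                  /* final argmax */
    for (int c = 1; c < M; c++)
        if (P[c] > P[best]) best = c;
    return best;
}
```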
### _Deploying tree-based ensembles on MCUs_

In this section, we describe our efficient library for static and dynamic RF/GBT inference on multi-core IoT end-nodes, such as our target GAP8, introduced in Section II. Notably, an RF library for GAP8 has recently been proposed in [9]. However, its data structure is unsuitable for dynamic inference, since it stores in the leaves only the most likely class rather than the full array of probabilities, making it impossible to derive confidence metrics. To our knowledge, there are no open-source GBT libraries for multi-core RISC-V MCUs. For these reasons, we extend our previous in-house tool for the automated generation of optimized RF inference code [12], generalizing it to also support GBTs and to handle multi-core parallelism and a complex memory hierarchy. The tool outputs C code, generated with template programming starting from a Python model of the ensemble, and depending on its hyper-parameters (\(N\), \(M\), \(D\), etc.)1. The next sections describe the generated data structures (Section IV-B1), the memory allocation strategy (Section IV-B2) and the quantization employed to support our FPU-less target (Section IV-B3). Note that while this work focuses on dynamic tree ensembles, our tool can also efficiently implement static models.

Footnote 1: The code is available open-source at: [https://github.com/eml-eda/eden](https://github.com/eml-eda/eden)

#### Iv-B1 Ensembles structure

Our data structures take inspiration from the open-source OpenCV [32] library, with several modifications to make them more efficient for low-power MCUs. Specifically, we replace lists with C arrays, saving memory and improving data locality while also making the structure more compact. Figure 3 shows the three main structures for an RF with \(M=3\) classes. The NODES array is composed of C "structs", representing the information of all DT nodes. Each node has three fields:

* \(fidx\): storing the index of the input feature considered by the node. At inference time, it is used to select the input value compared with the threshold \(\alpha\) to determine the next visited node. For leaves, this field is set to the special value -2 for compatibility with [18].
* \(\alpha\): the threshold compared against the input value at position \(fidx\). If the latter is smaller or equal (larger) than \(\alpha\), we visit the left (right) child next.
* \(right\): the offset in NODES between the current node and its right child. For terminal nodes, we reuse this field to store a row index in the LEAVES matrix, holding the class probabilities assigned to samples reaching that leaf.

The ROOTS array stores the indexes of the root nodes of each tree in NODES, allowing a fast iteration among the trees. Lastly, as mentioned, the LEAVES matrix stores the class probabilities of all leaves. The inference pseudo-code for a single tree, in the most general case of a multi-class RF, is shown in the "run_tree" function of Algorithm 4. Note that we do not store the index of the left child of a node, to save memory. Instead, we organize our data structure so that the left child for all non-leaf nodes is always (implicitly) the next element in the NODES array. This is obtained by generating the structure during a _pre-order_ visit of each tree. The special value in \(fidx\) indicates when a leaf has been reached, thus being used as a loop exit condition. \(C\) denotes the total number of cores available during the inference, which will be discussed in detail in Section IV-C. We further optimize our data structures when working with _binary_ RF classifiers or GBTs. In the first case, each leaf needs only to store a single class probability (since \(P_{1}=1-P_{0}\)). Thus, we can save this value directly in the \(\alpha\) field of the leaf, completely removing the LEAVES array. Similarly, GBT regression trees require the storage of a single value per leaf, allowing us to apply the same optimization.

Fig. 3: C data structures of our tree ensemble library in the case of an RF. The arrows represent the inference steps for the first tree in Figure 2.
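The listing of Algorithm 4 is not reproduced here; a minimal single-tree sketch consistent with the layout described above (our reconstruction, with field and array names following Figure 3, illustrative 16-bit types, and \(M\) passed explicitly for self-containment) could look as follows:

```c
#include <stdint.h>

#define LEAF_FIDX (-2) /* special fidx value marking a leaf */

typedef struct {
    int16_t fidx;  /* input feature index, or LEAF_FIDX for leaves */
    int16_t alpha; /* quantized threshold (bit-width per Section IV-B3) */
    int16_t right; /* offset to the right child; LEAVES row index in leaves */
} node_t;

/* Single-tree inference in the spirit of "run_tree" (Algorithm 4): the left
   child is implicitly the next array element, thanks to the pre-order layout. */
void run_tree(int t, int32_t *P, const int16_t *INPUT, const int32_t *ROOTS,
              const node_t *NODES, const int16_t *LEAVES, int M) {
    const node_t *n = &NODES[ROOTS[t]];
    while (n->fidx != LEAF_FIDX) {
        if (INPUT[n->fidx] > n->alpha)
            n += n->right; /* stored offset to the right child */
        else
            n += 1;        /* implicit left child: next node in pre-order */
    }
    const int16_t *leaf = &LEAVES[n->right * M]; /* 'right' reused as row index */
    for (int c = 0; c < M; c++)
        P[c] += leaf[c];   /* accumulation: a critical section in the
                              multi-core version of Section IV-C */
}
```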
#### Iv-B2 Memory Allocation Strategy

Modern IoT end nodes, including our target, have complex multi-level memory hierarchies. In particular, many of these devices use software-controlled scratchpad memories rather than hardware caches, coupled with Direct Memory Access (DMA) controllers to move data between, for instance, a smaller but faster L1 memory and a bigger but slower L2 memory. With respect to using hardware caches, this approach requires more effort on the software side, but results in smaller and more power-efficient hardware, which is crucial for IoT nodes, while also possibly providing performance benefits for applications characterized by predictable and regular memory access patterns, such as many ML models. Examples of these devices are found both in academia [35] and in commercial products [20, 36]. Maximizing L1 accesses is, therefore, imperative to reduce inference latency and energy. The problem is not trivial, since ensembles achieving high accuracy, even for relatively simple tasks such as those considered in Section V, are generally too large to fit entirely in L1 (GAP8, for instance, has a 64 kB L1). One solution would be to employ a _tiling_ approach, dynamically loading to L1 only the data required to execute a small chunk of computation (e.g., a single tree inference). This is the approach generally taken by DL libraries for edge devices [37]. The regularity of neural network computations makes tiling a profitable option because: i) data portions needed in L1 can be statically determined at compile time, and ii) once loaded, _all_ data elements will be accessed and reused multiple times, amortizing the transfer overheads. On the contrary, for tree-based ensembles, the access ratio of the NODES structure is logarithmic, requiring the transfer of up to \(2^{D}\) nodes per tree, but accessing at most \(D\) elements, with at most one access per node. Thus, the data transfer overhead outweighs the benefits of having node information in L1, making tiling detrimental. Similar considerations apply to the LEAVES matrix, whose rows are accessed with an increasing yet randomly strided and sparse pattern (1 every \(2^{D}\) rows in the worst case). In contrast, the input sample array (INPUT in Algorithm 4) is reused by all DTs in the ensemble, and multiple nodes within each tree might access the same element. Similarly, the array of accumulated outputs (P in Algorithm 4) is accessed densely and with a regular pattern at the end of each DT inference. We define a static (compile-time) memory allocation strategy for our tree ensemble code generator based on these considerations. We load INPUT, P, and the ROOTS array (whose size is generally negligible, i.e., less than 1 kB) entirely in L1. We then compute the leftover L1 memory and check if the LEAVES or NODES structures can fit in the remaining space, prioritizing the former. When this happens (for small ensembles), all required structures are stored in L1. Otherwise, NODES and LEAVES are directly accessed from L2. Lines 14 and 15 of Algorithm 4 summarize the allocation scheme. We verified experimentally that this produces a faster and more efficient inference than tree-wise tiling. Note that this proposed memory allocation strategy is valid for any device characterized by a multi-level memory and a software-managed caching mechanism. Changing the deployment target only impacts the dimension of L1, which has to be specified as an input argument for our allocation strategy. On the contrary, SoCs equipped with hardware-controlled caches can skip this memory placement step.
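As an illustration, this placement rule reduces to a few compile-time size checks; a sketch under our own naming could be:

```c
#include <stddef.h>

typedef enum { MEM_L1, MEM_L2 } mem_t;

/* Our sketch of the static placement rule: INPUT, P and ROOTS always go to
   L1; LEAVES and NODES are promoted only if they fit, prioritizing LEAVES. */
void plan_allocation(size_t l1_size, size_t input_sz, size_t p_sz,
                     size_t roots_sz, size_t leaves_sz, size_t nodes_sz,
                     mem_t *leaves_mem, mem_t *nodes_mem) {
    /* INPUT, P and ROOTS are small and assumed to always fit in L1. */
    size_t left = l1_size - (input_sz + p_sz + roots_sz);
    *leaves_mem = MEM_L2;
    *nodes_mem = MEM_L2;
    if (leaves_sz <= left) { *leaves_mem = MEM_L1; left -= leaves_sz; }
    if (nodes_sz <= left)  { *nodes_mem = MEM_L1; }
}
```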
#### Iv-B3 Data Quantization

One of the most promising approaches to make ML models compatible with edge devices is quantization, an optimization which consists of reducing the precision used to store inputs and parameters [23]. This reduces memory occupation and improves speed and energy efficiency on IoT end-nodes, where FPUs are either slower and more energy-hungry than integer ALUs, or completely absent, as in the case of GAP8, causing floating-point operations to be approximated with expensive software routines. While extensively studied for DL [23], quantization is much less explored for tree ensembles. For RF/GBT classifiers, the valid targets for quantization are: i) the input array, ii) the internal comparison thresholds of each DT node (\(\alpha\)), and iii) the output probabilities. Since i) and ii) are directly compared, they should be quantized with the same precision and format. Input and threshold quantization can be introduced at training time (so-called _quantization-aware training_) by simply converting inputs to integers before starting the process. The comparison thresholds generated by the training framework [18] will still be floats in general. However, given that inputs are integers, it can be easily seen that if the thresholds are quantized by simply truncating their fractional part, the nodes' decisions will not be altered. In contrast, our tool quantizes the leaf probabilities after training (a.k.a., _post-training quantization_), statically computing the range of the values that the accumulated probabilities can assume, and using it to determine the quantizer parameters. In both cases, we use a symmetric min-max quantizer [22], computed with the following equations:

\[x_{int}=round\left(\frac{x\cdot 2^{bits-1}}{\max(|x|)}\right) \tag{6}\]

\[x_{Q}=clamp(-2^{bits-1},2^{bits-1}-1,x_{int}) \tag{7}\]

where \(x\) is the floating point value, and the max is computed over all training samples. The \(clamp\) is necessary for outliers that fall outside the training range and is defined as follows:

\[clamp(a,b,x)=\begin{cases}a&\text{if }x\leq a\\ x&\text{if }a\leq x\leq b\\ b&\text{if }x\geq b\end{cases} \tag{8}\]

We find that the accuracy loss when quantizing inputs, thresholds and outputs is often negligible. The detailed trade-off between quantization bit-width (8, 16, 32 bits for inputs, thresholds, and leaves) and accuracy is analyzed in Section V-C1.
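A minimal C sketch of the quantizer of Eqs. (6)-(8) could be (our own illustration; `max_abs` is \(\max(|x|)\), precomputed over the training samples):

```c
#include <math.h>
#include <stdint.h>

/* Eq. (8): saturate x to the representable range [a, b]. */
static float clampf(float a, float b, float x) {
    return x <= a ? a : (x >= b ? b : x);
}

/* Symmetric min-max quantizer, Eqs. (6)-(7). */
int32_t quantize(float x, float max_abs, int bits) {
    float q = ldexpf(1.0f, bits - 1);         /* 2^(bits-1) */
    float x_int = roundf(x * q / max_abs);    /* Eq. (6) */
    return (int32_t)clampf(-q, q - 1.0f, x_int); /* Eq. (7) */
}
```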
### _Multi-core inference_

#### Iv-C1 Static ensembles

To parallelize static RFs/GBTs on multi-core IoT platforms, we use the approach proposed in [9] as a starting point. Figure 4 schematizes a static inference on \(C\) cores (each represented by a different color), which corresponds to the pseudo-code of Algorithm 4. DTs are statically assigned to a core based on their index in the ensemble. Mutually exclusive access, indicated by a lock (critical_section in Alg. 4), is required when accumulating probabilities on the shared output vector P (_Acc._ in the Figure). Finally, a barrier has to be inserted after the parallel execution of trees, before the final argmax computation, performed only by Core0. Notably, this scheme does not enforce a specific order on the DT executions in different cores. Global synchronization is required only at the end. In the case of GBTs, DTs belonging to different estimators can also run in parallel.

#### Iv-C2 Dynamic ensembles

Previous dynamic inference approaches for tree ensembles [13, 14, 15] evaluate the early-stopping policy (Section IV-A) after executing each DT. However, as shown in our previous work [12], this is not necessarily optimal. Evaluating the policy more rarely (thus reducing the associated overheads due to its computation, i.e., Equation 2) might give benefits superior to the occasional wasted energy for executing "useless" extra DTs. This becomes even more relevant in multi-core setups [14], where performing a stopping decision after each DT is highly sub-optimal. In fact, \(C\) trees are concurrently being executed at all times (with \(C=\) number of cores), requiring, on average, a similar execution time. Thus, taking an 8-core system as an example, halting after either 10 or 16 DTs consumes almost the same amount of time and energy. However, the first option may result in a less informed decision, as it disregards the output of the remaining 6 trees, which is likely already available or about to be produced. Notably, these considerations are ignored by all previous works, which assume a purely sequential computation model [13, 14, 15]. In contrast, we follow these observations and propose a configurable _batching_ mechanism, in which early stopping is considered only after all cores have executed their next tree or estimator. Figure 5 and Algorithm 5 schematize this approach for an RF. In the pseudo-code of Algorithm 5, each iteration of the outer loop in lines 4-11 corresponds to one of these macro-steps, whose maximum number is computed statically and inserted in the POLICY_TRIGGERS constant. Lines 12-15 handle the final "left-over" DTs when the total \(N\) is not a multiple of the batch size. The "policy" function in line 9 represents the evaluation of \(S^{t}\) or \(SM^{t}\), whose value is compared with \(t_{h}\) to set the stop flag. Compared to the execution of static ensembles, an additional barrier is inserted after each batch of \(B\) trees, allowing the execution of the early-stopping policy, which is carried out by Core0. When the policy determines that inference should be halted, execution jumps directly to the final argmax. Notably, the added barriers (lines 7, 10), the computation of the policy, and its comparison with the exit threshold (line 9) cause a latency and energy overhead in the dynamic ensemble, which however is often minimal, as detailed in Sec. V.

```
 1: P = {0};
 2: t = 0;
 3: stop = 0;
 4: for(int bt = 0; bt < POLICY_TRIGGERS && !stop; bt++) {
 5:     for(int i = 0; i < B; i++)
 6:         run_tree(t++, P, INPUT, ROOTS, NODES, LEAVES);
 7:     barrier();
 8:     if(core_id == 0)
 9:         stop = policy(P) > th;
10:     barrier();
11: }
12: if(!stop) {
13:     while(t < N)
14:         run_tree(t++, P, INPUT, ROOTS, NODES, LEAVES);
15: }
16: barrier();
17: if(core_id == 0)
18:     res = argmax(P);
```
**Algorithm 5** Dynamic multi-class RF inference pseudo-code

We set the batch size \(B\) equal to the available cores \(C\) to ensure that all hardware resources are fully used. In the case of RFs, where an estimator corresponds to a single DT (the total number of trees is identical to \(N\), i.e., the number of estimators), we perform an early stopping decision once every \(B\) executed trees. For GBTs, instead, early stopping decisions can only be performed after executing an entire estimator, i.e., a group of \(M\) trees (the total number of trees in the ensemble is \(N\cdot M\), with \(N\) being the number of estimators, and \(M\) the number of classes, see Section II).

Fig. 4: Multi-core inference for a static tree ensemble.

Fig. 5: Multi-core inference for a dynamic tree ensemble.
## V Results

### _Target Benchmarks_

We benchmark our work on three diverse IoT-relevant tasks: surface Electromyography (sEMG)-based hand gesture recognition, hard-drive failure detection, and Human Activity Recognition (HAR) based on accelerometer data.

For sEMG-based gesture recognition, we employ the **Ninapro DB1** [38], which encompasses EMG signals collected from 27 healthy subjects while performing hand movements. We follow the experimental setup described in [38], using the same pre-processing and data split, considering 14 hand movement classes, and a 10-channel EMG signal as input. We use a window of 150 ms, collected at 100 Hz, thus obtaining a dataset with \(\approx\)207k elements. As in most state-of-the-art works [38], we use a patient-specific training procedure, i.e., we train separate models for each subject in the dataset, using different recording sessions as training, validation and test sets. For the sake of space, we show graphical results only for the first two subjects (S1 and S2), while reporting aggregate metrics over all 27 subjects in tables.

For hard-drive failure detection, we analyze the **Backblaze** [39] dataset, containing 19 Self-Monitoring Analysis and Reporting Technology (SMART) features collected from hard disks by different vendors during their lifetime in a data center from 2014 to 2019. The goal is to predict whether a disk will experience a failure in the next 7 days. For this dataset, we mirror the setup shown in [2] in terms of data split, preprocessing, and feature selection. Namely, we feed models with 90-day windows of the 19 features (each feature has 1 sample per day), obtaining a dataset with \(\approx\)707k elements. We use 10% of the training data as validation set.

Lastly, we consider the **UniMiB-SHAR** [40] HAR dataset, featuring 3-axis acceleration signals collected from smartphone accelerometers during 9 different daily-life activities (e.g., walking, standing, etc.) and 8 different kinds of falls. The sampling frequency is 50 Hz, and the authors provide the data already pre-processed in windows of 151 samples (\(\approx\)3 s) centered around acceleration peaks. The dataset contains around 11k elements. We benchmark our models on the AF-17 task [40], which considers all 17 classes without subject-specific training, using the default pre-processing and windowing. Samples are divided into training, validation, and test datasets with a 60%, 20%, 20% split.

The tasks involve different kinds of input signals, input dimensions (from 150 ms in NinaPro to 90 days in BackBlaze), and numbers of classes (from 2 in BackBlaze to 17 in UniMiB-SHAR), leading to RF/GBT models whose complexity spans over 3 orders of magnitude. Due to the unbalanced nature of the training sets, we augment them by oversampling the minority classes. In the following sections, we report our results using the top-1 macro average accuracy (also known as balanced accuracy, i.e., the average of each class recall) for Ninapro and UniMiB-SHAR and the F1-score for Backblaze.

### _Experimental Setup_

All ensembles have been trained using Python 3.8 and the Scikit-Learn [18] library. To build our comparison baseline, we explore with grid search all static RFs and GBTs with the following combinations of hyper-parameters: depths in the range [1,15], number of estimators in [1,40], and input and leaves quantization to 8/16/32 bits, for a total of 5400 architectures tested for each dataset and model type.
For Ninapro, given the personalized training, we repeated the grid search for each of the 27 subjects. For RFs on Backblaze, we instead fixed the maximum depth of the ensembles to 38 and limited the number of estimators to less than 30, following the reference work of [2]. After each search, we excluded static models too large to fit the 512 kB L2 memory of GAP8, and selected the top-scoring one on the validation set as the starting point to derive our dynamic model. Section V-C reports the results of this grid search, in which we estimate time complexity using a hardware-agnostic metric, i.e., the average number of visited tree nodes per inference. Sections V-D to V-F analyze dynamic solutions: in Sec. V-D, we report hardware-agnostic results with all dynamic policies; in Sec. V-E, we discuss the impact of execution order on dynamic ensembles; lastly, in Section V-F, we report the results obtained by deploying all the dynamic and static models that are Pareto optimal in terms of scoring metric (Accuracy or F1) versus memory or time complexity. All deployments use our automated code generation tool, and target the GAP8 [20] SoC introduced in Section II-C. We set both the cluster and the fabric controller clock frequencies to 100 MHz. The inference runs entirely on the cluster cores.

### _Static Inference Results_

In this section, we report the results of the grid search for static RFs and GBTs on the three target tasks, with the goal of analyzing the trade-offs between the two types of models.

#### V-C1 Ensembles quantization

Figure 6 shows the models on the score vs. memory occupation Pareto front, extracted from the validation set at different bit-widths for inputs/thresholds (\(B_{input}\)) and outputs (\(B_{leaves}\)) and scored on the test sets. For all datasets, we notice that points obtained with 8-bit output quantization are never on the global Pareto front for GBTs, with Backblaze models incurring an F1 drop so large that they are omitted from the figure for easier visualization. This is probably due to the wider ranges of the GBT outputs. On the contrary, 8/16-bit inputs and 16-bit outputs generally achieve the best memory versus score trade-offs. Concerning RFs, fewer bits are generally required, since leaf nodes store probabilities, with narrower ranges. In this case, 8-bit quantization is often enough, both for inputs and outputs. The only exception is represented by Backblaze, where 8-bit quantization causes sharp decreases in F1 score. For both types of models, we observe that 32-bit ensembles are rarely on the Pareto fronts. We attribute this behavior to the combination of: i) the regularization effect of quantization, which, as already observed in Neural Networks [23], can lead to better generalization, and ii) the significant increase in memory that 32-bit models incur, rapidly exceeding the L2 of the target device.

#### V-C2 RF vs GBT comparison

Fig. 6 also shows the global static Pareto fronts in the scoring metric versus memory occupation space. Specifically, we extract the Pareto points from the validation set, reporting then their score on the test set. On all datasets, we observe that for lower memory footprints (less than 40/150 kB, depending on the task), GBTs tend to outperform RFs, achieving higher accuracy for the same space occupation. Vice versa, RFs outperform GBTs under less tight constraints, while also reaching the highest score values for models fitting GAP8's memory on all tasks.
On the Ninapro DB1 dataset, for S1, RFs reach up to 77.05% balanced accuracy (vs 72.64% for GBTs), while for S2, they achieve 74.98% accuracy (vs 69.56%). On Backblaze, RFs achieve a maximum F1 score of 79%, compared to the 66% achieved by the best GBT model. Lastly, for UniMiB-SHAR, RFs obtain a 2% higher maximum accuracy (67% vs 65%), but GBTs perform significantly better in the low-memory regime (e.g., the smallest GBT reaching 52% requires 4x less memory than the smallest RF achieving the same score). This trend is a direct effect of the structure of the two model types: GBTs do not need an external leaves array to store the probabilities of all the output classes, as discussed in Section IV, thus requiring less memory. This saving is more evident for smaller models, in which the LEAVES array size is comparable to that of the NODES structure.

Fig. 6: Pareto fronts of score vs. memory occupation for ensembles with quantized inputs (\(B_{input}\)) and outputs (\(B_{leaves}\)), extracted on the validation set and scored on the test set.

Fig. 7 shows the trade-off between the scores achieved by the models and the _number of visited nodes_ per inference, averaged over all input samples. We use this metric as an estimate of the time and energy complexity of an inference; it is more accurate than just counting the number of DTs, since our models also have varying depths. The Pareto-optimal models shown in this figure are in general distinct from those in Figure 6. In contrast to memory occupation, static RFs always outperform static GBTs in terms of time complexity, achieving gains ranging from 2\(\times\) to 45\(\times\) at iso-accuracy for Ninapro, and from 4\(\times\) to 30\(\times\) for UniMiB-SHAR. This is because each GBT estimator includes one regression tree per class (vs. a single classification tree for RFs). Only on hard-disk failure detection, which is indeed the task with the smallest number of classes (two), does the trend resemble the memory one. Overall, these results show that while GBTs are generally outperformed in terms of inference time complexity, they are competitive for small memory budgets. For this reason, we explore dynamic inference for both types of ensembles.

### _Dynamic Inference: Hardware-agnostic Results_

In this section, we discuss the results obtained with our proposed dynamic inference policies (Agg. Max and Agg. Score-Margin), comparing them against the static models discussed in the previous section and against three state-of-the-art dynamic policies, namely Max, Score-Margin, and Quit-When-You-Can (QWYC). Specifically, the comparison follows the setup described in Section V-B, and is reported in terms of accuracy versus the average number of visited nodes as a proxy for time/energy complexity, since the goal of early-stopping adaptive models is precisely to reduce the average latency or energy consumed per input.

Table I reports the details of the models used as starting points to construct dynamic ensembles, i.e., the rightmost models of the static Pareto curves of Fig. 7. For each model, we report the maximum depth of the trees, the number of estimators, the average number of visited nodes on the test set, the quantization bit-widths used for inputs/thresholds (\(B_{input}\)) and leaf probabilities (\(B_{leaves}\)), the score (balanced accuracy or F1), and the memory occupation. For Ninapro, we report the average results over the 27 subjects, with the standard deviation in square brackets.
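For scikit-learn ensembles, the visited-nodes metric can be measured directly from the trees' decision paths. A small self-contained sketch (our illustration, assuming an RF-style ensemble with one classification tree per estimator; for GBTs one would iterate over all per-class regression trees instead):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier


def avg_visited_nodes(forest, X):
    """Average number of tree nodes touched per inference, summed over all
    estimators: a hardware-agnostic proxy for latency and energy."""
    total = 0
    for est in forest.estimators_:
        # decision_path returns an (n_samples, n_nodes) indicator matrix
        # whose row sums are the root-to-leaf path lengths
        total += est.decision_path(X).sum()
    return total / X.shape[0]


X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=10, max_depth=6, random_state=0).fit(X, y)
print(avg_visited_nodes(rf, X))
```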
Note that the best score is achieved with different depths, numbers of trees, and quantization precisions for different tasks and ensemble types, demonstrating that all parameters explored during the grid search are critical.

Figure 7 compares eight different families of models: on the top row, we compare static GBTs (blue curve) with 6 different adaptive policies, while on the bottom one, we repeat the comparison for RFs. All the adaptive Pareto curves are obtained by applying an early-stopping policy on top of the "seed" models from Table I. All points come from the same seed, simply changing the early-stopping threshold \(t_{h}\) (whereas, for static models, each point is an entirely different RF/GBT model). We report the results of five different dynamic inference policies. Namely, we consider the state-of-the-art Max and Score-Margin scores from Section IV-A in their native form, which uses only the probabilities of the latest executed classifier (\(s^{t}\) and \(sm^{t}\), labelled "Max" and "Score-Margin" respectively), and in our proposed aggregated variants (\(S^{t}\) and \(SM^{t}\), labelled "Agg. Max" and "Agg. Score-Margin"). Further, we also consider the state-of-the-art QWYC adaptive policy [15], which, however, only applies to the binary hard-drive failure detection task. In these experiments, we do not consider batching yet.

On the Ninapro dataset, with dynamic GBTs using our proposed \(SM^{t}\) policy, we are able to consistently reduce the number of visited nodes with respect to static models achieving the same score. The maximum reduction occurs at 71% (65%) balanced accuracy for S1 (S2), respectively, where we reduce the number of visited nodes by 54% (51%). Conversely, the state-of-the-art adaptive policies fail to achieve the same score, leading to a reduction in accuracy of 9% (14%). Dynamic RFs with \(SM^{t}\), instead, obtain their maximum reduction at 73% (74%) balanced accuracy, cutting the number of visited nodes by 83% (45%) on the two displayed subjects. Also in this case, the best pre-existing policy, the Score-Margin, obtains a very low accuracy of 59% (56%). Over all subjects in the dataset, we achieve an average maximum reduction of 58.5 [\(\pm 9\)]% with GBTs and 58.8 [\(\pm 1.2\)]% with RFs with respect to static models at iso-score.

On the Backblaze dataset, the maximum gain is \(88\%\) for GBTs and \(69\%\) for RFs, obtained at 66% and 73% F1 score, respectively. In this case, the QWYC approach is the best one, given its double-threshold mechanism, which increases its accuracy when a low number of DTs is employed. On the other hand, the other pre-existing policy (the Max) leads to significant score drops, of 6% and 22% respectively, with respect to the seed ensemble.
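To make the aggregated policies concrete, the sketch below (our illustration, not the deployed C code) shows \(SM^{t}\)-based early stopping for a single input. The `batch` argument anticipates the batched policy evaluation used at deployment time (Section V-F); setting `batch=1` corresponds to the hardware-agnostic experiments of this section, and the threshold \(t_{h}\) is the single knob that traces each adaptive Pareto curve.

```python
import numpy as np


def adaptive_predict(prob_per_tree, t_h, batch=1):
    """Early-stopping inference for one input. prob_per_tree is an
    (N_trees, M) array holding each tree's class-probability vector
    (the RF formulation; for GBTs each estimator contributes one scalar
    per class instead). After every `batch` trees we compute the
    aggregated score margin SM^t, i.e., the difference between the two
    largest entries of the running mean, and stop once it exceeds t_h."""
    agg = np.zeros(prob_per_tree.shape[1])
    for t, p in enumerate(prob_per_tree, start=1):
        agg += p
        if t % batch == 0 or t == len(prob_per_tree):
            top2 = np.sort(agg / t)[-2:]   # two largest aggregated scores
            if top2[1] - top2[0] > t_h:    # aggregated score margin SM^t
                break
    return int(np.argmax(agg)), t          # prediction, trees executed
```

The Agg. Max policy \(S^{t}\) is obtained by replacing the margin test with a test on the largest aggregated score alone.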
Lastly, for UniMiB-SHAR, we reduce the number of visited nodes compared to an equally accurate static model by up to \(58\%\) and \(41\%\) for GBTs and RFs, respectively, at 63% and 66% balanced accuracy, outperforming the best existing adaptive policy (the Score-Margin), which achieves a maximum accuracy of 55% and 52%.

TABLE I: Static RFs/GBTs used as a starting point to construct dynamic ensembles.

| **Dataset** | **Depth** | **#VisitedNodes** | **#Estimators** | \(B_{input}\) | \(B_{leaves}\) | **Score [%]** | **Memory [kB]** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **GBTs** | | | | | | | |
| Ninapro | 5.9 [\(\pm\)0.7] | 3060 [\(\pm\)387] | 37 [\(\pm\)3.5] | 17 [\(\pm\)10] | 20 [\(\pm\)9] | 75 [\(\pm\)6] | 199.5 [\(\pm\)65] |
| Backblaze | 15 | 128.53 | 9 | 16 | 16 | 66 | 226 |
| UniMiB-SHAR | 8 | 2987 | 22 | 8 | 16 | 65 | 363 |
| **RFs** | | | | | | | |
| Ninapro | 13.8 [\(\pm\)1.28] | 348 [\(\pm\)81] | 31.8 [\(\pm\)7] | 16 [\(\pm\)9.5] | 12.4 [\(\pm\)4.5] | 77 [\(\pm\)6] | 335.83 [\(\pm\)97] |
| Backblaze | 38 | 155 | 9 | 16 | 32 | 79 | 308 |
| UniMiB-SHAR | 15 | 136 | 10 | 32 | 8 | 67 | 292 |

Fig. 7: Dynamic and static ensembles Pareto fronts obtained on the validation set and scored on the test set with batch \(B=1\).

Besides the aforementioned savings, an additional key benefit of dynamic solutions, compared to static models, is their flexibility. In fact, the entire Pareto frontiers of Figure 7 can be obtained by deploying only the seed model and then changing the value of \(t_{h}\) (e.g., depending on battery state or another external trigger). Conversely, the static curve is composed of tens of different models, each with different hyperparameters, which cannot be simultaneously deployed on the target device due to memory constraints, thus limiting the choices available at runtime.

In Table II, we compare the static baseline models ("S" column) and the best dynamic configurations built on top of them which are able to maintain the same score metric ("A-Iso"), or achieve a \(<1\%\) score drop ("A-1%"). Note that these models use, for each input, a subset of the DTs included in "S". At iso-score, for the two ensemble types, we achieve a reduction in terms of visited nodes of up to \(49\%\) for Ninapro, \(88\%\) for Backblaze and \(41\%\) for UniMiB-SHAR. If we allow a \(1\%\) score drop, the savings increase to up to \(70\%\) for Ninapro, \(89\%\) for Backblaze and \(57\%\) for UniMiB-SHAR. We notice that in all multi-class classification tasks, the best-performing policy is the proposed aggregated score margin (\(SM^{t}\)). On the other hand, on the binary hard-drive failure prediction task, where \(SM^{t}\) degenerates into the Agg. Max (\(S^{t}\)), the QWYC [15] algorithm with ordering works best in 3 out of 4 cases, except for the iso-score RF, which uses \(S^{t}\). The reason for this is two-fold: first, QWYC uses two separate confidence thresholds for the positive and negative classes, which allows it to execute fewer DTs on average when predicting that a sample belongs to the "easiest" class, i.e., no-failure in this case. Second, for a binary problem, the Agg. Max and Agg. SM policies become equivalent, as detailed in Section IV-A, but the former requires fewer operations, thus obtaining superior trade-offs.
Also, notice that the Max and Score-Margin are not present in this table, given that they always fail to reach the same level of accuracy as static models and are outperformed by more than 10% by our dynamic policies. In fact, both approaches are tailored for a cascade of increasingly accurate classifiers, which is not the case for tree ensembles of randomly generated weak classifiers. Therefore, being always sub-optimal compared to our newly proposed adaptive policies or to the QWYC algorithm, we remove them from the discussion in the rest of the work, and we do not consider them for deployment.

TABLE II: Statistics of dynamic models compared to their seeds at iso-score (A-Iso) and with a loss of 1% accuracy (A-1%). Abbreviations: o.: ordered.

| **Dataset** | **Model** | **#VisitedNodes** | **#Estimators** | **Policy** |
| --- | --- | --- | --- | --- |
| **GBTs** | | | | |
| Ninapro | S | 3060 [387] | 37 [3.5] | |
| | A-Iso | 1805 [315] | 22 [3.73] | Agg.SM |
| | A-1% | 1096 [191] | 13.4 [2.6] | Agg.SM |
| Backblaze | S | 128 | 9 | |
| | A-Iso | 14.89 | 1.01 | QWYC o. |
| | A-1% | 14.75 | 1.003 | QWYC o. |
| UniMiB | S | 2987 | 22 | |
| | A-Iso | 1766 | 13.02 | Agg.SM |
| | A-1% | 1286 | 9.48 | Agg.SM |
| **RFs** | | | | |
| Ninapro | S | 348 [81] | 31.8 [7] | |
| | A-Iso | 175 [49] | 15 [3.9] | Agg.SM |
| | A-1% | 104 [30] | 9 [2] | Agg.SM |
| Backblaze | S | 156 | 9 | |
| | A-Iso | 55 | 3.03 | Agg.Max |
| | A-1% | 17 | 1.0005 | QWYC o. |
| UniMiB | S | 136 | 10 | |
| | A-Iso | 116 | 8.46 | Agg.SM |
| | A-1% | 73 | 5.37 | Agg.SM |

### _Dynamic Inference: Tree ordering_

In this section, we investigate the impact of the execution order of estimators in dynamic ensembles. The intuitive assumption is that executing the decision trees (DTs) with the highest accuracy first would lead to quicker activation of the early-stopping policies without affecting accuracy. However, determining the order of trees based on accuracy is not straightforward. For example, Figure 7 demonstrates that the performance of the QWYC-ordered ensemble is inferior to that of the QWYC-unordered ensemble. This indicates that the trees achieving the best validation accuracy differ from those maximizing accuracy on the test data.

Nonetheless, we tested whether ordering could improve performance for our new policies. Figure 8 shows an example of the results with the Agg. Score-Margin policy on the UniMiB-SHAR dataset (corresponding to the purple markers in the rightmost panels of Fig. 7). We consider 53 different orderings, including: i) 50 randomly generated permutations, ii) two greedy ordering algorithms (QWYC-like and Score), and iii) the original training order. The QWYC-like order is inspired by [15], sorting the estimators in a way that minimizes the number of visited nodes needed to reach iso-accuracy with the static ensemble. The Score order sorts estimators in descending order of accuracy on the validation set. Each curve corresponds to a different ordering of the _same DTs_, and the different points are generated by varying the early-exit threshold.

Fig. 8: Example of dynamic ensembles with different execution orders of the estimators.

As shown, none of the proposed "smart" orders outperform the randomly generated ones, and the original training order falls in the middle of the multiple random curves. However, selecting the best of the 50 random curves is impossible in practice, because we verified that there is no correlation between the best ordering on the validation set and the best one on the test set. Similar results are also obtained for other benchmarks and policies, although we omit them for the sake of space. Therefore, we conclude that ordering dynamic ensembles based on their performance on the validation set is not a sufficiently robust approach for our benchmarks, and we use the natural training order for the rest of our experiments.

### _Dynamic Inference: Deployment Results_

Figures 9 and 10 show the static Pareto-optimal ensembles and the dynamic models from Figure 7 when deployed on GAP8. Specifically, we replace the average number of visited nodes with the average number of clock cycles per inference on the target, which correlates with both latency and energy consumption. In this case, we report results with batch sizes \(B\) = 1, 2, 4 and 8. For each value of \(B\), we limit the number of cores used to parallelize the execution to \(C=B\) for both static and dynamic models, for the reasons explained in Section IV-C. The early-exit policy is evaluated after each batch.

Moving from the previous complexity estimate to the actual clock cycles reveals a small advantage of GBTs. For these models, the accumulation of DT scores on the shared output vector is faster, since each tree only produces a scalar, versus a full array of \(M\) class probabilities for RFs. Given that accumulation happens in a critical section, we find that low-score GBTs outperform low-score RFs on our target, achieving the same score with fewer cycles, contrary to the estimate of Figure 7. Nonetheless, the general trend is maintained, with RFs rapidly becoming superior as scores increase.

For batch sizes up to \(B=4\), dynamic models consistently outperform static solutions over a big portion of the Pareto curve. In fact, with less parallelization, the overhead of the early-stopping mechanism is low w.r.t. the execution of the static model, leading to large savings. At \(B=4\), on the Ninapro dataset, we obtain the maximum cycles reduction at 73.9% (74%) balanced accuracy for S1 (S2), respectively. With a dynamic RF exploiting the aggregated score margin (\(SM^{t}\)) policy, we save 71.8% (27.1%) of the cycles compared to the static RF at iso-score. On the Backblaze dataset, the maximum gain is instead obtained with a GBT at \(66.8\%\) F1 score, saving 36.6% of the cycles. Lastly, on UniMiB-SHAR, an adaptive GBT reaches 63.4% balanced accuracy with 47.7% fewer cycles compared to the static GBT achieving the same score.

Fig. 9: Static and dynamic GBTs Pareto fronts obtained from the validation set and scored on the test set on GAP8. Each column shows a different batch size.

On the contrary, with \(B=8\), the introduced overhead becomes significant w.r.t. the fast and highly parallel execution of the ensemble. In this case, only a reduced set of adaptive models are Pareto-optimal. Thus, a general conclusion is that _the effectiveness of dynamic early-stopping ensembles reduces with the available cores_.
However, compared to the _most accurate_ static models, we still obtain large cycle reductions without loss of accuracy even at \(B=8\). Table III reports the cycles, energy, and latency results achieved by the "seed" static models and by two dynamic models, namely the fastest/most efficient ones that achieve the same score, or a score drop of less than 1%. All models reported refer to the curves with \(B=C=8\). The table also analyzes in detail the effects of parallelization, providing a breakdown of the cycle counts for the static "seed" models and for the various dynamic models, both when running on 8 cores, and when the same models are executed with \(B=C=1\). For each of the 18 ensembles, we report the average cycles for tree execution (Trees C.), probability accumulation (Acc. C.), and policy computation (Policy C.), as well as the total cycles (Total C.). Additionally, for the 8-core case, we also include energy and latency results.

Comparing the \(C=1\) and \(C=8\) configurations, we observe speed-ups ranging from 3.15\(\times\) to 7.92\(\times\) for tree execution. The suboptimal speed-up is influenced by two factors: the imbalance between trees and the leftover trees executed in the last batch. For example, when executing 9 trees, the first 8 trees are parallelized, while the last one is executed individually, resulting in a maximum speed-up of \(\frac{9}{2}=4.5\times\). It is important to note that only the tree inference section of the ensemble execution is parallelized, as described in Algorithms 4 and 5. However, the table also shows a speed-up in the computation of the policy cycles. This is due to the batch size being equal to the number of cores (\(B=C\)), resulting in the policy being executed \(C\times\) fewer times. Also in this case, the speed-up is affected by the leftover trees. The total speed-up on 8 cores ranges from 2.29\(\times\) to 7.07\(\times\), since it depends both on the parallel tree inference section and on the impact of the sequentially executed accumulation phase.

Overall, when considering 8-core execution, we achieve iso-score reductions in terms of latency and energy of up to 41.7% for Ninapro DB1, 35.2% for Backblaze, and 37.9% for UniMiB-SHAR. The maximum gains are obtained by an RF with the \(SM^{t}\) policy for Ninapro DB1, and by GBTs with the \(S^{t}\) and \(SM^{t}\) policies for Backblaze and UniMiB-SHAR, respectively. If we allow a score loss of 1% compared to the seed model, the gains for the three datasets improve to 60%, 50.6%, and 46.5%, respectively.

Fig. 10: Static and dynamic RFs Pareto fronts obtained from the validation set and scored on the test set on GAP8. Each column shows a different batch size.

## VI Conclusions

In this work, we have studied the effectiveness of early-stopping dynamic inference for RFs/GBTs in real-world IoT systems. Namely, thanks to a tool that generates efficient inference code automatically, we have deployed optimized static and dynamic tree ensembles that support parallelization and data quantization on a multi-core SoC with a complex memory hierarchy. We benchmarked several adaptive policies, finding that the proposed Aggregated Score Margin obtains the best results for multi-class classification problems, although the improvement with respect to the other proposed approach (Aggregated Max) is often small. Thanks to the proposed low-cost early-stopping policies and batching mechanism, we have shown that we can mitigate the overheads of dynamic inference, which otherwise tend to increase with parallelism.
On three IoT-relevant benchmarks, using all 8 available cores, we have shown that the average energy consumption per inference can be reduced by up to 35.2-41.7% with respect to a static ensemble, while preserving the same accuracy. Additionally, the obtained dynamic system is extremely flexible, and makes it easy to change its working point (in terms of accuracy and energy) by acting on a single tuning parameter. In our future work, we plan to explore additional lightweight early-stopping policies for edge devices, e.g., considering a running mean of scores rather than a simple aggregation, and to focus on optimizing the execution of small adaptive tree-based models for even smaller platforms with tighter memory constraints.
2305.13281
LM vs LM: Detecting Factual Errors via Cross Examination
A prominent weakness of modern language models (LMs) is their tendency to generate factually incorrect text, which hinders their usability. A natural question is whether such factual errors can be detected automatically. Inspired by truth-seeking mechanisms in law, we propose a factuality evaluation framework for LMs that is based on cross-examination. Our key idea is that an incorrect claim is likely to result in inconsistency with other claims that the model generates. To discover such inconsistencies, we facilitate a multi-turn interaction between the LM that generated the claim and another LM (acting as an examiner) which introduces questions to discover inconsistencies. We empirically evaluate our method on factual claims made by multiple recent LMs on four benchmarks, finding that it outperforms existing methods and baselines, often by a large gap. Our results demonstrate the potential of using interacting LMs for capturing factual errors.
Roi Cohen, May Hamri, Mor Geva, Amir Globerson
2023-05-22T17:42:14Z
http://arxiv.org/abs/2305.13281v1
# LM vs LM: Detecting Factual Errors via Cross Examination

###### Abstract

A prominent weakness of modern language models (LMs) is their tendency to generate factually incorrect text, which hinders their usability. A natural question is whether such factual errors can be detected automatically. Inspired by truth-seeking mechanisms in law, we propose a factuality evaluation framework for LMs that is based on cross-examination. Our key idea is that an incorrect claim is likely to result in inconsistency with other claims that the model generates. To discover such inconsistencies, we facilitate a multi-turn interaction between the LM that generated the claim and another LM (acting as an examiner) which introduces questions to discover inconsistencies. We empirically evaluate our method on factual claims made by multiple recent LMs on four benchmarks, finding that it outperforms existing methods and baselines, often by a large gap. Our results demonstrate the potential of using interacting LMs to capture factual errors.

## 1 Introduction

Modern language models (LMs) often generate inconsistent (Elazar et al., 2021), non-attributable (Rashkin et al., 2021; Bohnet et al., 2022; Liu et al., 2023), or factually incorrect text (Tam et al., 2022; Devaraj et al., 2022; Maynez et al., 2020), thus negatively impacting the reliability of these models (Amodei et al., 2016; Hendrycks et al., 2021). This has prompted the community to develop methods that calibrate the confidence of model predictions to better align with their quality (Brundage et al., 2020). For example, prior methods have used probabilistic approaches (Jiang et al., 2020; Zablotskaia et al., 2023), clustering (Kuhn et al., 2023), fine-tuning (Kadavath et al., 2022; Lin et al., 2022) and in-context learning (Alivanistos et al., 2022; Cohen et al., 2023).

In this work, we take a different approach to this problem, motivated by truth-seeking mechanisms in law. Specifically, we consider the setting where a witness is cross-examined in order to check whether their statement is factually correct or not. In such a setting, the examiner asks questions that aim to lead the witness towards contradictory statements, while a contradiction implies that the witness lied in at least some of the statements, hence the well-known quote _"Were you lying then or are you lying now?"_ (Wilder et al., 1957).

To employ this mechanism for LM factuality evaluation, we propose the following setting, illustrated in Figure 1. Our goal is to check whether a statement made by an LM (_"The Greek god of marriage is Hera"_) is factually correct. We refer to the model that generated this statement as the Examinee. To check whether this fact is correct, we use another LM, called Examiner, to conduct a cross-examination of Examinee. Concretely, we craft designated prompts to facilitate a multi-turn interaction between the two LMs, where Examiner issues questions (e.g., _"Is Hera associated with marriage in any way?"_) to Examinee to check the veracity of the original statement.

Figure 1: An example of our LMvLM approach. The first line shows the statement made by the Examinee LM. Then an interaction between the Examiner and Examinee takes place, and the Examiner arrives at a conclusion as to whether the original statement was correct or not (here it concludes that it was a false statement).
The examination is concluded by a decision from Examiner as to whether the original claim was correct or not.1

Footnote 1: In practice, Examiner and Examinee can be the same LM (e.g., GPT-3) that is prompted in two different ways to define its different roles.

Our problem setting is related to that of calibration (Guo et al., 2017), where the goal is to predict the probability at which a model will err. However, unlike previous approaches to this problem, we use text generated by LMs. Our approach is motivated by the intuition that calibration is actually an elaborate reasoning process where one checks the level of support that a fact has based on other statements the model believes. We argue that such complex reasoning is naturally performed via the strong conversational skills of modern LMs.

We use our method to detect errors in LM generation in the context of factual question-answering. Our experiments with several recent LMs - ChatGPT, GPT-3 (Brown et al., 2020; Ouyang et al., 2022), and LLaMA (Touvron et al., 2023) - show that cross-examination effectively detects factually incorrect claims generated by LMs. Specifically, across multiple datasets and examination settings, it detects over 70% of the incorrect claims while maintaining a high precision of \(>\)80%, outperforming strong baselines by a large gap. Further analysis shows that examiner LMs introduce multiple questions throughout the examination, and employ various strategies to reveal inconsistencies, including question paraphrasing, validation of implicated arguments, claim decomposition, and requests for evidence.

To conclude, our contributions are (a) framing the task of factuality testing as an interaction between two LMs, (b) proposing a concrete implementation of this interaction via the use of one LM with different prompts in a zero-shot setting, and (c) demonstrating improved factuality detection accuracy across several benchmarks.

## 2 LM Cross-Examination

Our goal is to employ an "examiner" LM (Examiner) to evaluate claims generated by another LM (Examinee). To this end, we leverage the recent success of prompting (Liu et al., 2023) to facilitate a cross-examination setting between the two LMs. In such a setting, Examiner should introduce questions with the objective of revealing inconsistencies with respect to an initial claim made by Examinee. Such inconsistencies can be considered as a signal for the uncertainty of Examinee in its original claim, and thus can be used to assess whether its original statement was correct.

Given an Examiner LM and a claim \(C\) generated by an Examinee, our method establishes a multi-turn interaction between the LMs, where at each turn one of the LMs is prompted with a designated prompt that incorporates the outputs from previous turns. This interaction continues until the examiner has no further questions and can provide its final decision. To establish a meaningful interaction that reveals possible inconsistencies, we define three stages for the examination, each guided by a specific prompt. As part of each prompt for Examinee or Examiner, we provide the outputs generated in the previous rounds for context. We next describe the examination stages in detail, with the overall process illustrated in Figure 2.

**Stage 1: Setup.** The examination begins by "assigning" the Examiner its role.
Namely, we describe the task setting, provide it with the Examinee's claim, and ask it to generate questions for the Examinee.2

Footnote 2: We observe that this effectively steers Examiner to ask natural questions directly related to the given claim \(C\) (§5).

Next, we feed the questions generated by Examiner, one at a time, to Examinee, concatenated to the following instructions: "Please answer the following questions regarding your claim." The response from Examinee yields a set of answers to the questions from Examiner.

**Stage 2: Follow-up Questions.** We next feed Examiner with the answers generated by Examinee to its initial questions, and ask Examiner whether it has any follow-up questions. Notably, outputs from Examiner at this stage are conditioned on the previous output from Examinee. If the answer from Examiner is "Yes", we then further prompt it to obtain more questions. This phase is conducted iteratively, until either Examiner declares it has no follow-up questions, or the number of turns has reached a threshold.3

Footnote 3: We use a maximum of five turns in our experiments.

**Stage 3: Factuality Decision.** Once no further questions are obtained from Examiner, we prompt it to conclude whether the claim \(C\) is true or false. Specifically, we request it to reply with either "correct" or "incorrect" as its final conclusion. In cases where the examiner does not output either "correct" or "incorrect", we consider its final decision to be a rejection of the claim. Typically though, we observe that the examiner follows the instructions and indeed generates a definitive conclusion (see statistics in §5).

## 3 Related Work

**Attribution and Fact Checking.** Our goal is closely related to works on attribution and fact verification, namely, checking if an LM-generated text is faithful to some source text (Bohnet et al., 2022; Honovich et al., 2022). This problem has been addressed via several approaches, including question generation (Wang et al., 2020; Honovich et al., 2021; Scialom et al., 2021), NLI (Thorne et al., 2018; Welleck et al., 2019; Maynez et al., 2020; Dziri et al., 2022; Gao et al., 2022; Kamoi et al., 2023), data augmentation (Atanasova et al., 2022; Wright et al., 2022; Gekhman et al., 2023), and planning schemes that allow the model to self-edit its own generation (Schick et al., 2022). Unlike these works, we do not assume any reference text or external knowledge base. Instead, we directly check if the LM's claim is likely to be correct, by probing the model for inconsistencies. Our approach also uses multi-turn dialogue as a key component.

**Model Calibration.** A key challenge with prediction models is to provide a probability of the answer being incorrect, a problem known as model calibration (Guo et al., 2017). The problem of factual-error detection can be viewed as a variation of calibration, where instead of a continuous probability, we provide a binary prediction for whether the model is correct or not. This is also related to the setting of selective prediction, where a model can choose to abstain from answering a query (Varshney et al., 2022; Kamath et al., 2020). Common approaches to calibration are to perform various transformations on model logits (Desai and Durrett, 2020; Jiang et al., 2021), and to measure uncertainty (e.g., see Kuhn et al., 2023). More recent works have studied the use of LMs for providing calibration, by training them on statements known to be factually correct or incorrect.
This "supervised" approach has been explored via fine-tuning (Kadavath et al., 2022; Lin et al., 2022) and in-context learning (Cohen et al., 2023; Alivanistos et al., 2022). Our work focuses on zero-shot factual error detection that involves just two categories: predicting whether a model's claim is correct or incorrect. We propose a novel approach to this problem, using multi-turn LM interaction. While we focus on a binary setting, one could envision an extension of our approach to continuous outputs (for example, outputting a probabilistic estimate of the correctness of the claim).

**Multi-Agent LMs.** Using multiple LMs in an interactive manner is a relatively new idea with many potential applications. It has been shown that LMs can utilize additional LMs or tools to better solve downstream tasks (Schick et al., 2023). Additionally, Park et al. (2022) showed that in a social setting, LMs demonstrate certain social skills that emerge from this interaction, and Shinn et al. (2023) propose that an LM can use a different model to instruct it when to "reflect" on its recent action, while performing a planned sequence of actions aimed at solving a given query. Intuitively, this model detects signs of hallucination or inefficient planning within the LM's trajectory.

Figure 2: The three-stage process of cross-examination between the Examiner and Examinee, where the factuality of a claim \(C\) generated by Examinee is estimated by Examiner.

**Consistency Across Generations.** LMs have been shown to generate inconsistent outputs given different prompt paraphrases (Elazar et al., 2021; Newman et al., 2021). Prior work showed that prompts can be automatically optimized to produce factually correct claims more robustly (Lester et al., 2021; Zhong et al., 2021; Qin and Eisner, 2021). Hao et al. (2022) utilized multiple generated paraphrases to gauge consistency, and other works (Elazar et al., 2021; Zhou et al., 2022) further proposed training objectives to improve model consistency. Another approach to handling multiple outputs is via variants of decoding strategies (Wang et al., 2022) or model ensembles (Sun et al., 2022). In our work, we build on these, assuming inconsistencies are more likely to occur with incorrect claims, and let an examiner model search for these by introducing questions to the examinee.

**Chain of Thought Reasoning.** Recent work has shown that LMs can be prompted to elaborate on their reasoning process and to self-ask follow-up questions before reaching a final conclusion, and that this can be exploited to improve mathematical, multi-hop and common-sense reasoning skills (Wei et al., 2022; Press et al., 2022; Yoran et al., 2023), along with planning and problem-solving abilities (Huang et al., 2022; Long, 2023). Another interesting approach to complex reasoning in LMs is recent work on Maieutic prompting (Jung et al., 2022), which answers a question by recursively generating a set of facts and reasoning over them. Our approach may be viewed as constructing an elaborate chain-of-thought explanation for the examinee's claim. However, we do not train this explanation via in-context learning or fine-tuning, and rather rely on different prompts for its generation.

## 4 Experiments

In this section, we conduct experiments on multiple datasets and models to evaluate our approach, focusing on the task of factual question-answering.

### Experimental Setup

**Factual Question Answering.** One key use-case of LMs is answering questions seeking factual knowledge.
For example, _"How old was Barack Obama when he was first elected?"_. In such cases, it is crucial for the model to answer the question correctly, or to indicate that it does not know the answer. We thus evaluate our approach on several Question Answering and Fact Completion datasets. These are typically provided as a set of \((Q,A)\) pairs of a question \(Q\) and its ground-truth answer \(A\). Having gold answers allows us to evaluate if a predicted answer is factually correct or not, which can be used to evaluate our LMvLM approach.

To apply cross-examination in this setting, we first convert the answer predicted by the model into an Examinee claim that can be provided as input to the examination procedure. Formally, given a question \(Q\), if \(Q\) is phrased as a fill-in-the-blank question (e.g., _"Bailey Peninsula is located in ___"_), then we feed it to the Examinee model to obtain a prediction that completes the sentence and forms a claim. In cases where \(Q\) is phrased as a question (e.g., _"Where is Bailey Peninsula located?"_), we prompt the model to provide an answer in a claim format with: "Please answer the following question: <\(Q\)> Please phrase your answer as a claim." This process results in a claim \(C\) that states the model's "belief" about the answer to \(Q\). We then evaluate the truthfulness of \(C\) through cross-examination, and compare the examiner's decision of whether \(C\) is correct or not to the ground-truth correctness.

**Factuality Evaluation Labels.** To evaluate our method, it is necessary to have "gold decisions" to compare the examiner's decisions against. Such labels can be obtained from the ground-truth answers in the data; namely, the decision for a claim \(C\) is correct if it matches an evaluation of \(C\) against the gold answer \(A\). To evaluate if the claim \(C\) obtained for a question \(Q\) is correct with respect to the ground-truth answer \(A\), we first check if \(A\) or any of its aliases (if provided as part of the dataset, e.g., "FC Tottenham" and "Tottenham Hotspur") appears as a sub-string in \(C\) (Schick et al., 2023; Meng et al., 2022). Next, to avoid incorrect labels resulting from this automatic evaluation (Bulian et al., 2022), we manually review all the claims marked as incorrect in the first step, and fix any labeling mistakes. We also filter out any ambiguous or unclear claims generated by Examinee.

**Examiner Evaluation.** We evaluate how well the examiner detects claims that are factually incorrect, using the following metrics:4

Footnote 4: We say that the examiner "rejects" a claim if the examiner concludes that the claim is incorrect.

* **Precision**: the portion of incorrect claims, out of the claims rejected by the examiner.
* **Recall**: the portion of incorrect claims rejected by the examiner, out of all the incorrect claims.
* **F1**: the harmonic mean of precision and recall.

For completeness, we additionally report (in §C) the complementary Precision, Recall, and F1 scores with respect to the detection of correct claims.

**Data.** We consider the following datasets: LAMA (Petroni et al., 2019), TriviaQA (Joshi et al., 2017), Natural Questions (NQ) (Kwiatkowski et al., 2019) and PopQA (Mallen et al., 2022). These datasets cover a wide range of queries, from real user queries (NQ), to trivia questions (TriviaQA), and subject-relation-object facts phrased as queries (LAMA, PopQA). We consider the closed-book open-ended setting, where we do not provide any context or answer choices to the model.
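The automatic first pass of the labeling described above reduces to an alias-aware substring check; a minimal sketch (our illustration of the procedure, not the authors' code):

```python
def is_claim_correct(claim, gold_answer, aliases=()):
    """First-pass label: a claim is marked correct if the gold answer or
    any of its aliases appears as a (case-insensitive) substring. Claims
    marked incorrect by this check were then manually reviewed."""
    text = claim.lower()
    return any(ans.lower() in text for ans in (gold_answer, *aliases))


# Example:
print(is_claim_correct("Spurs play at the Tottenham Hotspur Stadium.",
                       "Tottenham Hotspur", aliases=("FC Tottenham",)))
```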
We evaluate our approach on 1,000 random examples from the test set (or from the development set if a test set is not available).5

Footnote 5: We use only a subset of examples due to the high cost of executing large LMs like GPT-3, which we include in our experiments.

In addition, we created a dataset of false claims to further test our approach. This "Falsehoods" dataset contains only wrong claims, created separately for each model (GPT-3 and ChatGPT) and for each of the four QA datasets. Concretely, given a model and a question \(Q\), we prompt the model to generate a false answer (see §B for details). We verify that these are indeed incorrect claims by checking that the gold answer (and any of its aliases, if they exist) does not occur in the generated text. This yields a subset of examples that are realistic, namely, the answer matches the target type (e.g., "a city") but is incorrect (see examples in Table 3). The examiner's decision for these examples should always be to reject.

Table 1: Prompts provided to Examiner in each stage of the examination, with respect to a claim \(C\) by Examinee.

- **(1) Setup (GPT-3):** "Imagine trying to prove that what someone claims is true, is wrong. You have the opportunity to ask any question in order to prove that the claim is wrong. The claim is: \(C\). What would you ask in order to validate that?"
- **(1) Setup (ChatGPT):** "Your goal is to try to verify the correctness of the following claim: \(C\), based on the background information you will gather. To gather this, you will provide short questions whose purpose will be to verify the correctness of the claim, and I will reply to you with the answers to these. Hopefully, with the help of the background questions and their answers, you will be able to reach a conclusion as to whether the claim is correct or possibly incorrect. Please keep asking questions as long as you're yet to be sure regarding the true veracity of the claim. Please start with the first questions."
- **(2) Follow-Up Questions (both models):** (i) "Do you have any follow-up questions? Please answer with Yes or No." (ii) "What are the follow-up questions?"
- **(3) Factuality Decision (both models):** "Based on the interviewee's answers to your questions, what is your conclusion regarding the correctness of the claim? Do you think it is correct or incorrect?"

Table 2: Portion of factually correct claims by every Examinee LM on each dataset.

| Examinee | LAMA | TriviaQA | NQ | PopQA |
| --- | --- | --- | --- | --- |
| LLaMA-7B | 53.9 | 48.4 | 33.8 | 24.9 |
| GPT-3 | 79.8 | 74.2 | 50.1 | 43.9 |
| ChatGPT | 80.9 | 77.2 | 53.3 | 45.6 |

Table 3: Example false claims generated by ChatGPT for PopQA and by GPT-3 for TriviaQA.
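Putting the pieces together, the following sketch shows the three-stage examination loop of Section 2, driven by the prompts of Table 1. It is a minimal illustration, not the authors' released code: the `chat()` helper and the `setup_prompt` argument are assumptions, standing in for any chat-style LM API that appends a message to a conversation history and returns the model's reply, and for one of the Stage-1 prompts of Table 1, respectively.

```python
MAX_TURNS = 5  # the paper uses at most five follow-up rounds


def cross_examine(claim, chat, setup_prompt):
    """Returns True if the Examiner accepts the claim, False otherwise."""
    examiner, examinee = [], []  # separate conversation histories
    # Stage 1: assign the examiner its role and obtain initial questions
    questions = chat(examiner, setup_prompt.format(claim=claim))
    for _ in range(MAX_TURNS):
        answers = chat(examinee, "Please answer the following questions "
                                 f"regarding your claim.\n{questions}")
        # Stage 2: ask the examiner whether it has follow-up questions
        more = chat(examiner, f"{answers}\nDo you have any follow-up "
                              "questions? Please answer with Yes or No.")
        if not more.strip().lower().startswith("yes"):
            break
        questions = chat(examiner, "What are the follow-up questions?")
    # Stage 3: final factuality decision
    verdict = chat(examiner,
                   "Based on the interviewee's answers to your questions, "
                   "what is your conclusion regarding the correctness of the "
                   "claim? Do you think it is correct or incorrect?")
    # any output other than an explicit "correct" counts as a rejection
    v = verdict.lower()
    return "correct" in v and "incorrect" not in v
```

The majority-vote variant (LMvLM (Majority), introduced below) simply runs `cross_examine` three times with sampling enabled and rejects the claim if at least two runs conclude it is false.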
**Models.** We use ChatGPT (gpt-3.5-turbo), GPT-3 (text-davinci-003) (Brown et al., 2020; Ouyang et al., 2022), and LLaMA-7B (Touvron et al., 2023), in three Examiner vs. Examinee cross-examination settings: GPT-3 vs. GPT-3, ChatGPT vs. ChatGPT, and ChatGPT vs. LLaMA. Notably, using the same LM as Examiner and Examinee (except for their prompts, which are different) provides a cleaner setting where both LMs share the same knowledge. The prompts used for each LM at every stage of the examination are shown in Table 1.

**Baselines.** For each setting, we compare LMvLM with recent methods for uncertainty detection and variants of our approach:

* **Confidence-Based**: The prediction head of LMs outputs a probability for the predicted token. It is common practice to use this probability as a measure of confidence in the prediction (Yoshikawa and Okazaki, 2023). In our case, the LM generates a multi-token claim, and we calculate the confidence for the claim as the product of the probabilities of all predicted tokens of the answer only. In order to output a binary decision (i.e., is the claim correct or not), we optimize a threshold over the train dataset to maximize F1. Note that our examination approach does not require tuning any threshold.
* **Are you sure? (AYS)**: Recent work (Kadavath et al., 2022; Cohen et al., 2023) has shown that LMs can be trained to estimate their certainty in generated facts. Here, we use a zero-shot version of this approach where we directly "ask" the model whether it is sure. Specifically, we add the following prompt right after the claim generation: "Are you sure regarding the correctness of your claim? Please answer with Yes or No". We then take the output as the prediction of whether the claim is correct or not.
* **I don't know (IDK)**: Recently, Ganguli et al. (2023) showed that LMs might have the capability to self-correct, when instructed to do so. Here we instruct the model to output _"I don't know"_ if it is uncertain, by concatenating the following sentence to the original query: "If you are not sure you know the answer, answer with 'I don't know' only." If the model answers "I don't know", we label the corresponding claim as false, and otherwise as true.
* **In-context IDK (IC-IDK)**: We teach the model to output that it does not know the answer, via in-context demonstrations. We follow Cohen et al. (2023) and test each of the queries within an in-context setting. For each query, we first provide the model with \(K\) demonstrations, with \(D\) of them labeled as _"Don't know"_ examples, while the remaining \(K-D\) are provided with their gold answer from the dataset.
The _"Don't know"_ examples are randomly selected from a set of examples the model failed on, while it is evaluated on a held-out set of examples from the dataset in a zero-shot setting. Intuitively, these examples' answers are likely to be unknown to the model, hence we label them with _"Don't know"_. The model predictions are either a target text or _"Don't know"_. Based on the output, we generate a factuality label as in the IDK baseline above.
Notably, this baseline requires labeled data for the in-context demonstrations, which is not necessary for our approach.

* **LMvLM**: A single execution of our method, where we accept or reject the claim according to the examiner's final decision.
* **LMvLM (Majority)**: For a given claim, we apply our method three times (with the same Examiner and Examinee), using sampling when generating follow-up questions. We reject the claim in case at least two of the examinations conclude it is false.

Since output probabilities are not provided as part of ChatGPT's API, we cannot provide results for the Confidence-Based baseline in this case. Moreover, we observe that ChatGPT often fails to understand the task of IC-IDK.

### Results

Tables 4, 5, and 6 show the results for the settings ChatGPT vs. ChatGPT, GPT-3 vs. GPT-3, and LLaMA vs. ChatGPT, respectively. Across all settings, our method outperforms the baselines, often by a large gap. For example, it obtains \(85.4\) F1 compared to \(\leq 65.2\) by baselines for ChatGPT on PopQA (Table 4), and \(77.2\) F1 compared to \(\leq 60.1\) for GPT-3 on TriviaQA (Table 5). Notably, the most substantial gains are in terms of recall, showing the superiority of our method in detecting factually incorrect claims (when compared to the baselines, which achieve reasonable precision too). Interestingly, we observe that ChatGPT generally outperforms GPT-3.

Table 4: Precision (P), Recall (R), and F1 scores for LMvLM with ChatGPT as Examiner and Examinee, compared to baselines. The last row shows an ablation of our method without the follow-up questions stage.

| | LAMA (P / R / F1) | TriviaQA (P / R / F1) | NQ (P / R / F1) | PopQA (P / R / F1) |
| --- | --- | --- | --- | --- |
| AYS | 82.3 / 25.2 / 38.6 | 79.9 / 17.9 / 29.2 | 85.2 / 29.1 / 43.3 | 78.4 / 35.7 / 63.9 |
| IDK | 49.1 / 52.4 / 50.7 | 48.7 / 66.5 / 56.2 | 62.5 / 60.7 / 61.6 | 70.0 / 61.1 / 65.2 |
| **LMvLM** | 85.1 / 70.7 / 76.7 | 82.8 / 71.6 / 76.8 | 74.5 / 74.9 / 77.7 | 83.6 / 77.1 / 80.2 |
| **LMvLM (Majority)** | 86.6 / 75.8 / 80.8 | 84.5 / 80.8 / 82.6 | 82.3 / 76.1 / 79.1 | 87.0 / 84.0 / 85.4 |
| - Follow-up | 83.8 / 68.1 / 75.1 | 82.3 / 69.7 / 75.5 | 74.8 / 72.1 / 73.4 | 82.0 / 73.3 / 77.4 |

Table 5: Precision (P), Recall (R), and F1 scores for LMvLM with GPT-3 as Examiner and Examinee, compared to baselines. The last row shows an ablation of our method without the follow-up questions stage.

| | LAMA (P / R / F1) | TriviaQA (P / R / F1) | NQ (P / R / F1) | PopQA (P / R / F1) |
| --- | --- | --- | --- | --- |
| AYS | 74.8 / 17.9 / 28.9 | 80.3 / 19.8 / 31.8 | 74.9 / 20.7 / 32.3 | 74.6 / 22.7 / 34.8 |
| IDK | 43.0 / 42.1 / 42.5 | 47.9 / 45.7 / 46.7 | 60.9 / 45.3 / 52.0 | 52.1 / 37.6 / 43.7 |
| Confidence-Based | 38.6 / 85.8 / 53.2 | 39.6 / 84.4 / 53.9 | 56.2 / 72.7 / 63.4 | 60.8 / 69.7 / 64.9 |
| IC-IDK | 71.5 / 46.3 / 56.2 | 70.6 / 49.7 / 60.1 | 70.0 / 57.6 / 63.2 | 76.9 / 37.7 / 50.6 |
| **LMvLM** | 78.8 / 69.9 / 74.1 | 81.6 / 64.6 / 72.1 | 70.5 / 66.6 / 68.5 | 75.5 / 69.1 / 72.2 |
| **LMvLM (Majority)** | 80.7 / 77.9 / 79.3 | 83.1 / 72.1 / 77.2 | 79.3 / 76.8 / 78.0 | 82.2 / 71.4 / 76.4 |
| - Follow-up | 76.4 / 71.1 / 73.7 | 78.7 / 64.8 / 71.1 | 66.6 / 70.1 / 68.3 | 70.9 / 65.8 / 68.3 |

Last, Table 7 shows the accuracy of our method and baselines on our Falsehoods dataset. For both ChatGPT and GPT-3, LMvLM successfully rejects the vast majority of the false claims, obtaining 87%-98% accuracy with ChatGPT and 75%-99% with GPT-3 across all datasets.

### Ablations

We perform an ablation where we remove the follow-up iterations in the examination process, to gauge their benefit. Results are reported for GPT-3 in Table 5 (last row), showing a large decrease in performance (e.g., \(78\to 68.3\) in F1 for NQ and \(77.2\to 71.1\) for TriviaQA).
Notably, recall scores decrease by 6%-10%. Overall, this shows the importance of the follow-up questions issued by the examiner to assess the examinee's claim.

Table 6: Precision (P), Recall (R), and F1 scores for LMvLM with ChatGPT as Examiner and LLaMA as Examinee, compared to baselines. The last row is an ablation of our method without the follow-up questions stage.

| | LAMA (P / R / F1) | TriviaQA (P / R / F1) | NQ (P / R / F1) | PopQA (P / R / F1) |
| --- | --- | --- | --- | --- |
| AYS | 61.4 / 38.0 / 46.9 | 60.0 / 35.7 / 44.8 | 71.1 / 15.0 / 24.8 | 74.8 / 14.2 / 23.9 |
| IC-IDK | 56.6 / 49.0 / 52.5 | 58.9 / 52.5 / 55.5 | 66.2 / 53.4 / 59.1 | 66.8 / 50.1 / 57.3 |
| IDK | 61.6 / 44.8 / 51.9 | 62.0 / 32.9 / 43.0 | 64.4 / 12.1 / 20.4 | 66.7 / 16.8 / 26.8 |
| Confidence-Based | 54.9 / 76.7 / 64.0 | 56.9 / 85.8 / 68.4 | 64.4 / 63.5 / 63.9 | 64.6 / 53.6 / 58.6 |
| **LMvLM** | 81.1 / 66.4 / 73.0 | 80.1 / 70.8 / 75.2 | 79.3 / 65.5 / 71.7 | 84.9 / 73.6 / 78.8 |
| **LMvLM (Majority)** | 82.9 / 73.9 / 78.1 | 80.3 / 76.8 / 78.5 | 83.7 / 74.2 / 78.7 | 88.3 / 77.4 / 82.5 |
| - Follow-up | 79.7 / 65.7 / 72.0 | 80.0 / 69.8 / 74.6 | 79.4 / 63.7 / 70.7 | 83.3 / 71.8 / 77.1 |

Table 7: Accuracy of GPT-3 and ChatGPT as Examiner on false claims generated for each dataset.

| | LAMA | TriviaQA | NQ | PopQA |
| --- | --- | --- | --- | --- |
| GPT-3 | 65.7 | 98.4 | 89.9 | 83.1 |
| GPT-3 (Majority) | 75.8 | 98.5 | 92.0 | 88.0 |
| ChatGPT | 83.6 | 97.9 | 90.4 | 88.8 |
| ChatGPT (Majority) | 87.1 | 98.6 | 94.2 | 93.9 |

Table 8: Cross-examination statistics for each setting (Examiner/Examinee), averaged across datasets.

| | ChatGPT/ChatGPT | GPT-3/GPT-3 | ChatGPT/LLaMA |
| --- | --- | --- | --- |
| # of questions | 7.0 ± 2.8 | 6.4 ± 4.3 | 6.8 ± 4.4 |
| # of follow-up questions per iteration | 1.3 ± 1.0 | 1.3 ± 0.6 | 1.1 ± 0.5 |
| # of follow-up questions | 1.9 ± 1.2 | 1.3 ± 0.7 | 1.6 ± 1.0 |
| # of questions per iteration | 3.1 ± 2.1 | 2.7 ± 1.6 | 2.9 ± 1.9 |
| % of inconclusive examiner decisions | 14.8% | 9.1% | 10.3% |

## 5 Analysis of Cross-Examinations

We analyze cross-examinations by GPT-3 and ChatGPT to better understand the success and failure cases of our method. We find that examiner LMs typically ask multiple questions in the examination and, perhaps surprisingly, apply different strategies to reveal inconsistencies.

**Examination Statistics.** Table 8 provides statistics on the cross-examinations performed by ChatGPT and GPT-3.
Both models introduce multiple queries (6-7 on average) during an examination, with typically 1-2 steps of follow-up questions, which are important for the examiner's decision (SS4.3). We also observe a non-negligible number of claims (9%-15%) where the examiner LM does not arrive at a concrete final decision (i.e., it does not generate "correct" or "incorrect" as the final decision, we reject the claim in those cases). In our qualitative analysis, we identify reasons that could explain these cases. Qualitative AnalysisWe manually analyze a sample of 96 examinations - 48 by each LM, with 6 correct and 6 incorrect examinations for each model and each dataset. We observe the following trends (examples are in Table 9): 1. **Rephrasing the claim**: In about \(60\%\) of the examinations, both LMs introduce questions which are paraphrases of the original question. This supports the assumption that the Examiner seeks inconsistencies by generating variants of the original claim. 2. **Rephrasing Questions**: In about half of the examinations, both LMs introduce questions that are similar to previously asked questions or have a different phrasing. This is a desirable behavior as it can reveal inconsistencies if the examinee provides a different answer for the same question. 3. **Validation of logical implications**: The Examiner asks Examinee regarding implied arguments that must be true whenever the original claim is correct. This can be observed in \(70\%\) of the correct detections of GPT-3, and \(87.5\%\) out of the correct detections of ChatGPT. 4. **Logical questions**: The Examiner decomposes the claim into multiple sub-questions which together compose a trajectory to validating it. Such decompositions appear in about 75% of the cases for ChatGPT but only 10% in GPT-3. We observe these in \(33\%\) of the correct detections of GPT-3, and \(70\%\) for ChatGPT. 5. **Request for attribution**: The Examiner ask the Examinee about the existence of external evidence to support the claim. This happens in about 30% of the cases for both LMs. 6. **Wrong intermediate answers**: The Examinee responds with factually incorrect answers to one or more of the questions originated by \begin{table} \begin{tabular}{l l} \hline \hline **Pattern** & **Example statements/questions generated by Examiner during examination** \\ \hline \hline Rephrasing the claim & Claim: “_The first Fast and Fuvious film was released in 2001._” \\ \hline Rephrasing Questions & Claim: “_The screenwriter who is credited with writing the screenplay for Winner is Wendy Riss_” \\ & 1. What is the name of the screenwriter who is credited with writing the screenplay for Winner? 2. Who is credited with writing the screenplay for Winner? \\ \hline Validation of Implications & Claim: “_The director of The Town was Ben Affleck._” \\ & Is Ben Affleck known for directing any movies? \\ \hline Logical decomposition & Claim: “_The second oldest of the Pevensie children in C S Lewis’s The Lion, the Witch and the Wardrobe is Edmund._” \\ & 1. What is the birth order of the Pevensie children in C S Lewis’s The Lion, the Witch and the Wardrobe? 2. What are their ages? 3. Who appears second in this list? \\ \hline Request for attribution & Claim: “_The screenwriter of Cover Up is Bill Blake_” \\ & Is there any evidence or documentation that supports the claim that Bill Blake was the screenwriter for Cover Up? 
We note that in most cases where LMvLM fails, the Examinee has provided incorrect information to the Examiner. This might indicate that in those cases the Examinee has encoded a large set of factually wrong facts that are mutually consistent, thus making it hard for the Examiner to detect inconsistencies. Finally, the fact that ChatGPT more commonly validates the claim through logical questions might be a key factor in its superiority over GPT-3 in our setting.

## 6 Conclusion

We introduce LMvLM, a method for zero-shot detection of factuality errors, inspired by the cross-examination practice employed in a court of law. Our method uses prompting to facilitate a multi-turn interaction between an examiner LM and an examinee LM, to reveal inconsistencies that imply factually incorrect claims. We evaluate LMvLM in the context of factual question answering, showing it substantially improves detection of factual errors made by LMs. Our method builds on a fundamental connection between self-consistency (i.e., consistency of an LM with itself) and factual consistency (i.e., consistency between factual claims generated by an LM and ground-truth facts). We consider the LM itself as the source of information, and we test whether a claim it has generated is faithful and consistent with several other beliefs it has. Our work can be extended in several ways. First, LMvLM provides interpretable information about related beliefs of the model, which could be analyzed to understand what makes the model commit certain mistakes. Second, one may incorporate several LM instances into the factuality detection process, rather than having only a single Examiner. Finally, one can train the Examiner to generate questions more effectively.

## Limitations

We note three limitations of our method LMvLM. First, unlike other methods, it requires multiple queries of the examinee and examiner LMs, which could be costly when using external APIs such as those used in this work. This could be a key consideration when scaling this approach to large numbers of claims. Second, for our method to succeed, both LMs (Examinee and Examiner), but mostly the Examiner, should be able to follow instructions and have the ability to reason over information in a relatively long context. This skill is currently mostly demonstrated by larger models (>10B parameters), and thus our method may not perform as well for smaller models. Last, any logical flaws in the examiner's operation are likely to affect the overall examination, potentially leading to inaccurate decisions. However, our experiments show that, even if such flaws occur, our method is still useful on average, as it substantially improves factuality detection. Nonetheless, developing safety mechanisms that detect and mitigate logical flaws is an important research direction, which we leave for future work.
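For concreteness, the examination loop can be sketched as follows. This is a minimal illustration, assuming a generic OpenAI-style `chat(messages) -> str` wrapper; the prompts are paraphrased, and the stopping and majority rules are simplified relative to the full method.

```python
def chat(messages):
    """Wrap any instruction-following LM API here (assumed interface:
    an OpenAI-style list of role/content messages)."""
    raise NotImplementedError

def cross_examine(claim, max_rounds=5):
    """One examination: the Examiner questions the Examinee about a claim
    and returns True (accept) or False (reject)."""
    examiner = [
        {"role": "system", "content": (
            "You are an examiner. Ask questions, including follow-up "
            "questions, to decide whether the claim below is factually "
            "correct. When certain, output 'Final decision: correct' "
            "or 'Final decision: incorrect'.")},
        {"role": "user", "content": f"Claim: {claim}"},
    ]
    examinee = [{"role": "system",
                 "content": "Answer the following questions concisely."}]
    for _ in range(max_rounds):
        questions = chat(examiner)
        if "final decision" in questions.lower():
            return "incorrect" not in questions.lower()
        # Route the Examiner's questions to the Examinee and feed the
        # answers back to the Examiner for potential follow-ups.
        examinee += [{"role": "user", "content": questions}]
        answers = chat(examinee)
        examinee += [{"role": "assistant", "content": answers}]
        examiner += [{"role": "assistant", "content": questions},
                     {"role": "user", "content": answers}]
    return False  # inconclusive examinations are rejected (Section 5)

def cross_examine_majority(claim, n=3):
    """The LMvLM (Majority) variant: majority vote over n examinations."""
    return sum(cross_examine(claim) for _ in range(n)) > n // 2
```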
## Acknowledgements

We thank Roee Aharoni and Avi Caciularu for valuable feedback and constructive suggestions. This work is supported in part by the Israeli Science Foundation.
2309.02059
Time delays in anisotropic systems
Scattering properties and time delays for general (non-symmetric) potentials in terms of the respective S-matrices are discussed paradigmatically in one dimension and in comparison to symmetric potentials. Only for the latter do the Wigner and Smith time delays coincide. Considering asymmetric potentials also reveals that only one version of S-matrices used in the literature (the one with reflection coefficients on the diagonal) generalizes to the asymmetric case. Finally, we give a criterion for how to identify a potential with intrinsic symmetry which behaves like an asymmetric one if it is merely offset from the scattering center.
Ulf Saalmann, Jan M. Rost
2023-09-05T09:00:24Z
http://arxiv.org/abs/2309.02059v1
# Time delays in anisotropic systems

###### Abstract

Scattering properties and time delays for general (non-symmetric) potentials in terms of the respective S-matrices are discussed paradigmatically in one dimension and in comparison to symmetric potentials. Only for the latter do the Wigner and Smith time delays coincide. Considering asymmetric potentials also reveals that only one version of S-matrices used in the literature (the one with reflection coefficients on the diagonal) generalizes to the asymmetric case. Finally, we give a criterion for how to identify a potential with intrinsic symmetry which behaves like an asymmetric one if it is merely offset from the scattering center.

## I Introduction

Time delays related to scattering phases [1; 2] have been discussed for a long time in transport problems [3; 4]. More recently, they have been addressed in acoustics [5], electromagnetics [6], and from a fundamental perspective of quantum trajectories [7], and, for about a decade now, in the context of photo-ionization by ultra-short laser pulses. Experimentally, photo-ionization time delays have been extracted from streaking the momenta of electrons released by a short XUV pulse with a moderate IR field [8] or so-called RABBIT measurements aiming at the same time-delay information of the released electron wave-packet by using IR sidebands of the XUV photo-ionizing pulse train [9; 10]. The link of the photo-ionization time delay to the Wigner-Smith time delay from scattering theory, as well as the delays in general emerging from these setups, has been a source of ongoing debate [11; 12; 13]. This is not surprising since the setups are quite intricate and become even more cumbersome if the long-range Coulomb interaction comes into play, which is the case for almost all experiments performed. Recent experimental advances have made it possible to measure time delays originating in photo-ionizing molecules [14; 15; 16; 17; 18; 19], that is, from anisotropic potentials. This success motivates asking for the theoretical foundation of time delays and their formulation for general interactions, since time delays and S-matrices are almost always discussed in the context of single-centered, often spherically-symmetric potentials [20]. In the following we elucidate basic properties of time delays in the simplest setting which is general enough to be sensitive to the properties of anisotropic (and isotropic, parity-respecting) potentials. Since characteristic features (such as the difference between proper and partial time delays) can only be uncovered in a system with at least two independent scattering channels, we do not investigate photo-ionization but scattering in one dimension from a generic short-range potential, a scenario which provides two scattering channels. This type of scattering is relevant in complex media, in wave-guides, or, generally, for transport problems. Additional motivation is provided by the fact that symmetric potentials in 1D hide subtleties of scattering and related time delays in at least two aspects: (i) Two different versions of the S-matrix \(\mathbf{S}\) are pursued in the literature [21; 22; 23; 24], which have different eigenvalues. Yet, both fulfill the criteria for S-matrices, derived from overarching principles of flux conservation and time-reversal invariance (for a real potential), namely that \(\mathbf{S}\) is unitary and symmetric.
However, only the version which is a symmetric matrix with respect to incoming and outgoing channels [21] remains symmetric in the case of anisotropic potentials. (ii) Furthermore, without a symmetric interaction, the two commonly used formulations of time delay, namely partial time delays and proper time delays, do not agree, prompting the question what their respective meaning is. Scattering in one dimension was mostly theoretically investigated [22; 23; 24; 25] long before time delays became popular, however, to the best of our knowledge, never with a discussion of, or even a focus on, situations where the scattering potential is not symmetric.

## II Scattering in 1D

For our context a potential \(V(x)\) is short range if at large distances \(x\) the solutions of the time-independent Schrodinger equation \([-d^{2}/dx^{2}+2V(x)-2E]\psi(x)=0\) are free waves, \(\psi(|x|\gg 1)\propto e^{\pm ikx}\) with \(k=\sqrt{2E}>0\), see also App. A.1. We will use atomic units \(e=\hbar=m_{\mathrm{e}}\equiv 1\) and consider for convenience a particle of mass \(m_{\mathrm{e}}\), unless stated otherwise.

### The S-matrix and its parameterization

There are two channels in a 1D scattering scenario. Most easily [26] they are described by reflection (\(r\)) and transmission (\(t\)) amplitudes for incoming waves from the left or the right side. Asymptotically these wave functions read, with \(k=\sqrt{2E}\) and the notation \(\{\lim_{x\to-\infty}\psi(x,E),\ \lim_{x\to+\infty}\psi(x,E)\}\),

\[\psi_{l}(x,E)=\{\mathrm{e}^{+ikx}+r_{l}(E)\,\mathrm{e}^{-ikx},\ t_{l}(E)\,\mathrm{e}^{+ikx}\}, \tag{1a}\]
\[\psi_{r}(x,E)=\{t_{r}(E)\,\mathrm{e}^{-ikx},\ \mathrm{e}^{-ikx}+r_{r}(E)\,\mathrm{e}^{+ikx}\}. \tag{1b}\]
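As a self-contained numerical illustration (our addition, not part of the derivation above), the amplitudes of Eqs. (1a) and (1b) can be computed for a piecewise-constant, in general asymmetric, potential with a standard transfer-matrix construction, and a time delay then follows from the energy derivative of the transmission phase. The example potential and all function names are illustrative choices; units are \(m_{\mathrm{e}}=\hbar=1\) as above.

```python
import numpy as np

def transfer_matrix(E, xs, Vs):
    """Total transfer matrix mapping (A, B) in psi = A e^{ikx} + B e^{-ikx}
    from the leftmost to the rightmost region. xs: interface positions;
    Vs: region potentials with len(Vs) == len(xs) + 1 and Vs[0] = Vs[-1] = 0."""
    ks = np.sqrt(2 * (E - np.asarray(Vs, dtype=complex)))
    M = np.eye(2, dtype=complex)
    for x, k1, k2 in zip(xs, ks[:-1], ks[1:]):
        # Match psi and psi' at interface x between wavenumbers k1 and k2.
        p = (k2 + k1) / (2 * k2)
        m = (k2 - k1) / (2 * k2)
        J = np.array([[p * np.exp(1j*(k1 - k2)*x), m * np.exp(-1j*(k1 + k2)*x)],
                      [m * np.exp(1j*(k1 + k2)*x), p * np.exp(-1j*(k1 - k2)*x)]])
        M = J @ M
    return M

def amplitudes(E, xs, Vs):
    """Amplitudes of Eqs. (1a)/(1b): (r_l, t_l, r_r, t_r)."""
    M = transfer_matrix(E, xs, Vs)
    r_l = -M[1, 0] / M[1, 1]
    t_l = np.linalg.det(M) / M[1, 1]  # det(M) = 1 for equal asymptotic k
    r_r = M[0, 1] / M[1, 1]
    t_r = 1 / M[1, 1]                 # t_l = t_r: transmission reciprocity
    return r_l, t_l, r_r, t_r

def transmission_delay(E, xs, Vs, dE=1e-6):
    """Time delay as the energy derivative of the transmission phase."""
    t_minus = amplitudes(E - dE, xs, Vs)[1]
    t_plus = amplitudes(E + dE, xs, Vs)[1]
    return np.angle(t_plus / t_minus) / (2 * dE)

# Example: an asymmetric double step, V = 0.8 on [-1, 0) and V = 0.3 on [0, 1)
xs, Vs = [-1.0, 0.0, 1.0], [0.0, 0.8, 0.3, 0.0]
print(transmission_delay(1.0, xs, Vs))
```

Note that in this sketch the left and right transmission amplitudes coincide even for an asymmetric potential, while the reflection amplitudes generally differ, which is precisely why the diagonal (reflection) entries of the S-matrix carry the asymmetry.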
2308.02370
A Machine Learning Method for Predicting Traffic Signal Timing from Probe Vehicle Data
Traffic signals play an important role in transportation by enabling traffic flow management, and ensuring safety at intersections. In addition, knowing the traffic signal phase and timing data can allow optimal vehicle routing for time and energy efficiency, eco-driving, and the accurate simulation of signalized road networks. In this paper, we present a machine learning (ML) method for estimating traffic signal timing information from vehicle probe data. To the authors' best knowledge, very few works have presented ML techniques for determining traffic signal timing parameters from vehicle probe data. In this work, we develop an Extreme Gradient Boosting (XGBoost) model to estimate signal cycle lengths and a neural network model to determine the corresponding red times per phase from probe data. The green times are then derived from the cycle length and red times. Our results show an error of less than 0.56 sec for cycle length, and red time predictions within 7.2 sec error on average.
Juliette Ugirumurera, Joseph Severino, Erik A. Bensen, Qichao Wang, Jane Macfarlane
2023-08-04T15:10:07Z
http://arxiv.org/abs/2308.02370v1
# A Machine Learning Method for Predicting Traffic Signal Timing from Probe Vehicle Data

###### Abstract

Traffic signals play an important role in transportation by enabling traffic flow management, and ensuring safety at intersections. In addition, knowing the traffic signal phase and timing data can allow optimal vehicle routing for time and energy efficiency, eco-driving, and the accurate simulation of signalized road networks. In this paper, we present a machine learning (ML) method for estimating traffic signal timing information from vehicle probe data. To the authors' best knowledge, very few works have presented ML techniques for determining traffic signal timing parameters from vehicle probe data. In this work, we develop an Extreme Gradient Boosting (XGBoost) model to estimate signal cycle lengths and a neural network model to determine the corresponding red times per phase from probe data. The green times are then derived from the cycle length and red times. Our results show an error of less than 0.56 sec for cycle length, and red time predictions within 7.2 sec error on average.

Traffic Signal Timing, Machine Learning, Probe Vehicle Data

## 1 Introduction

In traffic networks, traffic signals play a key role in determining and managing vehicular traffic flow. They control the flow of traffic, ensure safety by regulating the flow of competing movements through intersections, and can reduce traffic congestion when optimized [1, 2]. However, they can also induce stop-and-go traffic, which increases vehicles' delay and fuel consumption. In addition, not knowing the traffic signals' timings and switching patterns makes it very challenging to accurately determine the most time-efficient or energy-efficient routes, to inform driving decisions to maximize arrival on green or minimize engine idling at intersections [3], and to correctly simulate traffic for signal-controlled regions. Usually, traffic lights are managed by different local agencies. For example, in the US, hundreds of agencies are in charge of the more than 320,000 traffic lights installed [4]. This makes direct access to traffic signal timing and phase data a very challenging task. In this paper, we present a machine learning method that uses probe vehicle data to estimate the timings for pre-timed traffic signals. We demonstrate this algorithm on probe data generated from a well-calibrated microscopic simulation using the SUMO simulator [5] and the NEMA Type controller in SUMO [6]. We focus on fixed-time traffic signals since they represent the majority of traffic lights in the US [7].

## 2 Related Works

Probe vehicle data has enabled many important tasks in transportation research, including estimating link-level hourly traffic volume [8], queue length estimation [9], and traffic signal control [10]. The availability of and opportunities to exploit probe vehicle data will only continue to grow due to the predicted increase in connected and automated vehicle (CAV) adoption and the increased availability of high-speed communication infrastructure. Probe vehicle data has also been used broadly for estimating traffic signal timings. In [3], the authors use statistical patterns in probe data from public buses in San Francisco, California, USA, to estimate the cycle times, the duration of red times, and the green start times for fixed-timed traffic lights. Yu and Lu estimated cycle length for fixed-time intersections using low-frequency taxi trajectory data [11].
They formulate the cycle length estimation problem as a general approximate greatest common divisor (AGCD) problem and solve it with a most frequent AGCD algorithm. The work in [12] presents an optimization-based traffic signal state estimation algorithm on vehicle speed profiles to determine traffic signal phase schedules. Wang and Jiang [13] determine the cycle length and green times of signalized intersections by analyzing the velocity-time curves of all vehicles going through the intersection. Chuan et al. calculated cycle length and green times using shockwave theory [14]. Finally, [15] and [16] focus on probabilistic prediction of traffic signal timings. To our best knowledge, there are only a few works that study machine learning (ML) methods to estimate signal timing parameters from probe vehicles. Unlike mathematical methods, such as statistical, optimization-based, and probabilistic methods, which require the explicit mathematical representation of the desired problem to solve, ML techniques can derive models from data to produce meaningful insights and predictions [17]. Advances in sensing and communication have led to unprecedentedly large amounts of data, which, when combined with ML's spontaneous model-learning capability, explains the recent surge in ML applications across domains from health care to self-driving cars and traffic control. For example, the work in [18] uses a Bayesian learning approach to determine the cycle length from historic trajectory data, but without estimating the red or green times per phase. In [19], a Random Survival Forest and a Long-Short-Term-Memory (LSTM) neural network are used to predict the residual time per intersection phase; however, they do not estimate the cycle length or the green and red phase times; additionally, these models require vehicle detection data as input. In [20], the authors present a deep-learning-based LSTM methodology to predict actuated traffic lights' switching time from green to red and vice versa, but require signal timing parameters as input rather than estimating them.

In this work, we showcase the potential of ML for estimating pre-timed traffic signal timing by using an Extreme Gradient Boosting (XGBoost) model [21] to determine the cycle length and a dense neural network model [22] to estimate the red times from probe vehicle data. We choose XGBoost for estimating cycle length because it has been known to perform well on tabulated data sets [23]. For predicting the red times, we tried several simpler models, such as XGBoost, Random Forest, and Logistic Regression, but none performed as well as the DNN due to the feature complexity. Traffic signals' green times are obtained by subtracting the red times from the cycle length for each intersection's phase. The yellow times are counted as part of the green times. Hence, our paper's contributions can be summarized as follows:

* Development of a novel ML-based method for estimating pre-timed traffic signal timing parameters, including cycle length, red times, and green times, that is applicable to any location that has high-fidelity vehicle probe data.
* Demonstration of the developed ML-based method on a sizeable realistic network with 39 intersections, with results showing an error of less than 0.56 sec for cycle length, and red time predictions within 7.2 sec error on average.
## 3 Methodology

### Simulation Description

In order to train and evaluate our ML models, we use simulated probe data from a realistic road network consisting of 39 intersections in the city of Chattanooga in Tennessee, USA, shown in Figure 1. The network geometry was validated using satellite images of the real-world network, and its demand was calibrated using historic, link-level traffic volume estimates [8], as shown in Figure 2. The City of Chattanooga's department of transportation also provided signal timing information for the 39 intersections, which served as the ground truth for our models. We used SUMO [5] to run our simulation for a day. SUMO gives us the advantage of controlling and observing all aspects of the traffic and signal timings. This eliminates any uncertainty that might be observed in the field when using signal timing sheets that might be out of date, or data that is difficult to process because of varying formats. Additionally, SUMO provides a second-by-second record of vehicle movements via the FCD (floating car data) listener. Thus, we could store all vehicle trajectories in simulation and their speeds for each simulation run. We also ran simulations with different percentiles of the traffic demand in Figure 2 (for example, we ran simulations with 50% of the hourly demand) to simulate demand variability. After running the simulations with various demand levels, we recorded FCD data for up to 50% of the vehicles. We then aggregated each trip and filtered only trips that served our purpose of estimating cycle lengths, red times, and green times.

Figure 1: Road network in Chattanooga, Tennessee, with 39 intersections used for traffic signal timing estimation

Figure 2: Hourly historic, link-level, traffic volume estimates in vehicles per hour (vph) for the area of interest in Chattanooga, Tennessee, used for demand calibration

### Data Processing

To process the data for feature engineering, we used the FCD trajectory data as inputs and the Traffic Light Scheme settings (tls) data as our target variable. The first step of processing the FCD data was to create a bounding box around each signalized intersection and extract all the trajectories that passed within the bounding box. We set the bounding box to be 500 feet from the center of the intersection. This gave us enough room to develop aggregated metrics of each trajectory journeying through the intersection. After extracting the data points around each intersection, we then kept only trips that stopped for any given length of time before the intersection and traversed through the center of the intersection, with any vehicle movement and approach. In the top panel of Figure 3, we see one trip that approaches an intersection and stops for a duration of approximately 48 seconds, shown by the shaded area in the top graphs. We can also see the way-points in the bottom panel showing the deceleration in red and acceleration in green of the vehicle before, through, and after the center of the signalized intersection. We filtered trips using four criteria. These criteria were originally built using real-world probe trajectories. Thus, this method translates easily to real-world probe data for future implementation. The first criterion is to find a pattern of deceleration, zero speed, and acceleration in a vehicle trip. We select this pattern because if the vehicle never stops, we are unable to accurately estimate the initial onset of a green light given by a vehicle accelerating from zero.
Thus, if a vehicle just passes through a light, we can only say the signal was green or yellow for the few seconds that the vehicle approaches and passes through the center of the intersection. This would not be sufficient information if we assume a low penetration rate of probe vehicles. The second criterion is based on the sparsity of trace way-points. For the simulation, this criterion did not matter since all data was recorded without sensing error. However, real-world data contain trips with missing way-points. Thus, we dropped any vehicle trip that was missing data for a period of 10 seconds or more. Third, we wanted to exclude any trips that were within 500 feet of the intersection but did not pass through the center of the intersection. In the real world, vehicles may be visiting a location at the corner of the intersection, such as a gas station or fast food restaurant, but never actually traverse the intersection. Thus, we only include vehicles that have a 50 feet minimum distance away from the center of the intersection. You can see in the top panel of Figure 3, on the Distance graph, that the minimum distance can be easily calculated for each trip. The last criterion we filter on is the total duration of a trip. Considering that the majority of signalized intersections have a cycle length under 120 seconds, we exclude any trip with a duration of over 2 minutes around the intersection. With real-world probes, we came across some trips where the vehicle turned around after passing the intersection and passed back through the light. These types of trips were rare but added complexity and noise when extracting the acceleration start times. We use acceleration start times to estimate cycle lengths since, given enough vehicle data, there will be a periodicity between consecutive green lights if the signal timings are fixed. To get the acceleration start times, we use a simple algorithm that looks at each vehicle trip and finds the first positive change in speed after the vehicle has stopped. This timestamp is then recorded and assigned to an intersection and a phase so that it can be later matched to the ground truth cycle length for a given hour.

Figure 3: Vehicle trajectory near an intersection, including a speed profile, a heading profile, a distance profile, and, at the bottom, the actual way points of the trajectory

### Cycle Length Feature Engineering

To create ML features from the acceleration start times, we hypothesized that the largest-magnitude Fourier frequencies from the Fourier transform of the distribution of acceleration start times would be a good predictor of cycle lengths. This is because the change in signal phases throughout a cycle should cause vehicles to start accelerating at periodic intervals. To create these Fourier input features for the XGBoost model, we first group the acceleration start times by the intersection they were located at, the signal phase of the vehicle trajectory, and the time of day. This grouping ensures that all acceleration start times come from vehicles passing through a specific intersection with the same target cycle length. After binning, we perform the following to create the input features:

1. Bin all start times within the first grouping into 1-hour time windows.
2. For each one-hour window, perform the following:
   1. Convert the date-time representation of each start time to the seconds after the first start time within the current time window.
   2. Approximate the distribution of start times within the current time window using a Gaussian Kernel Density Estimator (KDE) with bandwidth 6.
   3. Take the fast Fourier transform (FFT) of the KDE and save the 30 frequencies that had the largest magnitude of their Fourier amplitudes.
3. Repeat for all groupings.

In doing this process, one data point for training and testing the ML model consists of information derived from a 1-hour time window of vehicles moving through a specific intersection. We chose the bandwidth (standard deviation) of the Gaussian KDE to be 6 seconds so that it was large enough to ensure starts within the same cycle blend together. We initially saved the top 30 most significant Fourier frequencies arbitrarily; however, we optimize the number of frequencies used by the model along with the other XGBoost hyperparameters. Finally, we discard all time windows with fewer than two starts, because a window with no starts provides no information, and a window with only one start provides only artificial noise in the Fourier frequencies from the length of the time window and the frequencies that compose the single Gaussian in the KDE. We also optimize and analyze the minimum number of starts per hour as part of a multistage tuning process. Finally, to normalize the inputs, we use the StandardScaler transformation provided by Sci-kit Learn [24].
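To make this pipeline concrete, a minimal sketch of the feature construction follows. The one-hour (3,600 s) window evaluated on a 1 s grid, the removal of the zero-frequency component, and the function name are our illustrative choices; the bandwidth and the number of saved frequencies match the description above.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fourier_features(start_times, n_freq=30, bandwidth=6.0):
    """Map acceleration start times (datetimes) from one 1-hour window to
    the n_freq largest-amplitude Fourier frequencies of their KDE."""
    t0 = min(start_times)
    # Seconds after the first start time within the window.
    t = np.array([(s - t0).total_seconds() for s in start_times])
    grid = np.arange(0.0, 3600.0, 1.0)  # 1 s evaluation grid (assumption)
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(t[:, None])
    density = np.exp(kde.score_samples(grid[:, None]))
    amp = np.fft.rfft(density)
    amp[0] = 0.0  # drop the DC component (our assumption)
    freqs = np.fft.rfftfreq(grid.size, d=1.0)  # cycles per second
    top = np.argsort(np.abs(amp))[::-1][:n_freq]
    return freqs[top]  # feature vector fed to the XGBoost model
```

A periodic signal with period equal to the cycle length concentrates the FFT amplitude at the cycle frequency and its harmonics, which is why the returned frequencies carry the cycle-length information.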
### Cycle Length Training Methods

For training the XGBoost model, we first randomly selected \(20\%\) of the points to set aside as the test set. Then, we randomly divide the training dataset into 5 folds for cross validation (CV), which is used during the optimization process. To optimize the XGBoost hyperparameters, the number of Fourier frequencies, and the minimum starts per hour, we used the following multistage optimization process:

1. Optimize the hyperparameters and number of Fourier frequencies using Bayesian Optimization from the bayes_opt package in Python [25] with 10 initialization points and 50 iterations.
2. Compute the CV predictions using the optimized hyperparameters and use them to compute the MAE of all points with more than \(c\) acceleration starts for \(c=2,3,...,249,250\).
3. Choose the optimal cutoff to be the minimum value of \(c\) such that the associated fraction of CV predictions with an error less than 2 seconds is at least \(95\%\).
4. Down-select the training and testing data to only include points with \(>c\) acceleration starts and re-optimize the hyperparameters using the same Bayesian Optimization scheme.

### Red Times Feature Engineering

To predict the red times, we start with the vehicle stop durations instead of the acceleration start times used for cycle estimation. We bin the vehicle stop times by intersection, direction of travel (North, South, East, West), and time of day (AM, PM, Off Peak). With this binning strategy, all vehicle stop times in a bin correspond to the same unique phase and target red times, which means we can incorporate distributional information about the stop times into the input features. Next, we perform up-sampling to improve the granularity of the examples and evenly weight all target values in the training process. To do this, we sample 50 stop times with replacement from each bin to create a grouping, repeating this 40 times per bin.
The sampling ensures empirical cumulative distribution functions (described below) were created from 50 stop times on average, and the 40 repetitions present distinct permutations of the data to the network for training and validation. This is useful to improve network interpolation between training points. We chose 40 repetitions to balance the increased number of unique permutations with the training time of the network. We can then encode the distributional information as input features by using empirical quantiles. Let \(s_{i}\) be the stop time of the \(i^{th}\) vehicle, \(\alpha\) be the quantile of interest, and \(n\) be the number of vehicle stop times in the grouping. Then the \(\alpha\) quantile \(Q_{\alpha}\) is given by Equation 1:

\[Q_{\alpha}=\max\left\{s_{i}\;\middle|\;\frac{\#\left\{s_{j}|s_{j}\leq s_{i}\right\}}{n}<\alpha\right\} \tag{1}\]

where \(\{\}\) denotes a set, the notation \(\{s_{i}\;|\;X\}\) means the set of \(s_{i}\) values that satisfy condition \(X\), and \(\#A\) indicates the number of elements in the set \(A\). We use a \(100\times 1\) vector of the \(1\%,2\%,...,99\%,100\%\) quantiles as the input to the neural network. Finally, we also use the StandardScaler transform from Sci-kit Learn [24] to normalize the inputs before training. Note that the \(100\%\) quantile is just the maximum vehicle stop time from that grouping.

### Red Times Training Methods

To predict the red times, we use a Dense Neural Network (DNN). The DNN inputs a \(100\times 1\) vector, outputs a scalar prediction, and has 11 hidden layers of sizes 550, 1000, 900, 800, 700, 600, 500, 400, 300, 200, and 100, respectively. All layers use a Leaky ReLU activation function with \(\alpha=0.01\). To train the model, we divided the data randomly into a 20% validation set and an 80% training set. Then we trained the model with the minibatch size set to 32 until we reached an early-stopping trigger with patience set to 50. The Neural Network was constructed using the Tensorflow [26] and Keras [27] packages in Python.
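A minimal sketch of this architecture follows. The layer widths, activation, batch size, validation split, and early-stopping patience match the description above, while the optimizer and loss function are our assumptions (they are not specified here).

```python
from tensorflow import keras

def build_red_time_model():
    """DNN mapping the 100 empirical stop-time quantiles (Eq. 1)
    to a scalar red-time estimate."""
    model = keras.Sequential([keras.layers.InputLayer(input_shape=(100,))])
    for width in (550, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100):
        model.add(keras.layers.Dense(width))
        model.add(keras.layers.LeakyReLU(alpha=0.01))
    model.add(keras.layers.Dense(1))  # scalar red-time prediction
    model.compile(optimizer="adam", loss="mae")  # assumed loss/optimizer
    return model

model = build_red_time_model()
early_stop = keras.callbacks.EarlyStopping(patience=50,
                                           restore_best_weights=True)
# X: (n, 100) scaled quantile vectors, y: (n,) ground-truth red times
# model.fit(X, y, validation_split=0.2, batch_size=32,
#           epochs=1000, callbacks=[early_stop])
```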
\begin{table} \begin{tabular}{c||l|l} Parameter & Search Space & Optimal Value \\ \hline \hline n\_estimators & \(100-2000\) & \(2000\) \\ learning\_rate & \(10^{-4}-10^{0}\) & \(10^{0.0}\) \\ max\_depth & \(2-20\) & \(17\) \\ gamma & \(10^{-5}-10^{0}\) & \(10^{-5.0}\) \\ min\_child\_weight & \(1-10\) & \(1.499\) \\ subsample & \(0.5-1.0\) & \(1.0\) \\ colsample\_by\_tree & \(0.5-1.0\) & \(1.0\) \\ n\_fourier & \(2-30\) & \(1\) \\ min starts & \(2-250\) & 20 \\ \hline \end{tabular} \end{table} Table 1: Optimized hyperparameters for the XGBoost model.

## 4 Results and Discussion

### Cycle Length Results

After the first round of the multistep optimization routine for the XGBoost model, we determined the minimum start points per hour cutoff to be 20. We chose this cutoff because it is the minimum number of starts per hour for the model to produce 95% of its estimations with an error less than 2 seconds. Table 1 shows the results from the final Bayesian Optimization to optimize the hyperparameters and number of Fourier frequencies. The results for the cycle length model have a mean absolute error (MAE) of less than 0.7 seconds for both the training and testing sets, along with R-squared values of 0.97 and 0.99 for training and testing, respectively. These results are shown in Figure 4, with green being the test set and blue being the training set.

### Red Times and Green Times Results

After optimizing, the neural network model was able to estimate red times within 7.2 seconds on average, with an R-squared score of 0.85. The green times are derived by subtracting the red times from the cycle length for each intersection's phase. We count the yellow time as part of the green time. The accuracy of the green times was very similar to the red times accuracy. In Figure 5 we show parity plots for the red and green times on the left side, with nearly 2,000 test points. We also include a point density to better understand where the plotted points concentrate. Darker shaded areas show the highest concentration of estimates. As you can see, two large concentrations of points cluster around the true diagonals. We can also see the spread of error residuals in the histograms on the right side of Figure 5. This model does moderately well at estimating the red times per phase at each intersection, showing promise for estimating red times from real-world probe data. Since this model uses the distribution of stop times over an arbitrary period of time, we do not conduct any penetration rate analysis, as we can compensate for a low penetration rate by collecting stop time distributional data for a longer period of time.

Figure 4: Cycle Length Parity Plot

## 5 Conclusion

This paper presented an ML approach to estimating signal timing for pre-timed traffic lights from probe data. We used an XGBoost model to estimate the cycle length and a neural network to estimate the red times per phase. The green times are determined by subtracting red times from cycle length durations. Our results demonstrated highly accurate estimations for cycle lengths given only 20 points of data or more, and reasonably accurate estimations for red times and green times. Our work complements existing literature that has significantly focused on using mathematical methods, such as statistical, probabilistic, and optimization-based methods, for determining signal timing parameters from vehicle probe data. Future research directions include doing a detailed sensitivity analysis of the performance of the ML approach given different penetration rates of probe vehicle data, and extending our approach to estimating the signal timing for actuated signals.

## Acknowledgment

This work was authored by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding provided by U.S. Department of Energy Vehicle Technology Office. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes. This work was only made possible through the close cooperation of the City of Chattanooga Department of Transportation.
2305.03092
Curating corpora with classifiers: A case study of clean energy sentiment online
Well curated, large-scale corpora of social media posts containing broad public opinion offer an alternative data source to complement traditional surveys. While surveys are effective at collecting representative samples and are capable of achieving high accuracy, they can be both expensive to run and lag public opinion by days or weeks. Both of these drawbacks could be overcome with a real-time, high volume data stream and fast analysis pipeline. A central challenge in orchestrating such a data pipeline is devising an effective method for rapidly selecting the best corpus of relevant documents for analysis. Querying with keywords alone often includes irrelevant documents that are not easily disambiguated with bag-of-words natural language processing methods. Here, we explore methods of corpus curation to filter irrelevant tweets using pre-trained transformer-based models, fine-tuned for our binary classification task on hand-labeled tweets. We are able to achieve F1 scores of up to 0.95. The low cost and high performance of fine-tuning such a model suggests that our approach could be of broad benefit as a pre-processing step for social media datasets with uncertain corpus boundaries.
Michael V. Arnold, Peter Sheridan Dodds, Christopher M. Danforth
2023-05-04T18:15:45Z
http://arxiv.org/abs/2305.03092v2
# Curating corpora with classifiers: A case study of clean energy sentiment online

###### Abstract

Well curated, large-scale corpora of social media posts containing broad public opinion offer an alternative data source to complement traditional surveys. While surveys are effective at collecting representative samples and are capable of achieving high accuracy, they can be both expensive to run and lag public opinion by days or weeks. Both of these drawbacks could be overcome with a real-time, high volume data stream and fast analysis pipeline. A central challenge in orchestrating such a data pipeline is devising an effective method for rapidly selecting the best corpus of relevant documents for analysis. Querying with keywords alone often includes irrelevant documents that are not easily disambiguated with bag-of-words natural language processing methods. Here, we explore methods of corpus curation to filter irrelevant tweets using pre-trained transformer-based models, fine-tuned for our binary classification task on hand-labeled tweets. We are able to achieve F1 scores of up to 0.95. The low cost and high performance of fine-tuning such a model suggests that our approach could be of broad benefit as a pre-processing step for social media datasets with uncertain corpus boundaries.

## I Introduction

The widespread availability of social media data has resulted in an explosion of social science studies as researchers adjust from data scarcity to abundance in the digital age [1; 2]. The potential for large scale digitized text to help understand human behavior remains immense. Researchers have attempted to quantify myriad social phenomena through changes in language use of societies over time, typically through the now massive collections of digitized books and texts [3] or natively digital large-scale social media datasets [4]. Analysis of social media data promises to supplement traditional polling methods by allowing for rapid, near real-time measurements of public opinion, and for historical studies of public language [5; 6; 7; 8]. Polling remains the gold standard for measuring public opinion where precision matters, such as predicting the outcomes of elections. Where trends in attention or sentiment suffice, social media data can provide insights at dramatically lower costs [9]. However, for targeted studies using social media data, researchers need a principled way to define the potentially arbitrary boundaries of their corpus [10]. When researchers characterize online discourse around a specific topic, a few approaches are available. Each comes with trade-offs, both in the cost of researchers' time and in the resulting precision and recall of the corpus. For some studies a corpus is best defined by a set of relevant users, such as a set of politicians' social media accounts or the set of users following a notable account [11]. Studies that observe the behavior of networked publics often take this user-focused approach [12]. For studies of social media advertising, a list of relevant buyers can be used to define the boundaries, whether politicians or companies [13; 14]. To curate a topic-focused corpus, limited keyword filters can be an effective strategy. Keywords can be used to match a broad cross-section of relevant posts with high precision, but often have low recall [15]. Relevant hashtags can signal a user's intent to join a specific online conversation beyond their immediate social network.
Hashtag-based queries have been used by researchers to construct focused corpora of tweets ranging from sports and music [16; 17], to public health, natural disasters, political activism, and protests [18; 19; 20; 21; 22; 23; 24; 25; 26]. Alternatively, researchers can query for posts with an expansive set of keywords to increase recall at the expense of precision. Researchers can generate such a set of keywords algorithmically, or by asking experts with domain knowledge, or via a combination of the two. Expert-crafted keyword lists have been used by researchers to study topics such as social movements and responses to the COVID-19 pandemic [27; 28; 10; 22]. Other researchers have generated lists of keywords algorithmically, e.g., using Term Frequency - Inverse Document Frequency (TF-IDF) [29] and word embeddings [30], or by comparing the distribution of words in a corpus of interest to a reference corpus and selecting words with high rank-divergence contributions [31; 32; 33; 34; 35]. Regardless of the methods used to choose keywords, continued expansion beyond the most relevant necessarily reduces precision. Researchers can further refine the set of relevant keywords to balance precision and recall, and add complexity to their queries with exclusion terms or Boolean operators to require multiple keywords. The possibilities are endless [36], and reviewers have little information available to decide if the choices made were appropriate. While some topic-focused social media datasets can be well curated with simple heuristics or rules-based classifiers, others could benefit from an alternative paradigm. Here, we argue for a two-step pre-processing pipeline that combines broad, high recall keyword queries with fine-tuned, transformer-based classifiers to increase precision. Our approach can trade the labor costs associated with building rules-based filters for the cost of labeling social media data, which could potentially be further reduced using few-shot learning [37], while still achieving high precision. The tools available for text classification have improved significantly over the past decade. Since the introduction of Word2Vec in 2013 and GloVe in 2014, the natural language processing community has had access to high-quality, global word embeddings [38, 39]. These embeddings are trained vector representations of words from a given corpus of text, enabling word comparisons with distance metrics. However, global embeddings average the representations of words, making them unsuitable for document classification where key terms have multiple meanings. The subsequent development of large pre-trained language models enabled high performance on downstream tasks with relatively little additional computational cost to fine-tune [40, 41]. Such models provide contextual, rather than global, word embeddings. Since 2019, pre-trained language models have become less resource-intensive while improving performance. Knowledge distillation has enabled models like DistilBERT and MiniLM, which retain the performance of full-sized models while requiring significantly less memory and performing inference more rapidly [42, 43]. Smaller, faster models enable researchers with limited resources to adopt these tools for NLP tasks, requiring only a laptop for state-of-the-art performance. Improved pre-training, introduced with MPNet, combines the benefits of masked language modeling (MLM) and permuted language modeling (PLM), better making use of available token and position information [44].
While transformer-based language models provide state-of-the-art performance on natural language processing tasks, they can be difficult to understand and visualize. Using twin and triplet network structures, pre-trained models can be trained to generate semantically meaningful sentence embeddings that can be compared using cosine distances [45]. Through pre-training with contrastive learning on high quality datasets, general purpose sentence embeddings like E5 have become the new state-of-the-art [46]. Text classification still remains a difficult task: existing models are less successful with longer texts [47], and text classification with a large number of classes remains challenging [48]. However, for the specific task of classifying tweets [49] as 'relevant' (R) or 'non-relevant' (NR) to a specific topic--an instance of binary classification--we feel existing models are sufficiently capable. Sophisticated, pre-trained language models are readily accessible to researchers from Hugging Face [50] and can be easily fine-tuned with a limited amount of labeled data [37, 51]. Tools like ChatGPT have been shown to outperform untrained human crowd-workers for zero-shot text classification, while costing an order of magnitude less [52]. As a case study, we examine online language around emission-free energy technologies. In democratic societies the social perception of technologies affects the willingness of governments to extend subsidies, expedite permitting, or regulate competing energy sources, ultimately affecting the energy mix of the grid. Quantifying public attitudes is useful for policy makers to be responsive to public preferences and for science communicators to respond when public opinion does not reflect expert consensus. To quantify public perceptions of energy on social media sites, researchers have used a variety of methods to curate tweet corpora. This could be as simple as querying for a single hashtag. Jain _et al._ choose '#RenewableEnergy' to generate a corpus for a renewable energy classification study [53]. Zhang _et al._ query for tweets containing a list of hashtags, before quantifying overall attention trends and sentiment by energy source [54]. Li _et al._ use a two-phase approach, querying for relevant hashtags before filtering non-relevant tweets with keywords, such as those containing both '#solar' and 'eclipse', with filter keywords built on a trial-and-error approach [55]. Alternatively, Kim _et al._ use keyword phrases, such as 'solar energy' and 'solar panel', to search for relevant tweets, before using RoBERTa to classify sentiment [56]. Vagero _et al._ use a contextual language model to classify sentiment of tweets towards wind power in Norway [57]. Using Reddit, Kim _et al._ study renewable energy discourse by collecting all messages from a particular sub-reddit, a page devoted to a topic, before analyzing a word co-occurrence network [58]. Published studies use a wide range of corpus curation techniques and provide varying levels of justification for each choice. Although we focus on the topic of renewable energy, we hope our methods are broadly applicable to any text-based social media dataset. We structure the remainder of this paper as follows. In the Methods and Data section we present a description of our dataset and discuss the task of relevance classification as it relates to corpus curation. In the Results section, we present case studies for the keywords 'solar', 'wind', and 'nuclear'.
We examine the ambient sentiment time series for each corpus, and compare measurements between the unfiltered, relevant, and non-relevant text. To show the differences in language between these corpora, we present sentiment shift plots [59] and allotaxonographs [31]. Finally, we share concluding remarks and potential future research.

## II Methods and data

We explore the performance of text classifiers powered by contextual sentence embeddings for social media corpus curation through a selection of case studies related to clean energy.

### Description of data sets

In this study, we examine ambient tweet datasets, collections of tweets that are anchored by a single keyword or set of keywords. From Twitter's Decahose API (a random 10% sample of all public tweets), we select tweets containing user-provided locations [14]. We extracted these locations from a free text location field in each user's bio, if the text matched a valid 'city, state' string in the United States [11, 12]. From this selection, we query for tweets that both contain keywords of choice and are classified as English by FastText [13]. We define the results of this query as the unfiltered ambient corpus. To illustrate the utility of our methods, we chose three keywords related to non-fossil fuel energy generating technologies: 'wind', 'solar', and 'nuclear'. Over the study period from 2016 to 2022, these keywords matched 3.43M, 1.39M, and 1.29M tweets in our sub-sample, respectively. In Tab. 1, we show example tweets from each corpus. We binned tweets into windows of two weeks, balancing the desire for large sample sizes for each bin with the need for higher resolution to show short-term dynamics. While the terms of our service agreement with Twitter do not allow us to publish raw tweets, we provide relevant tweet IDs for rehydration.

### Sentence embeddings

To better visualize the results of our classification algorithms, we chose pre-trained language models which had been fine-tuned to perform sentence embeddings. We also considered that vector representations for sentences would better align with our desired abstraction level for the relevance classification task.

### Relevance classification

Our task is classifying whether a post, in its entirety, is relevant to the researcher's chosen topic. Conceptually, this task is related to semantic textual similarity, for which sentence embeddings have achieved state-of-the-art performance [10, 11]. Rather than finding nearest neighbors in a semantic space, we are training a classifier to partition the semantic space into relevant and non-relevant regions. For training, we hand-label a random sample of 1000 matching tweets for each keyword as either 'Relevant' (R) or 'Non-Relevant' (NR) to energy production. We have made tweet IDs and corresponding labels available for the training data, as well as predicted labels for the full data set. We then fine-tune nine models for comparison, based on pre-trained contextual sentence embeddings [10]. We list the performance of these models in Table 2. For each keyword we labeled a random sample of one thousand (1,000) tweets. We choose a train-test split of 67% and 33%. Tweets are limited to a maximum of 280 characters for the duration of our study period, shorter than the minimum truncation length of 256 word pieces for the models we tested.
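As an illustration of this setup, the following is a minimal sketch in Python. Note that the paper fine-tunes the full models; the cheaper frozen-encoder variant below, with a logistic-regression head, is only a simplified stand-in, and the data-loading function is hypothetical.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# texts: tweets matching one keyword; labels: 1 = relevant, 0 = non-relevant
texts, labels = load_labeled_tweets("solar")  # hypothetical loader

encoder = SentenceTransformer("all-mpnet-base-v2")  # contextual sentence embeddings
X = encoder.encode(texts, normalize_embeddings=True)

# 67% / 33% train-test split, as described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.33,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```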
## III Results

### Interpretations of sentence embeddings

We first examine our corpus within a semantically meaningful sentence embedding, shown in Fig. 1. For each tweet, we compute embeddings using all-mpnet-base-v2, a high-performing, general-purpose sentence embedding model based on MPNet. The model is pre-trained to minimize cosine distance between a corpus of 1 billion paired texts, and accessed using the sentence transformers python package [11]. We include embeddings of all three corpora, anchored by the keywords 'solar', 'wind', and 'nuclear', and project onto two dimensions for visualization using Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction [65]. In the 2D projection, semantic distances between words are distorted. Local relationships are preserved, but global position and structure are not. In Fig. 1A, we perform unsupervised clustering using HDBSCAN, a density-based hierarchical clustering algorithm, and color by cluster [66]. Although we cannot share the interactive version of these plots, which allow the individual tweet texts to be read, we can summarize as follows. On the right side, a large red cluster contains tweets that are primarily about solar energy. To the left in light blue, we identify a dense cluster of wind and solar tweets. Nearby in light purple, we find a cluster of wind energy related tweets. The close green cluster contains nuclear energy tweets, with those being closer to the solar and wind tweets more likely to mention renewable energy sources, while those further away only discuss nuclear in isolation. We found the performance of the semantic embedding impressive, but clustering within this embedding was unsuitable for corpus curation. For example, tweets arguing the relative merits of multiple technologies fell into a lower density location in the embedding space, and were classified as outliers by HDBSCAN, though they would clearly be classified as relevant by human raters. In Fig. 1B, we show the results of our three supervised text classifiers, based on MPNet trained for sentence embeddings and fine-tuned on a dataset of 1000 labeled tweets for each keyword. The local positioning of tweets within the embedding reflects similarity in the sentence embedding space. Tweets classified as relevant to clean energy technologies are clustered on the right-hand side, and overlap where they are mentioned together. For paraphrased example tweets within each classification, refer to Tab. 1.

\begin{table} \begin{tabular}{l l l} \hline \hline **Keyword** & **Class** & **Example Tweet** \\ \hline Solar & (R) & The decreasing costs of solar and batteries mean a sustainable future is closer than we think. \\ & (NR) & Looks like there's a solar eclipse down here. The space nerds bought all the hotel rooms. \\ \hline Wind & (R) & At this time of year wind makes up only a fraction of the state's energy generation mix. \\ & (NR) & His mom caught wind of what they were up to and shut down their plans pretty quickly. \\ \hline Nuclear & (R) & Nuclear activists are questioning \#MAYankee's accelerated decommissioning plan. \\ & (NR) & The global nuclear arsenal stands around 10,000 warheads, down from 70,000 at the peak of the Cold War. \\ \hline \hline \end{tabular} \end{table} Table 1: **Paraphrased example tweets for relevant (R) and non-relevant (NR) examples in each case study.** To label the training data, we defined relevant tweets as those which are related to the topic of electricity generation or clean energy. Non-relevant tweets contained the keyword, but were wholly or primarily unrelated.
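Before continuing the tour of Fig. 1, here is a sketch of the projection and clustering step just described. Whether clustering runs on the full embeddings or on the 2D projection is not specified above; the sketch clusters the full embeddings, and the minimum cluster size is an illustrative guess.

```python
import umap      # pip install umap-learn
import hdbscan   # pip install hdbscan

# X: (n_tweets, 768) MPNet sentence embeddings, as in the previous sketch
xy = umap.UMAP(n_components=2, metric="cosine").fit_transform(X)  # Fig. 1 layout
cluster_ids = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(X)  # -1 = outlier
```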
Figure 1: **Embedded tweet distribution plot for the combined datasets.** Using a pre-trained model for semantically meaningful sentence embeddings based on MPNet, we plot the distribution of tweets within this semantic space. In both plots, points are tweets projected into 2D using UMAP for dimensionality reduction [65]. In panel A, we perform density-based, hierarchical clustering using HDBSCAN and color by cluster. In panel B, we color by both the keyword used to query and the classification as relevant or non-relevant to the topic of clean energy. Relevant tweets containing the keywords 'wind', 'solar', and, to a lesser extent, 'nuclear' are relatively close together on the right in the embeddings, while non-relevant tweets are more dispersed.

On the bottom third of the embedding, relevant 'nuclear' tweets smoothly transition into non-relevant tweets, reflective of the occasionally blurry line between nuclear energy and weapons programs. 'Solar' tweets, by contrast, are easily separable. Phrases like 'solar system', 'solar eclipse', and 'solar opposites' (a television sitcom) are common example usages. These are entirely unrelated to solar energy, and the sentence embedding model places them in distinct regions of the semantic space. Relevant 'wind' tweets are also clearly separable from non-relevant tweets, which often contain phrases related to the weather, such as 'wind storm' or 'wind speed', or more rhetorical expressions like 'wind up' or 'second wind'. A number of weather bots regularly report wind speed measurements with a template format changing only speed and location. These tweets become close neighbors in the semantic embedding and, when projected onto two dimensions by UMAP, are split off from the larger connected component and pushed to the outer edge.

### Ambient time series plots

For each case study we compare the text in the relevant corpus to the non-relevant corpus with three figure types. The first are ambient sentiment time series plots, shown in Figs. 2, 3, and 4. By sentiment we broadly mean the semantic differential of good-bad (or positive-negative). In these plots we show dynamic changes in language use for tweets containing the selected anchor keyword over time. On the top panel, we show the number of n-gram tokens with LabMT sentiment scores within each time bin [67]. In the center panel, we plot the ambient sentiment, \(\Phi\), using a dictionary of LabMT sentiment values \(\phi_{\tau}\) for each word \(\tau\). We compute the ambient sentiment as the weighted average

\[\Phi_{\text{avg}}=\sum_{\tau}\phi_{\tau}p_{\tau}, \tag{1}\]

where \(p_{\tau}\) is the probability or normalized frequency of occurrence. Error bars represent the standard deviation of the mean, with \(N\) set conservatively as the number of tweets, rather than the number of tokens. In the lower panel, we plot the standard deviation of ambient sentiment, which could help indicate when the distribution of sentiment is becoming narrower, broader, or even bimodal, indicating polarization. We plot three measurements for three corpora: tweets classified as relevant (R), non-relevant (NR), and the combined dataset (R + NR), with the latter reflecting the measurements we would have obtained without training a classifier.
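A minimal sketch of Eq. (1), together with the weighted standard deviation plotted in the lower panels, is given below; the function name and interface are illustrative, and the LabMT dictionary maps words to their sentiment scores \(\phi_{\tau}\).

```python
import numpy as np
from collections import Counter

def ambient_sentiment(tokens, labmt):
    """Eq. (1): frequency-weighted average LabMT sentiment of one time bin,
    plus the weighted standard deviation. Assumes at least one scored token."""
    counts = Counter(w for w in tokens if w in labmt)  # only scored words
    n = sum(counts.values())
    phi = np.array([labmt[w] for w in counts])
    p = np.array(list(counts.values())) / n            # p_tau
    avg = float(phi @ p)                               # Phi_avg
    std = float(np.sqrt(p @ (phi - avg) ** 2))
    return avg, std, n                                 # n scored tokens in bin
```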
### Lexical calculus: Word shift plots

To examine how the average sentiment differs between the relevant and non-relevant corpora, we present three sentiment shift plots in Fig. 5 [59]. Word shifts allow us to visualize how words individually contribute to differences in average sentiment between two texts, a reference and a comparison text. Words that contribute to the comparison text having a higher sentiment than the reference are shown as having a positive contribution, \(\delta\Phi_{\tau}\). Bars corresponding to words with a higher rated sentiment score than the average of the reference text are colored yellow, or blue if lower. Finally, we rank words by the absolute value of their contribution to the difference in average sentiment, \(\delta\Phi_{\text{avg}}\), giving a list of the top contributing words.

### Allotaxonometry

We further compare language usage using an allotaxonograph in Fig. 6, an interpretable instrument that provides a rank-rank histogram of word usage and a ranked list of rank-turbulence divergence (RTD) contributions from individual words. Being able to compare the 1-gram or 2-gram distributions of two corpora with RTD allows us to extract characteristic words at all scales [31]. To compute RTD, we take each distinct word, \(\tau\), and compute its rank within each corpus, \(r_{\tau,1}\) and \(r_{\tau,2}\). RTD is the sum, over words, of the differences between inverse ranks, scaled with a parameter, \(\alpha\), and normalized to lie between 0 and 1, having the form: \[D_{\alpha}(R_{1}\|R_{2})\propto\sum_{\tau}\left|\frac{1}{\left[r_{\tau,1}\right]^{\alpha}}-\frac{1}{\left[r_{\tau,2}\right]^{\alpha}}\right|^{1/(\alpha+1)}. \tag{2}\] We set \(\alpha=1/4\) for social media corpus comparisons [31].
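A minimal sketch of Eq. (2) follows, using fractional (tie-averaged) frequency ranks; the handling of words absent from one corpus is a simplification of the prescription in the allotaxonometry paper [31], and the normalization constant is omitted.

```python
from collections import Counter

def frequency_ranks(counter):
    """Map each word to its frequency rank (1 = most common; ties averaged)."""
    items = sorted(counter.items(), key=lambda kv: -kv[1])
    out, i = {}, 0
    while i < len(items):
        j = i
        while j < len(items) and items[j][1] == items[i][1]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of positions i+1 .. j
        for word, _ in items[i:j]:
            out[word] = avg_rank
        i = j
    return out

def rank_turbulence_divergence(words1, words2, alpha=1 / 4, absent_rank=1e6):
    """Unnormalized RTD (Eq. (2)); absent words get a large default rank."""
    r1 = frequency_ranks(Counter(words1))
    r2 = frequency_ranks(Counter(words2))
    total = 0.0
    for w in set(r1) | set(r2):
        a, b = r1.get(w, absent_rank), r2.get(w, absent_rank)
        total += abs(a ** -alpha - b ** -alpha) ** (1 / (alpha + 1))
    return total

print(rank_turbulence_divergence("solar power panels".split(),
                                 "solar mph uv gust".split()))
```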
We intend that the following case studies may serve as an example set of procedures and provide diagnostic tools for computational social scientists to adopt this approach to social media corpus curation.

\begin{table} \begin{tabular}{l c c c} \hline \hline & ‘solar’ & ‘wind’ & ‘nuclear’ \\ \hline \% Relevant & 43.7\% & 4.7\% & 16.0\% \\ \hline F1 - MPNet & 0.951 & **0.903** & 0.860 \\ F1 - MiniLM-L12 & 0.933 & 0.839 & 0.879 \\ F1 - MiniLM-L6 & 0.949 & 0.828 & 0.857 \\ F1 - DistilRoberta & **0.956** & **0.903** & 0.857 \\ F1 - paraphrase-MiniLM-L6 & 0.943 & 0.800 & 0.826 \\ F1 - paraphrase-MiniLM-L3 & 0.918 & 0.714 & 0.814 \\ F1 - distiluse-multilingual & 0.929 & 0.759 & **0.912** \\ F1 - e5-base & 0.949 & 0.867 & 0.881 \\ F1 - e5-large & 0.949 & 0.828 & 0.895 \\ \hline \hline \end{tabular} \end{table} Table 2: **Summary statistics and model performance for each of the three case studies.** First, we report the proportion of human labeled tweets that are labeled relevant to clean energy from our thousand tweet subsample. The ‘solar’ corpus is most evenly split, while the ‘wind’ corpus is the most imbalanced. Second, we detail F1 evaluation scores for a range of fine-tuned text classifiers trained on our labeled data. Model performance does not necessarily degrade dramatically for corpora with a small proportion of relevant documents, such as for ‘wind’.

### Solar Energy Case Study

Solar tweets were nearly evenly split, with 47% of the corpus being relevant and 53% being non-relevant by volume of words. The solar tweet corpus also achieved the highest classification performance, with an F1 score of 0.95, as shown in Tab. 2. Of the three case studies, we find the R 'solar' tweet corpus evolves most relative to the corresponding NR corpus. Looking at the sentiment time series in Fig. 2, we see little difference between the ambient sentiment of the R and NR corpora prior to 2019. In May of 2019, NR ambient sentiment, shown in red, sharply falls while the R corpus appears to remain on trend. For the standard deviation of ambient sentiment, which measures the width of the distribution of sentiment scores for each LabMT word in the ambient corpus, we also observe a dramatic increase in 2019. We find that this shift in language use in the NR corpus occurs without a change in query terms, and demonstrates how simple keyword queries can fail. We contend that the process of selecting relevant social media documents to include in a corpus is just as important as the NLP measurement tools used to quantify sentiment. The difference in resulting sentiment measurements, between what would have been measured without a classifier (the R + NR corpus in purple) and the improved measurement after filtering with a classifier (the R corpus in blue), is stark. Looking at only the combined R + NR measurement, researchers could incorrectly conclude that language surrounding 'solar' has decreased in sentiment dramatically since 2019. Focusing on only the R 'solar' sentiment time series, we see clearly that there was in fact no dramatic drop in sentiment around 'solar', and the relevant language around solar remains more positive relative to English language tweets in general. The decrease in observed NR sentiment is related to an influx of weather bots, which provide updates as often as hourly on local weather conditions and contain 'solar' used in the context of measuring current solar radiation. In Fig. 5 we see terms like 'radiation', 'pressure', and 'humidity' contributing to a lower average sentiment for the NR corpus. Examining the rank-turbulence divergence shift for 'solar' from January 2020 to March 2021 in Fig. 6, we can see terms like 'energy', 'power', and 'panels' are much more common in the R corpus, all being among the top 15 most frequently used terms. On the other side of the ledger, we find weather related terms like 'mph', 'uv', 'radiation', and 'gust' to be top words in the NR corpus. We also observe that function words--e.g., 'the', 'to', and 'for'--are more common in the R corpus, skewing the rank-rank histogram to the left. The lack of function words is another result of weather bots dominating in the latter period of our study.

### Wind Energy Case Study

The unclassified 'wind' tweet corpus had the lowest proportion of relevant tweets. Only 5% of the human labeled subset was related to clean energy. The \(n\)-gram 'wind' is used in many different contexts besides energy generation, from casual discussion of today's weather to figurative uses like references to athletes getting their 'second wind' and the anticipatory rotational phrase 'wind up', where 'wind' rhymes with 'kind'. In the top panel of Fig. 3, we see that the number of n-grams in relevant tweets with corresponding sentiment scores is consistently around \(10^{3}\), while the NR corpus contains more than an order of magnitude more text.

Figure 2: **Ambient sentiment time series comparison for relevant (R), non-relevant (NR), and combined tweet corpora, containing the keyword ‘solar’.** In the top panel, we show the number of tokens with LabMT [68] sentiment scores in each corpus on each day. ‘Relevant’ tweets, in blue, have more scored tokens early on, but the number of tokens in ‘non-relevant’ tweets increases in relative proportion over time.
The center panel shows the average sentiment for each corpus, including a measurement of English language tweets as a whole in gray for comparison. Before 2019, the measured sentiment for the two corpora is comparable, but subsequently the mean sentiment of ‘non-relevant’ tweets drops. In the bottom panel we plot the standard deviation of the sentiment measurement, which captures a broader distribution of sentiment scores for ‘non-relevant’ tweets. Without classification filtering, the ambient sentiment measurement would be entirely misleading, appearing as though the sentiment contained in tweets containing the word ‘solar’ dropped dramatically in 2019, when in fact sentiment has only modestly declined.

We found the ambient sentiment of the R 'wind' corpus has been slightly more positive than average language use on Twitter. The NR corpus had distinctly lower sentiment, but was more dynamic, rising from a low of 5.5 in 2016 to 5.9 in 2020. Because the proportion of tweets relevant to energy is so low, the combined sentiment time series measurement is dominated by the NR corpus. The standard deviation of sentiment, \(\sigma\), for the R corpus also increases from around 1.0 in 2016, before leveling off around 1.2, slightly under the NR corpus. The keyword 'wind' could seem a poor choice, given that the vast majority of matching tweets are non-relevant. Under a paradigm of expert-crafted lists of keywords, we would indeed agree such a generously matching term would not be suitable. However, by choosing a potentially ambiguous term, we are able to capture a wider range of users. Those who do not wish to project their thoughts into a global conversation by attaching a hashtag, but are content with discussing among their local network, are still included with this methodology. Also included are users writing informally or relying on the context of a threaded conversation, who might not use a high-precision keyword phrase like 'wind power', 'wind generation', or 'wind energy'. These cases make up a significant proportion of conversation around any given topic; researchers studying more obscure topics could benefit from the increased sample size and temporal resolution of a higher-recall set of keywords.

### Nuclear Energy Case Study

The 'nuclear' case study had the lowest classification performance after fine-tuning, achieving an F1 score of 0.86. The proportion of relevant tweets, 16%, was higher than for the 'wind' corpus. We believe the performance was negatively impacted by the close proximity and overlap of nuclear energy and nuclear weapons topics in the semantic embedding space.

Figure 4: **Ambient sentiment time series comparison for relevant (R), non-relevant (NR), and combined tweet corpora, all containing the keyword ‘nuclear’.** In the top panel, we show the number of tokens with LabMT [68] sentiment scores for each corpus in each two week period. The number of relevant n-grams, in blue, is consistently lower than non-relevant n-grams. The center panel shows the average sentiment for each corpus, including measurement of English language tweets as a whole in gray. We found that R tweets had higher sentiment than NR tweets containing ‘nuclear’, but had much lower sentiment than Twitter as a whole. Sentiment appears relatively stable for both corpora, with periods of higher sentiment around 2017 and 2020-2022 for the R corpus.
In the bottom panel, we plot the standard deviation of the sentiment measurement, which shows a broader distribution of sentiment scores for NR tweets, as well as sentiment for both corpora trending down slightly.

Figure 3: **Ambient sentiment time series comparison for relevant (R), non-relevant (NR), and combined tweet corpora, all containing the keyword ‘wind’.** In the top panel, we show the number of tokens with LabMT sentiment scores for each corpus during each two week period [68]. R tweets, in blue, have more than an order of magnitude fewer tokens per time window over the entire study period. The center panel shows the average sentiment for each corpus, including measurement of English language tweets as a whole in gray for comparison. R ‘wind’ tweets are more positive than Twitter on average early on, but this difference is reduced over time. Because most ‘wind’ tweets are non-relevant, sentiment of the combined corpus closely follows the NR sentiment. In the bottom panel we plot the standard deviation of the sentiment measurement, which captures a broader distribution of sentiment scores for ‘non-relevant’ tweets, as was the case for all case studies we examined. Without classification filtering, the ambient sentiment measurement would have been dominated by NR tweets.

The ambient sentiment time series in Fig. 4 for the R 'nuclear' corpus was much lower than average sentiment on Twitter for the entire study period, but higher than for the NR corpus. It appears that ambient sentiment around R nuclear energy tweets has been increasing, with a higher stable level since fall 2020. We found that the standard deviation of sentiment is also decreasing slightly, though it starts from a much higher level of around 1.7, when compared with wind and solar. In Fig. 5, we can see that the 'nuclear' R corpus's higher sentiment relative to the NR corpus is driven by more positive words like 'power' and 'energy', but also by fewer negative words, like 'war' and 'weapons'. Going against the grain are the word 'nuclear' itself and the term 'waste', both negatively scored words that are used much more frequently in the R corpus relative to the NR corpus.

## IV Concluding Remarks

Disambiguating relevant tweets has been a challenge for researchers, especially when a natural keyword choice has a commonly used homograph [69]. We have demonstrated that text classifiers can be trained on top of pre-trained contextual sentence embeddings, which can accurately encode researcher discretion and infer the relevance of millions of messages on a laptop. Rather than defining the boundaries of a corpus by a set of expert chosen keywords or expert crafted query rules, researchers can look at a sample of data, label messages as relevant as they see fit, and communicate their reasoning directly. Reviewers and skeptical readers would be empowered to make their own judgments of what qualifies as a relevant tweet, by labeling tweets themselves and comparing the resulting text measurements. Classification for social media datasets is not a panacea; Twitter's user base remains a non-representative sample of populations, skewing younger, more male, and more educated [70]. A small proportion of prolific users generate an outsized proportion of text, while most users rarely tweet [71]. Despite these problems, the platform remains a critical source of data on public conversations at the time of writing, with a low barrier to entry compared to traditional media.
Future work could explore better sampling methods for humans labeling tweets, to reduce the amount of labeled data needed to train the text classifier. Sampling messages by shuffling risks oversampling from dense regions of the semantic embedding space: the coder sees repetitive messages that provide little marginal information to the model. This would have negative impacts on the generalizability of the classifier, and we would be skeptical of real-time measurements, as conversation could drift into under-explored regions of the semantic embedding space. Other work could explore the trade-offs between optimizing for high recall and high precision when curating social media datasets, and the impacts on resulting measurements. For online applications of relevance classifiers, such work would be useful in identifying when more training data is needed. Thresholds for training data updates could be determined by measuring changes in language use, both via the rank-turbulence or probability-turbulence divergence [31; 72] between the training corpus and incoming data, and via changes in the distribution of messages within a semantic embedding. Finally, researchers could explore viewing social media datasets as having uncertain boundaries, and running measurements over dataset ensembles to better capture the uncertainty in researcher discretion inherent in corpus curation. Overall, we hope our work here highlights a viable alternative corpus curation method for computational social scientists studying social media datasets. ###### Acknowledgements. The authors are grateful for support furnished by MassMutual and Google, and the computational facilities provided by the Vermont Advanced Computing Center.
2302.08160
The Inadequacy of Shapley Values for Explainability
This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions. Concretely, this paper demonstrates that there exist classifiers, and associated predictions, for which the relative importance of features determined by the Shapley values will incorrectly assign more importance to features that are provably irrelevant for the prediction, and less importance to features that are provably relevant for the prediction. The paper also argues that, given recent complexity results, the existence of efficient algorithms for the computation of rigorous feature attribution values in the case of some restricted classes of classifiers should be deemed unlikely at best.
Xuanxiang Huang, Joao Marques-Silva
2023-02-16T09:19:14Z
http://arxiv.org/abs/2302.08160v1
# The Inadequacy of Shapley Values for Explainability ###### Abstract This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions. Concretely, this paper demonstrates that there exist classifiers, and associated predictions, for which the relative importance of features determined by the Shapley values will incorrectly assign more importance to features that are provably irrelevant for the prediction, and less importance to features that are provably relevant for the prediction. The paper also argues that, given recent complexity results, the existence of efficient algorithms for the computation of rigorous feature attribution values in the case of some restricted classes of classifiers should be deemed unlikely at best. ## 1 Introduction Motivated by the widespread adoption of machine learning (ML) in an ever-increasing range of domains, explainable AI (XAI) is becoming critical, both to build trust and to validate ML models [42; 87; 76]. Some of the best-known methods of explainability can be broadly organized into two families: those based on _feature attribution_ and those based on _feature selection_. **Feature selection.** These methods identify sets of features (i.e. an explanation) relevant for a prediction. One solution for feature selection is model-agnostic, and is exemplified by tools such as Anchors [84]. Another solution is model-based and can be related with logic-based abduction [72; 69]. A well-known concept in logic-based abduction is _relevancy_ [32], i.e. whether a hypothesis (which represents a feature in the case of explainability) is included in some irreducible explanation. A feature that is not included in any irreducible explanation is deemed _irrelevant_. **Feature attribution & Shapley values.** These methods assign an _importance_ to each feature. Well-known examples include LIME [83] and SHAP [66]. For neural networks, dedicated methods have also been proposed [42; 87; 76]. SHAP is arguably among the most established XAI feature-attribution methods, being based on computing an approximation of Shapley values. Shapley values were originally proposed in the context of game theory [90], and find widespread use [85; 102]. For more than a decade, Shapley values have been employed with the purpose of explaining the predictions of ML models, e.g. [93; 94; 27; 66; 21; 65; 74; 23; 36; 22; 35; 91; 58; 88; 100; 5; 17; 41; 4; 97]. In these settings, Shapley values often represent the relative importance of features [86]. However, exact computation of Shapley values is considered unrealistic in practice, and so a number of different approaches have been proposed for their approximate computation. The computational complexity of computing exact Shapley values for explainability has been studied recently [29; 30], confirming in theory what in practice was already assumed. Tractable cases have also been identified in recent work [8; 7]. **SHAP vs. exact Shapley values.** The original motivation for this work was to assess the rigor of SHAP [66] when compared with the exact computation of Shapley values [90], as defined in SHAP's original work [66]. Although it is in general unrealistic to compute exact Shapley values, recent work proposed two algorithms for computing the exact Shapley values in the case of deterministic decomposable Boolean circuits (which include binary decision trees) [8, 7]1.
(Deterministic decomposable Boolean circuits are represented as d-DNNF formulas [26], and so we will use the acronym d-DNNF throughout.) Thus, the initial goal of this work was to compare the scores obtained with SHAP against those obtained by exact computation of Shapley values2. Footnote 1: One alternative had been proposed in earlier work [65], but the proposed algorithm has been shown to be unsound [29, 30]. Footnote 2: Throughout the paper, the approximate results obtained with the SHAP tool [66] will be referred to as _SHAP_’s results, whereas the exact computation of Shapley values for explainability, studied in [8, 7] but proposed in earlier work [66], will be referred to as _exact Shapley values_. **Explainability is not SHAP's game.** Perhaps unsurprisingly, the results of SHAP essentially never matched the Shapley values obtained with exact computation. This may somehow be expected, because the goal of SHAP is to measure the relative importance of features and not so much to compute the exact value of feature importance. As a result, we analyzed the orders of feature importance produced by SHAP and by exact computation. Somewhat more surprisingly, we observed that the obtained orders almost never match. The conclusion to draw is that the feature attributions computed by SHAP accurately capture neither the exact Shapley values nor the relative order of features imposed by the exact Shapley values. Therefore, our experiments demonstrate that, as a tool for measuring feature importance with a measure that relates with (exact) Shapley values, SHAP is flawed. The limitations of SHAP are analyzed in Section 7.1. **Exact Shapley values attribute misleading feature importance.** Since SHAP ought not be used for assessing feature attribution, we considered exploiting rigorous feature selection approaches, for listing _all_ the explanations of a prediction, thereby obtaining qualitative information about the relative importance of features for a prediction. Concretely, we considered abductive explanations [72]. However, during this process, we observed that irrelevant features, i.e. features that could be shown _not_ to be included in _any_ explanation (among those based on feature selection), would have non-zero Shapley values, i.e. those features would be deemed of _some_ importance. Even more unsettling, we observed that it could also happen that relevant features, i.e. features that were used in some explanation(s), would have a Shapley value of zero, i.e. those features would be deemed of _no_ importance. With a more concerted effort, we were able to discover other pitfalls of Shapley values, and to establish that these pitfalls are not at all uncommon. These additional pitfalls include (a) uncovering instances for which at least one relevant feature has a Shapley value of zero and at least one irrelevant feature has a non-zero Shapley value; and (b) uncovering instances where there exist irrelevant features significantly better ranked than relevant features according to the order of features determined by their Shapley values. The issues above indicate that exact Shapley values for explainability, as proposed originally in SHAP [66], and as computed exactly (for d-DNNFs) in more recent work [8, 7], do not reflect the effective relevancy of features for explanations, and can incorrectly assign top-ranked importance to features that are irrelevant for a prediction, and assign low-ranked importance to features that are relevant for a prediction.
Another consequence of these observations is that SHAP and its variants are not only flawed in the results they produce, but they are also flawed in their core assumptions3. The experimental confirmation of the limitations of exact Shapley values is analyzed in Section 7.2. Footnote 3: Naturally, a flawed approximation of a flawed concept of feature attribution offers no guarantees whatsoever of quality of approximating feature attribution. Finally, we argue that any alternative definition of Shapley values that _respects_ feature (ir)relevancy, if it were to exist, could not be computed in polynomial time in the case of d-DNNFs, unless \(\mathrm{P}=\mathrm{NP}\). **An alternative measure of feature importance.** Given the inadequacy of Shapley values for explainability, the paper proposes a simple measure of feature importance which respects feature (ir)relevancy, i.e. irrelevant features are assigned a score of zero, and relevant features are assigned a non-zero score. The measure is based on the enumeration of all explanations based on feature selection. This means that for complex ML models, the proposed measure will be difficult to compute. **Additional contributions.** Despite being empirically motivated, the paper outlines a principled approach for uncovering a number of pitfalls with the use of Shapley values in explainable AI (XAI). To implement such a principled approach, and to obtain the results outlined above, we devised dedicated algorithms for computing exact Shapley values for any (discrete) function represented by a truth table, and also for deciding feature relevancy/irrelevancy. The proposed algorithms run in polynomial time in the size of the truth table. **Organization.** The paper is organized as follows. Section 2 introduces the definitions and notation used throughout the paper. Section 3 develops alternative representations of abductive explanations, which serve to relate exact Shapley values with these explanations. Section 4 uncovers a number of links between the definition of Shapley values and abductive explanations, and investigates a number of possible issues that Shapley values may exhibit, which would confirm the inadequacy of Shapley values for explainability. Section 5 outlines the algorithms developed for assessing the existence of such issues. Section 6 briefly outlines an alternative measure for assessing feature importance in classifier predictions. This section also argues that variants of the definition of Shapley values are unlikely to give correct results in polynomial time, unless \(\mathrm{P}=\mathrm{NP}\). Section 7 presents extensive evidence of the issues that Shapley values exhibit, thereby demonstrating the inadequacy of Shapley values for any form of explainability where rigor matters. Section 8 summarizes the paper's contributions. ## 2 Preliminaries The paper assumes basic knowledge of computational complexity, namely the classes \(\mathrm{P}\) and \(\mathrm{NP}\). (A standard reference is [10].) **Classification problems.** Classification problems are defined on a set of features (or attributes) \(\mathcal{F}=\{1,\ldots,m\}\) and a set of classes \(\mathcal{K}=\{c_{1},c_{2},\ldots,c_{K}\}\). Each feature \(i\in\mathcal{F}\) takes values from a domain \(\mathbb{D}_{i}\). In general, domains can be categorical or ordinal. However, for the purposes of this paper, all features are assumed to be Boolean, and so \(\mathbb{D}_{i}=\{0,1\}\). (However, some results are straightforward to generalize to categorical features.)
Feature space is defined as \(\mathbb{F}=\mathbb{D}_{1}\times\mathbb{D}_{2}\times\ldots\times\mathbb{D}_{m}\), which in this paper results in \(\mathbb{F}=\{0,1\}^{m}\). The notation \(\mathbf{x}=(x_{1},\ldots,x_{m})\) denotes an arbitrary point in feature space, where each \(x_{i}\) is a variable taking values from \(\mathbb{D}_{i}\). The set of variables associated with features is \(X=\{x_{1},\ldots,x_{m}\}\). Moreover, the notation \(\mathbf{v}=(v_{1},\ldots,v_{m})\) represents a specific point in feature space, where each \(v_{i}\) is a constant representing one concrete value from \(\mathbb{D}_{i}\). A classifier \(\mathcal{M}\) is characterized by a (non-constant) _classification function_\(\kappa\) that maps feature space \(\mathbb{F}\) into the set of classes \(\mathcal{K}\), i.e. \(\kappa:\mathbb{F}\rightarrow\mathcal{K}\). An _instance_ denotes a pair \((\mathbf{v},c)\), where \(\mathbf{v}\in\mathbb{F}\) and \(c\in\mathcal{K}\), with \(c=\kappa(\mathbf{v})\). Finally, an explanation problem \(\mathcal{E}\) is a tuple \((\mathcal{M},(\mathbf{v},c))\). **Shapley values.** This section provides a brief overview of Shapley values. Shapley values were first introduced by L. Shapley [90] in the context of game theory. Moreover, Shapley values have been proposed for explaining the predictions of ML models in a vast number of works, which include [93; 94; 27; 66; 21; 65; 74; 23; 36; 22; 35; 91; 58; 88; 100; 5; 17; 41; 4; 97], among many others. Shapley values are also discussed in a number of XAI surveys [87; 11; 76; 96; 75], in addition to a recent survey on the uses of Shapley values in machine learning [86]. More importantly, in some applications the use of Shapley values can have a direct impact on human subjects (existing references include [57; 106; 103; 54; 77; 14; 6; 108; 62; 3; 92; 107; 67; 99; 63; 64; 109; 38; 39; 44; 1], among many others). The complexity of computing Shapley values (as proposed in SHAP [66]) has been studied in recent years [8; 29; 7; 30]. Throughout this section, we adapt the notation used in recent work [8; 7], which builds on the work of [66]. Let \(\upsilon:2^{\mathcal{F}}\to 2^{\mathbb{F}}\) be defined by4, Footnote 4: Throughout the paper, we distinguish function and predicate arguments from their parameterizations by separating them with ’;’ instead of ’,’, as in \(\upsilon(\mathcal{S};\mathbf{v})\) or \(\phi(\mathcal{S};\mathcal{M},\mathbf{v})\). \[\upsilon(\mathcal{S};\mathbf{v})=\{\mathbf{x}\in\mathbb{F}\mid\wedge_{i\in \mathcal{S}}x_{i}=v_{i}\} \tag{1}\] i.e. for a given set \(\mathcal{S}\) of features, and parameterized by the point \(\mathbf{v}\) in feature space, \(\upsilon(\mathcal{S};\mathbf{v})\) denotes all the points in feature space that have in common with \(\mathbf{v}\) the values of the features specified by \(\mathcal{S}\). Also, let \(\phi:2^{\mathcal{F}}\rightarrow\mathbb{R}\) be defined by, \[\phi(\mathcal{S};\mathcal{M},\mathbf{v})=\frac{1}{2^{|\mathcal{F}\setminus \mathcal{S}|}}\sum\nolimits_{\mathbf{x}\in\upsilon(\mathcal{S};\mathbf{v})} \kappa(\mathbf{x}) \tag{2}\] Hence, given a set \(\mathcal{S}\) of features, \(\phi(\mathcal{S};\mathcal{M},\mathbf{v})\) represents the average value of the classifier over the points of feature space represented by \(\upsilon(\mathcal{S};\mathbf{v})\). For the purposes of this paper, and in contrast with [8; 7], we solely consider a uniform distribution of the inputs, and so the dependency on the input distribution is not accounted for.
Finally, let \(\mathsf{Sv}:\mathcal{F}\to\mathbb{R}\) be defined by, \[\mathsf{Sv}(i;\mathcal{M},\mathbf{v})=\sum\nolimits_{\mathcal{S} \subseteq(\mathcal{F}\setminus\{i\})}\frac{|\mathcal{S}|!(|\mathcal{F}|-| \mathcal{S}|-1)!}{|\mathcal{F}|!}\left(\phi(\mathcal{S}\cup\{i\};\mathcal{M}, \mathbf{v})-\phi(\mathcal{S};\mathcal{M},\mathbf{v})\right) \tag{3}\] Given an instance \((\mathbf{v},c)\), the Shapley value assigned to each feature measures the _importance_ of that feature for the given prediction. A positive/negative value indicates that the feature can contribute to changing the prediction, whereas a value of 0 indicates no contribution. Moreover, and as our results demonstrate, SHAP never really replicates exact Shapley values. As a result, we focus on the relative order of features imposed by the computed Shapley values. The motivation is that, even if the computed values are not correct (as in the case of SHAP), what matters for a human decision maker is the order of features in terms of their importance for the prediction. **Logic-based explanations.** Given an explanation problem \(\mathcal{E}=(\mathcal{M},(\mathbf{v},c))\), an abductive explanation (AXp) [53, 25, 72, 69] (which is also referred to as a PI-explanation [25]) represents an irreducible set of features which, if fixed to the values dictated by \(\mathbf{v}\), are sufficient for the prediction. Similarly, a contrastive explanation (CXp) represents an irreducible set of features which, if allowed to take any value from their domain (while the other features remain fixed), allows the prediction to change. A weak AXp (resp. weak CXp) is a subset \(\mathcal{X}\) (resp. \(\mathcal{Y}\)) of \(\mathcal{F}\) such that the following predicate holds: \[\mathsf{WAXp}(\mathcal{X};\mathcal{M},\mathbf{v}):=\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i})\right]\rightarrow(\kappa(\mathbf{x})=c) \tag{4}\] \[\mathsf{WCXp}(\mathcal{Y};\mathcal{M},\mathbf{v}):=\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{F}\setminus\mathcal{Y}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq c) \tag{5}\] An AXp (resp. CXp) is then a subset-minimal weak AXp (resp. weak CXp). Unlike methods based on feature selection, feature attribution methods do not select subsets of features relevant to the prediction, but target instead finding an absolute/relative order of feature importance. In contrast, a number of authors have reported pitfalls with the use of SHAP and Shapley values as a measure of feature importance [105; 60; 95; 74; 37; 104; 78; 2; 101; 59; 20]. However, these earlier works do not identify fundamental flaws with the use of Shapley values in explainability. Attempts at addressing those pitfalls include proposals to integrate Shapley values with abductive explanations, as reported in recent work [61].
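To make the WAXp predicate concrete, the following brute-force sketch tests weak AXp's and extracts one AXp by deletion-based minimization, using the classifier of Example 1 in the next section (with 0-based feature indices); it is illustrative only, and is not the algorithmic machinery developed later in the paper.

```python
from itertools import product

def is_weak_axp(S, kappa, v):
    """True iff fixing the features in S to their values in v forces kappa(v)."""
    c = kappa(v)
    for x in product([0, 1], repeat=len(v)):
        if all(x[i] == v[i] for i in S) and kappa(x) != c:
            return False
    return True

def one_axp(kappa, v):
    """Shrink the set of all features to one subset-minimal weak AXp."""
    S = set(range(len(v)))
    for i in sorted(S):
        if is_weak_axp(S - {i}, kappa, v):
            S.remove(i)
    return S

# Classifier of Example 1 (features 1,2,3 become indices 0,1,2).
kappa = lambda x: int((x[0] and x[1]) or ((not x[0]) and x[2]))
print(one_axp(kappa, (1, 0, 1)))  # {0, 1}, i.e. the AXp {1, 2} of the text
```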
## 3 Alternative Representation of Abductive Explanations This section proposes a different representation of abductive explanations, which will serve to highlight the similarities and differences with respect to exact Shapley values. ### Relationship with Prime Implicants of Boolean Functions We relate AXp's with Boolean functions as follows. For each \(i\in\mathcal{F}\), let the Boolean variable \(s_{i}\) be associated with selecting feature \(i\) (for inclusion in some set). Moreover, let \(\pi:\{0,1\}^{m}\to 2^{\mathcal{F}}\) be defined as follows, with \(\mathbf{s}=(s_{1},\ldots,s_{m})\in\{0,1\}^{m}\): \[\pi(\mathbf{s})=\{i\in\mathcal{F}\,|\,s_{i}=1\}\] As observed below, \(s_{i}=1\) denotes that \(x_{i}\) is to be fixed to the value dictated by the \(i^{\text{th}}\) coordinate of \(\mathbf{v}\), whereas \(s_{i}=0\) denotes that \(x_{i}\) is free to take any value from its domain. Moreover, we let \(\mathcal{S}^{\mathbf{s}}=\pi(\mathbf{s})\) and \(\mathcal{U}^{\mathbf{s}}=\mathcal{F}\setminus\mathcal{S}^{\mathbf{s}}\). Finally, we define a Boolean function \(\sigma:\{0,1\}^{m}\to\{0,1\}\), as follows. \[\sigma(\mathbf{s})=1\qquad\equiv\qquad\forall(\mathbf{x}\in\mathbb{F}).\, \bigwedge_{i\in\mathcal{S}^{\mathbf{s}}}(x_{i}=v_{i})\,{\rightarrow}(\kappa( \mathbf{x})=\kappa(\mathbf{v})) \tag{8}\] which can be stated as: \[\sigma(\mathbf{s})=1\qquad\equiv\qquad\bigwedge_{\mathbf{x}\in\upsilon(\mathcal{S}^{ \mathbf{s}};\mathbf{v})}(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{9}\] We can view \(\sigma\) as a _sufficiency function_, i.e. \(\sigma(\mathbf{s})=1\) iff the variables \(s_{i}\) assigned value 1 in \(\mathbf{s}\), and which represent fixed features in \(\mathcal{F}\), are sufficient for predicting \(c\). Observe that \(\sigma\) allows us to look at all the possible subsets (\(\mathcal{S}\)) of fixed features (each uniquely represented by a different \(\mathbf{s}\)) and, if \(\sigma(\mathbf{s})=1\), then we know that the subset of fixed features encoded by \(\mathbf{s}\) suffices for the prediction. **Example 1**.: _We consider an example classifier \(\mathcal{M}\), with \(\mathcal{F}=\{1,2,3\}\), \(\mathbb{F}=\{0,1\}^{3}\), with the classification function \(\kappa(x_{1},x_{2},x_{3})=(x_{1}\wedge x_{2})\vee(\neg x_{1}\wedge x_{3})\). Furthermore, the example instances are \(((1,1,1),1)\) and \(((1,0,1),0)\)._ _The computation of the \(\sigma\) values and the sets of AXp's and CXp's are shown in Table 1. All these results are easy to confirm. As can be observed, all features are relevant for the instance \(((1,1,1),1)\), but feature \(3\) is irrelevant for the instance \(((1,0,1),0)\), since feature 3 is not included in any AXp (or CXp)._ _For the first instance, i.e. \(((1,1,1),1)\), it is plain to conclude that for any feature \(i\) there exists at least one AXp (or CXp) that contains \(i\). Hence, for any feature \(i\), one can provide a human decision maker with an irreducible set of features, sufficient for guaranteeing the prediction, which contains \(i\). Observe that, for this instance, \(\{1,2\}\) and \(\{2,3\}\) are the AXp's, which can be interpreted as the following rules:_ \[\text{IF}\quad(x_{1}=1)\wedge(x_{2}=1)\quad\text{THEN}\quad\kappa(\mathbf{x})=1\] _and,_ \[\text{IF}\quad(x_{2}=1)\wedge(x_{3}=1)\quad\text{THEN}\quad\kappa(\mathbf{x})=1\] _For the second instance, i.e. \(((1,0,1),0)\), one can observe that for any pick \(\mathcal{P}\) of the features to fix, such that \(\mathcal{P}\) is sufficient for the prediction to be 0, there is always an irreducible set of features \(\mathcal{Q}\subseteq\mathcal{P}\) that does not contain feature 3. Similarly, for any pick of features that is sufficient to allow changing the prediction, there is always an irreducible subset that does not contain feature 3.
Invoking Occam's razor, there is no reason whatsoever to propose as an explanation for the instance \(((1,0,1),0)\) a set of features that includes feature 3._ _For this second instance, \(\{1,2\}\) is the only AXp, which can be interpreted as the following rule:_ \[\text{IF}\quad(x_{1}=1)\wedge(x_{2}=0)\quad\text{THEN}\quad\kappa(\mathbf{x})=0\] _As can be observed, \(x_{3}\) plays no role in the prediction._ There is an important relationship between \(\sigma\) and WAXp's, which we state as follows: **Proposition 1**.: _Given an explanation problem \((\mathcal{M},(\mathbf{v},c))\), and the sufficiency function \(\sigma\), then_ \[\sigma(\mathbf{s})=1\quad\equiv\quad\mathsf{WAXp}(\mathcal{S}^{\mathbf{s}}; \mathcal{M},\mathbf{v})\] The following observations are immediate: \(\sigma(1,\ldots,1)=1\) and \(\sigma(0,\ldots,0)=0\) (provided \(\kappa\) is not constant). Moreover, it is plain to conclude the following, **Proposition 2**.: \(\sigma\) _is monotone and up-closed._ From earlier work, we have additional important results: **Proposition 3**.: _Each prime implicant of \(\sigma\) is essential, i.e. it must be included in any DNF representation of \(\sigma\)._ It is well-known that a Boolean function can be uniquely represented by the disjunction of its prime implicants [16, 80, 81, 82, 73, 19, 24]; this representation is known as the Blake Canonical Form (BCF) of the function. Although the number of prime implicants can be unwieldy, we may still be interested in assessing properties of such a function representation. For example, since \(\sigma\) can be viewed as a sufficiency function, its prime implicants capture the sets of features that are minimally sufficient for the prediction. Thus, if a feature \(j\) does not occur in any prime implicant of \(\sigma\), it will not be included in the function's BCF. This is a clear indication of the irrelevancy of the feature for the prediction. ### Feature Relevancy & Essential Variables The concept of _inessential_ variables in Boolean functions has been studied in the past [43, 19, 33, 24]. Building on earlier work [19, 33, 24], we say that a variable \(s_{i}\) is inessential if, \[\forall(s_{1},\ldots,s_{i-1},s_{i+1},\ldots,s_{m}\in\{0,1\}).\\ \sigma(s_{1},\ldots,s_{i-1},0,s_{i+1},\ldots,s_{m})=\sigma(s_{1}, \ldots,s_{i-1},1,s_{i+1},\ldots,s_{m}) \tag{10}\] Inessential variables are also referred to as _irrelevant_ [33], _redundant_ [19], _vacuous_ [19], or _dummy_ [33]. Because of the connection with formal explainability, we will refer to essential (resp. inessential) variables as relevant (resp. irrelevant). Moreover, we underline that the definition of an _irrelevant_ variable is particularly demanding, in that a variable \(i\) is irrelevant only if, for any point in its domain, the value of the function does not depend on the value of \(i\). In the case of the sufficiency function \(\sigma\), this requirement signifies that a feature is irrelevant only if, for any possible pick of the fixed features, the prediction of \(c\) never depends on feature \(i\). (We will see later how relevancy of features can be computed in practice, and investigate its relationship with Shapley values.) Furthermore, [24] proves that: **Proposition 4** (Theorem 1.17 in [24]).: _For a Boolean function \(\sigma\), defined on variables \(S=\{s_{1},\ldots,s_{m}\}\), the following statements are equivalent: 1. _The variable_ \(s_{i}\) _is inessential for_ \(\sigma\)_._ 2. _The variable_ \(s_{i}\) _does not appear in any prime implicant of_ \(\sigma\)_._ 3.
\(\sigma\) _has a DNF representation in which the variable_ \(s_{i}\) _does not appear._ The previous result is a consequence of the uniqueness (and the properties) of the representation of \(\sigma\) by its prime implicants [16; 80; 81; 82; 73; 19]. In terms of the sufficiency function \(\sigma\), an irrelevant variable \(s_{i}\) is such that one can exhibit a DNF representation of \(\sigma\) that does not include \(s_{i}\). Hence, we can represent all the sets of fixed features that are sufficient for the prediction with a DNF that does not include \(s_{i}\). The relationship between essential variables and relevancy in abductive reasoning [89; 34; 32] is apparent, and we have the following result: **Proposition 5**.: _A feature \(i\) is relevant for the instance \((\mathbf{v},c)\) iff \(s_{i}\) is essential for \(\sigma\)._ Furthermore, we can relate recent work on explainability queries, especially related with relevancy [13; 47; 12; 45], with the problem of deciding whether a variable is essential for a Boolean function. Given the results above, what we have established is that we can identify the irrelevant variables of the \(\sigma\) function by deciding the relevant features for abductive explanations. While deciding feature relevancy is in general \(\Sigma^{\mathsf{p}}_{2}\)-complete [47], there exist tractable cases, namely decision trees [47]. We will exploit those tractable cases to efficiently decide feature relevancy. Furthermore, this section also serves to underline that the concepts of relevancy and of inessential variables are pervasive in the study of Boolean functions and logic-based abduction, and that the two concepts can be related. ## 4 Relating Shapley Values with Abductive Explanations This section highlights some connections between Shapley values and abductive explanations. This will then allow us to uncover possible aspects of explainability that Shapley values fail to capture. ### Similarities and Differences Using the function \(\upsilon\) introduced for computing Shapley values, one can define an explainability function \(\xi:2^{\mathcal{F}}\to\{0,1\}\) as follows, \[\xi(\mathcal{S};\mathcal{M},\mathbf{v})=\bigwedge_{\mathbf{x}\in\upsilon( \mathcal{S};\mathbf{v})}\left(\kappa(\mathbf{x})=c\right)\] A consequence of this definition is the following one. **Proposition 6**.: _Given an explanation problem \((\mathcal{M},(\mathbf{v},c))\), then for any \(\mathcal{S}\subseteq\mathcal{F}\), it is the case that_ \[\mathsf{WAXp}(\mathcal{S};\mathcal{M},\mathbf{v}) \leftrightarrow \xi(\mathcal{S};\mathcal{M},\mathbf{v}) \tag{11}\] \[\mathsf{WAXp}(\mathcal{S};\mathcal{M},\mathbf{v}) \rightarrow \left(\phi(\mathcal{S};\mathcal{M},\mathbf{v})=c\right) \tag{12}\] \[\mathsf{WAXp}(\mathcal{S};\mathcal{M},\mathbf{v}) \rightarrow \left(\phi(\mathcal{S}\cup\{i\};\mathcal{M},\mathbf{v})-\phi( \mathcal{S};\mathcal{M},\mathbf{v})\right)=0,\ \ i\in\mathcal{F} \tag{13}\] (Observe that (12) and (13) are fairly easy to prove in the case of Boolean classifiers with \(c=1\).) It should also be noted that weak AXp's represent some of the sets also considered when computing Shapley values, but only those where the prediction remains unchanged. Furthermore, (actual) AXp's are, among those sets, the ones that are subset-minimal. Given the comments above, we can further observe that if \(\mathcal{S}\) is a weak AXp, then the contribution of \(\phi(\mathcal{S};\mathcal{M},\mathbf{v})\) to \(\mathsf{Sv}(i;\mathcal{M},\mathbf{v})\) will be 0.
Hence, \(\mathsf{Sv}(i;\mathcal{M},\mathbf{v})\) will be non-zero only because of sets \(\mathcal{S}\) that are not weak AXp's. Thus, whereas the exact computation of Shapley values requires analyzing all possible subsets \(\mathcal{S}\) of fixed features, and for each such set, it requires computing the average value of the classifier over the non-fixed features \(\mathcal{F}\setminus\mathcal{S}\), finding an AXp requires looking at a linear number of such subsets \(\mathcal{S}\), and for each such subset \(\mathcal{S}\), the goal is to decide whether the predicted value does not change. These observations are aligned with the established computational complexity of computing Shapley values and abductive explanations [8; 29; 7; 30]. ### Framing the Inadequacy of SHAP's Shapley Values This section answers the question: _How can one demonstrate that exact Shapley values do not capture important explainability information?_ To answer this question, we build on logic-based explanations, but we exploit a fundamental property of these explanations, i.e. we will seek to determine whether a feature is in any way usable for a point \(\mathbf{v}\in\mathbb{F}\) to predict \(\kappa(\mathbf{v})=c\). Given an instance \((\mathbf{v},c)\), we will split the features into two sets: the features that are relevant for the prediction and the features that are irrelevant for the prediction, according to the definition of feature relevancy introduced earlier. If Shapley values adequately reflect explainability information about \((\mathbf{v},c)\), it should at least be the case that (i) any irrelevant feature \(i\) is deemed to have no importance for the prediction (i.e. \(\mathsf{Sv}(i)=0\)); and (ii) any relevant feature \(j\) is deemed to have some degree of importance for the prediction (i.e. \(\mathsf{Sv}(j)\neq 0\)). These are basic properties that we would expect Shapley values to respect. However, we will look for additional issues. Overall, we are interested in determining whether there exist Boolean functions for which Shapley values exhibit the following issues: **Q1.** Decide whether there can exist classifiers and instances for which there exists at least one feature \(i\) such that, \[\mathsf{Irrelevant}(i)\wedge(\mathsf{Sv}(i)\neq 0)\] If the answer to Q1 is positive, this means that Shapley values can assign importance to features that are irrelevant for a prediction. (As we clarified earlier, an irrelevant feature does not bear any role whatsoever, over all points in feature space, in changing the prediction given \(\mathbf{v}\).) If the answer is positive, for some function and instance, we say that this is an I1 issue. **Q2.** Decide whether there exist classifiers and instances for which there exists at least one pair of features \(i_{1}\) and \(i_{2}\) such that, \[\mathsf{Irrelevant}(i_{1})\wedge\mathsf{Relevant}(i_{2})\wedge(|\mathsf{Sv}(i_ {1})|>|\mathsf{Sv}(i_{2})|)\] If the answer to Q2 is positive, then we have an unsettling situation for which irrelevant features are deemed more important (in terms of the overall ranking of features by their Shapley values) than relevant features. If the answer is positive, for some function and instance, we say that this is an I2 issue. **Q3.**
Decide whether there exist classifiers and instances for which there exists at least one feature \(i\) such that, \[\mathsf{Relevant}(i)\wedge(\mathsf{Sv}(i)=0)\] If the answer to Q3 is positive, this means that Shapley values can assign no importance to features that are actually important for a prediction. (As we clarified earlier, a relevant feature plays some role, in at least one point in feature space, in changing the prediction given \(\mathbf{v}\).) If the answer is positive, for some function and instance, we say that this is an I3 issue. **Q4.** Decide whether there exist classifiers and instances for which there exists at least one pair of features \(i_{1}\) and \(i_{2}\) such that, \[[\mathsf{Irrelevant}(i_{1})\wedge(\mathsf{Sv}(i_{1})\neq 0)]\wedge[\mathsf{ Relevant}(i_{2})\wedge(\mathsf{Sv}(i_{2})=0)]\] If the answer to Q4 is positive, then we are faced with the rather problematic situation where, in the relative order of features induced by their Shapley values, a relevant feature is deemed of no importance while an irrelevant feature is deemed of some importance. (This would represent a serious blow to whether Shapley values can be trusted for assigning relative importance to features.) If the answer is positive, for some function and instance, we say that this is an I4 issue. The issues above should not be expected when analyzing Shapley values. Indeed, it is widely accepted that Shapley values measure the _influence_ of a feature [93, 94, 66, 8, 29]. Concretely, [93] reads: _"...if a feature has no influence on the prediction it is assigned a contribution of 0."_ But [93] also reads: _"According to the 2nd axiom, if two features values have an identical influence on the prediction they are assigned contributions of equal size. The 3rd axiom says that if a feature has no influence on the prediction it is assigned a contribution of 0."_ (In this last quote, the axioms refer to the axiomatic characterization of Shapley values.) **Basic issues.** Unfortunately, and as shown in the remainder of the paper, for any of the issues listed above, there exist Boolean functions that exhibit one or more of those issues. Furthermore, even without the algorithms described in the next section, it is fairly simple to devise Boolean functions for which issues I1 and I3 occur. **Example 2**.: _With respect to the Boolean function of Example 1, let us consider the second instance, i.e. \(((1,0,1),0)\), for which features 1 and 2 are relevant, and feature 3 is irrelevant. We can now use (3) to conclude that \(\mathsf{Sv}(3;\mathcal{M},(1,0,1))=\nicefrac{1}{8}\), meaning that feature 3 is somewhat important for the prediction. Now, this is problematic, since feature 3 is irrelevant for the instance \(((1,0,1),0)\). Thus, we have uncovered one example of issue I1._ _Moreover, let us consider the first instance, i.e. \(((1,1,1),1)\), for which all features are relevant. As before, we use (3) to conclude that \(\mathsf{Sv}(1;\mathcal{M},(1,1,1))=0\), meaning that feature 1 is not important for the prediction. Again, this is fairly problematic, since feature 1 is relevant for the instance \(((1,1,1),1)\). Thus, we have uncovered one example of issue I3._ _The experiments in Section 7.2 offer a more comprehensive picture of the occurrence of all the issues discussed earlier in this section._
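The values in Example 2 can be verified by directly evaluating (3). The sketch below is a minimal brute-force implementation, adequate only for small numbers of features, and is not the implementation used for the experiments.

```python
from itertools import combinations, product
from math import factorial

def phi(S, kappa, v):
    """Average of kappa over points agreeing with v on the features in S."""
    pts = [x for x in product([0, 1], repeat=len(v))
           if all(x[i] == v[i] for i in S)]
    return sum(kappa(x) for x in pts) / len(pts)

def shapley_value(i, kappa, v):
    """Exact Shapley value of feature i for instance v, per Eq. (3)."""
    m = len(v)
    others = [j for j in range(m) if j != i]
    total = 0.0
    for k in range(m):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(m - k - 1) / factorial(m)
            total += weight * (phi(set(S) | {i}, kappa, v)
                               - phi(set(S), kappa, v))
    return total

kappa = lambda x: int((x[0] and x[1]) or ((not x[0]) and x[2]))
print(shapley_value(2, kappa, (1, 0, 1)))  # 0.125: irrelevant feature 3 (I1)
print(shapley_value(0, kappa, (1, 1, 1)))  # 0.0: relevant feature 1 (I3)
```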
## 5 Uncovering the Inadequacy of Shapley Values for Explainability This section outlines the approach used to search for Boolean functions that answer Q1 to Q4 positively. The first step is to devise a procedure for the systematic enumeration of Boolean functions and all of their possible instances. Furthermore, we need to be able to compute (exact) Shapley values efficiently, and also to decide feature relevancy efficiently. ### Boolean Function Search by Truth Table Enumeration The approach we follow is to construct a truth table for \(m\) variables (i.e. the features of our classifier). Depending on the computing resources available, and in the concrete case of Boolean functions, it is realistic to construct a truth table for a number of variables between 30 and 40. However, we also need to enumerate all the possible Boolean functions of \(m\) variables. The number of such functions is well-known to be \(2^{2^{m}}\). This imposes a practical bound on the value of \(m\). In this paper we studied Boolean functions of 3 and 4 variables, but we also obtained results for specific Boolean functions with up to 10 variables. Moreover, the number of instances to consider is exactly the number of entries in the truth table. So for each Boolean function, out of a total of \(2^{2^{m}}\), we consider \(2^{m}\) instances. ### Computing Shapley Values in Polynomial Time Given a function represented by a truth table, there exist polynomial-time algorithms (in the size of the truth table) for computing the Shapley values of all the features. One can proceed as follows. When computing exact Shapley values, and for each feature, the number of subsets to consider is \(2^{m-1}\), which is polynomial in the size of the truth table. As noted earlier, each subset picks a set of features to fix. For each subset of fixed features, the average value (i.e. the value of \(\phi\)) can be obtained by summing up the values of the function over the points in feature space consistent with the fixed features, and then dividing by the number of such points. Since we have access to the truth table, we can compute each Shapley value in polynomial time (in the size of the truth table). Moreover, over all features, the Shapley values are also computed in polynomial time. The argument sketched above gives the following result. **Proposition 7**.: _For a Boolean function represented by a truth table, there exists a polynomial-time algorithm for computing the Shapley values (as defined in (3))._ ### Deciding Feature Relevancy in Polynomial Time As with Shapley values, for a Boolean function \(\kappa\) represented by a truth table, there exist polynomial-time algorithms for deciding feature relevancy. One can proceed as follows. Given the relationship between feature relevancy and essential variables of Boolean functions, we construct the function \(\sigma\) (see (8) and (9)), starting from the truth-table definition of the Boolean function \(\kappa\) and the chosen instance. As argued earlier, we need to consider \(2^{m}\) possible picks of features, and so we can use the same truth table as the one used for representing the function \(\kappa\), the difference being the computed function. Given the \(\sigma\) function, and a target feature \(i\), we can decide whether \(i\) is essential as follows. We set \(i\) to some value, say 1. We then enumerate all possible entries in the truth table (for \(\sigma\)) with \(i\) fixed. For each entry, we compare its value with the value of the entry for which \(i\) has the value 0. If the values differ, then the feature is relevant. If the feature is not deemed relevant after looking at all the entries in the truth table for \(\sigma\), then the feature is declared irrelevant. The argument sketched above gives the following result. **Proposition 8**.: _For a Boolean function represented by a truth table, there exists a polynomial-time algorithm for deciding relevancy of each feature, and so of all the features._
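For small \(m\), the two procedures above can be condensed into a few lines; the following sketch builds \(\sigma\) and applies the essential-variable test of Proposition 5 by brute force, again as an illustration rather than the implementation used in the experiments.

```python
from itertools import product

def sigma(s, kappa, v):
    """Sufficiency function of Eqs. (8)/(9): 1 iff the features selected by
    s, fixed to their values in v, force the prediction kappa(v)."""
    fixed = [i for i, s_i in enumerate(s) if s_i == 1]
    return int(all(kappa(x) == kappa(v)
                   for x in product([0, 1], repeat=len(v))
                   if all(x[i] == v[i] for i in fixed)))

def relevant(i, kappa, v):
    """Feature i is relevant iff s_i is essential for sigma (Prop. 5)."""
    m = len(v)
    for s in product([0, 1], repeat=m):
        if s[i] == 1:
            s_flipped = s[:i] + (0,) + s[i + 1:]
            if sigma(s, kappa, v) != sigma(s_flipped, kappa, v):
                return True
    return False

kappa = lambda x: int((x[0] and x[1]) or ((not x[0]) and x[2]))
# For instance ((1,0,1),0): features 1 and 2 relevant, feature 3 irrelevant.
print([relevant(i, kappa, (1, 0, 1)) for i in range(3)])  # [True, True, False]
```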
### Two Case Studies Figure 1 depicts two Boolean functions of four variables which display issues I1, I2, I3 and I4. (These two functions were obtained with the algorithms outlined in the previous sections.) It is plain that _any_ of these functions suffices for demonstrating the inadequacy of Shapley values for explainability. Nevertheless, the detailed experiments presented in Section 7 provide extensive additional evidence of such inadequacy. ## 6 An Alternative Measure of Feature Importance One might argue that, by changing the definition of \(\phi\), it would be possible to devise a _fixed_ definition of Shapley values for explainability. This seems unlikely in the case of d-DNNFs. If that were to be the case, then one would be able to compute the Shapley values for a d-DNNF in polynomial time, and then decide the irrelevancy of features having a Shapley value of 0. However, deciding feature relevancy is known to be NP-complete in the case of d-DNNF classifiers [46; 45]. So, either the definition of Shapley values cannot be fixed, or otherwise it is unlikely that it can be computed in polynomial time, unless of course \(\mathrm{P}=\mathrm{NP}\). An alternative measure of feature importance is to enumerate all the AXp's of an explanation problem, and then rank the features by their occurrence in explanations, giving more weight to the smaller explanations. Such a measure respects _feature irrelevancy_, in that irrelevant features will have a score of 0. A drawback of such a measure of feature importance is that it hinges on the enumeration of all abductive explanations, and their number is worst-case exponential. However, complete (or partial) enumeration is possible in some cases, and in those cases the proposed measure of feature importance could be used. The identification of other possible drawbacks is left for future work. Figure 1: Case studies: classifiers \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), each with four features, and instance \(((1,0,0,0),0)\). ## 7 Experiments The experiments are organized in two main parts. The first part, in Section 7.1, highlights the limitations of SHAP in approximating the order of feature importance dictated by the exact computation of Shapley values in the case of d-DNNF's. The second part, in Section 7.2, summarizes the results of searching for functions and/or instances which answer Q1 to Q4 positively. These results encompass the systematic search for Boolean functions defined on \(k\) variables, but also the Boolean functions evaluated in Section 7.2. It should be mentioned that more detailed experiments are included in the Appendix. Namely, Appendix A.1 includes additional results for the comparison between SHAP and Shapley values, whereas Appendix A.2 includes additional results for the assessment of Shapley values in explainability. The tables presented in this section summarize those results. **Experimental setup.** We consider 6 publicly available datasets from the Penn Machine Learning Benchmarks [79], with Boolean features and Boolean classes. From the datasets, d-DNNFs were generated.
To ensure that SHAP and the exact computation of Shapley values are based on the same assumptions, we sample uniformly the d-DNNFs, to produce a (uniformly distributed) dataset which is then used by SHAP. This step is important because abductive explanations implicitly assume a uniform distribution on the inputs, i.e. all inputs are possible and equally likely. To obtain d-DNNF circuits, we first trained Read-once Decision Tree (RODT) models on the given datasets using Orange3 [28] and then mapped the obtained RODTs into d-DNNFs. The _read-once_ property is defined as: _each variable is encountered at most once on each path from the root to a leaf node_. RODTs can be encoded in linear time as d-DNNF circuits [7]. The experiments were performed on a MacBook Pro with a 6-Core Intel Core i7 2.6 GHz processor with 16 GByte RAM, running macOS Ventura. Finally, the algorithms outlined in Section 5 were implemented in Perl, and were similarly run on macOS Ventura.

Footnote 5: The sources are available from the authors.

### SHAP vs. Shapley Values

For comparing SHAP [66] with the exact computation of Shapley values, we use the public distribution of SHAP.

Footnote 6: Available from https://github.com/slundberg/shap.

As observed earlier in the paper, the experiments revealed that SHAP never matched exactly the exact Shapley values. As a result, the experimental evaluation focuses on computing the ranking of features.

Table 2 summarizes the cumulative error distribution for the six datasets considered. For each dataset, we show the number of pairs of SHAP values assigned to features which have an incorrect order according to their exact Shapley values; these are referred to as the wrong pairs, and the total measures the number of comparisons to get the order given by exact Shapley values.

\begin{table}
\begin{tabular}{c r r r r r r r r}
\hline \hline
Dataset & corral & mux6 & xd6 & 3Of9 & mof3710 & par5+5 & Total & Fraction(\%) \\
\hline
\# Err \(=0\) & 0 & 50 & 0 & 63 & 0 & 0 & 113 & 3.53 \\
\# Err \(\leq 2\) & 20 & 64 & 14 & 312 & 0 & 66 & 492 & 15.38 \\
\# Err \(\leq 4\) & 60 & – & 64 & 452 & 0 & 301 & 941 & 29.41 \\
\# Err \(\leq 8\) & 64 & – & 297 & 510 & 200 & 855 & 2254 & 70.44 \\
\# Err \(\leq 12\) & – & – & 434 & 512 & 944 & 1016 & 3034 & 94.81 \\
\# Err \(\leq 16\) & – & – & 504 & – & 1024 & 1024 & 3192 & 99.75 \\
\# Err \(\leq 20\) & – & – & 510 & – & – & – & 3198 & 99.94 \\
\# Err \(\leq 24\) & – & – & 511 & – & – & – & 3199 & 99.97 \\
\# Err \(\leq 32\) & – & – & 512 & – & – & – & 3200 & 100.00 \\
\hline
\# instances & 64 & 64 & 512 & 512 & 1024 & 1024 & 3200 & \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Distribution of errors (i.e. wrong pairs) in the SHAP ranking of feature importance

As can be concluded, for the vast majority of instances, the number of wrong pairs is not zero, and a significant number of wrong pairs can be observed for some instances. Overall, SHAP computed the same order of features as the one computed with exact Shapley values for only 3.53% of the instances. For more than 70% of the instances, the number of wrong pairs exceeds 4. For close to 30% of the instances, the number of wrong pairs exceeds 8. Given the results above, one should not expect SHAP, in practice, to rigorously approximate the order of features imposed by exact Shapley values. Furthermore, and as shown in the next section, even the goal of approximating exact Shapley values may prove inadequate for explainability.
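For reference, pair counts of the kind reported in Table 2 can be computed with a few lines of code. The following sketch counts, for one instance, the feature pairs whose strict order under the SHAP scores disagrees with their order under the exact Shapley values; how ties are handled is an assumption here, as the paper does not spell it out.

```python
from itertools import combinations

def wrong_pairs(shap_scores, exact_svs):
    """Count feature pairs ranked in a different order by SHAP than by
    the exact Shapley values; ties are treated as order information."""
    wrong = 0
    for i, j in combinations(range(len(exact_svs)), 2):
        d_approx = shap_scores[i] - shap_scores[j]
        d_exact = exact_svs[i] - exact_svs[j]
        if d_approx * d_exact < 0 or (d_approx == 0) != (d_exact == 0):
            wrong += 1
    return wrong
```

For a dataset with \(m\) features, the number of comparisons per instance is \(\binom{m}{2}\), which is one reading of the "total" mentioned above.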
### Shapley Values vs. Feature Relevancy

**Boolean functions with four variables.** Table 3(a) summarizes the results over all Boolean functions of 4 variables. As can be readily observed, for almost all Boolean functions of 4 variables (i.e. 99.67% of the functions) there exists at least one instance such that an I1 issue is observed, i.e. an irrelevant feature with non-zero Shapley value. As discussed earlier, one could argue that an irrelevant feature with a non-zero Shapley value would still be acceptable, as long as it is the feature with the smallest score (in absolute value). As can be observed, for 61.75% of the functions, the fact that an irrelevant feature has a non-zero Shapley value (or a relevant feature has a Shapley value of zero) also means that an irrelevant feature is better ranked than a relevant feature. Clearly, this can induce human decision makers in error. Moreover, for 11.9% of the functions there exists at least one instance such that an I3 issue is observed, i.e. a relevant feature with a Shapley value of 0. Finally, for close to 4% of the functions there exists at least one instance such that an I4 issue is observed, i.e. issues I1 and I3 are simultaneously observed for the same instance. Observe that the analysis only focuses on classifying features as relevant or irrelevant, and still a large percentage of the functions exhibit one or more issues. It should also be noted that for functions with more variables, similar observations were made, but in those cases the number of functions considered was only a fraction of the total number of functions.

Table 3: Summary of results. For Table 3(c), % Out of Order denotes the percentage of instances with out-of-order irrelevant features, i.e. there exist some irrelevant feature(s) with a Shapley value (or score) greater than the Shapley value of some relevant feature(s); % \(\exists\) R in Bot\(K\) denotes the percentage of instances with relevant features having some of the \(K\) smallest scores (and which have at least one irrelevant feature that has a larger score); % Maj R in Bot\(K\) denotes the percentage of instances with relevant features representing the majority of the \(K\) smallest scores (and which have at least one irrelevant feature that has a larger score); % \(\exists\) I in Top\(K\) denotes the percentage of instances with irrelevant features having some of the \(K\) largest scores (and which have at least one relevant feature that has a smaller score); % Maj I in Top\(K\) denotes the percentage of instances with irrelevant features representing the majority of the \(K\) largest scores (and which have at least one relevant feature that has a smaller score); \(K\) is 2 for datasets with 6 original features, and 3 for the other datasets.

One additional test that we considered was to identify the functions and the instances for which all the irrelevant features had larger absolute-value scores than all the relevant features. For Boolean functions with 4 variables, we could identify more than 1500 of these cases (out of more than 1 million instances). For example, for the function 0110101111111111 (where the first digit corresponds to the prediction for the point 0000, and the last digit corresponds to the prediction for the point 1111), and for the instance \(((0,0,0,1),1)\), feature 1 is irrelevant and exhibits a Shapley value \(\mathsf{Sv}(1)=-0.17188\). The other features are relevant and exhibit Shapley values \(\mathsf{Sv}(2)=\mathsf{Sv}(3)=\mathsf{Sv}(4)=0.11979\).
As can be observed, the Shapley values of the relevant features are substantially smaller (in absolute value) than the Shapley value of the irrelevant feature.

**Results for d-DNNFs constructed from datasets.** Table 3(b) summarizes the occurrence of issues I1 and I2 over all instances for each of the d-DNNFs obtained from the considered datasets. (For the generated d-DNNFs, we observed neither I3 nor I4 issues, but these could be observed by exhaustive enumeration of the 6-variable Boolean functions.) For each d-DNNF, the total number of I1 issues far exceeds the number of instances. The number of I2 issues is smaller, but not negligible. Moreover, for most of the d-DNNFs, more than 10% of the I1 issues yield I2 issues. For one d-DNNF, this number exceeds 30%.

Table 3(c) summarizes some additional statistics for the datasets and instances considered. As can be observed, and for a very significant number of instances, there exist irrelevant features better ranked than relevant features (i.e. an instantiation of issue I2). For example, and for three of the datasets, more than 50% of the instances exhibit this issue. Moreover, and without exceptions, for all the datasets one can either observe relevant features ranked among the smallest scores, or irrelevant features ranked among the largest scores, or both. Furthermore, for some datasets, and for a non-negligible number of instances, either the relevant features represent the majority of the lowest ranked features, or the irrelevant features represent the majority of the highest ranked features, or both.

Among many other similar examples, we analyze one concrete instance, namely \(((0,0,0,1,0,1,0,0,0),0)\) for the xd6 dataset. In this case, the exact Shapley values give the following ranking (sorted by increasing value): \(\langle 7,3,2,9,8,1,4,6,5\rangle\). However, features 6 and 4 are irrelevant. Thus, there are six _relevant_ features (i.e. \(7,3,2,9,8,1\)) with the _lowest_ Shapley values, and two _irrelevant_ features (i.e. \(4,6\)) ranked among those with the _highest_ Shapley values. Evidently, the information provided by the Shapley values in this case would mislead a human decision maker into looking at irrelevant features, and possibly overlooking relevant features. As confirmed by Table 3(c) in the case of the xd6 dataset, for 25.8% of the instances the majority of the top-ranked features are irrelevant.

**Discussion.** The results in this section offer thorough evidence of the inadequacy of Shapley values for distinguishing between relevant features (which occur in some explanation) and irrelevant features (which are excluded from any irreducible explanation). The experiments also demonstrated that irrelevant features can be assigned Shapley values which incorrectly give those features crucial importance to the prediction. In light of the results presented in this section, we conclude that feature attribution based on exact Shapley values (as well as any other approach that correlates in practice with exact Shapley values) will almost surely mislead human decision makers in some use cases.

## 8 Conclusions

For more than a decade Shapley values have represented one of the most visible approaches for feature attribution in explainability. Almost without exception, and motivated by its computational complexity [29, 8, 7, 30], existing work approximates the computation of exact Shapley values. This means that the adequacy of Shapley values for explainability has not been investigated with rigor.
This paper demonstrates that exact Shapley values can attribute incorrect importance to features. Concretely, the paper demonstrates that there exist functions and instances such that: (i) there exist features that are irrelevant for the prediction, but have non-zero Shapley values; (ii) there exist pairs of features, one relevant and the other irrelevant for the prediction, such that the irrelevant feature is deemed more important given its Shapley value; (iii) there exist features which are relevant for the prediction, but which are deemed as having no importance according to their Shapley value, i.e. a Shapley value of 0; and, finally, (iv) there exist pairs of features such that one is relevant for the prediction, but has no importance according to its Shapley value, and the other is irrelevant for the prediction, but has some importance according to its Shapley value. The conclusion from the results in this paper is that Shapley values are not guaranteed to bear any correlation with the actual relevancy of features for classifiers' predictions.

The significance of our results should be framed in light of the rapid growth of practical uses of explainability methods based on Shapley values, with one concrete example being the medical domain, of which [57, 106, 103, 54, 77, 14, 6, 108, 62, 3, 92, 107, 67, 99, 63, 64, 109, 44, 1] represent a fraction of the many existing examples. Furthermore, and given the results in this paper, the use of Shapley values as a measure of feature importance should be expected to mislead decision makers when assessing the features that impact some prediction. Finally, the paper proposes an alternative measure of feature importance, which respects feature relevancy, and which is expected to be efficient to compute in some settings, e.g. decision trees, when using contrastive explanations as the basis for computing feature importance [47, 55].

**Acknowledgments.** This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program "Investing for the Future - PIA3" under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA "Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence". This work was motivated in part by discussions with several colleagues including L. Bertossi, A. Ignatiev, N. Narodytska, M. Cooper, Y. Izza, R. Passos, J. Planes and N. Asher. JMS also acknowledges the incentive provided by the ERC who, by not funding this research nor a handful of other grant applications between 2012 and 2022, has had a lasting impact in framing the research presented in this paper.
2307.15089
A new algorithm for Subgroup Set Discovery based on Information Gain
Pattern discovery is a machine learning technique that aims to find sets of items, subsequences, or substructures that are present in a dataset with a higher frequency value than a manually set threshold. This process helps to identify recurring patterns or relationships within the data, allowing for valuable insights and knowledge extraction. In this work, we propose Information Gained Subgroup Discovery (IGSD), a new SD algorithm for pattern discovery that combines Information Gain (IG) and Odds Ratio (OR) as a multi-criteria strategy for pattern selection. The algorithm tries to tackle some limitations of state-of-the-art SD algorithms, such as the need for fine-tuning of key parameters for each dataset, the usage of a single hand-set pattern search criterion, the usage of non-overlapping data structures for subgroup space exploration, and the impossibility of searching for patterns by fixing some relevant dataset variables. Thus, we compare the performance of IGSD with two state-of-the-art SD algorithms: FSSD and SSD++. Eleven datasets are assessed using these algorithms. For the performance evaluation, we also propose to complement standard SD measures with IG, OR, and p-value. Obtained results show that the FSSD and SSD++ algorithms provide less reliable patterns and reduced sets of patterns than the IGSD algorithm for all datasets considered. Additionally, IGSD provides better OR values than FSSD and SSD++, stating a higher dependence between patterns and targets. Moreover, patterns obtained for one of the datasets used have been validated by a group of domain experts. Thus, patterns provided by IGSD show better agreement with experts than patterns obtained by the FSSD and SSD++ algorithms. These results demonstrate the suitability of IGSD as a method for pattern discovery and suggest that the inclusion of non-standard SD metrics allows to better evaluate discovered patterns.
Daniel Gómez-Bravo, Aaron García, Guillermo Vigueras, Belén Ríos, Alejandro Rodríguez-González
2023-07-26T21:42:34Z
http://arxiv.org/abs/2307.15089v2
# A new algorithm for Subgroup Set Discovery based on Information Gain

###### Abstract

Pattern discovery is a machine learning technique that aims to find sets of items, subsequences, or substructures that are present in a dataset with a higher frequency value than a manually set threshold. When dealing with sequential data, a frequent subsequence represents a pattern that occurs regularly in the sequence of items. On the other hand, a substructure can take various structural forms, such as subgraphs, subtrees, or sublattices, which can be combined with itemsets or subsequences. If a substructure occurs frequently in a database, it is known as a (frequent) structural pattern. This process helps to identify recurring patterns or relationships within the data, allowing for valuable insights and knowledge extraction. In this work, we propose Information Gained Subgroup Discovery (IGSD), a new SD algorithm for pattern discovery that combines Information Gain and Odds Ratio (OR) as a multi-criteria strategy for pattern selection. The algorithm tries to tackle some limitations of state-of-the-art SD algorithms, such as the need for fine-tuning of key parameters for each dataset, the usage of a single hand-set pattern search criterion, the usage of non-overlapping data structures for subgroup space exploration, and the impossibility of searching for patterns by fixing some relevant dataset variables. Thus, we compare the performance of IGSD with two state-of-the-art SD algorithms: FSSD and SSD++. Performance comparison of FSSD, SSD++ and IGSD is done by finding patterns using these three algorithms in eleven datasets. For the performance evaluation, we also propose to complement standard SD measures with some metrics, like Information Gain, Odds Ratio, and p-value, not typically considered in the SD literature. Obtained results show that the FSSD and SSD++ algorithms provide less reliable patterns, due to a lower confidence value, and also provide reduced sets of patterns when compared with the proposed IGSD algorithm for all datasets considered. Additionally, IGSD provides better OR values than FSSD and SSD++, stating a higher dependence between patterns and targets. Moreover, patterns obtained for one of the datasets used have been validated by a group of domain experts. Thus, patterns provided by IGSD show better agreement with experts than patterns obtained by the FSSD and SSD++ algorithms. The results presented demonstrate the suitability of the proposed IGSD algorithm as a method for pattern discovery and suggest that the inclusion of non-standard SD metrics allows to better evaluate discovered patterns.

keywords: Subgroup Discovery, Pattern Mining, Information Gain

## 1 Introduction

Pattern discovery or pattern mining is a machine learning technique that aims to find sets of items, subsequences, or substructures that are present in a dataset with a higher frequency value than a manually set threshold. In this context, a set of items that frequently appear together in a transaction data set (e.g. milk and bread) is referred to as a frequent itemset. When dealing with sequential data, a frequent subsequence represents a pattern that occurs regularly in the sequence of items. On the other hand, a substructure can take various structural forms, such as subgraphs, subtrees, or sublattices, which can be combined with itemsets or subsequences. If a substructure occurs frequently in a graph database, it is known as a (frequent) structural pattern.
This process helps identify recurring patterns or relationships within the data, allowing for valuable insights and knowledge extraction [1]. Also known as frequent pattern mining, it was initially popularized for market basket analysis, especially in the form of association rule mining. In this way, customer buying habits can be examined by identifying associations between different items that customers place in their shopping baskets. For instance, the analysis may reveal that customers who buy milk are highly likely to purchase cereal during the same shopping trip, and it can further specify which types of cereal are commonly associated with milk purchases [2].

Among machine learning methods for pattern search, Subgroup Discovery (SD) [3] has been used previously to find relevant patterns in datasets. SD is a data mining task that aims to identify and extract interpretable patterns from the data, which exhibit interesting or exceptional characteristics with respect to a specific property of interest. It has been used in several fields, such as clinical applications [4; 5] and technical applications [6], among others. However, several limitations were found due to the low complexity of the rules. Therefore, in this work, we propose the use of a wider range of features from datasets to discover more personalized patterns.

Even if SD has been proposed as a useful technique, the standard version and state-of-the-art implementations [7; 8] present several limitations. First, fine-tuning some key parameters (e.g. _Beam width_) is always necessary for each analyzed dataset. Moreover, regarding the discovery of patterns, they are usually obtained by maximizing some index such as weighted relative accuracy (_WRAcc_). However, pattern complexity is not optimized, and obtained patterns may lack interesting information. Additionally, previous algorithms do not offer the option of fixing some important dataset variables to be present in the discovered patterns, thus reducing the interest or acceptance of patterns by problem domain experts. Furthermore, subgroup lists are used in most recent SD algorithms, and this can pose a problem as some information can be lost in non-overlapping subgroups found in datasets. Finally, the quality of discovered patterns is evaluated based on single-index criteria, and sometimes the selection of evaluation indices is not unified across different SD algorithms.

Domain expert validation is a key point to find interesting conclusions in SD analysis. In the CN2-SD study [9], validation was performed for SD results on a real-life traffic accident analysis dataset. Thus, interpretable and relevant patterns obtained from SD algorithms are necessary. However, the SD literature typically lacks validation of results by a set of experts.

To overcome these issues, in this work we propose a new SD algorithm, InfoGained-SD (IGSD). This algorithm searches for patterns through an optimization that combines information gain [10; 11] and odds ratio [12; 13] metrics. In addition, no fine-tuning regarding _Beam width_ is required by the user. Furthermore, it allows fixing key attributes in discovered patterns by an expert in the field of study, in order to increase acceptance of discovered patterns. Besides, a subgroup set is obtained, so no information is lost for the analyzed input data. Thus, this algorithm is used to examine different datasets in order to find relevant patterns, taking into account characteristics that might be relevant for expert validation.
Multiple datasets with heterogeneous variable types (categorical, numerical, and mixed) are assessed in this study, to prove that our method can be useful in the pattern discovery data mining field, obtaining relevant and interpretable patterns for a domain expert.

## 2 State of the Art

Subgroup Discovery (SD) [3] has been proven as a suitable method for identifying statistical and relevant patterns in datasets. It has been applied to clinical trials, precision medicine, and treatment optimization or disease study [14; 15; 16; 5; 17]. Other applications are found in the bibliography, such as social media analysis [18] and smart electricity meter data [19], among other fields.

Even if SD has been proven as a suitable method for pattern discovery, an ongoing problem with this data mining technique is the difficulty of interpreting or analyzing the results produced, either because of the complexity and the large amount of information or because of their relevancy [20; 21]. In order to reduce the number of results obtained, some solutions have been proposed in the literature, such as ranking and selecting the best \(n\) associations, eliminating associations composed of many features, or discarding associations with a specific measure value below a manual threshold.

Thus, SSD++ [22] and FSSD [7] seem to lead the state of the art of SD algorithms. SSD++ relies on a beam search strategy, a heuristic approach for discovering subgroups in a population. This process, during the exploration phase, looks through combinations of variables until the maximum search depth of the dataset is covered, and stores only a predetermined number of subgroups at each level (_Beam width_), which are the ones having the best heuristic cost (a schematic sketch of this pruning step is given at the end of this section). This means that the same value of _Beam width_ can result in too many or too few patterns, depending on the field of study, preventing the discovery of relevant patterns. On the other hand, the exploration of subgroups in FSSD is done using the DEPTH-FIRST-SEARCH strategy, which does not have the limitation of discarding patterns when exploring all possible combinations; however, it is still necessary to determine the optimal number of patterns to return, and this poses a problem as described before. Therefore, using a manual threshold does not seem to be a suitable technique to find the most relevant patterns, and removing the necessary fine-tuning of key parameters such as _Beam width_ seems to be a well-suited approach in this field. Additionally, even if a correct threshold is manually set, found patterns should represent a balance between the complexity of patterns and dependency on the target variable. Current state-of-the-art approaches do not consider this aspect.

Regarding the exploration strategy of the subgroup search space, both SSD++ and FSSD are based on subgroup lists, which can be defined as the fragmentation of the subgroup population into multiple sections, each of which is represented by a unique group. This feature is not in line with the aim of IGSD, which is to discover how different characteristics may influence each instance in a dataset. Hence, some information may be lost if subgroup lists are used.

Also, validation of obtained patterns is an important aspect in SD, since an expert can evaluate the significance of obtained patterns or the relevant variables that should be present in patterns. Thus, SSD++ and FSSD do not allow specifying a set of key attributes to be present in returned patterns.
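To make the role of the _Beam width_ parameter concrete, the following toy Python sketch implements a generic beam search over conjunctions of selectors; the selector names and the stand-in quality scores are purely illustrative, and a real implementation would plug in a measure such as _WRAcc_.

```python
def beam_search(selectors, quality, beam_width, max_depth):
    """Generic beam search over conjunctions of selectors (a sketch)."""
    beam = [frozenset()]          # level 0: the empty pattern
    discovered = set()
    for _ in range(max_depth):
        level = {p | {s} for p in beam for s in selectors if s not in p}
        ranked = sorted(level, key=quality, reverse=True)
        beam = ranked[:beam_width]      # only beam_width subgroups survive
        discovered.update(beam)
    return sorted(discovered, key=quality, reverse=True)

# Fixed toy scores stand in for a real quality measure such as WRAcc.
scores = {frozenset({"a"}): 0.30, frozenset({"b"}): 0.25, frozenset({"c"}): 0.28,
          frozenset({"a", "b"}): 0.40, frozenset({"a", "c"}): 0.10,
          frozenset({"b", "c"}): 0.45}
best = beam_search({"a", "b", "c"}, lambda p: scores.get(p, 0.0), 1, 2)
# With beam_width=1 only {"a"} survives level 1, so the best pattern
# {"b", "c"} (score 0.45) is never generated: the pruning pitfall above.
```

The example shows why a width that is adequate for one dataset may prune away the most interesting subgroups of another, which is precisely the fine-tuning burden IGSD aims to remove.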
Finally, several performance indices have been used in the literature to compare different SD algorithm executions (such as for SSD++ and FSSD). However, there is a lack of homogeneity in the indices used, and a complete comparison across all the measures should be provided to better analyze the performance of the different approaches.

## 3 Materials and Methods

### Data description

The objective of this study is to evaluate and contrast the proposed IGSD algorithm with existing state-of-the-art algorithms like FSSD and SSD++. To accomplish this, a total of 10 datasets were chosen from the UCI and Mulan repositories, along with the P4Lucat dataset. In Table 1, the datasets are classified into three categories based on their data type, as indicated in the Type column: numeric, nominal, and mixed datasets containing both numeric and nominal data. Furthermore, the "Rows" column shows the number of records in each dataset, the "Targets" column indicates the number of targets for each dataset along with the number of possible values, and the "DataType" column specifies the count of nominal and/or numeric columns present in each dataset.

Furthermore, the P4Lucat and Genbase datasets present more than one target, so it was decided to transform all the possible targets into one unique target, using the combination of the different target values for each dataset. This results in a single target for the P4Lucat dataset with 4 possible values and a single target for the Genbase dataset with 32 possible values. This transformation strategy for handling multi-target datasets and obtaining a single target, together with the One-vs-Rest strategy, is explained later in Section 3.2.1.

### Pattern discovery methods

In this section, we will discuss the methodologies employed for identifying patterns in the aforementioned data. Initially, we will provide an overview of SD and present essential definitions related to this field. Subsequently, we will introduce the IGSD algorithm, which is the proposed method for pattern discovery.

#### 3.2.1 Subgroup Discovery

Subgroup Discovery (SD) is a data mining technique that aims to uncover meaningful associations between variables in relation to a specific property of interest [23]. The literature distinguishes two versions or cultures of SD: Subgroup Identification (SI) and Knowledge Discovery in Databases (KDD) [4]. In this study, the KDD culture is adopted due to its domain-agnostic nature, which allows for the utilization of diverse quality metrics or measures such as coverage, support, unusualness, and more.
| Type | Dataset | Rows | Targets | DataType (nom/num) |
| --- | --- | --- | --- | --- |
| Numeric | Iris | 150 | 1 (3) | 0/4 |
| Numeric | Echo | 108 | 1 (2) | 0/6 |
| Numeric | Heart | 270 | 1 (2) | 0/13 |
| Numeric | Magic | 19020 | 1 (2) | 0/10 |
| Nominal | tic-tac-toe | 958 | 1 (2) | 9/0 |
| Nominal | Vote | 435 | 1 (2) | 16/0 |
| Nominal | P4Lucat | 650 | 2 \(\rightarrow\) 1 (4) | 9/0 |
| Nominal | Genbase | 662 | 27 \(\rightarrow\) 1 (32) | 1186/0 |
| Mixed | Adult | 45222 | 1 (2) | 8/6 |
| Mixed | Nursery | 12960 | 1 (5) | 7/1 |
| Mixed | Breast-cancer | 286 | 1 (2) | 8/1 |

Table 1: Datasets description

By employing these metrics, KDD endeavors to identify statistically significant subgroups that satisfy a given target property. The following set of definitions is presented as a foundational background for key concepts that are common to SD algorithms:

**Dataset**: A dataset (D) can be defined as the set of items \(I=(X,Y)\), where \(X=\{k1-v1,k2-v2,..,kn-vn\}\) represents the conjunction of \(attributes(k)-values(v)\) pairs and \(Y\) the target value selected. The attributes set \((k)\) encompasses all the explanatory variables present in the dataset. The values \((v)\) can be classified into three types: numeric, boolean, and nominal. In the literature, SD can be employed for binary, nominal, and numerical targets, as stated in [22]. SSD++ is capable of handling all types of targets, while FSSD is limited to binary targets. Regarding the IGSD algorithm, numeric targets are handled by transforming them into nominal targets. Moreover, nominal targets will be treated as binary targets employing the One-vs-Rest strategy explained later on.

**Subgroup**: A subgroup (s) refers to a combination (Comb) of properties or features, which are attribute-value pairs that describe a distribution with respect to the Target\({}_{value}\) in a given dataset. Therefore, the properties or features (Comb) must contain a combination that exists in the dataset. Additionally, each attribute-value pair, also known as a selector, consists of an attribute, a condition, and a value. The possible conditions depend on the variable type: numeric variables support greater and less than \(\{\geq\), \(\leq\}\), while binary and categorical variables support equal to \(\{==\}\), i.e. \(\{attr1=="possible\ value"\}\) or \(\{attr1\geq 5\}\). These subgroups can be represented as individual patterns, being regularly defined as:

\[s:Comb\to Target_{value} \tag{1}\]

**Sets and Lists of Subgroups**: Subgroup sets can be described as disjunctions of subgroups, allowing for overlapping between subgroups within the same set. In contrast, subgroup lists do not permit overlapping, meaning that each element covered by a subgroup is not contained within another subgroup. This distinction is crucial when comparing the performance of the FSSD, SSD++, and IGSD algorithms.

**Quality Function**: A quality function \(q:\Omega_{sd}\times\Omega_{E}\rightarrow\mathbb{R}\) is employed to assess the effectiveness of a subgroup description \(sd\) belonging to the set \(\Omega_{sd}\), given a target concept \(t\in\Omega_{E}\), and to rank the discovered subgroups during the search process.
Quality functions are presented here in a general context for subgroup discovery and will be subsequently elaborated upon as descriptive and predictive measures. For binary target variables, various significant quality functions can be defined in the following form:

\[q_{a}=n^{a}\cdot(p-p_{0}),\quad a\in[0,1]\]

Here, \(p\) represents the relative frequency of the target variable within the subgroup, \(p_{0}\) denotes the relative frequency of the target variable in the total population, and \(n\) indicates the size of the subgroup. The parameter \(a\) allows for a trade-off between the increase in the target share \(p-p_{0}\) and the generality \(n\) of the subgroup.

Regarding the target space \(\Omega_{E}\), several scenarios can be found: a binary class target variable; a multi-class target variable (containing more than 2 possible values), resulting in a single-target analysis; and a combination of different target variables as previously defined, resulting in a multi-target problem.

When it comes to multi-class dataset analysis, the literature offers various approaches for handling multi-class problems. For instance, SSD++ executes the SD algorithm for each target and incorporates only subgroups that enhance the information in a subgroup list. In the case of a multi-class problem with more than two classes, it can be transformed into a binary problem by employing the One-vs-Rest (OvR) strategy [24]. Each class under consideration is treated as one class, while the remaining classes are grouped together as another class (not belonging to the class under study). This transformation allows binary target datasets to be analyzed effectively, and it is the procedure employed by IGSD to manage multi-class datasets.

Moreover, more than one variable of interest may be used in the analysis, and thus a multi-target problem appears. These scenarios are explored in SSD++, where a subgroup list model is generated, considering the categorical distribution for each class found within those targets. The solution proposed in this work to manage the multi-target scenario is to generate a new target variable where all target variables and their respective classes are combined so that they are linked through a conjunction operation. This procedure must be done as a preprocessing step of the data before employing the IGSD algorithm. For example, in the P4Lucat dataset we have two binary variables as targets: disease progression-relapse and toxicity. Consequently, the target variable will contain the information from both binary variables, namely \(Progression-Relapse\)=[YES/NO] and \(Toxicity\)=[YES/NO]. This combination results in a new target variable with four distinct classes (a small sketch of both transformations is given below).
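A minimal pandas sketch of the two target transformations just described is given below; the column names are hypothetical, since the actual P4Lucat schema is not shown here.

```python
import pandas as pd

# Hypothetical target columns for the multi-target case (names assumed).
df = pd.DataFrame({"progression_relapse": ["YES", "NO", "YES", "NO"],
                   "toxicity":            ["NO",  "NO", "YES", "YES"]})

# Multi-target handling: conjoin all target columns into a single nominal
# target; here this yields the four classes of the P4Lucat example.
df["target"] = df["progression_relapse"] + "&" + df["toxicity"]

# One-vs-Rest: each class versus the rest yields one binary target.
for cls in df["target"].unique():
    df[f"is_{cls}"] = (df["target"] == cls).astype(int)
```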
The following paragraphs describe the main descriptive measures commonly found in the literature on SD. These measures allow for the evaluation of individual subgroups, enabling the comparison of results across different algorithms:

* Coverage [3]: It measures the percentage of examples covered on average. This can be computed as:
\[Cov(R)=\frac{n(Cond)}{ns}\tag{2}\]
where \(ns\) is the number of total examples and \(n(Cond)\) is the number of examples that satisfy the conditions determined by the antecedent part of the pattern. The average coverage of a subgroup set is computed as:
\[COV=\frac{1}{nR}\sum_{i=1}^{nR}Cov(R_{i})\tag{3}\]
where \(nR\) is the number of induced patterns.

* Confidence [3]: It measures the relative frequency of examples satisfying the complete pattern among those satisfying only the antecedent. This can be computed as:
\[Cnf(R)=\frac{n(Target_{value}Cond)}{n(Cond)}\tag{4}\]
where \(n(Target_{value}Cond)=TP\) is the number of examples that satisfy the conditions and also belong to the value for the target variable in the pattern. The average confidence of a pattern set is computed as:
\[CNF=\frac{1}{nR}\sum_{i=1}^{nR}Cnf(R_{i})\tag{5}\]

* Size: The pattern set size is computed as the number of patterns in the induced pattern set.

* Complexity: It measures the level of information presented in patterns. It is determined as the number of variables contained in the pattern.

* Unusualness [3]: This measure is described as the weighted relative accuracy of a pattern. It can be calculated as:
\[WRAcc(R)=Cov(R)\cdot\left(Cnf(R)-\frac{n(Target_{value})}{ns}\right)\tag{6}\]
The unusualness of a pattern can be described as the balance between its coverage, represented by \(Cov(R)\), and its accuracy gain, denoted by \(Cnf(R)-\frac{n(Target_{value})}{ns}\). The average unusualness of a pattern set can be computed as:
\[WRAcc=\frac{1}{nR}\sum_{i=1}^{nR}WRAcc(R_{i})\tag{7}\]

In addition to the descriptive metrics discussed earlier, predictive measures can also be utilized to evaluate a pattern set, treating a set of subgroup descriptions as a predictive model. Although the primary objective of pattern discovery algorithms is not accuracy optimization, these measures can be employed to compare predictive performance.

* Predictive accuracy [25]: Predictive accuracy refers to the percentage of correctly predicted instances. In the case of a binary classification problem, the accuracy of a pattern set can be computed as:
\[ACC=\frac{TP+TN}{TP+TN+FP+FN}\tag{8}\]
where TP represents true positives, TN denotes true negatives, FP represents false positives, and FN denotes false negatives.

In this paper, we also incorporate quality functions that describe relevant aspects of patterns. One such measure is Information Gain (IG) [10; 11], which quantifies the reduction in entropy or surprise obtained by splitting a dataset based on a specific value of a random variable. It is calculated as follows:

\[IG(D,v)=H(D)-H(D|v)\]

Here, \(IG(D,v)\) represents the information gain for the dataset \(D\) with respect to the variable \(v\), \(H(D)\) is the entropy of the dataset before any change, and \(H(D|v)\) is the conditional entropy of the dataset when the variable \(v\) is added. The entropy of a dataset can be understood in terms of the probability distribution of observations within the dataset belonging to different classes. Thus, the entropy measures the level of uncertainty or randomness in the distribution of classes within the dataset. For example, in a binary classification problem with two classes, where \(p(a)\) is the probability of the first class, the entropy of a data sample can be calculated using the following formula:

\[Entropy=-\big(p(a)\log(p(a))+(1-p(a))\log(1-p(a))\big)\tag{9}\]
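As a rough illustration of how these measures follow from simple counts, consider the sketch below; it assumes a binary target, that \(0<n(Cond)<ns\), and base-2 logarithms (the paper does not state the base).

```python
from math import log2

def binary_entropy(p):
    """Entropy of a binary class distribution, Eq. (9)."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def pattern_metrics(n_cond, tp, n_target, ns):
    """Descriptive measures of Eqs. (2), (4) and (6) from raw counts.

    n_cond:   examples satisfying the antecedent
    tp:       examples satisfying antecedent AND target value
    n_target: examples with the target value in the whole dataset
    ns:       total number of examples
    """
    cov = n_cond / ns                      # Eq. (2)
    cnf = tp / n_cond                      # Eq. (4)
    wracc = cov * (cnf - n_target / ns)    # Eq. (6)
    # IG(D, v) = H(D) - H(D|v), with the conditional entropy weighted by
    # the sizes of the covered and uncovered parts of the dataset.
    h_before = binary_entropy(n_target / ns)
    h_after = (n_cond / ns) * binary_entropy(tp / n_cond) \
        + ((ns - n_cond) / ns) * binary_entropy((n_target - tp) / (ns - n_cond))
    return {"coverage": cov, "confidence": cnf,
            "wracc": wracc, "information_gain": h_before - h_after}
```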
In addition, we also employed the odds ratio (OR) measure [12; 13], which represents the association between an antecedent and an outcome. The OR represents the ratio of the odds of the outcome occurring given a specific antecedent, compared to the odds of the outcome occurring in the absence of that antecedent. In this work, we utilize ORs to compare the relative odds of the occurrence of the outcome of interest based on specific patterns that contain multiple selectors. This measure enables us to evaluate the strength of the association between the antecedent (pattern) and the outcome of interest.

Consequently, odds ratios (ORs) can be utilized to assess whether adding a new selector to a pattern serves as a risk factor for a specific outcome. They also allow for comparing the magnitude of various risk factors associated with that outcome. This comparison helps determine the effectiveness of adding more information to a pattern. In IGSD, ORs are employed as an index to select the most relevant subgroups based on the association between the antecedent and the target. By considering the ORs, IGSD identifies subgroups with higher odds ratios, indicating stronger associations between the antecedent and the target outcome. This selection process helps prioritize the most relevant subgroups in terms of their predictive power and relevance to the target. The odds ratio (OR) can be calculated using the following formula:

\[OR=\frac{TP\cdot TN}{FP\cdot FN}\]

To interpret the OR as an effect size, the transformation of OR into Cohen's \(d\) is proposed in [13]. This transformation makes the interpretation of the OR easier, as it allows any \(OR>6.71\) to be considered as having a similar effect size, regardless of its actual magnitude. In cases where Cohen's \(d\) is not obtained, comparing subgroup sets based solely on mean values of the OR may lead to distorted results. Higher OR values can disproportionately influence the mean, while lower OR values may not receive due consideration. To address this, four intervals are defined:

* \(OR<1.68\) represents a very low effect.
* \(1.68<OR<3.47\) represents a low effect.
* \(3.47<OR<6.71\) represents a moderate effect.
* \(OR>6.71\) represents a high effect.

For ease of numerical representation, a value is assigned to each interval, resulting in the odds ratio range (ORR) being defined from 1 to 4. This allows for a more balanced comparison between subgroups and avoids overemphasizing the impact of extremely high OR values.

Finally, we have employed the p-value as a subgroup filtering criterion, which is calculated using the Chi-Square statistical test [26]. A p-value threshold of 0.05 is commonly used as the standard criterion for statistical significance.
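In code, the OR computation and its mapping onto the 1-4 range might look as follows (a minimal sketch; how zero cells in the contingency table are handled is not specified in the text and is left out here).

```python
def odds_ratio(tp, fp, fn, tn):
    """OR = (TP * TN) / (FP * FN), from a 2x2 contingency table."""
    return (tp * tn) / (fp * fn)

def orr(or_value):
    """Map an odds ratio onto the odds-ratio range (ORR) from 1 to 4."""
    if or_value < 1.68:
        return 1   # very low effect
    if or_value < 3.47:
        return 2   # low effect
    if or_value < 6.71:
        return 3   # moderate effect
    return 4       # high effect
```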
### IGSD algorithm

This section presents IGSD, a pattern discovery algorithm that aims to minimize pattern complexity while simultaneously maximizing the quality of the knowledge derived from discovered patterns. This algorithm combines IG and ORR to identify, on the basis of IG, the attributes with greater relevance and, on the basis of ORR, the set of variable values that have a stronger dependence on a particular target.

As previously stated, the proposed algorithm attempts to overcome some limitations of current SD methods. First, prior algorithms necessitate adjusting key parameters for each dataset being analyzed: parameters like beam width, which affect discovered patterns and control the size of the search space, must be defined for each input dataset. In addition, previous algorithms try to find patterns by maximizing a single index, usually weighted relative accuracy (_WRAcc_), which again necessitates manually setting a threshold for the optimization index for each analyzed dataset. Additionally, previous algorithms explore the subgroup search space by making use of non-overlapping data structures like subgroup lists. Since non-overlapping information in explored subgroups can prevent the discovery of relevant and intriguing patterns, this can be a limitation. Additionally, some crucial dataset variables cannot be fixed to be present in the discovered patterns using previous algorithms. However, because experts in the field may require them to consider a pattern to be useful or interesting, patterns with fixed key variables are an important aspect. Lastly, the quality of discovered patterns is evaluated using a single index, and the evaluation indices chosen by various SD algorithms may not always be consistent.

The IGSD algorithm addresses all of these limitations. Thus, this new strategy employs a dynamic threshold using IG as a single optimization index when searching for subgroups. This threshold will be used to select which selectors will be considered relevant options in each subgroup discovery step. Furthermore, there is no need to manually define an arbitrary value for this threshold. Instead, at each algorithm discovery step, it is dynamically calculated and modified for each explored subgroup. Since the IG threshold dynamically adjusts the size of the search space, fine-tuning the _Beam width_ parameter becomes unnecessary. In addition, IGSD provides a uniform measure output that can be compared to that of other implementations.

Three arguments are needed to start the IGSD algorithm. These arguments can be used to choose between different options for the algorithm and do not require any fine-tuning. The arguments are the maximum depth during the exploration phase (\(d_{max}\)), the condition attributes (\(Cond_{list}\)), and the threshold mode (\(t_{mode}\)). The algorithm will use either the maximum IG threshold or the dynamic IG threshold, which is the default, according to the \(t_{mode}\) variable. The \(d_{max}\) parameter determines the depth of the exploration space, which can also be interpreted as the pattern complexity or the maximum number of selectors that the patterns will have. In addition, the user can specify some dataset variables that must be present in the obtained patterns using the \(Cond_{list}\) parameter.

The algorithm's workflow is depicted in Figure 1, which shows the steps the algorithm takes. Two tasks can be defined: finding interesting associations and removing irrelevant information from associations. First, a dataset and the values of the parameters \(t_{mode}\), \(Cond_{list}\), and \(d_{max}\) are given to the algorithm as input. The first task is then performed, using the IG threshold to eliminate patterns and discover interesting associations. After the first task is completed, the generated patterns are used as input for the second task, which removes irrelevant selectors from these patterns to obtain patterns with a large amount of information and dependencies on the target, while minimizing complexity. This is achieved by relying on the IG and OR measures.

#### 3.3.1 Discovering relevant associations

The first task, finding interesting associations, begins with the calculation of an IG threshold (\(thre\)), used to select the selectors that surpass it and contribute the most information to the problem. The IG threshold is computed using Equation 10:

\[thre=\sqrt{\frac{n\sum_{i=1}^{n}x_{i}^{2}-(\sum_{i=1}^{n}x_{i})^{2}}{n(n-1)}}\tag{10}\]

where the \(n\) term indicates the total number of selectors under consideration, and the \(x_{i}\) term indicates the IG value of a particular selector. As a result, the IG threshold is computed among all of the possible selectors, for each subgroup at each exploration step.
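Note that Equation 10 is simply the sample standard deviation of the candidate selectors' IG values, so the threshold adapts to how spread out those values are. A minimal sketch (assuming at least two candidate selectors):

```python
from math import sqrt

def dynamic_ig_threshold(ig_values):
    """Dynamic IG threshold of Eq. (10): the sample standard deviation
    of the candidate selectors' IG values (equivalent to statistics.stdev)."""
    n = len(ig_values)
    s1, s2 = sum(ig_values), sum(x * x for x in ig_values)
    return sqrt((n * s2 - s1 * s1) / (n * (n - 1)))

# Illustrative IG values for three candidate selectors.
cands = {"sel1": 0.42, "sel2": 0.05, "sel3": 0.38}
thre = dynamic_ig_threshold(list(cands.values()))       # ~0.203
kept = [s for s, ig in cands.items() if ig >= thre]     # ['sel1', 'sel3']
```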
Figure 1: Algorithm workflow

```
Data: Dataset D, maximum depth d_max, threshold mode t_mode,
      condition attributes Cond_list
1  M <- filterByThreshold(all selectors in D, t_mode);
2  for i <- 2 to d_max do
3      M_aux <- [];
4      for j <- 0 to len(M) do
5          cands <- get selector candidates which contain the attributes
                    presented in Cond_list
                    ([patterns with length = i with parent node = M[j]]);
6          final_cands <- filterByThreshold(cands, t_mode);
7          M_aux <- M_aux + final_cands;
8      end for
9      M <- M_aux;
10 end for
11 R <- [];
12 for k <- 0 to len(M) do
13     s <- calculate_optimal_cut(M[k]);
14     R <- R + s;
15 end for
16 return R;
```
**Algorithm 1**: InfoGained SD Algorithm

Algorithm 1 shows the steps performed during the interesting association discovery phase. In line 1 of Algorithm 1, the variable \(M\) will contain subgroups with one selector, i.e., of length 1, with an IG value higher than or equal to an IG threshold. Depending on the parameter \(t_{mode}\), this IG threshold will be either the maximum IG value of all the subgroups with one selector (\(t_{mode}\)='maximum') or the value computed using Equation 10 (\(t_{mode}\)='dynamic'). Subgroups are constructed in an iterative process in lines 2 to 10, adding, at each step, selectors with IG values equal to or greater than \(thre\) (Equation 10). Each pattern contained in \(M\) obtained in line 1 will serve as the basis for this iterative process. Consequently, in line 5, for each pattern \(j\) in \(M\), another selector is added to pattern \(M[j]\), expanding its length by 1 up to a total pattern length of \(i\). Furthermore, line 5 in Algorithm 1 stores in the \(cands\) variable the new patterns of length \(i\) that contain the attributes specified in the user-provided argument \(Cond_{list}\), if it is not empty. Then, in line 6, the patterns stored in the \(cands\) variable are filtered by IG value, computing the dynamic threshold according to the argument \(t_{mode}\), as in line 1 of Algorithm 1. This iterative process will continue until the \(d_{max}\) parameter is reached.

In addition, using \(d_{max}=2\) as an illustration, Fig. 2 provides a better understanding of how associations are constructed. As can be seen, in the Iteration = 1 schema, an IG threshold is computed utilizing the IG values of the available selectors, from Selector1 to Selector6. Selectors 1 and 3 will be chosen to build the patterns (Pattern1 and Pattern2) in this first iteration because they exceed the threshold. From here, in the second iteration, the algorithm iterates over each pattern from the previous iteration (such as Pattern1 and Pattern2). For the first pattern, Selectors 3, 5 and 6 are candidates, since the combination of Pattern1 and these selectors is present in the input dataset. On the other hand, for the second pattern, Selector2, Selector4, and Selector5 are the possible selectors to add. It is important to notice that for each pattern of the iterative process, a different IG threshold is calculated. So, for Pattern1, only Selector3 and Selector5 surpass the particular threshold, so they will be added to Pattern1, yielding Pattern3 and Pattern4, of length 2 each. However, only Selector4 surpasses the required threshold for Pattern2, so it is added to Pattern2, resulting in Pattern5.

Figure 2: Iterative optimization process for pattern search
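A compact Python rendering of this association-discovery phase is sketched below. The dataset-dependent pieces are abstracted away as callables (`extend(pattern)` returns the length+1 refinements of a pattern that actually occur in the data, and `ig(pattern)` scores a pattern), selectors are assumed to be (attribute, value) pairs, and the exact point at which the \(Cond_{list}\) filter applies is one possible reading of line 5.

```python
from statistics import stdev

def filter_by_threshold(cands, ig, t_mode):
    """Keep candidates whose IG meets the maximum or dynamic threshold."""
    if not cands:
        return []
    values = [ig(c) for c in cands]
    # Eq. (10) is the sample standard deviation; fall back to max() when
    # fewer than two candidates are available.
    thre = max(values) if t_mode == "maximum" or len(values) < 2 else stdev(values)
    return [c for c, v in zip(cands, values) if v >= thre]

def igsd_associations(selectors, extend, ig, t_mode, cond_list, d_max):
    """Association-discovery phase of Algorithm 1 (lines 1-10), a sketch."""
    m = filter_by_threshold([(s,) for s in selectors], ig, t_mode)   # line 1
    for _ in range(2, d_max + 1):                                    # lines 2-10
        m_aux = []
        for pattern in m:
            cands = [c for c in extend(pattern)
                     if all(a in {attr for attr, _ in c} for a in cond_list)]
            m_aux += filter_by_threshold(cands, ig, t_mode)
        m = m_aux
    return m
```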
#### 3.3.2 Removing irrelevant information

For the second task, pattern complexity is reduced by removing irrelevant information from patterns after they are generated in the first task. The purpose of this step is to determine which selectors of a pattern are not giving valuable or important information. Thus, Figure 3 shows that, among the 6 selectors of a given pattern, selector 3 is identified as the best selector, since its IG value is over the IG threshold (dashed line) and it has a high ORR value. Based on the identified optimal selector, the pattern is cut, and selectors 4, 5 and 6 are removed as irrelevant information.

Figure 3: Example of complexity pattern reduction

This step starts by initializing the list \(R\) for storing optimized patterns (Algorithm 1, line 11). As a result, in lines 11 to 14, IGSD stores in the \(R\) output list the optimal cut of each pattern that was returned in the variable \(M\) by the first step of Algorithm 1.

Algorithm 2 demonstrates the procedure used to determine a pattern's best cut point (i.e. selector). First of all, line 1 of Algorithm 2 converts \(OR\) values to \(ORR\), and line 2 calculates an IG threshold using all the selectors present in the input pattern, in accordance with Equation 10. Thus, the algorithm will discard those selectors with IG values lower than the IG threshold. Besides, line 3 filters out selectors that are not statistically relevant by removing those with a p-value above 0.05. After the filtering of selectors, if only one selector remains, its position will be used to cut the pattern and returned as the \(optimal\_cut\) (lines 4 through 6 of Algorithm 2). On the other hand, line 8 of Algorithm 2 will iterate over the candidates as follows in order to determine the best cut:

* At the beginning, the first selector of the potential candidates is considered as the \(optimal\_cut\) (Algorithm 2, line 7).
* The \(optimal\_cut\) is not updated in lines 9 to 10 until a candidate selector's ORR improves on the \(optimal\_cut\) ORR.
* There are two conditions to stop the iteration in lines 12 to 14: either the candidate selector in the current iteration has a lower ORR than the \(optimal\_cut\) ORR, or the currently examined selector is not consecutive to the previously examined selector and its ORR is equal to the \(optimal\_cut\) ORR. In such cases, the iteration stops, because new elements should be added to a pattern only when the \(optimal\_cut\) ORR improves.

```
Data: Information gained list ig, Odds ratio list or, p-value list pv, Pattern p
1  ORR <- [values of the odds ratio list transformed into ranges];
2  cut_candidates <- filterByThreshold(ig, t_mode='dynamic');
3  cut_candidates <- [elements in cut_candidates with p_value <= 0.05];
4  if len(cut_candidates) == 1 then
5      return p[:ig.index(cut_candidates[0])];
6  end if
7  optimal_cut <- ORR[0];
8  for i <- 1 to len(ORR) do
9      if ORR[i] > ORR[i-1] then
10         optimal_cut <- ORR[i];
11     end if
12     if (ORR[i] == ORR[i-1] and ORR[i] is not consecutive to ORR[i-1])
          or (ORR[i] < ORR[i-1]) then
13         break;
14     end if
15 end for
16 return p[:ORR.index(optimal_cut)];
```
**Algorithm 2**: calculate_optimal_cut Algorithm
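The following Python sketch gives one possible reading of Algorithm 2; the pseudocode leaves a few details open (e.g. whether the scan runs over all selectors or only the filtered candidates, and whether the cut is inclusive), so the choices below are assumptions rather than the authors' exact implementation.

```python
from statistics import stdev

def calculate_optimal_cut(pattern, ig, orr, p_value):
    """One reading of Algorithm 2. `pattern` is a list of selectors;
    `ig`, `orr` and `p_value` are parallel per-selector lists."""
    thre = stdev(ig) if len(ig) > 1 else ig[0]          # Eq. (10) threshold
    cands = [i for i, v in enumerate(ig)
             if v >= thre and p_value[i] <= 0.05]       # lines 2-3
    if len(cands) == 1:                                 # lines 4-6
        return pattern[:cands[0] + 1]
    best = 0                                            # line 7
    for i in range(1, len(orr)):                        # lines 8-15
        if orr[i] > orr[i - 1]:
            best = i                                    # ORR improves: extend the cut
        elif orr[i] < orr[i - 1] or (orr[i] == orr[i - 1] and i - 1 != best):
            break                                       # stop conditions
    return pattern[:best + 1]
```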
#### 3.3.3 Discovered patterns validation

This section describes the validation performed on the patterns provided by the compared algorithms. Thus, the pattern validation process is described, as well as the inter-rater agreement indices used for assessing the validation. To validate the patterns obtained through the different SD algorithms, seven oncologists were recruited to assess and rate the medical relevance of patterns. All raters received the same data in the same order. The goal of this procedure was to understand whether these patterns were providing useful information to clinicians or not. Hence, two options could be chosen for each pattern:

* Accept: if the information contained therein is relevant or of interest, regardless of whether its content is in line with CG or clinical experience, or whether it is something new.
* Reject: if the information provided does not add anything clinically relevant or does not contain sufficient information to be considered of interest.

**Inter-rater agreement metrics** In order to assess the inter-rater agreement of the pattern evaluation, the AC1 index was used. Also called the "first-order agreement coefficient", it adjusts the overall probability based on the chance that raters may agree on a rating, despite the fact that one or all of them may have given a random value [27]. It can be calculated as follows:

\[AC1=\frac{p-e(\gamma)}{1-e(\gamma)}\tag{11}\]

where

\[e(\gamma)=2q(1-q),\quad p=\frac{A+D}{N}\quad\mbox{and}\quad q=\frac{A1+B1}{2N}\tag{12}\]

Here, \(A\) is the number of times both raters accept the patterns, \(D\) is the number of times both raters reject the patterns, \(N\) is the total sample size, and \(A1\) and \(B1\) are the total numbers of "accept" ratings given by the first and second rater, respectively. Thus, \(p\) is the proportion of observed agreement and \(e(\gamma)\) is the proportion of the expected agreement. Furthermore, the Intraclass Correlation Coefficient (ICC) was used to evaluate the inter-rater reliability. It can be stated as:

\[ICC=\frac{\sigma_{b}^{2}}{\sigma_{b}^{2}+\sigma_{w}^{2}}\tag{13}\]

Here, \(\sigma_{b}^{2}\) is the variance between subjects and \(\sigma_{w}^{2}\) is the variance within subjects. The p-value and confidence interval (CI) are also provided for each index.
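For a pair of raters, the AC1 computation reduces to the four cells of a 2x2 agreement table; a minimal sketch (with purely illustrative counts) is given below.

```python
def gwet_ac1(a, b, c, d):
    """Gwet's AC1 for two raters and two categories (Eqs. 11-12).

    a: both raters accept; d: both reject; b, c: the disagreement cells.
    """
    n = a + b + c + d
    p = (a + d) / n                  # observed agreement
    q = (2 * a + b + c) / (2 * n)    # q = (A1 + B1) / (2N)
    e = 2 * q * (1 - q)              # chance agreement e(gamma)
    return (p - e) / (1 - e)

# Illustrative counts only: 20 joint accepts, 12 joint rejects, 8 disagreements.
print(gwet_ac1(a=20, b=5, c=3, d=12))
```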
## 4 Results

In this section, we present the performance results of the SD algorithms on 11 datasets. Also, for the P4Lucat dataset it was possible to perform validation by a group of problem domain experts, i.e. clinicians. Validation results are reported and compared with the performance metrics obtained by each SD method for the P4Lucat dataset.

### Subgroup Discovery algorithms

Initially, a performance comparison of the FSSD, SSD++, and IGSD algorithms has been made based on descriptive and predictive measures. FSSD and SSD++ were used with default parameters, and the two versions of IGSD (i.e. IGSD-M and IGSD-T) were tested according to the two possible values of the algorithm argument \(t_{mode}\). On the other hand, the argument \(d_{max}\) was not manually set; thus, the maximum exploration depth was, by default, the number of variables of the input dataset. Also, since FSSD and SSD++ define a timer of one hour to limit computations during pattern search, we have defined a similar timer for both IGSD versions for comparison purposes. Additionally, for the P4Lucat dataset, results for both versions of IGSD were obtained by including the cancer stage and first treatment variables in the \(Cond_{list}\) argument. These variables, also included in a previous study [28], were identified by clinicians as essential and required to be present in discovered patterns.

Table 2 shows the obtained metric values for the numeric datasets considered. Also, Table 2 uses a color range (from green to yellow) to indicate the best and worst metric value for each row, with thick green being the best and thick yellow the worst.

Table 2: Statistical comparison using SD algorithms for Numeric datasets.

The summary of results in terms of descriptive and predictive measures is as follows, considering that the SSD++ algorithm was not able to discover patterns in the ECHO dataset, nor was the FSSD algorithm for the MAGIC dataset:

* Regarding the number of patterns found (i.e. size), both versions of IGSD obtain a higher number of patterns with respect to FSSD and SSD++, except for the MAGIC dataset, where SSD++ provides 92 patterns, versus 9 provided by IGSD-M and 51 provided by IGSD-T.

* In terms of pattern complexity of the rule set (i.e. length), FSSD produces the set of patterns with the largest number of variables. On the other hand, SSD++ produces patterns of similar complexity to IGSD-M and IGSD-T. However, both IGSD versions are the only ones capable of discovering patterns for all datasets.

* Regarding the average coverage per rule, considering the IRIS and HEART datasets, patterns produced by SSD++ have higher values (0.48, 0.29) than the patterns produced by both IGSD versions and the FSSD algorithm, with values similar to 0.2. Moreover, in the ECHO dataset, FSSD produced patterns with higher values (0.247) with respect to IGSD-M and IGSD-T (0.1, 0.083). On the other hand, concerning the MAGIC dataset, IGSD-M and IGSD-T report higher values (0.042, 0.039) with respect to SSD++ (0.032).

* Assessing the unusualness of the patterns (i.e. WRAcc), all the algorithms, for the IRIS dataset, produce sets of patterns with a value similar to 0.12. Besides, in the ECHO dataset, FSSD produces a set of patterns with a higher value (0.247) than both IGSD versions (0.1, 0.083). On the other hand, concerning the HEART and MAGIC datasets, IGSD-M produces slightly higher values (0.089, 0.02) than IGSD-T (0.075, 0.019), and much higher than SSD++ (0.027, 0.0048), respectively.

* When evaluating the rule confidence, regarding the ECHO and MAGIC datasets, the sets of patterns produced by IGSD-M (0.91, 0.96) and IGSD-T (0.9, 0.95) have higher confidence values than FSSD (0.83, -) and SSD++ (-, 0.74). Furthermore, regarding the IRIS dataset, IGSD-T produces a set of patterns with a slightly higher confidence value (0.98) than IGSD-M and FSSD (0.94 and 0.96, respectively) and much higher than SSD++ (0.67). Moreover, concerning the HEART dataset, the IGSD-M and IGSD-T sets of patterns produce higher values (0.87, 0.86) than FSSD (0.82) and SSD++ (0.66).

* In terms of accuracy prediction, concerning the IRIS dataset, both IGSD versions and the FSSD algorithm might be considered reliable models due to the accuracy values reported (0.84, 0.84 and 0.85, respectively), meanwhile the SSD++ accuracy decreases to a value of 0.73. Regarding the ECHO dataset, FSSD reports an accuracy value of 0.77, meanwhile the IGSD versions report lower accuracy values (0.61 and 0.65, respectively). On the other hand, for the HEART and MAGIC datasets, IGSD-M and IGSD-T report slightly higher accuracy values (0.67, 0.59 and 0.64, 0.59, respectively) than the FSSD (0.63, -) and SSD++ (0.55, 0.52) algorithms.

* Considering the average IG per rule, concerning the IRIS and ECHO datasets, IGSD-M and IGSD-T report patterns with less information gain (0.327, 0.05) and (0.327, 0.049), respectively, than the FSSD algorithm, with values of 0.47 and 0.117, respectively. Regarding the HEART dataset, SSD++ reports the highest information gain value (0.174), while IGSD-M, IGSD-T and FSSD report lower information gain values (0.137, 0.11 and 0.1, respectively). Furthermore, for the MAGIC dataset, IGSD-M and IGSD-T produce sets of patterns with information gain values of 0.036 and 0.034, respectively, while SSD++ produces a set of patterns with a lower information gain value of 0.022.

* Taking into consideration the ORR metric, concerning the IRIS dataset, patterns produced by both IGSD versions and FSSD report an ORR of 4, meanwhile SSD++ patterns produce a lower ORR of 3.4. Regarding the ECHO dataset, FSSD patterns report an ORR value of 4, while both IGSD versions report lower ORR values (3.89, 3.92, respectively). On the other hand, for the HEART and MAGIC datasets, the IGSD-M and IGSD-T sets of patterns report the highest ORR values (4, 4) and (4, 3.98), respectively, while FSSD and SSD++ patterns report lower ORR values (3.5, -) and (3, 3.16), respectively.

* In terms of p-value, regarding the IRIS dataset, both IGSD versions and the SSD++ algorithm produce statistically significant patterns, reporting a p-value below 0.05. Nevertheless, the IGSD algorithm reports a value below 0.001, which indicates that its patterns might be considered more statistically significant. On the other hand, the FSSD algorithm reports patterns with a p-value above 0.05. In regards to the ECHO dataset, both IGSD versions and the FSSD algorithm produce statistically significant patterns, reporting a p-value below 0.05; nonetheless, FSSD reports a p-value below 0.01, so its patterns can be considered more statistically significant. Furthermore, concerning the HEART and MAGIC datasets, all the algorithms report a p-value below 0.001, which indicates a strong statistical significance.

Table 3 shows the obtained metric values for the nominal datasets considered.
Regarding the HEART dataset, SSD++ reports the highest information gain (0.174), while IGSD-M, IGSD-T and FSSD report lower values (0.137, 0.11 and 0.1, respectively). Furthermore, for the MAGIC dataset, IGSD-M and IGSD-T produce sets of patterns with information gain values of 0.036 and 0.034, respectively, while SSD++ produces a set of patterns with a lower value of 0.022.
* Taking into consideration the ORR metric, on the IRIS dataset the patterns produced by both IGSD versions and FSSD report an ORR of 4, while the SSD++ patterns yield a lower ORR of 3.4. Regarding the ECHO dataset, the FSSD patterns report an ORR value of 4 while both IGSD versions report lower ORR values (3.89 and 3.92, respectively). On the other hand, for the HEART and MAGIC datasets, the IGSD-M and IGSD-T sets of patterns report the highest ORR values ((4, 4) and (4, 3.98), respectively), while the FSSD and SSD++ patterns report lower ORR values ((3.5, -) and (3, 3.16), respectively).
* In terms of p-value, on the IRIS dataset both IGSD versions and SSD++ produce statistically significant patterns, reporting a p-value below 0.05. Nevertheless, the IGSD algorithm reports a value below 0.001, so its patterns may be considered more statistically significant. The FSSD algorithm, in contrast, reports patterns with a p-value above 0.05. Regarding the ECHO dataset, both IGSD versions and FSSD produce statistically significant patterns with a p-value below 0.05; moreover, FSSD reports a p-value below 0.01, so its patterns may be considered more statistically significant. Furthermore, on the HEART and MAGIC datasets, all the algorithms report a p-value below 0.001, which indicates strong statistical significance.

Table 3 shows the obtained metric values for the nominal datasets considered. Table 3 also uses a color range (from green to yellow) to indicate the best and worst metric value in each row, with thick green the best and thick yellow the worst.

Table 3: Statistical comparison using SD algorithms for Nominal datasets.

The summary of results in terms of descriptive and predictive measures is as follows:

* Regarding the TIC-TAC-TOE dataset, SSD++ returns a higher number of discovered patterns (17) than FSSD (10) and both IGSD versions (4). On the other hand, for the VOTE and P4Lucat datasets, IGSD-T was able to discover a larger number of patterns (21, 52) than IGSD-M (13, 19), FSSD (10, 20) and SSD++ (4, 16). Moreover, on the GENBASE dataset, both IGSD versions produce a huge number of patterns (37041, 45625), as opposed to the FSSD and SSD++ algorithms (32 and 33, respectively).
* In terms of the complexity of the rule set (i.e. length), on the TIC-TAC-TOE and P4Lucat datasets both IGSD versions produce the sets of patterns with the largest number of variables ((3, 3.1) and (3, 3.9)). In turn, the SSD++ sets of patterns have the least complexity (2.29, 1.11), thus containing much less information. Furthermore, with respect to the VOTE and GENBASE datasets, the FSSD patterns contain a much higher number of variables (7.7, 1151) than IGSD-M (1.46, 2), IGSD-T (2.14, 2) and SSD++ (2.5, 1), respectively.
* Regarding the average coverage per rule, on the TIC-TAC-TOE dataset the patterns produced by FSSD have the highest coverage value (0.13), while the sets of patterns of both IGSD versions have a lower coverage value (0.073).
Furthermore, on the VOTE dataset, IGSD-M reports a slightly higher value (0.41) than IGSD-T (0.37), while FSSD returns a set of patterns with much lower coverage (0.12). Finally, with respect to the P4Lucat and GENBASE datasets, SSD++ produces sets of patterns with high coverage values (0.35, 0.128), while FSSD (0.094, 0.032) and both IGSD versions (0.024, 0.0086 and 0.016, 0.027, respectively) report much lower coverage values.
* Assessing the unusualness of the patterns (i.e. WRacc), on the TIC-TAC-TOE dataset FSSD and both IGSD versions produce sets of patterns with high unusualness values close to 0.035, whereas the SSD++ set of patterns has a lower unusualness value (0.02). Moreover, on the VOTE dataset, IGSD-M and IGSD-T report higher values (0.173, 0.15) than FSSD and SSD++ (0.05 and 0.085, respectively). Finally, with respect to the P4Lucat and GENBASE datasets, FSSD produces patterns with higher unusualness values (0.017, 0.027) than IGSD-M (0.0095, 0.008), IGSD-T (0.0068, 0.02) and SSD++ (0.011, 0.02).
* When evaluating pattern confidence, on the TIC-TAC-TOE and P4Lucat datasets IGSD-M and IGSD-T report the highest confidence values ((1, 0.83) and (1, 0.81), respectively); on the P4Lucat dataset, FSSD and SSD++ report much lower confidence values (0.51, 0.31). On the other hand, the set of patterns discovered by FSSD for the VOTE dataset reports a confidence value of 0.98, while the IGSD-M and IGSD-T confidence values are slightly lower (0.91, 0.9) and the SSD++ confidence value is significantly lower (0.75). Finally, for the GENBASE dataset, both IGSD versions and FSSD report high confidence values (0.98), in contrast with the low confidence value reported by SSD++ (0.348).
* In terms of prediction accuracy, on the TIC-TAC-TOE dataset all algorithms report an accuracy value around 0.57. However, with respect to the VOTE and P4Lucat datasets, both IGSD versions and FSSD report high accuracy values: above 0.8 on VOTE for the IGSD algorithm and above 0.7 on P4Lucat for the mentioned algorithms. These values indicate the high reliability of IGSD and FSSD for the VOTE and P4Lucat datasets. Finally, on the GENBASE dataset, all the algorithms are highly reliable, with reported accuracy values above 0.9.
* Considering the average IG per rule, with respect to the TIC-TAC-TOE and VOTE datasets, IGSD-M and IGSD-T report high IG values ((0.072, 0.45) and (0.072, 0.33), respectively), and SSD++ also reports a high IG value on the VOTE dataset (0.41). Furthermore, regarding the P4Lucat dataset, SSD++ was able to discover patterns with an IG value of 0.034, while slightly lower values are reported by both IGSD versions (0.016, 0.011) and FSSD (0.019), respectively. Finally, on the GENBASE dataset, FSSD, SSD++ and IGSD-T produce sets of patterns with similar IG values (0.1, 0.15 and 0.1, respectively), while the IGSD-M set of patterns has a lower IG value of 0.048.
* Taking the ORR into consideration, the patterns produced by IGSD-M and IGSD-T report the highest ORR values for all the datasets. On the other hand, FSSD reports a slightly lower ORR value on the TIC-TAC-TOE dataset and a significantly lower one on P4Lucat. In addition, although SSD++ reports the same ORR value as the rest of the algorithms on the GENBASE dataset, it reports considerably lower ORR values on the other datasets.
* In terms of p-value, on the TIC-TAC-TOE and GENBASE datasets both IGSD versions and FSSD produce highly statistically significant patterns, with p-values below 0.001. In turn, SSD++ obtains a statistically non-significant set of patterns for the TIC-TAC-TOE dataset, reporting a p-value above 0.05, while it obtains a significant set of patterns for the GENBASE dataset, reporting a p-value below 0.05. Furthermore, on the VOTE dataset, both IGSD versions and SSD++ produce highly statistically significant patterns, reporting p-values below 0.001, while FSSD reports a p-value above 0.05 and thus provides a statistically non-significant set of patterns for that dataset. Finally, with respect to P4Lucat, both IGSD versions report a p-value below 0.01, while FSSD reports a p-value below 0.05 and SSD++ reports a p-value above 0.05.

Table 4 shows the obtained metric values for the mixed datasets considered. Table 4 also uses a color range (from green to yellow) to indicate the best and worst metric value in each row, with thick green the best and thick yellow the worst.

Table 4: Statistical comparison using SD algorithms for Mixed datasets.

The summary of results in terms of descriptive and predictive measures is as follows:

* Regarding the BREAST-CANCER dataset, FSSD returns a higher number of discovered patterns (10) than both IGSD versions and SSD++, which return similar amounts of patterns (4, 5 and 3, respectively). Furthermore, on the NURSERY dataset, SSD++ was able to discover the largest number of patterns (97), while both IGSD versions (13, 20) and FSSD (13) return fewer. Finally, with respect to the HEART dataset, IGSD-T discovers more patterns (22) than the IGSD-M, FSSD and SSD++ algorithms (7, 2 and 6, respectively).
* In terms of the complexity of the rule set (i.e. length), FSSD produces the sets of patterns with the largest number of variables for all three datasets. In particular, on the HEART dataset the FSSD set of patterns has a much higher number of variables (13) than both IGSD versions (1.86, 2.14) and SSD++ (1.83).
* Regarding the average coverage per rule, on the BREAST-CANCER and NURSERY datasets the patterns produced by FSSD have slightly higher values (0.11, 0.1) than IGSD-M (0.072, 0.063), IGSD-T (0.072, 0.08) and SSD++ (0.1, 0.06). In turn, on the HEART dataset SSD++ reports the highest coverage value (0.29), followed by both IGSD versions (0.257, 0.22) and FSSD (0.2).
* Assessing the unusualness (i.e. WRacc) of the patterns for the BREAST-CANCER, NURSERY and HEART datasets, IGSD-M obtains the sets of patterns with the highest unusualness values (0.03, 0.039 and 0.089, respectively). Moreover, IGSD-T and FSSD report slightly lower values ((0.03, 0.033, 0.075) and (0.02, 0.03, 0.07), respectively), while SSD++ reports much lower values (0.0085, 0.01, 0.027).
* When evaluating the rule confidence for the BREAST-CANCER, NURSERY and HEART datasets, IGSD-M produces the sets of patterns with the highest confidence values (0.91, 0.87, 0.87). Furthermore, IGSD-T and FSSD report slightly lower values ((0.82, 0.72, 0.86) and (0.82, 0.64, 0.82), respectively), while SSD++ reports much lower values compared to the others (0.65, 0.63, 0.66).
* In terms of prediction accuracy, on the NURSERY dataset all the algorithms might be considered reliable models given the reported accuracy values (0.78, 0.78, 0.74 and 0.73, respectively, for each algorithm). Regarding the BREAST-CANCER and HEART datasets, the accuracy values decrease to around 0.6, and SSD++ reports low accuracy values (0.46 and 0.55, respectively).
* Considering the average IG per rule, SSD++ was able to discover the patterns with the highest IG values on the BREAST-CANCER and HEART datasets (0.065 and 0.174, respectively), while slightly lower IG values are reported for the patterns discovered by IGSD-M (0.53, 0.53) and IGSD-T (0.137, 0.11), respectively. In turn, FSSD performs worst, reporting the lowest IG values for all three datasets.
* Regarding ORR values, the patterns produced by IGSD-M report the highest ORR value (4) for all the datasets, while the patterns produced by IGSD-T have slightly lower ORR values on the BREAST-CANCER and NURSERY datasets. On the other hand, the FSSD and SSD++ algorithms report lower ORR values on all the datasets ((3.1, 3.15, 3.5) and (3, 2.7, 3), respectively).
* In terms of p-value, both IGSD versions and SSD++ produce statistically significant patterns for the BREAST-CANCER dataset, reporting a p-value below 0.01. In turn, the FSSD algorithm provides non-significant patterns, reporting a p-value above 0.05. Furthermore, on the NURSERY and HEART datasets, all the algorithms report a p-value below 0.001, with the exception of SSD++ on the NURSERY dataset, which reports a p-value above 0.001 that still indicates statistical significance.

Analyzing all the metric results presented above, it can be seen that the patterns generated by IGSD and SSD++ have similar complexity values, while FSSD produces patterns with more complexity and a larger amount of information. However, on the nominal TIC-TAC-TOE and P4Lucat datasets, IGSD was able to produce patterns with more complexity than FSSD and SSD++. Additionally, the sets returned by IGSD usually contain more patterns than those of FSSD and SSD++. Thus, it can be concluded that IGSD produces larger sets of patterns with a smaller amount of information, or fewer variables, per returned pattern. Considering the coverage measure, although there is variability among the different dataset types, all three algorithms can discover patterns with considerable representation in the datasets; however, both IGSD methods succeeded on all datasets, while FSSD and SSD++ each failed on one numeric dataset. Looking at the coverage and ORR measures together, it can be noticed that, as a general rule, the IGSD patterns are the most reliable, with the patterns produced by SSD++ being the least reliable. In addition, the p-values of the patterns produced by IGSD are always below 0.05, and on several datasets below 0.001, so these patterns can be considered highly statistically significant. On the other hand, FSSD and SSD++ cannot guarantee statistically significant patterns, given the p-values above 0.05 reported for the IRIS, TIC-TAC-TOE, VOTE, P4Lucat and BREAST-CANCER datasets. Therefore, in summary, it can be concluded that IGSD is able to discover a considerable number of statistically significant patterns with a high dependence on the targets, although offering a smaller amount of information than FSSD.
On the other hand, it is noticeable that for numeric datasets such as MAGIC and ECHO, FSSD and SSD++, executed with default parameter values, failed to discover patterns, while IGSD was able to finish and uncover a set of patterns. This limitation could be due to the way FSSD and SSD++ handle the generation of ranges for numeric data. Regarding pattern search exploration, IGSD managed to provide patterns for all datasets, but SSD++ and FSSD failed to find any pattern for the ECHO and MAGIC datasets, respectively. However, the GENBASE dataset, with the highest number of columns (1186), made IGSD reach the one-hour time limit while searching for patterns. This is because both FSSD and SSD++ follow a greedy search strategy based on lists of subgroups, whereas IGSD uses sets of subgroups, which avoids discarding potentially good patterns too early. This exploration strategy makes IGSD require longer computational times, but it enables a wider exploration of the search space, allowing potentially more relevant patterns to be obtained.

### Experts validation

This section presents the results of the validation performed by domain experts for the P4Lucat dataset and discusses the interrelation between the evaluation and the pattern measure results obtained in Section 4.1. A total of 92 patterns, comprising the output of IGSD, FSSD, and SSD++, were given to the group of expert raters. We first assess the quality of the clinical validation by calculating the AC1 and ICC indices for the validated patterns discovered by all algorithms. Table 5 contains the index values, showing moderate agreement in terms of AC1 and moderate reliability in terms of ICC.

\begin{table}
\begin{tabular}{|c|c|c|} \hline **Index** & **AC1** & **ICC** \\ \hline **Value** & 0.48 & 0.60 \\ \hline **CI** & (0.35, 0.58) & (0.46, 0.71) \\ \hline **p-value** & 3.09E-12 & 3.89E-08 \\ \hline \end{tabular}
\end{table} Table 5: Indices of raters for pattern validation. CI: Confidence interval.

Table 6 shows the evaluators' acceptance rates for the patterns provided by each algorithm on the P4Lucat dataset. It can be seen that IGSD-M achieved the highest average acceptance rate. Overall, Table 6 shows that each of the two IGSD methods provides significantly higher average acceptance rates and also achieves better results on a per-evaluator basis. When comparing the validation and performance results provided in Section 4.1 for the P4Lucat dataset, the method with the higher acceptance rates (i.e. IGSD) also provided significantly higher values of standard metrics from the SD literature, such as confidence, and higher values of the non-standard performance metrics used in this work, such as ORR and p-value. Looking at the rest of the datasets, it can be seen that IGSD in general provides higher confidence and ORR values than FSSD and SSD++, while keeping the p-value below 0.05. Thus, although validation could unfortunately not be performed for the rest of the datasets in this work, based on the validation of the P4Lucat dataset we consider that the use of non-standard SD performance metrics such as IG, ORR and p-value can complement standard SD metrics and allow a better evaluation of discovered patterns.

## 5 Conclusions

In this work, we have proposed Information Gained Subgroup Discovery (IGSD), a new SD algorithm for pattern discovery that combines Information Gain and Odds Ratio as multi-criteria for pattern selection.
Additionally, two versions of IGSD are proposed to evaluate the dynamic adjustment of the search optimization thresholds during subgroup space exploration. The main and general limitations of state-of-the-art SD algorithms are also discussed, identifying the following: the need to fine-tune key parameters for each dataset, the use of a single pattern search criterion set by hand, the use of non-overlapping data structures for subgroup space exploration, and the impossibility of searching for patterns while fixing some relevant dataset variables. The proposed IGSD algorithm tries to tackle all these limitations and is thus evaluated using eleven datasets with different characteristics to uncover patterns. For comparison purposes, the same datasets are also used with two state-of-the-art SD algorithms: FSSD and SSD++.

\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|c|c|} \hline **Algorithm/Users** & **Rater1** & **Rater2** & **Rater3** & **Rater4** & **Rater5** & **Rater6** & **Rater7** & **Average** \\ \hline **IGSD-T** & 64\% & 10\% & 56\% & 40\% & 27\% & 10\% & 33\% & 34\% \\ \hline **IGSD-M** & 58\% & 20\% & 58\% & 37\% & 16\% & 37\% & 47\% & 40\% \\ \hline **SSD++** & 0\% & 18\% & 0\% & 0\% & 0\% & 18\% & 18\% & 8\% \\ \hline **FSSD** & 32\% & 21\% & 16\% & 21\% & 11\% & 21\% & 32\% & 22\% \\ \hline \end{tabular}
\end{table} Table 6: Acceptance rate of patterns in different algorithms.

The results obtained show that FSSD provides more complex patterns, and SSD++ less complex patterns, than IGSD. In turn, IGSD usually finds larger pattern sets than FSSD and SSD++. Thus, it can be concluded that IGSD produces larger sets of patterns with a smaller amount of information, or fewer variables, per returned pattern. On the other hand, the FSSD and SSD++ average confidence values are 83% and 63%, respectively, significantly lower than the IGSD average confidence values of around 90%. This lower reliability of FSSD and SSD++ is also reflected in their average ORR values of 3.58 and 3, respectively, indicating a medium-high dependence between patterns and targets. In turn, IGSD provided an average ORR value of around 4, indicating a high dependence between patterns and targets. The fact that IGSD obtained better results than FSSD and SSD++ without the manual setting of any search parameter also validates the proposed method. In the performance evaluation of the patterns obtained by the compared algorithms on all datasets, we propose to complement the standard SD measures with some metrics not typically considered in the SD literature: Information Gain, ORR and p-value. Also, the results obtained for the P4Lucat dataset have been validated by a group of experts. The pattern acceptance rates show that the results provided by IGSD agree more with the experts than the results obtained using the FSSD and SSD++ algorithms. For the P4Lucat dataset, the better-accepted patterns also have higher ORR and confidence values while being statistically significant, with a p-value below 0.05. Hence, we consider that the inclusion of the proposed non-standard SD metrics allows a better evaluation of discovered patterns. Finally, as mentioned above, the proposed IGSD algorithm uses sets of subgroups and follows a non-greedy pattern search strategy. This makes IGSD perform a wider exploration of the search space, allowing it to obtain potentially more relevant patterns, but at the cost of significantly longer computational times.
As future work, we plan to explore a strategy similar to the one we adopted in a previous work [29] for selecting statistically significant variables. The set of variables in datasets with a large number of columns could then be reduced, and patterns could be searched for based on this reduced set of significant variables.
2304.05912
PH-STAT
We introduce PH-STAT, a comprehensive Matlab toolbox designed for performing a wide range of statistical inferences on persistent homology. Persistent homology is a prominent tool in topological data analysis (TDA) that captures the underlying topological features of complex data sets. The toolbox aims to provide users with an accessible and user-friendly interface for analyzing and interpreting topological data. The package is distributed in https://github.com/laplcebeltrami/PH-STAT.
Moo K. Chung
2023-04-12T15:29:49Z
http://arxiv.org/abs/2304.05912v1
# PH-STAT ###### Abstract We introduce PH-STAT, a comprehensive Matlab toolbox designed for performing a wide range of statistical inferences on persistent homology. Persistent homology is a prominent tool in topological data analysis (TDA) that captures the underlying topological features of complex data sets. The toolbox aims to provide users with an accessible and user-friendly interface for analyzing and interpreting topological data. The package is distributed in [https://github.com/laplcebeltrami/PH-STAT](https://github.com/laplcebeltrami/PH-STAT). ## 1 Introduction PH-STAT (Statistical Inference on Persistent Homology) contains various statistical methods and algorithms for the analysis of persistent homology. The toolbox is designed to be compatible with a range of input data types, including point clouds, time series and functional data, graphs and networks, and simplicial complexes, allowing researchers from diverse fields to analyze their data using topological methods. The toolbox can be accessed and downloaded from the GitHub repository at [https://github.com/laplcebeltrami/PH-STAT](https://github.com/laplcebeltrami/PH-STAT). The repository includes the source code, a detailed user manual, toy examples and example data sets for users to familiarize themselves with the toolbox's functionality. The user manual provides a comprehensive guide on how to run the toolbox, as well as an explanation of the underlying statistical methods and algorithms. PH-STAT includes a rich set of statistical tools for analyzing and interpreting persistent homology, enabling users to gain valuable insights into their data. The toolbox provides visualization functions to help users understand and interpret the topological features of their data, as well as to create clear and informative plots for presentation and publication purposes. PH-STAT is an open-source project, encouraging users to contribute new features, algorithms, and improvements to the toolbox, fostering a collaborative and supportive community. PH-STAT is a versatile and powerful Matlab toolbox that facilitates the analysis of persistent homology in a user-friendly and accessible manner. By providing a comprehensive set of statistical tools, the toolbox enables researchers from various fields to harness the power of topological data analysis in their work. ## 2 Morse filtrations In many applications, 1D functional data \(f(t)\) is modeled as [49] \[f(t)=\mu(t)+\epsilon(t),\;t\in\mathbb{R}, \tag{1}\] where \(\mu\) is the unknown mean signal to be estimated and \(\epsilon\) is noise. In the usual statistical parametric mapping framework [30, 37, 72], inference on the model (1) proceeds as follows. If we denote an estimate of the signal by \(\widehat{\mu}\), the residual \(f-\widehat{\mu}\) gives an estimate of the noise. One then constructs a test statistic \(T(t)\) corresponding to a given hypothesis about the signal. As a way to account for the temporal correlation of the statistic \(T(t)\), the global maximum of the test statistic over the search space \(\mathcal{M}\) is taken as the subsequent test statistic. Hence, a great deal of the signal processing and statistical literature has been devoted to determining the distribution of \(\sup_{t\in\mathcal{M}}T(t)\) using the random field theory [66, 72], permutation tests [54] and the Hotelling-Weyl volume of tubes calculation [53].
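As a toy illustration of this sup-statistic inference, the following is a minimal sketch of a permutation test for \(\sup_{t\in\mathcal{M}}T(t)\). The data, sample sizes and permutation count are hypothetical, and a two-sample \(t\)-statistic stands in for \(T(t)\); this is not the toolbox implementation.

n1 = 10; n2 = 10; nt = 100;
Y = [randn(n1, nt); 0.5 + randn(n2, nt)];      % group 2 has a mean shift
labels = [ones(n1, 1); 2 * ones(n2, 1)];
tstat = @(Y1, Y2) (mean(Y1) - mean(Y2)) ./ ...
        sqrt(var(Y1) / size(Y1, 1) + var(Y2) / size(Y2, 1));
Tobs = max(tstat(Y(labels == 1, :), Y(labels == 2, :)));
nperm = 1000; Tmax = zeros(nperm, 1);
for i = 1:nperm
    idx = labels(randperm(n1 + n2));           % randomly permute group labels
    Tmax(i) = max(tstat(Y(idx == 1, :), Y(idx == 2, :)));
end
pval = mean(Tmax >= Tobs);                     % p-value of the observed maximum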
The use of the mean signal is one way of performing data reduction; however, it may not necessarily be the best way to characterize complex multivariate imaging data. Thus, instead of using the mean signal, we can use persistent homology, which pairs local critical values [24, 26, 76]. It is intuitive that the local critical values of \(\widehat{\mu}\) approximately characterize the shape of the continuous signal \(\mu\) using only a finite number of scalar values. By pairing these local critical values in a nonlinear fashion and plotting them, one constructs the persistence diagram [22, 24, 51, 74]. The function \(\mu\) is called a _Morse function_ if all critical values are distinct and non-degenerate, i.e., the Hessian does not vanish [50]. For a 1D Morse function \(y=\mu(t)\), define the sublevel set \(R_{y}\) as \[R_{y}=\{t\in\mathbb{R}:\mu(t)\leq y\}.\] As we increase the height \(y_{1}\leq y_{2}\leq y_{3}\leq\cdots\), the sublevel sets get bigger such that \[R_{y_{1}}\subset R_{y_{2}}\subset R_{y_{3}}\subset\cdots.\] The sequence of the sublevel sets forms a _Morse filtration_ with filtration values \(y_{1},y_{2},y_{3},\cdots\). Let \(\beta_{0}(R_{y})\) be the 0-th Betti number of \(R_{y}\), which counts the number of connected components in \(R_{y}\). The number of connected components is the most often used topological invariant in applications [24]. \(\beta_{0}(R_{y})\) only changes its value as \(y\) passes through critical values (Figure 1). The birth and death of connected components in the Morse filtration are characterized by the pairing of local minimums and maximums. For 1D Morse functions, there are no higher dimensional topological features beyond the connected components. Let us denote the local minimums as \(g_{1},\cdots,g_{m}\) and the local maximums as \(h_{1},\cdots,h_{n}\). Since the critical values of a Morse function are all distinct, we can combine all minimums and maximums and order them from the smallest to the largest: \[g_{1}=z_{(1)}<z_{(2)}<\cdots<z_{(m+n)}=h_{n},\] where \(z_{i}\) is either \(h_{i}\) or \(g_{i}\) and \(z_{(i)}\) denotes the \(i\)-th smallest number among \(z_{1},\cdots,z_{m+n}\). In a Morse function, \(g_{1}\) is smaller than \(h_{1}\) and \(g_{m}\) is smaller than \(h_{n}\) in the unbounded domain \(\mathbb{R}\) [12]. By keeping track of the birth and death of components, it is possible to compute topological invariants of the sublevel sets such as the 0-th Betti number \(\beta_{0}\) [24]. As we move \(y\) from \(-\infty\) to \(\infty\), at a local minimum the sublevel set adds a new component so that \[\beta_{0}(R_{g_{i}})=\beta_{0}(R_{g_{i}-\epsilon})+1\] for sufficiently small \(\epsilon\). This process is called the _birth_ of the component. The newly born component is identified with the local minimum \(g_{i}\). Similarly, at a local maximum, two components are merged into one so that \[\beta_{0}(R_{h_{i}})=\beta_{0}(R_{h_{i}-\epsilon})-1.\] This process is called the _death_ of the component. Since the number of connected components only changes when we pass through critical points, we can iteratively compute \(\beta_{0}\) at each critical value as \[\beta_{0}(R_{z_{(i+1)}})=\beta_{0}(R_{z_{(i)}})\pm 1.\] The sign depends on whether \(z_{(i+1)}\) is a maximum \((-1)\) or a minimum \((+1)\). This is the basis of Morse theory [50], which states that the topological characteristics of the sublevel sets of a Morse function are completely characterized by the critical values. To reduce the effect of a low signal-to-noise ratio and to obtain a smooth Morse function, either spatial or temporal smoothing is often applied to brain imaging data before persistent homology is applied. In [15, 45], Gaussian kernel smoothing was applied to 3D volumetric images. In [70], diffusion was applied to temporally smooth the data. Example 1: As an example, the elder rule is illustrated in Figure 1, where the gray dots are simulated with Gaussian noise of mean 0 and variance \(0.2^{2}\) as \[f(t)=\mu(t)+N(0,0.2^{2})\] with signal \(\mu(t)=t+7(t-1/2)^{2}+\cos(7\pi t)/2\). The signal \(\mu\) is estimated using heat kernel smoothing [13] with degree \(k=100\) and kernel bandwidth \(\sigma=0.0001\) using WFS_COS.m.
To reduce the effect of low signal-to-noise ratio and to obtain smooth Morse function, either spatial or temporal smoothing have been often applied to brain imaging data before persistent homology is applied. In [15, 45], Gaussian kernel smoothing was applied to 3D volumetric images. In [70], diffusion was applied to temporally smooth data. Example 1: As an example, elder's rule is illustrated in Figure 1, where the gray dots are simulated with Gaussian noise with mean 0 and variance \(0.2^{2}\) as \[f(x)=\mu(x)+N(0,0.2^{2})\] with signal \(\mu(t)=t+7(t-1/2)^{2}+\cos(7\pi t)/2\). The signal \(\mu\) is estimated using heat kernel smoothing [13] using degree \(k=100\) and kernel bandwidth \(\sigma=0.0001\) using WFS_COS.m. Figure 1: The births and deaths of connected components in the sublevel sets in a Morse filtration [12]. We have local minimums \(a<b<d<f\) and local maximums \(c<e\). At \(y=a\), we have a single connected component (gray area). As we increase the filtration value to \(y=b\), we have the birth of a new component (second gray area). At the local maximum \(y=c\), the two sublevel sets merge together to form a single component. This is viewed as the death of a component. The process continues till we exhaust all the critical values. Following the Elder rule, we pair birth to death: \((b,c)\) and \((d,e)\). Other critical values are paired similarly. These paired points form the persistent diagram. Now we apply Morse filtration for filtration values \(y\) from \(-\infty\) to \(\infty\). When we hit the first critical value \(y=a\), the sublevel set consists of a single point. When we hit the minimum at \(y=b\), we have the birth of a new component at \(b\). When we hit the maximum at \(y=c\), the two components identified by \(a\) and \(b\) are merged together to form a single component. When we pass through a maximum and merge two components, we pair the maximum with the higher of the two minimums of the two components [24]. Doing so we are pairing the birth of a component to its death. Obviously the paired extremes do not have to be adjacent to each other. If there is a boundary, the function value evaluated at the boundary is treated as a critical value. In the example, we need to pair \((b,c)\) and \((d,e)\). Other critical values are paired similarly. The persistence diagram is then the scatter plot of these pairings computed using PH_morse1D.m. This is implemented as t=[0:0.002:1]'; s= t + 7*(t - 0.5).~2 + cos(8*pi*t)/2; e=normrnd(0,0.2,length(x),1); Y=s+e; k=100; sigma=0.0001; [wfs, beta]=WFS_COS(Y,x,k,sigma); pairs=PH_morse1D(x,wfs); ## 3 Simplical Homology A high dimensional object can be approximated by the point cloud data \(X\) consisting of \(p\) number of points. If we connect points of which distance satisfy a given criterion, the connected points start to recover the topology of the object. Hence, we can represent the underlying topology as a collection of the subsets of \(X\) that consists of nodes which are connected [33, 32, 25]. Given a point cloud data set \(X\) with a rule for connections, the topological space is a simplicial complex and its element is a simplex [75]. For point cloud data, the Delaunay triangulation is probably the most widely used method for connecting points. The Delaunay triangulation represents the collection of points in space as a graph whose face consists of triangles. Another way of connecting point cloud data is based on Rips complex often studied in persistent homology. 
Homology is an algebraic formalism that associates a sequence of objects with a topological space [25]. In persistent homology, the algebraic formalism is usually built on top of objects that are hierarchically nested, such as Morse filtrations, graph filtrations and dendrograms. Formally, homology usually refers to homology groups, which are often built on top of a simplicial complex for point cloud and network data [39]. The \(k\)-simplex \(\sigma\) is the convex hull of \(k+1\) independent points \(v_{0},\cdots,v_{k}\). A point is a 0-simplex, an edge is a 1-simplex, and a filled-in triangle is a 2-simplex. A _simplicial complex_ is a finite collection of simplices such as points (0-simplices), lines (1-simplices), triangles (2-simplices) and their higher dimensional counterparts [25]. A \(k\)_-skeleton_ is a simplicial complex containing simplices up to dimension \(k\). Hence, a graph is a 1-skeleton consisting of 0-simplices (nodes) and 1-simplices (edges). There are various simplicial complexes. The simplicial complex most often used in persistent homology is the Rips complex.

### Rips complex

The Vietoris-Rips or Rips complex is the simplicial complex most often used in persistent homology. Let \(X=\{x_{0},\cdots,x_{p}\}\) be a set of \(p+1\) points in \(\mathbb{R}^{d}\). The distance matrix between points in \(X\) is given by \(w=(w_{ij})\), where \(w_{ij}\) is the distance between points \(x_{i}\) and \(x_{j}\). The Rips complex \(R_{\epsilon}(X)\) is then defined as follows [24, 34]. The Rips complex is a collection of simplicial complexes parameterized by \(\epsilon\); the complex \(R_{\epsilon}(X)\) captures the topology of the point set \(X\) at a scale of \(\epsilon\) or less.

* The vertices of \(R_{\epsilon}(X)\) are the points in \(X\).
* If the distance \(w_{ij}\) is less than or equal to \(\epsilon\), then there is an edge connecting points \(x_{i}\) and \(x_{j}\) in \(R_{\epsilon}(X)\).
* If the distance between any two points in \(x_{i_{0}},x_{i_{1}},\ldots,x_{i_{k}}\) is less than or equal to \(\epsilon\), then there is a \(k\)-simplex in \(R_{\epsilon}(X)\) whose vertices are \(x_{i_{0}},x_{i_{1}},\ldots,x_{i_{k}}\).

While a graph has at most 1-simplices, the Rips complex has at most \(k\)-simplices. In practice, the Rips complex is usually computed following the above definition, iteratively adding simplices of increasing dimension. Given \(p+1\) points, there are potentially up to \(\binom{p+1}{k+1}\) \(k\)-simplices, making the data representation extremely inefficient as the radius \(\epsilon\) increases. Thus, in practice we restrict the simplices to dimension up to \(k\). Such a simplicial complex is called the \(k\)-skeleton. This is implemented in PH_rips.m, which inputs the matrix X of size \(p\times d\), the dimension k and the radius e, and outputs the structured array S containing the collection of nodes, edges, faces and higher simplices up to \(k\)-simplices. For instance, the Rips complex up to 3-simplices in Figure 2 is created using

p=50; d=3;
X = rand(p, d);
S= PH_rips(X, 3, 0.3)
PH_rips_display(X,S);

S =
  4x1 cell array
    { 50x1 double}
    {106x2 double}
    { 75x3 double}
    { 22x4 double}

Figure 2: Left: 50 randomly distributed points \(X\) in \([0,1]^{3}\). Right: Rips complex \(R_{0.3}(X)\) within radius 0.3 containing 106 1-simplices, 75 2-simplices (yellow) and 22 3-simplices (blue).

The Rips complex is then displayed using PH_rips_display.m, which inputs the node coordinates X and the simplicial complex S.
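The iterative construction can also be sketched in a brute-force way directly from the definition: a vertex triple spans a 2-simplex whenever all three pairwise distances are at most epsilon. This is only meant to illustrate the definition, not the actual PH_rips.m implementation.

p = 50; epsilon = 0.3;
X = rand(p, 3);
w = pdist2(X, X);
triples = nchoosek(1:p, 3);        % all candidate vertex triples
keep = false(size(triples, 1), 1);
for t = 1:size(triples, 1)
    v = triples(t, :);
    keep(t) = w(v(1),v(2)) <= epsilon && ...
              w(v(1),v(3)) <= epsilon && ...
              w(v(2),v(3)) <= epsilon;
end
faces = triples(keep, :);          % 2-simplices, analogous to S{3}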
### Boundary matrix

Given a simplicial complex \(K\), the boundary matrices \(B_{k}\) represent the boundary operators between the simplices of dimension \(k\) and \(k-1\). Let \(C_{k}\) be the collection of \(k\)-simplices. Define the \(k\)-th boundary map \[\partial_{k}:C_{k}\to C_{k-1}\] as a linear map that sends each \(k\)-simplex \(\sigma\) to a linear combination of its \((k-1)\)-faces \[\partial_{k}\sigma=\sum_{\tau\in F_{k}(\sigma)}(-1)^{\operatorname{sgn}(\tau,\sigma)}\tau,\] where \(F_{k}(\sigma)\) is the set of \((k-1)\)-faces of \(\sigma\), and \(\operatorname{sgn}(\tau,\sigma)\) is the sign of the permutation that sends the vertices of \(\tau\) to the vertices of \(\sigma\). This expression says that the boundary of a \(k\)-simplex \(\sigma\) is the sum of all its \((k-1)\)-dimensional faces, with appropriate signs determined by the orientation of the faces. The signs alternate between positive and negative depending on the relative orientation of the faces, as determined by the permutation that maps the vertices of one face to the vertices of the other face. The \(k\)-th boundary map removes the filled-in interior of \(k\)-simplices. The vector spaces \(C_{k}\), \(C_{k-1},C_{k-2},\cdots\) are then sequentially nested by the boundary operator \(\partial_{k}\) [25]: \[\cdots\xrightarrow{\partial_{k+1}}C_{k}\xrightarrow{\partial_{k}}C_{k-1}\xrightarrow{\partial_{k-1}}C_{k-2}\xrightarrow{\partial_{k-2}}\cdots. \tag{2}\] Such a nested structure is called the _chain complex_. Consider a filled-in triangle \(\sigma=[v_{1},v_{2},v_{3}]\in C_{2}\) with three vertices \(v_{1},v_{2},v_{3}\) in Figure 3. The boundary map \(\partial_{2}\) applied to \(\sigma\) results in the collection of three edges that forms the boundary of \(\sigma\): \[\partial_{2}\sigma=[v_{1},v_{2}]+[v_{2},v_{3}]+[v_{3},v_{1}]\in C_{1}. \tag{3}\] If we give a direction or orientation to the edges such that \[[v_{3},v_{1}]=-[v_{1},v_{3}],\] and use the edge notation \(e_{ij}=[v_{i},v_{j}]\), we can write (3) as \[\partial_{2}\sigma=e_{12}+e_{23}+e_{31}=e_{12}+e_{23}-e_{13}.\] The boundary map can be represented as a boundary matrix \(\boldsymbol{\partial}_{k}\) with respect to bases of the vector spaces \(C_{k}\) and \(C_{k-1}\), where the rows of \(\boldsymbol{\partial}_{k}\) correspond to the basis elements of \(C_{k-1}\) and the columns to the basis elements of \(C_{k}\). The \((i,j)\) entry of \(\boldsymbol{\partial}_{k}\) is the coefficient of the \(i\)-th basis element of \(C_{k-1}\) in the boundary of the \(j\)-th basis element of \(C_{k}\). The boundary matrix is the higher dimensional version of the incidence matrix of a graph [41, 40, 60], showing how \((k-1)\)-dimensional simplices form a \(k\)-dimensional simplex. The \((i,j)\) entry of \(\boldsymbol{\partial}_{k}\) is \(\pm 1\) if the \(i\)-th \((k-1)\)-simplex \(\tau\) is a face of the \(j\)-th \(k\)-simplex \(\sigma\), with the sign determined by the orientation of \(\tau\), and zero otherwise.
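As a concrete sketch, \(\partial_{1}\) can be assembled from an oriented edge list by placing \(-1\) at the first vertex and \(+1\) at the second vertex of each edge. With the edge ordering \(e_{12},e_{23},e_{31},e_{24},e_{41},e_{45}\), this reproduces the matrix \(\boldsymbol{\partial}_{1}\) displayed below; the loop is illustrative, not the PH_boundary.m implementation.

p = 5;
edges = [1 2; 2 3; 3 1; 2 4; 4 1; 4 5];  % oriented edges e_ij = [v_i, v_j]
q = size(edges, 1);
B1 = zeros(p, q);
for e = 1:q
    B1(edges(e,1), e) = -1;              % edge leaves its first vertex
    B1(edges(e,2), e) =  1;              % edge enters its second vertex
end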
For the simplicial complex in Figure 3, the boundary matrices are given by

\[\boldsymbol{\partial}_{2}=\begin{array}{c}e_{12}\\ e_{23}\\ e_{31}\\ e_{24}\\ e_{41}\\ e_{45}\end{array}\left(\begin{array}{c}1\\ 1\\ 1\\ 0\\ 0\\ 0\end{array}\right),\]

where the single column corresponds to \(\sigma\),

\[\boldsymbol{\partial}_{1}=\begin{array}{c}v_{1}\\ v_{2}\\ v_{3}\\ v_{4}\\ v_{5}\end{array}\left(\begin{array}{cccccc}-1&0&1&0&1&0\\ 1&-1&0&-1&0&0\\ 0&1&-1&0&0&0\\ 0&0&0&1&-1&-1\\ 0&0&0&0&0&1\end{array}\right),\]

where the columns correspond to \(e_{12},e_{23},e_{31},e_{24},e_{41},e_{45}\), and

\[\boldsymbol{\partial}_{0}=\left(\begin{array}{ccccc}0&0&0&0&0\end{array}\right),\]

with columns corresponding to \(v_{1},\cdots,v_{5}\).

Figure 3: A simplicial complex with 5 vertices and the 2-simplex \(\sigma=[v_{1},v_{2},v_{3}]\) with a filled-in face (colored gray). After the boundary operation \(\partial_{2}\), we are only left with the 1-simplices \([v_{1},v_{2}]+[v_{2},v_{3}]+[v_{3},v_{1}]\), which form the boundary of the filled-in triangle. The complex has a single connected component (\(\beta_{0}=1\)) and a single 1-cycle. The dotted red arrows indicate the orientation of the simplices.

In the example in Figure 4-left, PH_rips(X,3,0.5) gives

>> S{1}
     1
     2
     3
     4
     5
>> S{2}
     1     3
     2     5
     3     4

PH_boundary.m only uses the node set S{1} and the edge set S{2} in building the boundary matrix B{1}, saving computer memory.

>> B{1}
    -1     0     0
     0    -1     0
     1     0    -1
     0     0     1
     0     1     0

The columns of the boundary matrix B{1} are indexed by the edge set in S{2}, so that the first column corresponds to the edge [1,3]. Any other potential edge, such as [2,3], that is not connected is simply ignored to save computer memory. When we increase the filtration value and compute PH_rips(X,3,0.6), a triangle is formed (yellow colored) and S{3} is created (Figure 4-middle).

Figure 4: Examples of boundary matrix computation. From left to right, the radius is changed to 0.5, 0.6 and 1.0.

>> S{2}
     1     3
     1     4
     2     5
     3     4
>> S{3}
     1     3     4

Correspondingly, the boundary matrices change to

>> B{1}
    -1    -1     0     0
     0     0    -1     0
     1     0     0    -1
     0     1     0     1
     0     0     1     0
>> B{2}
     1
    -1
     0
     1

From the edge set S{2}, which forms the row index of the boundary matrix B{2}, we have [1,3] - [1,4] + [3,4], which forms the triangle [1,3,4]. When we increase the filtration value further and compute PH_rips(X,3,1), a tetrahedron is formed (blue colored) and S{4} is created (Figure 4-right).

>> S{3}
     1     3     4
     2     3     4
     2     3     5
     2     4     5
     3     4     5
>> S{4}
     2     3     4     5

Correspondingly, the boundary matrix B{3} is created:

>> B{3}
     0
    -1
     1
    -1
     1

The easiest way to check that the computation is correct is to look at the signs of the triangles in \(-[2,3,4]+[2,3,5]-[2,4,5]+[3,4,5]\). Using the right-hand thumb rule, which puts the orientation of the triangle [3,4,5] toward the center of the tetrahedron, the orientations of all the triangles point toward the center of the tetrahedron. Thus, the signs are correctly assigned. Since computer algorithms are built inductively, the method should work correctly for higher dimensional simplices.

### Homology group

The image of the boundary map is defined as \[\mathrm{im}\,\partial_{k+1}=\{\partial_{k+1}\sigma\,|\,\sigma\in C_{k+1}\}\subset C_{k},\] which is a collection of boundaries. The elements of the image of \(\partial_{k+1}\) are called \(k\)-boundaries, and they represent \(k\)-dimensional features that can be filled in by \((k+1)\)-dimensional simplices. For instance, if we take the boundary \(\partial_{2}\) of the triangle \(\sigma\) in Figure 3, we obtain a 1-cycle with edges \(e_{12},e_{23},e_{31}\). The image of the boundary matrix \(B_{k+1}\) is the subspace spanned by its columns. The column space can be found by Gaussian elimination or the singular value decomposition.
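In Matlab, the column space, and, looking ahead to the next paragraph, the null space, can be obtained numerically; orth and null are built on the singular value decomposition. For the matrix \(\boldsymbol{\partial}_{1}\) above:

B1 = [-1  0  1  0  1  0;
       1 -1  0 -1  0  0;
       0  1 -1  0  0  0;
       0  0  0  1 -1 -1;
       0  0  0  0  0  1];   % partial_1 of the complex in Figure 3
im  = orth(B1);             % orthonormal basis of the image (boundaries)
ker = null(B1);             % orthonormal basis of the kernel (cycles)
rank(B1)                    % 4: rank of the image
size(ker, 2)                % 2: two independent 1-cycles in the 1-skeleton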
The kernel of the boundary map is defined as \[\mathrm{ker}\,\partial_{k}=\{\sigma\in C_{k}\,|\,\partial_{k}\sigma=0\},\] which is a collection of cycles. The elements of the kernel of \(\partial_{k}\) are called cycles, since they form closed loops in the simplicial complex. The kernel of the boundary matrix \(B_{k}\) is spanned by the right singular vectors of \(B_{k}\) corresponding to zero singular values, equivalently, the eigenvectors of \(B_{k}^{\top}B_{k}\) with zero eigenvalue. The boundary maps satisfy the property that the composition of any two consecutive boundary maps is the zero map, i.e., \[\partial_{k-1}\circ\partial_{k}=0. \tag{4}\] This reflects the fact that the boundary of a boundary is always empty. We can apply the boundary operation \(\partial_{1}\) to \(\partial_{2}\sigma\) in the Figure 3 example and obtain \[\partial_{1}\partial_{2}\sigma=\partial_{1}e_{12}+\partial_{1}e_{23}+\partial_{1}e_{31}=v_{2}-v_{1}+v_{3}-v_{2}+v_{1}-v_{3}=0.\] This property (4) implies that the image of \(\partial_{k+1}\) is contained in the kernel of \(\partial_{k}\), i.e., \[\mathrm{im}\,\partial_{k+1}\subset\mathrm{ker}\,\partial_{k}.\] Further, the boundaries \(\mathrm{im}\,\partial_{k+1}\) form a subgroup of the cycles \(\mathrm{ker}\,\partial_{k}\). We can partition \(\mathrm{ker}\,\partial_{k}\) into cycles that differ from each other by boundaries through the quotient space \[H_{k}(K)=\mathrm{ker}\,\partial_{k}/\mathrm{im}\,\partial_{k+1},\] which is called the \(k\)-th homology group. \(H_{k}(K)\) is a vector space that captures the \(k\)-th topological features, or cycles, in \(K\). The elements of the \(k\)-th homology group are often referred to as \(k\)-dimensional cycles or \(k\)-cycles. Intuitively, it measures the presence of \(k\)-dimensional loops in the simplicial complex. The rank of \(H_{k}(K)\) is the \(k\)-th Betti number of \(K\), which is an algebraic invariant that captures the topological features of the complex \(K\). Although we orient the simplices by assigning signs in the boundary matrices, the Betti number computation is invariant to how we orient the simplices. The \(k\)-th Betti number \(\beta_{k}\) is then computed as \[\beta_{k}=rank(H_{k})=rank(\mathrm{ker}\,\partial_{k})-rank(\mathrm{im}\,\partial_{k+1}). \tag{5}\] The 0-th Betti number is the number of connected components, while the 1-st Betti number is the number of cycles. The Betti numbers \(\beta_{k}\) are usually computed algebraically by reducing the boundary matrix \(\boldsymbol{\partial}_{k}\) to the Smith normal form \(\mathcal{S}(\boldsymbol{\partial}_{k})\), which has a block diagonal matrix as a submatrix in the upper left, via Gaussian elimination [25]. In the Smith normal form, we have the rank-nullity theorem for the boundary matrix \(\boldsymbol{\partial}_{k}\), which states that the dimension of the domain of \(\boldsymbol{\partial}_{k}\) is the sum of the dimension of its image and the dimension of its kernel (nullity) (Figure 5). In \(\mathcal{S}(\boldsymbol{\partial}_{k})\), the number of columns containing only zeros is \(rank(\mathrm{ker}\,\boldsymbol{\partial}_{k})\), the number of \(k\)-cycles, while the number of columns containing a one is \(rank(\boldsymbol{\partial}_{k})\); the number of \(k\)-cycles that are boundaries is \(rank(\boldsymbol{\partial}_{k+1})\). Thus \[\beta_{k}=rank(\mathrm{ker}\,\boldsymbol{\partial}_{k})-rank(\boldsymbol{\partial}_{k+1}). \tag{6}\]
The computation starts with the initial ranks \[rank\,\boldsymbol{\partial}_{0}=0,\quad rank(\mathrm{ker}\,\boldsymbol{\partial}_{0})=p.\] Example 2: The boundary matrices \(\boldsymbol{\partial}_{k}\) in Figure 3 are transformed to the Smith normal form \(\mathcal{S}(\boldsymbol{\partial}_{k})\) after Gaussian elimination as \[\mathcal{S}(\boldsymbol{\partial}_{1})=\left(\begin{array}{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&0&0\end{array}\right),\quad\mathcal{S}(\boldsymbol{\partial}_{2})=\left(\begin{array}{c}1\\ 0\\ 0\\ 0\\ 0\\ 0\end{array}\right).\]

Figure 5: The rank-nullity theorem for the boundary matrix \(\boldsymbol{\partial}_{k}\), which states that the dimension of the domain of \(\boldsymbol{\partial}_{k}\) is the sum of the dimension of its image and the dimension of its kernel (nullity).

From (6), the Betti number computation involves the rank computation of two boundary matrices. \(rank(\mathrm{ker}\,\boldsymbol{\partial}_{0})=5\) is trivially the number of nodes in the simplicial complex. There are \(rank(\mathrm{ker}\,\boldsymbol{\partial}_{1})=2\) zero columns and \(rank(\boldsymbol{\partial}_{1})=4\) nonzero columns, and \(rank(\boldsymbol{\partial}_{2})=1\). Thus, we have \[\beta_{0}=rank(\mathrm{ker}\,\boldsymbol{\partial}_{0})-rank(\boldsymbol{\partial}_{1})=5-4=1,\] \[\beta_{1}=rank(\mathrm{ker}\,\boldsymbol{\partial}_{1})-rank(\boldsymbol{\partial}_{2})=2-1=1.\] Following the above worked-out example, the Betti number computation is implemented in PH_boundary_betti.m, which inputs the boundary matrices generated by PH_boundary.m and outputs \(\beta_{0},\beta_{1},\cdots\). The function computes the \((d-1)\)-th Betti number as

betti(d)= rank(null(B{d-1})) - rank(B{d}).

Figure 6 displays a few examples of Betti number computation on Rips complexes. The rank computation in most computational packages is done through the singular value decomposition (SVD).

Figure 6: Betti number computation on simplicial complexes using the PH_betti.m function.

### Hodge Laplacian

The boundary matrix \(\partial_{1}\), which relates nodes to edges, is commonly referred to as an incidence matrix in graph theory. The boundary operator \(\partial_{k}\) can be represented and interpreted as a higher dimensional version of the _incidence matrix_ [41, 40, 60]. The standard graph Laplacian can be computed using the incidence matrix as \[\Delta_{0}=\partial_{1}\partial_{1}^{\top},\] which is also called the 0-th Hodge Laplacian [41]. In general, the \(k\)-th Hodge Laplacian is defined as \[\Delta_{k}=\partial_{k+1}\partial_{k+1}^{\top}+\partial_{k}^{\top}\partial_{k}.\] The boundary operation \(\partial_{k}\) depends on the \(k\)-simplices. The \(k\)-th Laplacian is a sparse \(n_{k}\times n_{k}\) positive semi-definite symmetric matrix, where \(n_{k}\) is the number of \(k\)-simplices [29]. The \(k\)-th Betti number \(\beta_{k}\) is then the dimension of \(\mathrm{ker}\,\Delta_{k}\), which is obtained by computing the rank of \(\Delta_{k}\). The 0-th Betti number, the number of connected components, is computed from \(\Delta_{0}\), while the 1-st Betti number, the number of independent cycles, is computed from \(\Delta_{1}\). For graphs, which are 1-skeletons consisting of only 0-simplices and 1-simplices, the boundary matrix \(\partial_{2}=0\); thus the first term in the Hodge Laplacian \(\Delta_{1}\) vanishes and we have \[\Delta_{1}=\partial_{1}^{\top}\partial_{1}. \tag{7}\] In brain network studies, brain networks are usually represented as graphs, and thus (7) is more than sufficient unless we model higher order brain connectivity [2].
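As a small worked sketch, with the boundary matrices of the Figure 3 complex stored in the same cell-array layout as the listings above (assuming B{1} = \(\partial_{1}\) and B{2} = \(\partial_{2}\)), the Hodge Laplacians and the Betti numbers obtained from their kernels are:

B{1} = [-1  0  1  0  1  0;
         1 -1  0 -1  0  0;
         0  1 -1  0  0  0;
         0  0  0  1 -1 -1;
         0  0  0  0  0  1];
B{2} = [1 1 1 0 0 0]';             % partial_2 of the filled triangle sigma
L0 = B{1} * B{1}';                 % 0-th Hodge Laplacian (graph Laplacian)
L1 = B{2} * B{2}' + B{1}' * B{1};  % 1-st Hodge Laplacian
beta0 = size(L0, 1) - rank(L0)     % 1: a single connected component
beta1 = size(L1, 1) - rank(L1)     % 1: a single 1-cycle survives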
The Hodge Laplacian can also be computed using adjacency matrices as \[\Delta_{k}=D-A^{+}+(k+1)I_{n_{k}}+A^{-},\] where \(A^{+}\) and \(A^{-}\) are the upper and lower adjacency matrices between the \(k\)-simplices, and \(D=diag(deg(\sigma_{1}),\cdots,deg(\sigma_{n_{k}}))\) is the diagonal matrix consisting of the degrees of the simplices \(\sigma_{j}\) [52]. Given the boundary matrices B as input, PH_hodge.m outputs the Hodge Laplacians as a cell array H. Then PH_hodge_betti.m computes the Betti numbers through the rank of the kernel space of the Hodge Laplacians:

H=PH_hodge(B)
betti= PH_hodge_betti(H)

### Rips filtrations

The Rips complex has the property that as the radius parameter \(\epsilon\) increases, the complex grows by adding new simplices: the simplices in the Rips complex at one radius parameter value are a subset of the simplices at a larger radius parameter value. This nesting property is captured by the inclusion relation \[\mathcal{R}_{\epsilon_{0}}(X)\subset\mathcal{R}_{\epsilon_{1}}(X)\subset\mathcal{R}_{\epsilon_{2}}(X)\subset\cdots\] for \(0=\epsilon_{0}\leq\epsilon_{1}\leq\epsilon_{2}\leq\cdots\). This nested sequence of Rips complexes is called the _Rips filtration_, which is the main object of interest in persistent homology (Figure 7). The filtration values \(\epsilon_{0},\epsilon_{1},\epsilon_{2},\ldots\) represent the different scales at which we study the topological structure of the point cloud. By increasing the filtration value \(\epsilon\), we connect more points, and therefore the sizes of the edge set, face set, and so on, increase. The exponential growth of the number of simplices in the Rips complex as the number of vertices \(p\) increases can quickly become a computational bottleneck when working with large point clouds. For a fixed dimension \(k\), the number of \(k\)-simplices in the Rips complex grows as \(\mathcal{O}(p^{k+1})\), which can make computations and memory usage impractical for large values of \(p\). Furthermore, as the filtration value \(\epsilon\) increases, the Rips complex becomes increasingly dense, with edges between every pair of vertices and filled triangles between every triple of vertices. Even for moderately sized point clouds, the Rips filtration can become very ineffective as a representation of the underlying data at higher filtration values: the complex becomes too dense to provide meaningful insights into the underlying topological structure of the data. To address these issues, various methods have been proposed to sparsify the Rips complex. One such method is the graph filtration, first proposed in [43, 42], which constructs a filtration based on a weighted graph representation of the data. The graph filtration can be more effective than the Rips filtration, especially when the topological features of interest are related to the graph structure of the data.

### Persistence diagrams

As the radius \(\epsilon\) increases, the Rips complex \(R_{\epsilon}(X)\) grows and may contain a higher-dimensional simplex that merges two lower-dimensional features, representing the death of the two lower-dimensional features and the birth of a new higher-dimensional feature. The persistence diagram is a plot of the birth and death times of the features. We start by computing the homology groups of each of the simplicial complexes in the filtration. Let \(H_{k}(K_{i})\) denote the \(k\)-th homology group of the simplicial complex \(K_{i}\).
We then track the appearance and disappearance of each homology class across the different simplicial complexes in the filtration. The birth time of a homology class is defined as the smallest radius \(\epsilon_{b}\) at which the class appears in the filtration, and the death time \(\epsilon_{d}\) is the largest radius at which the class is still present. We then plot each homology class as a point \((\epsilon_{b},\epsilon_{d})\in\mathbb{R}^{2}\) in the two-dimensional plane. The collection of all these points is the persistence diagram of the \(k\)-th homology group.

Figure 7: Rips filtration on the 1-skeleton of point cloud data sampled along an underlying key-shaped object. If two points are within the given radius, we connect them with an edge but do not form any higher dimensional simplex. Such sparsity in the Rips filtration can be more effective in practice. PH_rips.m limits the dimension of the skeleton on which we build Rips filtrations and does not build every possible simplicial complex.

To track the birth and death times of homology classes, we need to identify when a new homology class is born and when an existing homology class dies as the radius \(\epsilon\) increases. We can do this by tracking the changes in the ranks of the boundary matrices. Specifically, a \(k\)-dimensional cycle is born when it appears as a new element in the kernel of \(\partial_{k}\) in a simplicial complex \(K_{i}\) that did not have it before, and it dies when it becomes a boundary in \(K_{j}\) for some \(j>i\). Thus, we can compute the birth time \(\epsilon_{b}\) of a \(k\)-dimensional homology class as the smallest radius at which it appears as a new element in the kernel of \(\partial_{k}\). Similarly, we can compute the death time \(\epsilon_{d}\) of the same class as the largest radius at which it is still a cycle in the simplicial complex \(K_{j}\) for some \(j>i\). By tracking the changes in the ranks of the boundary matrices, we can compute the birth and death times of homology classes and plot them in the persistence diagram of the \(k\)-th homology group. However, the computation is fairly demanding and does not scale well.

Example 3: This example comes from [18]. The atomic structure of the spike proteins of corona viruses can be determined through cryogenic electron microscopy (cryo-EM) [7, 69]. Figure 8-top displays a spike consisting of three similarly shaped protein molecules with rotational symmetry, often identified as the A, B and C domains. 6VXX and 6VYB are respectively the closed and open states of SARS-Cov-2 from human [69], while 6JX7 is a feline coronavirus [73]. We used the atomic distances in building the Rips filtrations for computing the persistence diagrams. The persistence diagrams of the closed and open states are almost identical at smaller birth and death values below 6 Å (angstrom) (Figure 8-bottom). The major difference is in the scatter points with larger birth and death values. However, we need a quantitative measure for comparing the topology of the closed and open states.

Figure 8: Top: Spike proteins of the three different corona viruses. The spike proteins consist of three similarly shaped interwinding substructures identified as the A (blue), B (red) and C (green) domains. Bottom: The persistence diagrams of the spike proteins. The red dots are the 0D homology and the black dots the 1D homology.
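For the 0-th homology of a Rips filtration of a point cloud, every component is born at \(\epsilon=0\) and dies at a merging event, so the death times can be recovered with a simple union-find sketch over the edges sorted by length. This is independent of the toolbox and omits path compression for brevity.

p = 30;
X = rand(p, 2);
w = pdist2(X, X);
[i, j] = find(triu(true(p), 1));               % all vertex pairs i < j
[len, order] = sort(w(sub2ind([p p], i, j)));  % edges by increasing length
parent = 1:p;
deaths = [];
for e = 1:numel(order)
    a = i(order(e)); b = j(order(e));
    while parent(a) ~= a, a = parent(a); end   % root of a's component
    while parent(b) ~= b, b = parent(b); end   % root of b's component
    if a ~= b
        parent(b) = a;                         % merge the two components
        deaths(end+1) = len(e);                % one 0D class dies here
    end
end
% H0 persistence pairs: (0, deaths); after p-1 merges one component remains.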
## 4 Graph filtrations

### Graph filtration

The graph filtration was the first type of filtration applied to brain networks, and it is now considered the baseline filtration for brain network data [43, 42, 44]. The Euclidean distance is the most often used metric for building filtrations in persistent homology [25], and most brain network studies also use Euclidean distances for building graph filtrations [57, 36, 10, 71, 3, 56]. Given a weighted network \(\mathcal{X}=(V,w)\) with edge weights \(w=(w_{ij})\), the binary network \(\mathcal{X}_{\epsilon}=(V,w_{\epsilon})\) is a graph consisting of the node set \(V\) and the binary edge weights \(w_{\epsilon}=(w_{\epsilon,ij})\) given by \[w_{\epsilon,ij}=\begin{cases}1&\text{ if }w_{ij}>\epsilon;\\ 0&\text{ otherwise.}\end{cases} \tag{8}\] Note that [42, 44] define the binary graphs by thresholding from above, such that \(w_{\epsilon,ij}=1\) if \(w_{ij}<\epsilon\), which is consistent with the definition of the Rips filtration. However, in brain connectivity, a higher value of \(w_{ij}\) indicates stronger connectivity, so we usually threshold from below [15]. Note that \(w_{\epsilon}\) is the adjacency matrix of \(\mathcal{X}_{\epsilon}\), which is a simplicial complex consisting of 0-simplices (nodes) and 1-simplices (edges) [32]. In the metric space \(\mathcal{X}=(V,w)\), the Rips complex \(\mathcal{R}_{\epsilon}(\mathcal{X})\) is a simplicial complex whose \((p-1)\)-simplices correspond to unordered \(p\)-tuples of points that satisfy \(w_{ij}\leq\epsilon\) in a pairwise fashion [32]. While the binary network \(\mathcal{X}_{\epsilon}\) has at most 1-simplices, the Rips complex can have at most \((p-1)\)-simplices. Thus, \(\mathcal{X}_{\epsilon}\subset\mathcal{R}_{\epsilon}(\mathcal{X})\) and its complement \(\mathcal{X}_{\epsilon}^{c}\subset\mathcal{R}_{\epsilon}(\mathcal{X})\). Since a binary network is a special case of the Rips complex, we also have \[\mathcal{X}_{\epsilon_{0}}\supset\mathcal{X}_{\epsilon_{1}}\supset\mathcal{X}_{\epsilon_{2}}\supset\cdots\] and equivalently \[\mathcal{X}_{\epsilon_{0}}^{c}\subset\mathcal{X}_{\epsilon_{1}}^{c}\subset\mathcal{X}_{\epsilon_{2}}^{c}\subset\cdots\] for \(0=\epsilon_{0}\leq\epsilon_{1}\leq\epsilon_{2}\leq\cdots\). The sequence of such nested multiscale graphs is defined as the _graph filtration_ [42, 44]. Figure 9 illustrates a graph filtration in a 4-node example, while Figure 10 shows the graph filtration of structural covariates of maltreated children on 116 parcellated brain regions. Note that \(\mathcal{X}_{0}\) is the complete weighted graph, while \(\mathcal{X}_{\infty}\) is the node set \(V\). By increasing the threshold value, we are thresholding at higher connectivity, so more edges are removed. Given a weighted graph, there are infinitely many different filtrations, which makes comparisons between two different graph filtrations difficult. For a graph with \(p\) nodes, the maximum number of edges is \((p^{2}-p)/2\), which is obtained in a complete graph. If we order the edge weights in increasing order, we have the sorted edge weights: \[0=w_{(0)}<\min_{j,k}w_{jk}=w_{(1)}<w_{(2)}<\cdots<w_{(q)}=\max_{j,k}w_{jk},\] where \(q\leq(p^{2}-p)/2\) and the subscript \({}_{(\ )}\) denotes the order statistic. For all \(\lambda<w_{(1)}\), \(\mathcal{X}_{\lambda}=\mathcal{X}_{0}\) is the complete graph of \(V\). For all \(w_{(r)}\leq\lambda<w_{(r+1)}\) (\(r=1,\cdots,q-1\)), \(\mathcal{X}_{\lambda}=\mathcal{X}_{w_{(r)}}\). For all \(w_{(q)}\leq\lambda\), \(\mathcal{X}_{\lambda}=\mathcal{X}_{w_{(q)}}=V\), the vertex set.
Given a weighted graph, there are infinitely many different filtrations. This makes the comparison between two different graph filtrations difficult. For a graph with \(p\) nodes, the maximum number of edges is \((p^{2}-p)/2\), which is obtained in a complete graph. If we order the edge weights in increasing order, we have the sorted edge weights: \[0=w_{(0)}<\min_{j,k}w_{jk}=w_{(1)}<w_{(2)}<\cdots<w_{(q)}=\max_{j,k}w_{jk},\] where \(q\leq(p^{2}-p)/2\). The subscript \({}_{(\ )}\) denotes the order statistic. For all \(\lambda<w_{(1)}\), \(\mathcal{X}_{\lambda}=\mathcal{X}_{0}\) is the complete graph of \(V\). For all \(w_{(r)}\leq\lambda<w_{(r+1)}\) (\(r=1,\cdots,q-1\)), \(\mathcal{X}_{\lambda}=\mathcal{X}_{w_{(r)}}\). For all \(w_{(q)}\leq\lambda\), \(\mathcal{X}_{\lambda}=\mathcal{X}_{w_{(q)}}=V\), the vertex set. Hence, the filtration given by \[\mathcal{X}_{0}\supset\mathcal{X}_{w_{(1)}}\supset\mathcal{X}_{w_{(2)}}\supset\cdots\supset\mathcal{X}_{w_{(q)}}\] is _maximal_ in the sense that we cannot have any additional filtration \(\mathcal{X}_{\epsilon}\) that is not one of the above filtrations. Thus, graph filtrations are usually given at the edge weights [15]. The condition of having unique edge weights is not restrictive in practice. Assuming the edge weights follow some continuous distribution, the probability of any two edges being equal is zero. For a discrete distribution, it may be possible to have identical edge weights. In that case, simply add Gaussian noise or an extremely small increasing sequence of numbers to the \(q\) edge weights.

### Monotone Betti curves

The graph filtration can be quantified using a monotonic function \(f\) satisfying \[f(\mathcal{X}_{\epsilon_{0}})\geq f(\mathcal{X}_{\epsilon_{1}})\geq f(\mathcal{X}_{\epsilon_{2}})\geq\cdots \tag{9}\] or \[f(\mathcal{X}_{\epsilon_{0}})\leq f(\mathcal{X}_{\epsilon_{1}})\leq f(\mathcal{X}_{\epsilon_{2}})\leq\cdots \tag{10}\]

Figure 9: Schematic of graph filtration and Betti curves. We sort the edge weights in increasing order. We threshold the graph at the filtration values and obtain binary graphs. The thresholding is performed sequentially by increasing the filtration values. The 0-th Betti number \(\beta_{0}\), which counts the number of connected components, and the first Betti number \(\beta_{1}\), which counts the number of cycles, are then plotted over the filtration. The Betti curves are monotone in graph filtrations.

The number of connected components (zeroth Betti number \(\beta_{0}\)) and the number of cycles (first Betti number \(\beta_{1}\)) satisfy the monotonicity (Figures 9 and 11). The size of the largest cluster satisfies a similar but opposite relation of monotonic increase. There are numerous monotone graph theory features [15, 20]. For graphs, \(\beta_{1}\) can be computed easily as a function of \(\beta_{0}\). Note that the Euler characteristic \(\chi\) can be computed in two different ways: \[\chi = \beta_{0}-\beta_{1}+\beta_{2}-\cdots = \#\text{nodes}-\#\text{edges}+\#\text{faces}-\cdots,\] where \(\#\text{nodes},\#\text{edges},\#\text{faces}\) are the number of nodes, edges and faces. However, graphs do not have filled faces, and Betti numbers higher than \(\beta_{0}\) and \(\beta_{1}\) can be ignored. Thus, for a graph with \(p\) nodes and \(q\) edges, we have [1] \[\chi=\beta_{0}-\beta_{1}=p-q.\] Thus, \[\beta_{1}=\beta_{0}-p+q.\] In a graph, the Betti numbers \(\beta_{0}\) and \(\beta_{1}\) are monotone over the filtration on edge weights [16, 17]. When we perform the maximal graph filtration, edges are deleted one at a time. Since an edge has only two end points, the deletion of an edge disconnects the graph into at most two components. Thus, the number of connected components (\(\beta_{0}\)) is non-decreasing, and the increase is at most by one. Note that \(p\) is fixed over the filtration but \(q\) decreases by one at each step, while \(\beta_{0}\) increases by at most one. Hence, \(\beta_{1}=\beta_{0}-p+q\) is non-increasing, and the decrease is at most by one.

Figure 10: Graph filtrations of maltreated children vs. normal control subjects on FA-values [15]. The Pearson correlation is used as the filtration values at 0.5, 0.6 and 0.7. Maltreated subjects show much higher correlation of FA-values, indicating a more homogeneous and less varied structural covariate relationship.

Further, the length of the largest cycle, as measured by the number of nodes, also decreases monotonically (Figure 12).
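Because of the identity \(\beta_{1}=\beta_{0}-p+q\), both Betti curves of a graph filtration can be computed with nothing more than connected-component counts. A minimal sketch, assuming w is a symmetric \(p\times p\) weight matrix with zero diagonal; variable names are illustrative:

% Betti curves of a graph filtration via the Euler characteristic.
p = size(w, 1);
thresholds = unique(w(triu(true(p), 1)));   % sorted unique edge weights
beta0 = zeros(size(thresholds)); beta1 = beta0;
for t = 1:numel(thresholds)
    A = w > thresholds(t);                  % threshold below, equation (8)
    A(1:p+1:end) = 0;
    G = graph(A);
    q = numedges(G);                        % surviving edges
    beta0(t) = max(conncomp(G));            % connected components
    beta1(t) = beta0(t) - p + q;            % independent cycles
end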
Identifying connected components in a network is important for understanding how the network decomposes into disjoint subnetworks. The number of connected components (the 0-th Betti number) of a graph is a topological invariant that measures the number of structurally independent or disjoint subnetworks. There are many existing algorithms, not related to persistent homology, for computing the number of connected components, including the Dulmage-Mendelsohn decomposition [58], which has been widely used for decomposing sparse matrices into block triangular forms to speed up matrix operations. In graph filtrations, the numbers of connected components and cycles increase or decrease monotonically as the filtration value increases. The pattern of monotone increase or decrease can visually show how the topology of the graph changes over the filtration values. The overall pattern of the _Betti curves_ can be used as a summary measure quantifying how the graph changes over increasing edge weights [14] (Figure 9).

Figure 11: The Betti curves on the covariance and correlation matrices for the Jacobian determinant (left column) and fractional anisotropy (right column) on 548 (top two rows) and 1856 (bottom two rows) nodes [15]. Unlike the covariance, the correlation seems to show a huge group separation between normal and maltreated children visually. However, in all 7 cases except the top right (548-node covariance for FA), statistically significant differences were detected using the rank-sum test on the areas under the Betti plots (\(p\)-value \(<0.001\)). The shapes of the Betti plots are consistent between the studies with different node sizes, indicating the robustness of the proposed method over a changing number of nodes.

The Betti curves are related to barcodes. The Betti number is equal to the number of bars in the barcode at the specific filtration value. Figure 14 displays an example of a graph filtration constructed using random scatter points in a cube. Given scatter points X, the pairwise distance matrix is computed as w = pdist2(X,X). The maximum distance is given by maxw = max(w(:)). Betti curves are computed using PH_betti.m, which inputs the pairwise distance w and the range of filtration values [0:0.05:maxw]. The function outputs the \(\beta_{0}\) and \(\beta_{1}\) values as structured arrays beta.zero and beta.one. We display them using PH_betti_display.m.

p=50; d=3;
X = rand(p, d);
w = pdist2(X,X);
maxw = max(w(:));
thresholds=[0:0.05:maxw];
beta = PH_betti(w, thresholds);
PH_betti_display(beta,thresholds)

### Rips filtration vs. graph filtration

Persistent homology does not scale well with increased data size (Figure 13). The computational complexity of persistent homology grows rapidly with the number of simplices [67]. With \(p\) nodes, the size of the \(k\)-skeleton grows as \(p^{k+1}\). Homology calculations are often done by Gaussian elimination, and if there are \(N\) simplices, this takes \(\mathcal{O}(N^{3})\) time. In \(\mathbb{R}^{d}\), the computational complexity is \(\mathcal{O}(p^{3k+3})\) [63]. Thus, the computation of the Rips complex is exponentially costly. It can easily become infeasible when one tries to use brain networks at the voxel-level resolution.

Figure 12: The largest cycle at given correlation thresholds on rs-fMRI. Two representative subjects in HCP were used [16]. As the threshold increases, the length of the cycles decreases monotonically.

Thus, there have been many attempts
to compute the Rips complex approximately but quickly for large-scale data, such as the alpha filtration based on the Delaunay triangulation with \(\mathcal{O}(n^{2})\) complexity for \(k=3\), and the sparse Rips filtration with \(\mathcal{O}(n)\) simplices and \(\mathcal{O}(n\log n)\) runtime [19, 55, 61]. However, all of these filtrations are approximations to the Rips filtration. To remedy the computational bottleneck caused by Rips filtrations, the _graph filtration_ was introduced particularly for network data [42, 44]. The graph filtration is a special case of the Rips filtration restricted to 1-simplices. If the Rips filtration up to 2-simplices is given by PH_rips(X,2,e), the graph filtration is given by PH_rips(X,1,maxw-e). Figure 14 displays the comparison of the two filtrations for 50 randomly generated nodes in a cube. In both filtrations, the \(\beta_{0}\)-curves are monotone. However, the \(\beta_{1}\)-curve for the Rips filtration is not monotone. Further, the range of changes in \(\beta_{1}\) is very narrow. For some randomly generated points, we can have multiple peaks in \(\beta_{1}\), making the \(\beta_{1}\)-curve somewhat unstable. On the other hand, the \(\beta_{1}\)-curve for the graph filtration is monotone and changes gradually over the whole range of filtration values. This yields the consistency of the \(\beta_{1}\)-curve that is required for increasing statistical power in group-level inference.

### Graph filtration in trees

Binary trees have been a popular data structure to analyze using persistent homology in recent years [4, 47]. Trees and graphs are 1-skeletons, which are Rips complexes consisting of only nodes and edges. Trees do not have 1-cycles and can be quantified using 0-cycles only, i.e., connected components; higher order topological features can simply be ignored. However, [31] used somewhat inefficient filtrations in the 2D plane that increase the radius of circles from the root node or of points along the circles. Such filtrations produce persistent diagrams (PD) whose points spread over the 2D plane. Further, they may create 1-cycles.

Figure 13: rs-fMRI correlation network of two subjects from HCP with more than 25000 nodes. Identifying cycles and computing the number of cycles can be computationally demanding in this type of dense correlation network since persistent homology computations are not very scalable.

Such PD are difficult to analyze since the scatter points do not correspond across different PD. For 1-skeletons, the _graph filtration_ offers a more efficient alternative [17, 64]. Consider a tree \(\mathcal{T}=(V,w)\) with node set \(V=\{1,2,\cdots,p\}\) and weighted adjacency matrix \(w\). If we have a binary tree with a binary adjacency matrix, we add edge weights by taking the distance between nodes \(i\) and \(j\) as the edge weight \(w_{ij}\) and build a weighted tree with \(w=(w_{ij})\). Consider a tree \(\mathcal{T}\) with \(p\geq 2\) nodes and \(p-1\) unique positive edge weights \(w_{(1)}<w_{(2)}<\cdots<w_{(p-1)}\). Threshold \(\mathcal{T}\) at \(\epsilon\) and define the binary tree \(\mathcal{T}_{\epsilon}=(V,w_{\epsilon})\) with edge weights \(w_{\epsilon}=(w_{\epsilon,ij})\), where \(w_{\epsilon,ij}=1\) if \(w_{ij}>\epsilon\) and \(0\) otherwise. Then we have the graph filtration on trees \[\mathcal{T}_{w_{(0)}}\supset\mathcal{T}_{w_{(1)}}\supset\mathcal{T}_{w_{(2)}}\supset\cdots\supset\mathcal{T}_{w_{(p-1)}}. \tag{11}\] Since all the edge weights are above the filtration value \(w_{(0)}=0\), all the nodes are connected, i.e., \(\beta_{0}(w_{(0)})=1\).
Since no edge weight is above the threshold \(w_{(p-1)}\), \(\beta_{0}(w_{(p-1)})=p\). Each time we threshold, the tree splits into two and the number of components increases exactly by one [17]. Thus, we have \[\beta_{0}(\mathcal{T}_{w_{(1)}})=2,\beta_{0}(\mathcal{T}_{w_{(2)}})=3,\cdots,\beta_{0}(\mathcal{T}_{w_{(p-1)}})=p.\]

Figure 14: The comparison between the Rips and graph filtrations.

Thus, the coordinates of the 0-th Betti curve are given by \[(0,1),(w_{(1)},2),(w_{(2)},3),\cdots,(w_{(p-1)},p),(\infty,p).\] The 0-cycles (connected components) never die once they are born over the graph filtration. For convenience, we simply set the death value of the 0-cycles at some fixed number \(c>w_{(p-1)}\). Then the PD of the graph filtration is simply \[(w_{(1)},c),(w_{(2)},c),\cdots,(w_{(p-1)},c),\] forming 1D scatter points along the horizontal line \(y=c\), which significantly simplifies various operations and analyses on PD [64]. Figure 15 illustrates the graph filtration and the corresponding 1D scatter points in the PD for the binary tree used in [31].

Figure 15: Left: the binary tree used in [31]. Middle: the \(\beta_{0}\)-curve over the graph filtration. The edge weights of the tree are used as the filtration values. Right: The points in the persistent diagram all lined up at \(y=0.31\), which is arbitrarily picked to be larger than the maximum edge weight 0.3034.

A different graph filtration is also possible by taking the edge weight to be the shortest distance from the root node. However, it should carry identical topological information. For a general graph, it is not possible to analytically determine the coordinates of its Betti curves. The best we can do is to compute the number of connected components \(\beta_{0}\) numerically using the single linkage dendrogram method (SLD) [44], the Dulmage-Mendelsohn decomposition [58, 11] or Gaussian elimination [62, 9, 26].
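To make the tree case concrete, the closed-form \(\beta_{0}\)-curve coordinates and the collapsed persistent diagram above can be assembled in a few lines. A minimal sketch, assuming wT holds the \(p-1\) unique positive edge weights of a weighted tree; names are illustrative:

% beta0-curve coordinates and 1D-scatter PD of a weighted tree.
ws = sort(wT(:));                          % w_(1) < w_(2) < ... < w_(p-1)
beta0curve = [[0; ws], (1:numel(ws)+1)'];  % coordinates (0,1),(w_(1),2),...
c  = max(ws) + 0.01;                       % fixed death value above the largest weight
PD = [ws, repmat(c, numel(ws), 1)];        % points (w_(i), c) on the line y = c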
### Birth-death decomposition

Unlike the Rips complex, there are no higher dimensional topological features beyond the 0D and 1D topology in the graph filtration. The 0D and 1D persistent diagrams \((b_{i},d_{i})\) tabulate the life-times of 0-cycles (connected components) and 1-cycles (loops) that are born at the filtration value \(b_{i}\) and die at the value \(d_{i}\). The 0-th Betti number \(\beta_{0}(w_{(i)})\) counts the number of 0-cycles at the filtration value \(w_{(i)}\) and is shown to be non-decreasing over the filtration (Figure 16) [16]: \(\beta_{0}(w_{(i)})\leq\beta_{0}(w_{(i+1)})\). On the other hand, the 1st Betti number \(\beta_{1}(w_{(i)})\) counts the number of independent loops and is shown to be non-increasing over the filtration [16]: \(\beta_{1}(w_{(i)})\geq\beta_{1}(w_{(i+1)})\). During the graph filtration, once a new component is born, it never dies. Thus, the 0D persistent diagrams are completely characterized by the birth values \(b_{i}\) only. Loops are viewed as already born at \(-\infty\). Thus, the 1D persistent diagrams are completely characterized by the death values \(d_{i}\) only. We can show that the edge weight set \(W\) can be partitioned into 0D birth values and 1D death values [65]:

Theorem 3.1 (Birth-death decomposition): _The edge weight set \(W=\{w_{(1)},\cdots,w_{(q)}\}\) has the unique decomposition_ \[W=W_{b}\cup W_{d},\quad W_{b}\cap W_{d}=\emptyset \tag{12}\] _where the birth set \(W_{b}=\{b_{(1)},b_{(2)},\cdots,b_{(q_{0})}\}\) is the collection of 0D sorted birth values and the death set \(W_{d}=\{d_{(1)},d_{(2)},\cdots,d_{(q_{1})}\}\) is the collection of 1D sorted death values, with \(q_{0}=p-1\) and \(q_{1}=(p-1)(p-2)/2\). Further, \(W_{b}\) forms the 0D persistent diagram while \(W_{d}\) forms the 1D persistent diagram._

In a complete graph with \(p\) nodes, there are \(q=p(p-1)/2\) unique edge weights. There are \(q_{0}=p-1\) edges that produce 0-cycles. This is equivalent to the number of edges in the maximum spanning tree of the graph. Thus, \(q_{1}=q-q_{0}=\frac{(p-1)(p-2)}{2}\) edges destroy loops. The 0D persistent diagram is given by \(\{(b_{(1)},\infty),\cdots,(b_{(q_{0})},\infty)\}\). Ignoring \(\infty\), \(W_{b}\) is the 0D persistent diagram. The 1D persistent diagram of the graph filtration is given by \(\{(-\infty,d_{(1)}),\cdots,(-\infty,d_{(q_{1})})\}\). Ignoring \(-\infty\), \(W_{d}\) is the 1D persistent diagram. We can show that the birth set forms the maximum spanning tree (MST) (Figure 16) [64].

Numerical implementation: The identification of \(W_{b}\) is based on a modification of Kruskal's or Prim's algorithm that identifies the MST [44]. Then \(W_{d}\) is identified as \(W\setminus W_{b}\). Figure 16 displays how the birth and death sets change over time for a single subject used in the study. Given the edge weight matrix \(W\) as input, the Matlab function WS_decompose.m outputs the birth set \(W_{b}\) and the death set \(W_{d}\).

Figure 16: The birth-death decomposition partitions the edge set into the birth and death edge sets. The birth set forms the maximum spanning tree (MST) and contains the edges that create connected components (0D topology). The death set contains the edges not belonging to the MST, which destroy loops (1D topology).
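The decomposition can also be reproduced with base MATLAB graph utilities by obtaining the maximum spanning tree through a weight flip. A minimal sketch of Theorem 3.1, not the WS_decompose.m implementation, assuming w is a symmetric \(p\times p\) matrix with unique positive off-diagonal weights and zero diagonal:

% Birth-death decomposition via the maximum spanning tree.
p  = size(w, 1);
Gf = graph(max(w(:)) + 1 - w, 'omitselfloops');  % flipped weights: its minimum spanning tree is the maximum spanning tree of w
T  = minspantree(Gf);
idx = sub2ind([p p], T.Edges.EndNodes(:,1), T.Edges.EndNodes(:,2));
Wb = sort(w(idx));                               % 0D birth set: maximum spanning tree edges
Wd = setdiff(w(triu(true(p), 1)), Wb);           % 1D death set: all remaining edge weights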
## 5 Topological Inference

The main difference between a geometric and a topological distance is whether the distance can discriminate the presence of a topological difference while remaining indifferent to topologically equivalent objects (Figure 17). This can be achieved using the Wasserstein distance between persistent diagrams.

### Wasserstein distance

Given two probability distributions \(X\sim f_{1}\) and \(Y\sim f_{2}\), the \(r\)-_Wasserstein distance_ \(\mathcal{D}\), which is the probabilistic version of optimal transport, is defined as \[\mathcal{D}(f_{1},f_{2})=\Big{(}\inf\mathbb{E}|X-Y|^{r}\Big{)}^{1/r},\] where the infimum is taken over every possible joint distribution of \(X\) and \(Y\). The Wasserstein distance is the optimal expected cost of transporting points generated from \(f_{1}\) to those generated from \(f_{2}\) [8]. There are numerous distances and similarity measures defined between probability distributions, such as the Kullback-Leibler (KL) divergence and the mutual information [38].

Figure 17: Comparison between the geometric distance \(d_{geo}\) and the topological distance \(d_{top}\). We used the shortest distance between objects as the geometric distance. The left and middle objects are topologically different while the left and right objects are topologically equivalent. The geometric distance cannot discriminate the topologically different objects (left and middle) and produces false negatives. The geometric distance incorrectly discriminates the topologically equivalent objects (left and right) and produces false positives.

While the Wasserstein distance is a metric satisfying positive definiteness, symmetry, and the triangle inequality, the KL-divergence and the mutual information are not metrics. Although they are easy to compute, the biggest limitation of the KL-divergence and the mutual information is that the two probability distributions have to be defined on the same sample space. If the two distributions do not have the same support, for instance if \(f_{1}\) is discrete while \(f_{2}\) is continuous, it may be difficult to even define them. On the other hand, the Wasserstein distance can be computed for arbitrary distributions that may not share a common sample space, making it extremely versatile. The Wasserstein distance is an often used distance for measuring the discrepancy between persistent diagrams. Consider persistent diagrams \(P_{1}\) and \(P_{2}\) given by \[P_{1}:x_{1}=(b_{1}^{1},d_{1}^{1}),\cdots,x_{q}=(b_{q}^{1},d_{q}^{1}),\quad P_{2}:y_{1}=(b_{1}^{2},d_{1}^{2}),\cdots,y_{q}=(b_{q}^{2},d_{q}^{2}).\] Their empirical distributions are given in terms of Dirac delta functions \[f_{1}(x)=\frac{1}{q}\sum_{i=1}^{q}\delta(x-x_{i}),\quad f_{2}(y)=\frac{1}{q}\sum_{i=1}^{q}\delta(y-y_{i}).\] Then we can show that the _\(r\)-Wasserstein distance_ on persistent diagrams is given by \[\mathcal{D}(P_{1},P_{2})=\inf_{\psi:P_{1}\to P_{2}}\Big{(}\sum_{x\in P_{1}}\|x-\psi(x)\|^{r}\Big{)}^{1/r} \tag{13}\] over every possible bijection \(\psi\), which is a permutation, between \(P_{1}\) and \(P_{2}\) [68, 8, 5]. Optimization (13) is the standard assignment problem, which is usually solved by the Hungarian algorithm in \(\mathcal{O}(q^{3})\) [27]. However, for the graph filtration, the distance can be computed _exactly_ in \(\mathcal{O}(q\log q)\) by simply matching the order statistics of the birth or death values [59, 64, 65]. Note that the 0D persistent diagram for the graph filtration is just 1D scatter points of birth values, while the 1D persistent diagram for the graph filtration is just 1D scatter points of death values. Thus, the Wasserstein distance can be simplified as follows.

**Theorem 2**: _[64] The \(r\)-Wasserstein distance between the 0D persistent diagrams for the graph filtration is given by_ \[\mathcal{D}_{0}(P_{1},P_{2})=\Big{[}\sum_{i=1}^{q_{0}}(b_{(i)}^{1}-b_{(i)}^{2})^{r}\Big{]}^{1/r},\] _where \(b_{(i)}^{j}\) is the \(i\)-th smallest birth value in persistent diagram \(P_{j}\). The \(r\)-Wasserstein distance between the 1D persistent diagrams for the graph filtration is given by_ \[\mathcal{D}_{1}(P_{1},P_{2})=\Big{[}\sum_{i=1}^{q_{1}}(d_{(i)}^{1}-d_{(i)}^{2})^{r}\Big{]}^{1/r},\] _where \(d_{(i)}^{j}\) is the \(i\)-th smallest death value in persistent diagram \(P_{j}\)._

In this study, we simply use the combined 0D and 1D topological distance \(\mathcal{D}=\mathcal{D}_{0}+\mathcal{D}_{1}\) for inference and clustering. For collections of graphs con_i and con_j, the pairwise Wasserstein distance between graphs is computed as

lossMtx = WS_pdist2(con_i,con_j)

struct with fields:
D0: [10*10 double]
D1: [10*10 double]
D01: [10*10 double]

Each entry of lossMtx stores the 0D, 1D and combined topological distances.
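Theorem 2 reduces the assignment problem to sorting. A minimal sketch, assuming Wb1, Wb2 (Wd1, Wd2) are the birth (death) sets of two graphs over the same node set, so the vectors have equal length; names are illustrative:

% r-Wasserstein distances between graph-filtration persistent diagrams.
r  = 2;
D0 = sum(abs(sort(Wb1(:)) - sort(Wb2(:))).^r)^(1/r);   % match sorted births
D1 = sum(abs(sort(Wd1(:)) - sort(Wd2(:))).^r)^(1/r);   % match sorted deaths
D  = D0 + D1;   % combined topological distance used in this study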
### Topological inference

There are a few studies that used the Wasserstein distance [48, 73]. The existing methods are mainly applied to geometric data without topological consideration, and it is not obvious how to apply them to perform statistical inference in a population study. We present a new statistical inference procedure for testing the topological difference between two groups, the usual setting in brain network studies. Consider a collection of graphs \(\mathcal{X}_{1},\cdots,\mathcal{X}_{n}\) that are grouped into two groups \(C_{1}\) and \(C_{2}\) such that \[C_{1}\cup C_{2}=\{\mathcal{X}_{1},\cdots,\mathcal{X}_{n}\},\quad C_{1}\cap C_{2}=\emptyset.\] We assume there are \(n_{i}\) graphs in \(C_{i}\) and \(n_{1}+n_{2}=n\). In the usual statistical inference, we are interested in testing the null hypothesis of the equivalence of a topological summary \(\mathcal{T}\): \[H_{0}:\mathcal{T}(C_{1})=\mathcal{T}(C_{2}).\] Under the null, there are \(\binom{n}{n_{1}}\) permutations of the \(n\) graphs into two groups, which is an extremely large number, and most computing systems, including MATLAB/R, cannot enumerate them exactly if the sample size is larger than 50 in each group. If \(n_{1}=n_{2}\), the total number of permutations is given asymptotically by Stirling's formula [28]: \[\binom{n}{n_{1}}\sim\frac{4^{n_{1}}}{\sqrt{\pi n_{1}}}.\] The number of permutations increases _exponentially_ as the sample size increases, and thus it is impractical to generate every possible permutation. In practice, up to hundreds of thousands of random permutations are generated using the uniform distribution on the permutation group with probability \(1/\binom{n}{n_{1}}\). The main computational bottleneck of the permutation test is the need to recompute the test statistic for each permutation, which becomes serious for large samples and more than a million permutations. We propose a more scalable approach. Define the within-group distance \(\mathcal{L}_{W}\) as \[2\mathcal{L}_{W}=\sum_{\mathcal{X}_{i},\mathcal{X}_{j}\in C_{1}}\mathcal{D}(\mathcal{X}_{i},\mathcal{X}_{j})+\sum_{\mathcal{X}_{i},\mathcal{X}_{j}\in C_{2}}\mathcal{D}(\mathcal{X}_{i},\mathcal{X}_{j}).\] The within-group distance corresponds to the sum of all the pairwise distances in the block diagonal matrices in Figure 18. The average within-group distance is then given by \[\overline{\mathcal{L}}_{W}=\frac{\mathcal{L}_{W}}{n_{1}(n_{1}-1)+n_{2}(n_{2}-1)}.\] The between-group distance \(\mathcal{L}_{B}\) is defined as \[2\mathcal{L}_{B}=\sum_{\mathcal{X}_{i}\in C_{1}}\sum_{\mathcal{X}_{j}\in C_{2}}\mathcal{D}(\mathcal{X}_{i},\mathcal{X}_{j})+\sum_{\mathcal{X}_{i}\in C_{2}}\sum_{\mathcal{X}_{j}\in C_{1}}\mathcal{D}(\mathcal{X}_{i},\mathcal{X}_{j}).\] The between-group distance corresponds to the off-diagonal block matrices in Figure 18. The average between-group distance is then given by \[\overline{\mathcal{L}}_{B}=\frac{\mathcal{L}_{B}}{n_{1}n_{2}}.\] Note that the sum of the within-group and between-group distances is the sum of all the pairwise distances in Figure 18: \[2\mathcal{L}_{W}+2\mathcal{L}_{B}=\sum_{i=1}^{n}\sum_{j=1}^{n}\mathcal{D}(\mathcal{X}_{i},\mathcal{X}_{j}).\] When we permute the group labels, the total sum of all the pairwise distances does not change and is fixed. If the group difference is large, the between-group distance \(\mathcal{L}_{B}\) will be large and the within-group distance \(\mathcal{L}_{W}\) will be small.
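Because the pairwise distance matrix is fixed under relabeling, both block sums can be computed by indexing alone. A minimal sketch, assuming D is the \(n\times n\) symmetric pairwise distance matrix with the first group in rows 1:n1 and the second in rows n1+1:n1+n2; names are illustrative:

% Within- and between-group distances from a precomputed distance matrix.
g1 = 1:n1;  g2 = n1+1:n1+n2;                           % group index sets
LW = (sum(sum(D(g1,g1))) + sum(sum(D(g2,g2)))) / 2;    % block-diagonal sums
LB = sum(sum(D(g1,g2)));                               % off-diagonal block sum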
To measure the disparity between the groups, we use the ratio [64] \[\phi_{\mathcal{L}}=\frac{\mathcal{L}_{B}}{\mathcal{L}_{W}}.\] The ratio statistic is related to the elbow method in clustering and behaves like the traditional \(F\)-statistic, which is a ratio of the squared variability of model fits. If \(\phi_{\mathcal{L}}\) is large, the groups differ significantly in network topology. If \(\phi_{\mathcal{L}}\) is small, it is likely that there is no group difference.

Figure 18: Pairwise Wasserstein distance between 50 healthy controls (HC) and 101 temporal lobe epilepsy (TLE) patients. There are subtle pattern differences in the off-diagonal patterns (between-group distances \(\mathcal{L}_{B}\)) compared to the diagonal patterns (within-group distances \(\mathcal{L}_{W}\)). The permutation test with 100 million permutations was used to determine the statistical significance using the ratio statistic. The red line is the observed ratio. The histogram is the empirical null distribution obtained from the permutation test.

Since the ratio is always positive, its probability distribution cannot be Gaussian. Since the distribution of the ratio \(\phi_{\mathcal{L}}\) is unknown, the permutation test can be used to determine its empirical distribution. Figure 18-right displays the empirical distribution of \(\phi_{\mathcal{L}}\). The \(p\)-value is the area of the right tail thresholded by the observed ratio \(\phi_{\mathcal{L}}\) (dotted red line) in the empirical distribution. We compute the pairwise distances only once and only shuffle the entries over the permutations. This is equivalent to rearranging the rows and columns of the entries corresponding to the permutations in Figure 18. Simply rearranging rows and columns and summing them in a block-wise fashion is faster than the usual two-sample \(t\)-test, which has to be recomputed for each permutation. To speed up the permutation test further, we adapted the transposition test, the online version of the permutation test [21]. In the transposition test, we only need to work out how \(\mathcal{L}_{B}\) and \(\mathcal{L}_{W}\) change over a transposition, a permutation that only swaps one entry from each group. When we transpose the \(k\)-th and \(l\)-th graphs between the groups (denoted as \(\tau_{kl}\)), the \(k\)-th and \(l\)-th rows and columns are swapped. The within-group distance after the transposition \(\tau_{kl}\) is given by \[\tau_{kl}(\mathcal{L}_{W})=\mathcal{L}_{W}+\Delta_{W},\] where \(\Delta_{W}\) collects the terms in the \(k\)-th and \(l\)-th rows and columns that need to be swapped. We only need to swap up to \(\mathcal{O}(2n)\) entries, while the standard permutation test requires computation over \(\mathcal{O}(n^{2})\) entries. Similarly, we have the incremental change \[\tau_{kl}(\mathcal{L}_{B})=\mathcal{L}_{B}+\Delta_{B}.\] The ratio statistic is then sequentially updated over random transpositions. To further accelerate the convergence and avoid potential bias, we interject one full permutation into every sequence of 1000 consecutive transpositions. The observed ratio statistic is computed using WS_ratio.m, which inputs the distance matrix lossMtx and the sample size of each group. The whole procedure for performing the transposition test is implemented in WS_transpositions.m and takes about one second per million transpositions on a desktop computer. The function inputs the distance matrix lossMtx, the sample size of each group, the number of transpositions and the number of permutations that are interjected into the transpositions. Figure 19 displays the convergence plot of the transposition test.

Figure 19: The plot of the ratio statistic \(\phi_{\mathcal{L}}\) (top) over 100 million transpositions in testing the topological difference between HC and TLE. The plot is only shown at every 10000-th transposition. The red line is the observed ratio statistic 0.9541. The estimated \(p\)-value (middle) converges to 0.0086 after 100 million transpositions. The CPU time (bottom) is linear and takes 102 seconds for 100 million transpositions.
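A minimal sketch of the label-shuffling scheme on a fixed distance matrix follows (the incremental transposition bookkeeping is omitted); g is assumed to be a logical group indicator and D the precomputed pairwise Wasserstein distance matrix:

% Permutation test for the ratio statistic on a fixed distance matrix.
phi  = @(m) sum(sum(D(m,~m))) / ((sum(sum(D(m,m))) + sum(sum(D(~m,~m))))/2);
obs  = phi(g);                                 % observed ratio statistic
nperm = 1e5;  nullstat = zeros(nperm,1);
for t = 1:nperm
    nullstat(t) = phi(g(randperm(numel(g)))); % shuffle labels, reuse D
end
pval = mean(nullstat >= obs);                  % right-tail p-value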
### Topological clustering

We validate the proposed topological distances in simulations with ground truth in a clustering setting. The Wasserstein distance was previously used for clustering _geometric objects_ without topology in \(\mathcal{O}(q^{3})\) [48, 73]. The proposed topological method builds the Wasserstein distances on persistent diagrams in \(\mathcal{O}(q\log q)\), making our method scalable. Consider a collection of graphs \(\mathcal{X}_{1},\cdots,\mathcal{X}_{n}\) that will be clustered into \(k\) clusters \(C=(C_{1},\cdots,C_{k})\). Let \(\mu_{j}=\mathbb{E}C_{j}\) be the topological mean of \(C_{j}\) computed using the Wasserstein distance. Let \(\mu=(\mu_{1},\cdots,\mu_{k})\) be the cluster mean vector. The within-cluster Wasserstein distance is given by \[l_{W}(C;\mu)=\sum_{j=1}^{k}\sum_{X\in C_{j}}\mathcal{D}(X,\mu_{j})=\sum_{j=1}^{k}|C_{j}|\mathbb{V}C_{j}\] with the topological variance \(\mathbb{V}C_{j}\) of cluster \(C_{j}\). The within-cluster Wasserstein distance generalizes the within-group distance defined on two groups to \(k\) groups (or clusters). When \(k=2\), we have \(l_{W}(C;\mu)=2\mathcal{L}_{W}\). Topological clustering through the Wasserstein distance is then performed by minimizing \(l_{W}(C;\mu)\) over every possible \(C\). The Wasserstein graph clustering algorithm can be implemented as the two-step optimization often used in variational inference [6]. The algorithm follows the proof below.

Theorem 4.1: _Topological clustering with the Wasserstein distance converges locally._

Proof: 1) Expectation step: Assume \(C\) is estimated from the previous iteration. In the current iteration, the cluster mean \(\mu\) corresponding to \(C\) is updated as \(\mu_{j}\leftarrow\mathbb{E}C_{j}\) for each \(j\). The cluster mean gives the lowest bound on the distance \(l_{W}(C;\nu)\) for any \(\nu=(\nu_{1},\cdots,\nu_{k})\): \[l_{W}(C;\mu)=\sum_{j=1}^{k}\sum_{X\in C_{j}}\mathcal{D}(X,\mu_{j})\leq\sum_{j=1}^{k}\sum_{X\in C_{j}}\mathcal{D}(X,\nu_{j})=l_{W}(C;\nu). \tag{14}\] 2) We check whether the cluster mean \(\mu\) has changed from the previous iteration. If not, the algorithm simply stops. Thus we can force \(l_{W}(C;\nu)\) to be strictly decreasing over each iteration. 3) Minimization step: The clusters are updated from \(C\) to \(C^{\prime}=(C^{\prime}_{J_{1}},\cdots,C^{\prime}_{J_{k}})\) by reassigning each graph \(\mathcal{X}_{i}\) to the closest cluster \(C_{J_{i}}\) satisfying \(J_{i}=\arg\min_{j}\mathcal{D}(\mathcal{X}_{i},\mu_{j})\). Subsequently, we have \[l_{W}(C^{\prime};\mu)=\sum_{J_{i}=1}^{k}\sum_{X\in C^{\prime}_{J_{i}}}\mathcal{D}(X,\mu_{J_{i}})\leq\sum_{j=1}^{k}\sum_{X\in C_{j}}\mathcal{D}(X,\mu_{j})=l_{W}(C;\mu). \tag{15}\] From (14) and (15), \(l_{W}(C;\mu)\) strictly decreases over the iterations. Any strictly decreasing sequence that is bounded below converges.
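Because the graph-filtration Wasserstein distance matches order statistics, the topological mean of a cluster can be taken as the elementwise mean of the sorted birth (or death) vectors, and the E/M steps above reduce to ordinary vector operations. A hedged sketch, assuming all graphs share the same node set so that B, an \(n\times q_{0}\) matrix whose \(i\)-th row holds the sorted 0D birth values of graph \(i\), has rows of equal length; death values can be handled identically and empty-cluster handling is omitted:

% Topological k-means under the 2-Wasserstein distance on sorted births.
k = 3;  n = size(B,1);
labels = randi(k, n, 1);                         % random initial assignment
for iter = 1:100
    mu = zeros(k, size(B,2));                    % E-step: topological means
    for j = 1:k, mu(j,:) = mean(B(labels==j,:), 1); end
    [~, newlabels] = min(pdist2(B, mu), [], 2);  % M-step: nearest mean; Euclidean = 2-Wasserstein on sorted vectors
    if isequal(newlabels, labels), break; end    % assignment unchanged -> stop
    labels = newlabels;
end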
Just like \(k\)-means clustering, which converges only to a local minimum, there is no guarantee that Wasserstein graph clustering converges to the global minimum [35]. This is remedied by repeating the algorithm multiple times with different random seeds and identifying the clustering that gives the minimum over all seeds. Let \(y_{i}\) be the true cluster label for the \(i\)-th data point. Let \(\widehat{y}_{i}\) be the estimate of \(y_{i}\) determined by Wasserstein graph clustering. Let \(y=(y_{1},\cdots,y_{n})\) and \(\widehat{y}=(\widehat{y}_{1},\cdots,\widehat{y}_{n})\). In clustering, there is no direct association between the true cluster labels and the predicted cluster labels. Given \(k\) clusters \(C_{1},\cdots,C_{k}\), their permutation \(\pi(C_{1}),\cdots,\pi(C_{k})\) is also a valid clustering for \(\pi\in\mathbb{S}_{k}\), the permutation group of order \(k\). There are \(k!\) possible permutations in \(\mathbb{S}_{k}\) [21]. The clustering accuracy \(A(\widehat{y},y)\) is then given by \[A(\widehat{y},y)=\frac{1}{n}\max_{\pi\in\mathbb{S}_{k}}\sum_{i=1}^{n}\mathbf{1}(\pi(\widehat{y}_{i})=y_{i}).\] This is a modification of an assignment problem and can be solved using the Hungarian algorithm in \(\mathcal{O}(k^{3})\) run time [27]. In Matlab, it can be solved using confusionmat.m, which tabulates the misclustering errors between the true cluster labels and the predicted cluster labels. Let \(F(\widehat{y},y)=(f_{ij})\) be the confusion matrix of size \(k\times k\) tabulating the clustering results: the diagonal entries show the numbers of correct clusterings while the off-diagonal entries show the numbers of incorrect clusterings. To compute the clustering accuracy, we need to sum the diagonal entries. Under a permutation of the cluster labels, we obtain different confusion matrices. For large \(k\), it is prohibitively expensive to search over all permutations. Thus we need to maximize the sum of the diagonals of the confusion matrix under permutation: \[\frac{1}{n}\max_{Q\in\mathbb{S}_{k}}\operatorname{tr}(QF)=\frac{1}{n}\max_{Q\in\mathbb{S}_{k}}\sum_{i,j}q_{ij}f_{ij}, \tag{16}\] where \(Q=(q_{ij})\) is a permutation matrix consisting of entries 0 and 1 such that there is exactly one 1 in each row and each column. This is a linear sum assignment problem (LSAP), a special case of the linear assignment problem [23, 46]. The clustering accuracy is computed using

function [accuracy C]=cluster_accuracy(ytrue,ypred)

where ytrue is the true cluster labels and ypred is the predicted cluster labels. accuracy is the clustering accuracy and C is the confusion matrix [23].
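As an illustration of (16), the LSAP can also be solved with MATLAB's built-in matchpairs (available from R2019a); this is only a sketch of the idea, not the cluster_accuracy.m implementation:

% Clustering accuracy via the linear sum assignment problem.
F = confusionmat(ytrue, ypred);        % k x k confusion matrix
M = matchpairs(-F, sum(F(:)));         % maximize tr(QF); the large unmatched cost forces a full matching
hits = sum(F(sub2ind(size(F), M(:,1), M(:,2))));
accuracy = hits / numel(ytrue);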
Example 4: We replace the Euclidean distance (\(L_{2}\)-norm) in \(k\)-means clustering with the topological distance \(\mathcal{D}\) and compare the performance with traditional \(k\)-means clustering and hierarchical clustering [42]. We generated 4 circular patterns of identical topology (Figure 20) and of different topology (Figure 21). Along the circles, we uniformly sampled 60 nodes and added Gaussian noise \(N(0,0.3^{2})\) to the coordinates. We generated 5 random networks per group. The Euclidean distance (\(L_{2}\)-norm) between the randomly generated points is used to build the connectivity matrices for \(k\)-means and hierarchical clustering. Figures 20 and 21 show the superposition of the nodes from the 20 networks. For \(k\)-means and Wasserstein graph clustering, the average result over 100 random seeds is reported.

We tested for false positives when there is no topological difference in Figure 20, where all the groups are simply obtained from Group 1 by rotations. All the groups are topologically equivalent, and thus we should not detect any topological difference. Any detected signals are all false positives. The \(k\)-means clustering had \(0.90\pm 0.15\) accuracy while the hierarchical clustering had perfect 1.00 accuracy. Existing clustering methods based on the Euclidean distance thus report significant false positives and should not be used for topological clustering tasks. On the other hand, the Wasserstein graph clustering had a low \(0.53\pm 0.08\) accuracy. We conclude that Wasserstein graph clustering does not report topological false positives the way \(k\)-means and hierarchical clustering do.

Figure 20: Simulation study on topological equivalence. The correct clustering method should _not_ be able to cluster them since they are all topologically equivalent. Right: the pairwise Euclidean distance (\(L_{2}\)-norm) is used in \(k\)-means and hierarchical clustering. The Wasserstein distance is used in topological clustering.

We also tested for false negatives when there is a topological difference in Figure 21, where all the groups have different numbers of cycles. All the groups are topologically different, and thus we should detect topological differences. The \(k\)-means clustering achieved \(0.83\pm 0.16\) accuracy. The hierarchical clustering reported perfect 1.00 accuracy. The topological clustering achieved a respectable \(0.98\pm 0.09\) accuracy. It is extremely difficult to separate purely topological signals from geometric signals; when there is a topological difference, a geometric signal is also expected, so all the methods are expected to perform well. Existing clustering methods based on geometric distances are thus likely to produce a significant amount of false positives and are not suitable for topological learning tasks. On the other hand, the proposed Wasserstein distance performed extremely well in both cases and is not likely to report false positives or false negatives. The clusterings are performed using

acc_WS = WS_cluster(G)
acc_K = kmeans_cluster(G)
acc_H = hierarchical_cluster(G)

Figure 21: Simulation study on topological difference. The correct clustering method should be able to cluster them since they are all topologically different. Right: the pairwise Euclidean distance (\(L_{2}\)-norm) is used in \(k\)-means and hierarchical clustering. The Wasserstein distance is used in topological clustering.

## Acknowledgement

The project is supported by NIH R01 EB028753 and NSF MDS-2010778. We would also like to thank Hyekyung Lee of Seoul National University and Tananun Songdechakraiwut of the University of Wisconsin-Madison for contributing some of the functions.
2303.08249
Systematic design space exploration by learning the explored space using Machine Learning
Current practice in parameter space exploration in euclidean space is dominated by randomized sampling or design of experiment methods. The biggest issue with these methods is not keeping track of what part of parameter space has been explored and what has not. In this context, we utilize the geometric learning of explored data space using modern machine learning methods to keep track of already explored regions and samples from the regions that are unexplored. For this purpose, we use a modified version of a robust random-cut forest along with other heuristic-based approaches. We demonstrate our method and its progression in two-dimensional Euclidean space but it can be extended to any dimension since the underlying method is generic.
Avinash Kumar, Anish Kumar, Sumit Sharma, Surjeet Singh, Kumar Vardhan
2023-03-14T21:51:08Z
http://arxiv.org/abs/2303.08249v1
# Systematic design space exploration by learning the explored space using Machine Learning

## I Abstract

_Current practice in parameter space exploration in Euclidean space is dominated by randomized sampling or design of experiment methods. The biggest issue with these methods is not keeping track of what part of parameter space has been explored and what has not. In this context, we utilize the geometric learning of the explored data space using modern machine learning methods to keep track of already explored regions and sample from the regions that are unexplored. For this purpose, we use a modified version of a robust random-cut forest along with other heuristic-based approaches. We demonstrate our method and its progression in two-dimensional Euclidean space, but it can be extended to any dimension since the underlying method is generic._

## II Introduction

Design space exploration [1] has a major role in both design optimization [2] and surrogate modeling [3]. It is the process of discovering and evaluating the manifold of the function or process under consideration. The goal in the case of design optimization is to find the parameter [4] that performs best on the performance metrics and meets all requirements. In the case of surrogate modeling, on the other hand, the goal is to learn a data-driven alternative model that is cheaper to evaluate and acts as a proxy of the function under consideration. In both cases, a systematic exploration of the design space is required, especially in high-dimensional design spaces.

Current practice in design space exploration relies on either random sampling or traditional design of experiment methods [5]. Random sampling generally relies on pseudo-random computer-generated sequences [6]. Among DoE [7] methods, factorial-based [8] sampling is the most used; in particular, Latin hypercube sampling (LHC) [9] and its flavors are widely used [10]. Extensions of these factorial methods rely on augmenting a factorial or fractional factorial design with a group of points of a specific shape with some invariant properties (like orthogonality, rotatability, etc.) that allow easy estimation of the response surface [7, 11]. None of these methods keep track of which part of the design space has been explored and which part has not. Consequently, there is a high likelihood of sample collision. Sample collision is a waste of resources, and it becomes more problematic in high-dimensional design search spaces. To address this, in this work we develop a method that relies on a geometric summarization of the explored space in sketches and uses an \(\epsilon\)-hyper-ball based heuristic to select new samples for exploration. It is an iterative method: at each iteration, we select a batch of samples, learn the explored space using a robust random cut forest [12], and then find the peripheral design points. These peripheral design points are used to create \(\epsilon\)-hyper-balls, which are then sampled from. We show in multiple experiments how \(\epsilon\) can be used to control the step size and density of the sampling. Accordingly, in this work, we developed a machine learning-based method for the systematic sampling of design space.

## III The problem

In this section, we formalize our problem. For this purpose, we first formalize the Euclidean space learning problem.
Given a data set \(D\in R^{m}\), where \(D=\{d_{1},d_{2},...,d_{n}\}\) is the set of already sampled data points, the design space learning problem is finding an operator \(\Omega\) that maps the dataset from Euclidean space to a structured trainable parametric or non-parametric model (\(M\)): \[\Omega:D\mapsto M\] Once trained, we want our prediction model to work in a subspace of the input space and to detect input data that is not part of the training data subspace by raising a flag so that a corrective measure can be taken. Once trained, we want to find the points that are on the periphery of the design space in Euclidean space using the trained model (\(M\)). The model \(M\) should not only be able to predict which points are in the learned subspace and which are not, but should also be able to give an inference about the points on the periphery. Let \(P\) be the peripheral points. We then extend these peripheral points by sampling beyond them and add the new points \(D_{new}\) to the already explored data points \(D\).

## IV Approach

In our approach for learning the explored space (\(T\)) occupied by \(D\) into the trained model \(M\), we rely on learning the shape of the cluster of the design subspace (\(T\)) in Euclidean space into a data structure (\(S\)). The motivation behind learning the cluster's shape into a data structure is to abstract the information from metric space in a structured manner such that computers and related algorithms can be efficiently deployed for inference. Let \(D=\{d_{1},d_{2},...,d_{n}\}\) be a set of data points such that \(d_{i}\in R^{m}\). For the purpose of out-of-distribution detection, the following requirements are imposed on this data structure (\(S\)):

1. \(S\) should represent the cluster of data in a structured way.
2. Relationships (\(\psi\)) between the data points in metric space must be preserved in this data structure, i.e., \[\psi\{T(d_{k},d_{l})\}\approx\psi\{S(d_{k},d_{l})\}\]
3. The relationship (\(\phi\)) of a data point with the explored space should be encoded in a simple quantitative measure, i.e., \(\phi(T,d_{k})\) can be measured as a scalar value in the data structure (\(S\)).
4. Any modification and inference using this data structure (\(S\)) should be computationally cheap.
5. All the above properties should be compatible with streaming data.

To represent our cluster in an organized manner, we choose the RRCF [12] as our data structure. A Robust Random Cut Forest (RRCF) can be defined as follows:

**Definition 1**: _A Robust Random Cut Tree on a set of data points \(D=\{d_{1},d_{2},...,d_{n}\}\) can be generated by the following procedure:_

1. \(r_{i}=\max_{X\in D}(X_{i})-\min_{X\in D}(X_{i})\quad\forall\,i\in\{1,\dots,m\}\)
2. \(p_{i}=\dfrac{r_{i}}{\sum_{i=1}^{m}r_{i}}\)
3. _select a random dimension_ \(i\) _with probability proportional to_ \(p_{i}\)
4. _choose_ \(x_{i}\sim Uniform[\min_{X\in D}(X_{i}),\max_{X\in D}(X_{i})]\)
5. \(D_{1}=\{X\,|\,X\in D,X_{i}\leq x_{i}\}\)
6. \(D_{2}=D\setminus D_{1}\)

_Recurse on \(D_{1}\) and \(D_{2}\) until \(|D_{i}|=1\)._

A Robust Random Cut Forest is an ensemble of such RRCTs.
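A minimal sketch of one cut from Definition 1, in base MATLAB and with illustrative variable names; D is an \(n\times m\) matrix of samples (rows), and the recursion on D1 and D2 is left out:

% One robust random cut: range-proportional dimension, uniform cut point.
r  = max(D,[],1) - min(D,[],1);   % per-dimension ranges r_i
cp = cumsum(r) / sum(r);          % cumulative selection probabilities p_i
i  = find(rand <= cp, 1);         % dimension picked proportionally to r_i
xi = min(D(:,i)) + rand * r(i);   % cut point ~ Uniform[min, max] in dimension i
D1 = D(D(:,i) <= xi, :);          % left child set
D2 = D(D(:,i) >  xi, :);          % right child set; recurse until singletons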
We need a distance-preserving embedding in the data structure, since the chosen relationship (\(\psi\)) between data points in metric space is represented by the \(L_{p}\) distance between data points. For this reason, the weight of the least common ancestor of two data points \(d_{k}\) and \(d_{l}\) is used to establish the tree distance between these points in the data structure (\(S\)) [12]. Following the Johnson-Lindenstrauss lemma [13], the tree distance can be bounded from below by \(L_{1}(d_{k},d_{l})\) and from above by \(O(d\log|k|/L_{1}(d_{k},d_{l}))\). Consequently, a point will remain at least as far from other points in a random cut tree if it is far from them in metric space. Displacement, which is an estimate of the change in model complexity (the summation of the leaves' depths) before and after adding a given point \(x\) to the tree data structure, can be used to convey the relationship (\(\phi\)) of a data point with the cluster in a straightforward quantitative manner.

Using this RRCF [14], we are interested in learning a subspace in parameter space and also in finding the data points that are on its periphery. At the start, we randomly select a batch of samples (call it \(D\)); this is the warm-up stage. These random samples are used to train the various RRCTs. The cut space created by an RRCT is shown in Figure 1. The ensemble of RRCTs that forms an RRCF is trained on this initial data set. This RRCF represents our model \(M\).

Fig. 1: A robust random cut tree on sample data.

Fig. 2: An inlier data point and its position in the tree.

Fig. 3: An outlier data point and its position in the tree.

Our next goal is to find the points on the periphery. For this purpose, we use the metric **displacement** defined above. A point from within the cluster (an inlier) will most likely be inserted at the bottom part of the tree once added to the RRCF (refer to Figure 2), and consequently the change in displacement will be smaller than for an outlier. An outlier will most likely get attached at the initial branches of the tree and consequently increase the displacement by a maximal amount (refer to Figure 3). We capitalize on this: we run the displacement calculation on all points in the data set and select the points that have the maximum displacement. For a batch of samples, we select the points in descending order of displacement. The collection of these selected points is the peripheral point set \(P\).

The next step is to explore and add new points. For this purpose, we create an \(\epsilon\)-hyper-ball centered on each of the points in \(P\). The \(\epsilon\) is a hyper-parameter and controls the sparsity and density of the design points. A bigger \(\epsilon\) yields a sparser selection of samples, and vice versa. We select one sample from each hyper-ball, evaluate it, and label the result the newly sampled data set \(D_{new}\). The data set \(D\) is then updated as \[D\leftarrow D\cup D_{new}\] We again train our RRCF on the data set \(D\) and repeat the process.

## V Results

To demonstrate our approach visually, we test it for parameter space exploration in 2D space. Figure 4 shows the effect of \(\epsilon\) on the sparsity of the design space exploration process in one iteration. By controlling \(\epsilon\), we control the size of the hyper-ball created around the peripheral points \(P\). At \(\epsilon=0.1\), the selected samples are closest to the already explored region, and by increasing the value of \(\epsilon\), the selected samples become more and more separated from the already explored data cluster. With \(\epsilon=4\), the selected samples are widely separated from the already sampled cluster.

Fig. 4: Results for different epsilon values using our algorithm. Blue points represent the newly selected points and red points are points that have already been explored.
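A minimal sketch of the \(\epsilon\)-hyper-ball sampling step described above, with illustrative names; P is a \(k\times m\) matrix of peripheral points, and one candidate is drawn uniformly from each ball:

% Draw one uniform sample from the epsilon-ball around each peripheral point.
[k, m] = size(P);
U = randn(k, m);
U = U ./ vecnorm(U, 2, 2);         % random unit directions
R = epsilon * rand(k, 1).^(1/m);   % radii giving uniform density in an m-ball
Dnew = P + R .* U;                 % one new candidate sample per ball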
In the next experiment, we let our algorithm run for 2000 iterations with 50 samples per iteration. The value of \(\epsilon\) is kept fixed, and we are only interested in observing whether the algorithm works as expected when it runs for many iterations. The red samples in Figure 5 are the initial samples chosen randomly, and the blue points are the points selected by the algorithm. It can be observed that the algorithm performs as expected.

## VI Related work

Three key elements in engineering are model-based design ([15]), model-based control ([16]), and model-based optimization ([17]). Machine learning and AI are affecting all three fields, for example in control [18, 19, 20, 21, 22], modeling [23], prediction of complex systems [24, 25, 26, 27, 28, 29, 30] and optimization [31, 32, 20, 33]. Parameter space exploration is crucial for all these applications. Systematic exploration has benefits, as it can avoid sample collisions, and it is most useful in unstructured regions of the design space. The early work in design space exploration was done by Fisher [34]. Later, Box and Wilson introduced the Box-Wilson central composite design of experiments, and Joan Fisher Box revisited the design-of-experiments work. To this day, most DoE methods rely on factorial designs. The Latin hypercube [35] is one of the most used methods. First introduced by [9], it relies on a factorial separation of space with the goal of non-colliding sample selection. LHC relies on creating a square grid containing sample positions that is a Latin square if (and only if) there is only one sample in each row and each column. Morris and Mitchell [10] added a space-filling criterion to the vanilla LHC with the optimization objective of maximizing the shortest distance between the points. Eriksson et al. [36, 37] provide a comprehensive tutorial on all these methods. On the machine learning front, AL methods are being used for complex control design [38, 39] and computer vision [40, 41, 42].

Fig. 5: Results for a large number of samples at a fixed epsilon using our algorithm.

## VII Conclusions

In this work, we showed a state-of-the-art approach to systematic exploration. By learning the already explored space and the peripheral data points, it is possible to sample new points in Euclidean space. By doing this, we can systematically explore the design space. In all these cases, by controlling a hyper-parameter \(\epsilon\), we can control the step size and spread of the sampling and exploration process. Future work would include constraint handling during exploration and application to real-world problems.
2310.08371
Worst-Case Morphs using Wasserstein ALI and Improved MIPGAN
A morph is a combination of two separate facial images and contains identity information of two different people. When used in an identity document, both people can be authenticated by a biometric Face Recognition (FR) system. Morphs can be generated using either a landmark-based approach or approaches based on deep learning such as Generative Adversarial Networks (GAN). In a recent paper, we introduced a \emph{worst-case} upper bound on how challenging morphing attacks can be for an FR system. The closer morphs are to this upper bound, the bigger the challenge they pose to FR. We introduced an approach with which it was possible to generate morphs that approximate this upper bound for a known FR system (white box), but not for unknown (black box) FR systems. In this paper, we introduce a morph generation method that can approximate worst-case morphs even when the FR system is not known. A key contribution is that we include the goal of generating difficult morphs \emph{during} training. Our method is based on Adversarially Learned Inference (ALI) and uses concepts from Wasserstein GANs trained with Gradient Penalty, which were introduced to stabilise the training of GANs. We include these concepts to achieve similar improvement in training stability and call the resulting method Wasserstein ALI (WALI). We finetune WALI using loss functions designed specifically to improve the ability to manipulate identity information in facial images and show how it can generate morphs that are more challenging for FR systems than landmark- or GAN-based morphs. We also show how our findings can be used to improve MIPGAN, an existing StyleGAN-based morph generator.
Una M. Kelly, Meike Nauta, Lu Liu, Luuk J. Spreeuwers, Raymond N. J. Veldhuis
2023-10-12T14:40:24Z
http://arxiv.org/abs/2310.08371v2
# Worst-Case Morphs using Wasserstein ALI and Improved MIPGAN

###### Abstract

A morph is a combination of two separate facial images and contains identity information of two different people. When used in an identity document, both people can be authenticated by a biometric Face Recognition (FR) system. Morphs can be generated using either a landmark-based approach or approaches based on deep learning such as Generative Adversarial Networks (GAN). In a recent paper, we introduced a _worst-case_ upper bound on how challenging morphing attacks can be for an FR system. The closer morphs are to this upper bound, the bigger the challenge they pose to FR. We introduced an approach with which it was possible to generate morphs that approximate this upper bound for a known FR system (white box), but not for unknown (black box) FR systems. In this paper, we introduce a morph generation method that can approximate worst-case morphs even when the FR system is not known. A key contribution is that we include the goal of generating difficult morphs _during_ training. Our method is based on Adversarially Learned Inference (ALI) and uses concepts from Wasserstein GANs trained with Gradient Penalty, which were introduced to stabilise the training of GANs. We include these concepts to achieve similar improvement in training stability and call the resulting method Wasserstein ALI (WALI). We finetune WALI using loss functions designed specifically to improve the ability to manipulate identity information in facial images and show how it can generate morphs that are more challenging for FR systems than landmark- or GAN-based morphs. We also show how our findings can be used to improve MIPGAN, an existing StyleGAN-based morph generator.

## 1 Introduction

It has been shown that _morphing attacks_ pose a significant risk to both Face Recognition (FR) systems and humans, e.g. border guards [1, 2]. A morph is an image that is created by combining two images of two different people. If it contains sufficient identity information of each person, then FR systems and humans will accept the morph as a match both when it is compared to a different image of the first person and when it is compared with a different image of the second person. This means that two people could share one passport or other identity document and avoid travel restrictions or border controls, e.g. a criminal could travel using the identity document of an accomplice. Some countries intend to stop allowing people to bring their own printed photos for passport applications, e.g. Germany [3]. At the same time, there are still countries that allow applicants to provide their own digital or printed passport photo, e.g. Ireland [4]. Morphed images also pose a challenge in other scenarios, since two people could for example share a driver's licence, health insurance, public transportation tickets, etc. There are myriad ways to exploit systems, subscriptions, access rights, and more using morphed images. Generative Adversarial Networks (GAN) have been shown to successfully generate fake data that matches a real data distribution [5]. Image characteristics such as expression or age (in the case of facial images) can be manipulated by applying changes to latent representations of images, which are vectors in a GAN's latent space. If an inversion were available that maps images to the latent space of a GAN, then this would make it possible to take advantage of the benefits that GANs provide and to manipulate real data directly.
Mapping two images onto two respective latent vectors, and finding an appropriate interpolation between those two vectors would then lead to a GAN-generated morph. Both MorGAN [6] and MIPGAN [7] are examples of this approach. Morph generation can rely on landmark-based, GAN-based or manual methods. More recently, morphs generated using diffusion models were introduced [8, 9]. How challenging morphs are varies depending on implementation details such as the landmark detector used, the splicing method used, post-processing, whether images were printed and scanned, which pairs of images were selected for morphing, etc. A criminal could make a morph using hand-selected landmarks, and then iteratively apply changes and test the morph using one or more FR systems to find a morph that is most likely to be accepted by FR systems. They could also apply changes that make it harder for Morphing Attack Detection (MAD) methods to detect the morphs. This means that the variation in morphing methods used in research may not be representative of morphs that could exist in reality, since criminals will not advertise which morphing methods they are using. Therefore, the estimated vulnerability of FR and MAD systems may be different on such morphs than on datasets generated by researchers, where some trade-off between quantity and quality may have to be made. MAD methods have been proposed, targeted at detecting landmark-based morphs, GAN-based morphs or both. Developing an MAD approach that can detect both landmark- and GAN-based morphs - especially if they are of a type not seen during training - is still an open challenge [10]. GAN-based morphing detection is very similar to the general detection of GAN images (deepfakes) [11]. Increasing the variation in available morphing tools could be helpful in the development of detection methods, since both in GAN-based morph detection and deepfake detection more generally, it has been shown that methods struggle to detect images of a type not seen during the training phase. In [12] it was shown that theoretically - and if the FR system is known also in practice - morphs can be even more challenging than either landmark- or GAN-morphs. While landmark-based morphing combines images in the image domain, GAN-based morphing combines them by mapping them to embeddings in the GAN latent space, interpolating in that latent space, and generating a morph from the interpolated latent embedding. On the other hand, our approach in [12] was to directly reverse the mapping from images to latent embeddings in the FR latent space (different from the GAN latent space). This approach can be used to exploit the vulnerabilities of the FR system it was trained with, but is less suited than GAN-based methods to generate morphs that visually (to humans) look like both contributing identities and struggles to fool unseen FR systems. In this work, we continue this investigation to find out whether it is possible to automatically generate morphs that approximate the theoretical _worst case_ for more than one face recognition system simultaneously, even when the FR system is unknown ("black box"), showing there are morphs that can be even more challenging than landmark- or GAN-based morphs. The variation of morphs used in existing MAD benchmarks, such as [13, 14, 15], can be increased by including approximations of worst-case morphs. 
Our contributions consist firstly of adapting the method introduced in Adversarially Learned Inference (ALI) [16] and improving it to better enable manipulation of real data, e.g. generating interpolations of real images. We call the resulting improved method Wasserstein ALI (WALI) and use it to generate morphs. Like ALI, WALI jointly learns a generative and an inverse mapping, enabling its use for morph generation. We improve training stability, which allows generation of larger images: we use WALI to generate images of up to 512\(\times\)512 pixels, compared to the 64\(\times\)64 pixels achieved by ALI. It may be possible to generate images with even higher resolutions using WALI, but we did not try this due to hardware and time constraints. ALI's aim is to generate images that look as real as possible, which means it is not necessarily optimal for generating _challenging_ morphs. WALI is further improved for this purpose by including loss functions designed specifically to improve the ability to manipulate identity information in facial images. The resulting model provides an easy way to generate (large) morphing datasets intended for training or evaluating face recognition and morphing attack detection systems. Our second set of contributions lies in applying WALI and our improved implementation of MIPGAN to approximate worst-case morphs, evaluating these approximations, and comparing them to other morphs. Since morphs generated using an underlying StyleGAN Generator [17] are currently the SOTA when it comes to GAN-based morphing, we include MIPGAN morphs in all our comparisons. Summarising, our main contributions are * Improving ALI to enable morph generation, resulting in Wasserstein ALI (WALI), which provides an easy way to generate (large) morphing datasets intended for training or evaluating face recognition and morphing attack detection systems, * showing that already considering the goal of generating difficult morphs _during_ training instead of only during optimisation (after training) leads to more challenging morphs in both white-box and black-box settings than if WALI is only trained to generate real-looking images, * showing that optimisation on our trained model leads to morphs that are more challenging for FR systems than landmark- or MIPGAN-morphs, even when evaluating under black-box settings. This proves the existence of morphs that lie closer (than landmark or MIPGAN) to the theoretical worst-case morph for six out of eight FR systems we evaluated, * showing that optimising towards a worst-case embedding is also possible when using existing generative models. Since we see that WALI does not generalise well to new datasets that are different from the data it was trained on, we also apply some of our suggested improvements to a StyleGAN Generator that is better at generalising to new datasets, resulting in an improved MIPGAN approach that also leads to more challenging morphs than other GAN-based approaches. ## 2 Related Work ### Worst-Case Morphs In [12] an upper bound on the vulnerability of FR systems to morphing attacks was introduced. Let \(\varphi\) be the function that describes an FR system's mapping from the image space \(X\) to the embedding space \(Y\), i.e. \(\varphi:X\to Y\). 
If \(d\) is the dissimilarity score function that is used to calculate the dissimilarity score for pairs of embeddings in \(Y\), then the _worst-case embedding_ for two images \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) is \[\mathbf{y}^{*}:=\text{argmin}_{\mathbf{y}\in Y}\left(\max\left[d(\mathbf{y},\varphi(\mathbf{x}_{1})),d(\mathbf{y},\varphi(\mathbf{x}_{2}))\right]\right). \tag{1}\] For example, if \(d\) returns the Euclidean distance, denoted as \(||.||_{2}\), between two embeddings \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), then the dissimilarity score is \(d(\mathbf{y}_{1},\mathbf{y}_{2})=||\mathbf{y}_{1}-\mathbf{y}_{2}||_{2}\). In that case \(\mathbf{y}^{*}\) is that \(\mathbf{y}\) for which \(d(\mathbf{y}_{1},\mathbf{y})=d(\mathbf{y},\mathbf{y}_{2})=d(\mathbf{y}_{1},\mathbf{y}_{2})/2\), see the example on the left in Fig. 1. If an FR system uses similarity scores, defined by a function \(S\), then \[\mathbf{y}^{*}:=\text{argmax}_{\mathbf{y}\in Y}\left(\min\left[S(\mathbf{y},\varphi(\mathbf{x}_{1})),S(\mathbf{y},\varphi(\mathbf{x}_{2}))\right]\right). \tag{2}\] For example, if \(S\) returns cosine similarity, then \(S(\mathbf{y}_{1},\mathbf{y}_{2})=\cos(\theta)\), where \(\theta\) is the angle between \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), see Fig. 1. In that case \(\mathbf{y}^{*}\) is any \(\mathbf{y}\) for which \(S(\mathbf{y}_{1},\mathbf{y})=S(\mathbf{y},\mathbf{y}_{2})=\cos(\theta/2)\). Since worst-case embeddings can be calculated using only normal (bona fide) images, no morphs are needed to compute the worst-case upper bound. This means that the potential vulnerability of an FR system can be determined without having to make or evaluate a single morph. ### GANs for Morph Generation MorGAN [6] uses Adversarially Learned Inference (ALI) to generate \(64\times 64\) pixel morphs. ALI consists of training three networks: an Encoder, a Decoder (similar to the Generator in a plain GAN) and a Discriminator. MorGAN generates morphs by passing two images through the Encoder, interpolating between the two resulting latent embeddings and then passing this interpolation through the Decoder. This approach results in an image that shares similarities with both original images. The resulting morphs have low resolution and, compared to landmark-based morphs, are not nearly as successful at fooling FR systems. MIPGAN [7] makes use of a pretrained StyleGAN network by training an Encoder that encodes images into the StyleGAN latent space. Optimisation is then used to approximate an optimal embedding in the StyleGAN latent space that, when passed through StyleGAN, results in a morph. The morphs are visually convincing, as confirmed by studies on human ability to distinguish between morphs and real images. They are about as successful at attacking FR systems as landmark-based morphs. The MIPGAN method is improved on in RegenMorph [18]. The resulting images are visually more convincing, but are shown to be less successful than MIPGAN morphs at fooling FR systems. What these existing GAN-based approaches have in common is that the underlying networks were all trained with the goal of generating fake images that look like real images. While MorGAN uses a pixel-based loss to preserve identity in images, none of the networks were specifically _trained_ to generate morphs. 
This means that optimisation may be used together with a trained and frozen network to find the optimal latent embedding that leads to a successful morph, but we hypothesise that already considering the goal of generating morphs _during_ instead of only _after_ training might lead to more successful morphs. Figure 1: The worst-case embedding \(\mathbf{y}^{*}\) when \(d\) denotes Euclidean distance (left) or angle (right). If it exists, an image that maps to \(\mathbf{y}^{*}\) is even more challenging than a landmark- (\(\mathbf{y}_{\text{m}}\)) or GAN-based morph (\(\mathbf{y}_{\text{GAN}}\)). Morphing attacks generated specifically to exploit vulnerabilities of deep-learning-based FR can be considered as a type of _adversarial attack_ on an FR system [19], since images are manipulated in a way similar to _impersonation attacks_, where in the case of morphing, two identities are being "impersonated" simultaneously. An overview of research on GAN inversion is provided in [20], where new inverse networks are trained to invert already existing GANs. On the other hand, approaches such as in [16, 21] attempt to _jointly_ train an Encoder, a Decoder (the GAN Generator) and a Discriminator network. As mentioned in [16], it is possible that there are interactions that can be better learned by training these networks jointly, since the Encoder and Decoder can interact during training, which is not possible when using a frozen GAN. For this reason, we explore whether it is possible to improve methods that use the second approach, such as [16, 21], by addressing some disadvantages such as unstable training. We show that the resulting approach Wasserstein ALI (WALI) is well-suited to approximate worst-case morphs. ### Morphing Attack Detection (MAD) Research on variation in morphing generation algorithms includes: post-processing landmark-based morphs to mask effects caused by the morphing process [22], a model to simulate the effects of printing and scanning [23] and considering the influence of ageing on morphing attacks [24]. The lack of variation in morphing techniques is addressed in [25], which presents a method for MAD and evaluates it on morphs created using different algorithms, which are all landmark-based. Printed-and-scanned morphs are included in this evaluation, but GAN morphs or other methods are not taken into consideration. In this work, we evaluate morphs using two MAD methods to show that if they are trained with landmark-based morphs only, then they struggle to detect WALI- as well as (improved) MIPGAN-based morphs, emphasising the need for varied datasets for training MAD. ## 3 Proposed System ### Adversarially Learned Inference In ALI [16] two probability distributions over \(\mathbf{x}\) and \(\mathbf{z}\) are considered: * the Encoder joint distribution \(q(\mathbf{x},\mathbf{z})=q(\mathbf{x})q(\mathbf{z}\mid\mathbf{x})\), * the Decoder joint distribution \(p(\mathbf{x},\mathbf{z})=p(\mathbf{z})p(\mathbf{x}\mid\mathbf{z})\). The Encoder marginal \(q(\mathbf{x})\) is the empirical data distribution over the image space \(\mathcal{X}=[0,1]^{d_{1}}\), where \(d_{1}=w\times h\times n_{c}\), the width by height of the image by the number of colour channels \(n_{c}\). The Decoder marginal \(p(\mathbf{z})\) over the latent space \(\mathcal{Z}\) is the distribution from which input noise is sampled, e.g. 
a standard Normal distribution \(p(\mathbf{z})=\mathcal{N}(0,I)\) over \(\mathcal{Z}=(-\infty,\infty)^{d_{2}}\) (this can be truncated to \([-R,R]^{d_{2}},\ R\in\mathbb{R}\) to ensure that \(\mathcal{Z}\) is compact, which is needed to prove that ALI converges). Embeddings in the ALI latent space \(\mathcal{Z}\) are denoted \(\mathbf{z}\) and should not be confused with embeddings \(\mathbf{y}\) in the FR latent space. The objective of ALI is to match the two joint distributions. In order to achieve this, an adversarial game is played using: * \(G_{z}\): an Encoder that maps from image space to a latent space, * \(G_{x}\): a Decoder that maps from the latent space to image space, * \(D\) (or \(C\)): a Discriminator (or Critic) that tries to determine whether joint pairs \((\mathbf{x},\mathbf{z})\) are drawn either from \(q(\mathbf{x},\mathbf{z})\) or \(p(\mathbf{x},\mathbf{z})\). See Fig. 2 for a visualisation of these networks. If the two joint distributions are successfully matched, then existing data points can be encoded into latent vectors that follow the same distribution as the sampled input noise. Then, if the latent vectors are passed through the Decoder, the generated images in turn follow the same distribution as the real images. These properties together allow us to manipulate existing data and to interpolate between real data points. ALI suffers from some limitations, such as training instability and limited ability to faithfully reconstruct images [26, 27]. We find that to successfully train ALI to generate facial images, some tweaks are needed, such as limiting the updates of the Discriminator and ending training before mode collapse occurs. For this reason, we combine the advantages of Wasserstein GANs [28, 29] with the ALI architecture to improve training stability. First, we adapt ALI to include Wasserstein elements and train until convergence, see Fig. 2(a). Next, we finetune using losses to encourage the system to generate difficult morphs. We do this using losses on the image level that encourage the system to faithfully reconstruct normal images, but also use a Face Recognition (FR) system to ensure the reconstructed images maintain identity information, see Fig. 2(b). We use the same FR system to nudge the system to generate morphs that approximate worst-case morphs, see Fig. 2(c). ### Baseline training We mainly follow the ALI training procedure, but replace transposed convolutions with upsampling and size-maintaining convolutions to avoid chequerboard artefacts [30]. We also remove the sigmoid output layer in the Discriminator, so that it no longer outputs values between 0 and 1 (where 0 means fake and 1 real), but instead outputs a _score_, making the Discriminator network a _Critic_ (\(C\)). A higher Critic score indicates that an image looks more real, and a lower score indicates that according to the Critic the generated image looks more fake. We follow the approach from [29], i.e. we update \(G_{z}\) and \(G_{x}\) after every fifth update of the Critic. The Critic in turn is trained to output larger scores for real images and vice versa, and to ensure Lipschitz continuity a gradient penalty is added to the Critic loss. Since WALI is trained to match a joint distribution, we include a gradient penalty \(R_{z}\) w.r.t. the latent input and a gradient penalty \(R_{x}\) w.r.t. the image input. Following recommendations from [29] we set the gradient penalty weight to 10. Figure 2: The networks and architecture used in Wasserstein ALI (WALI). 
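As a concrete illustration of such a penalty term, ahead of the formal definitions in Eqs. (3)-(8) below, here is our own PyTorch rendering of the standard \((\|\nabla C\|_{2}-1)^{2}\) penalty. Note it is a sketch: it evaluates the penalty at the given \((\mathbf{x},\mathbf{z})\) pairs, whereas classic WGAN-GP evaluates it at random interpolates between real and generated samples:

```python
import torch

def gradient_penalty(critic, x, z, wrt_x=True):
    # (||grad C(x, z)||_2 - 1)^2, taken w.r.t. the image input (R_x)
    # or the latent input (R_z), to softly enforce Lipschitz continuity.
    x = x.detach().clone().requires_grad_(True)
    z = z.detach().clone().requires_grad_(True)
    scores = critic(x, z)
    target = x if wrt_x else z
    (grad,) = torch.autograd.grad(scores.sum(), target, create_graph=True)
    grad = grad.reshape(grad.shape[0], -1)        # flatten per sample
    return ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
```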
We start with a baseline architecture that generates \(32\times 32\) pixel images. The architecture can be changed to generate higher-resolution images by simply adding layers to the three networks. For example, to generate 64 by 64 pixel images, we add one more convolutional layer before the first layer in \(G_{z}\) and \(C\), and one more upsampling and convolution after the last layer in \(G_{x}\). We train \(C\) to minimise \[\mathcal{L}_{C}=s_{\text{fake}}-s_{\text{real}}+\lambda(R_{x}+R_{z}), \tag{3}\] and \(G_{z}\) and \(G_{x}\) to minimise \[\mathcal{L}_{G}=|s_{\text{real}}-s_{\text{fake}}|, \tag{4}\] where \[s_{\text{real}} =\mathop{\mathbb{E}}_{(\mathbf{x}_{\text{real}},\mathbf{z}_{\text{real}})\sim q(x,z)}[C(\mathbf{x}_{\text{real}},\mathbf{z}_{\text{real}})], \tag{5}\] \[s_{\text{fake}} =\mathop{\mathbb{E}}_{(\mathbf{x}_{\text{fake}},\mathbf{z}_{\text{fake}})\sim p(x,z)}[C(\mathbf{x}_{\text{fake}},\mathbf{z}_{\text{fake}})],\] (6) \[R_{x} =\mathop{\mathbb{E}}_{(\mathbf{x},\mathbf{z})\sim p(x,z)}[(\|\nabla_{\mathbf{x}}C(\mathbf{x},\mathbf{z})\|_{2}-1)^{2}],\] (7) \[R_{z} =\mathop{\mathbb{E}}_{(\mathbf{x},\mathbf{z})\sim p(x,z)}[(\|\nabla_{\mathbf{z}}C(\mathbf{x},\mathbf{z})\|_{2}-1)^{2}]. \tag{8}\] ### Finetuning for Morph Generation Once the three networks \(G_{z},G_{x}\) and \(C\) have converged, we fine-tune them using losses that encourage the network to generate morphs that are close to the worst case. We do this using five different losses. The first two losses are a pixel loss \(\mathcal{L}_{\text{pixel}}\) and a Focal Frequency Loss (FFL) [31]; both encourage the generator network to reconstruct images on a pixel level. This second loss \(\mathcal{L}_{\text{ffl}}\) has the advantage that it forces the generator to reconstruct more challenging frequencies as well as easier frequencies. Next, we define losses to manipulate identity information in generated images using an FR system. Without loss of generality1, we assume the FR system used compares images using a dissimilarity score function \(d\) that calculates the angle between two latent embedding vectors. We denote the mapping used by the FR system to map images onto latent embeddings by \(\varphi\). We use three losses to encourage WALI to generate morphs that contain as much relevant identity information as possible. These are \(\mathcal{L}_{\text{FR}}\), \(\mathcal{L}_{\text{FR,Morph},\alpha}\) and \(\mathcal{L}_{\text{FR,Morph}}\), which are defined as follows: Footnote 1: _The only requirement is that \(d\) is known and differentiable._ \[\mathcal{L}_{\text{FR}}=\mathop{\mathbb{E}}_{\mathbf{x}_{\text{real}}\sim q(\mathbf{x})}\left[d(\varphi(\mathbf{x}_{\text{recon}}),\varphi(\mathbf{x}_{\text{real}}))\right], \tag{9}\] where \(\mathbf{x}_{\text{recon}}=G_{x}(G_{z}(\mathbf{x}_{\text{real}}))\). \[\mathcal{L}_{\text{FR,Morph},\alpha}=\mathop{\mathbb{E}}_{\mathbf{x}_{1},\mathbf{x}_{2}\sim q(\mathbf{x})}\left[d(\varphi(\mathbf{x}_{\text{morph}}^{\alpha}),\mathbf{y}^{\ast})\right], \tag{10}\] where \[\mathbf{x}_{\text{morph}}^{\alpha}=G_{x}(\alpha\mathbf{z}_{1}+(1-\alpha)\mathbf{z}_{2}) \tag{11}\] for \(\mathbf{z}_{1}=G_{z}(\mathbf{x}_{1})\) and \(\mathbf{z}_{2}=G_{z}(\mathbf{x}_{2})\), and \(0\leq\alpha\leq 1\). As defined in Eq. 1, \(\mathbf{y}^{\ast}\) is the worst-case embedding given \(\mathbf{y}_{1}=\varphi(\mathbf{x}_{1})\) and \(\mathbf{y}_{2}=\varphi(\mathbf{x}_{2})\). Finally \[\mathcal{L}_{\text{FR,Morph}}=\mathcal{L}_{\text{FR,Morph},\alpha},\quad\alpha=0.5. \tag{12}\]
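The finetuning losses reduce to a few tensor operations once \(d\) is chosen as the angle between embeddings. A minimal sketch under that assumption; the handles `G_z`, `G_x` and `phi` are placeholders for the trained Encoder, Decoder and FR mapping, not a fixed API:

```python
import torch
import torch.nn.functional as F

def angle(y1, y2):
    # The dissimilarity d of Eqs. (9)-(12): angle between embedding batches.
    cos = F.cosine_similarity(y1, y2, dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos)

def loss_fr_morph_alpha(G_z, G_x, phi, x1, x2, alpha):
    # Eq. (10): pull the FR embedding of the decoded interpolation towards
    # the worst-case embedding y* of Eq. (2).  For an angular d, y* is the
    # angular bisector of the two embeddings (for the Euclidean d of
    # Eq. (1) it would simply be their midpoint).
    z_morph = alpha * G_z(x1) + (1.0 - alpha) * G_z(x2)   # Eq. (11)
    x_morph = G_x(z_morph)
    y1 = F.normalize(phi(x1), dim=1)
    y2 = F.normalize(phi(x2), dim=1)
    y_star = F.normalize(y1 + y2, dim=1)                  # bisector, Eq. (2)
    return angle(phi(x_morph), y_star).mean()
```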
In principle, \(\mathcal{L}_{\text{FR}}\) and \(\mathcal{L}_{\text{FR,Morph}}\) are the same as \(\mathcal{L}_{\text{FR,Morph},\alpha}\), just for fixed \(\alpha=1\) and \(\alpha=0.5\), respectively. We find that including these losses specifically, instead of simply increasing the weight for \(\mathcal{L}_{\text{FR,Morph},\alpha}\), leads to the network being able to generate more challenging morphs when evaluating with FR systems under black-box settings, see Table 2. Figure 3: The losses used in WALI. All five losses are combined in \[\mathcal{L}=\gamma_{1}\mathcal{L}_{\text{pixel}}+\gamma_{2}\mathcal{L}_{\text{ffl}}+\gamma_{3}\mathcal{L}_{\text{FR}}+\gamma_{4}\mathcal{L}_{\text{FR,Morph}}+\gamma_{5}\mathcal{L}_{\text{FR,Morph},\alpha}. \tag{13}\] We use MobileFaceNet (MFN) [32] to estimate these losses during training, where we intentionally choose a light-weight network in order to reduce the GPU memory needed. We evaluate our generated morphs with eight FR systems that were not used during training: VGG16 [33], ArcFace (AF) [34], the Inception ResNet-based FaceNet (INC) [35], ElasticFace (EF) [36], CurricularFace (CF) [37], PocketNet5-128 (PN) [38], Dlib [39], and a Commercial Off The Shelf (COTS) system.

```
θ_C, θ_{G_z}, θ_{G_x} ← initialise network parameters
repeat
    x_real^(1), ..., x_real^(N)                   ▷ Draw N samples from the dataset
    z_fake^(1), ..., z_fake^(N)                   ▷ Draw N random latent embeddings
    z_real^(i) = G_z(x_real^(i)),  i = 1,..,N     ▷ Get real embeddings
    x_fake^(i) = G_x(z_fake^(i)),  i = 1,..,N     ▷ Generate fake images
    s_real = (1/N) Σ_i C(x_real^(i), z_real^(i))  ▷ Critic output for real data
    s_fake = (1/N) Σ_i C(x_fake^(i), z_fake^(i))  ▷ Critic output for fake data
    x_recon^(i) = G_x(z_real^(i)),  i = 1,..,N    ▷ Reconstruct real images
    z_morph^(i) = α z_real^(i) + (1-α) z_real^(j),  j = 2,..,N,1  ▷ Interpolate latent embeddings
    update θ_C by descending L_C (Eq. 3); every fifth iteration update θ_{G_z}, θ_{G_x} by descending L_G (Eq. 4) and, when finetuning, the losses of Eq. 13
until convergence
```
but including FFHQ improves the results, especially when evaluating with FR (as opposed to only by visual inspection). We create four sets of morphs using the validation set: landmark-based morphs, GAN-based morphs using MIPGAN-I [7], approximations of worst-case morphs generated by our WALI method, and approximations of worst-case morphs generated using our improved MIPGAN implementation. We select 75 pairs of similar identities by calculating a mean FR embedding for each identity, \(\overline{\mathbf{z}}=\frac{1}{L}\sum_{i=1}^{L}\varphi(\mathbf{x}_{i})\), and then selecting those pairs for which the mean FR embeddings are most similar (sketched in code below). For each pair of identities we select all faces with neutral expression and from all possible combinations we randomly select 506 image pairs for morphing. For each pair \((x_{1},x_{2})\) we create three landmark morphs, one MIPGAN morph, one WALI worst-case approximation for each FR system used for optimisation (seven in total), and one improved MIPGAN worst-case approximation, see Fig. 4. The three landmark morphs comprise one full morph - the faces and also the background of both original images are morphed - and two spliced morphs - full morphs spliced into the background of each of the original images respectively to remove ghosting artefacts. After freezing WALI's weights, a worst-case approximation is generated by applying 150 optimisation steps in phase 1 and 150 steps in phase 2 (Section 3.4). For our improved MIPGAN morphs we also apply 150 optimisation steps in two phases, this time using a StyleGAN Generator and Encoder. We notice that there is a difference in behaviour between the newer FR systems ElasticFace and CurricularFace compared to the other FR systems we use for optimisation, which we describe in Section 6. For this reason, whenever we optimise with MobileFaceNet and one of these two FR systems, we weight the losses corresponding to the latter with a factor of 2. In all other cases the optimisation losses as defined in Section 3.4 are weighted equally. We did not extensively analyse the effect of weighting losses differently, so in other applications this may need to be examined further in order to select weights that suitably balance the different losses. For image generation tools and/or MAD methods it would be better to use datasets that are more balanced and include more variation in terms of gender, age, and ethnicity; we encourage the research community to take this into consideration in future research. We also compare different GAN- and landmark-based morphs [45, 46] created using images from the FRLL [47] dataset. The FRLL dataset consists of 102 identities and two images per identity. For each identity one image with neutral expression is provided that is suitable for morph generation. 
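As referenced above, the identity-pairing step can be sketched as follows, assuming per-identity arrays of FR embeddings (function and variable names are ours, for illustration only):

```python
import numpy as np

def select_similar_pairs(embeddings_by_id, n_pairs=75):
    # Mean FR embedding per identity, then the n_pairs identity pairs
    # whose mean embeddings are most similar under cosine similarity.
    ids = sorted(embeddings_by_id)                        # identity labels
    means = np.stack([embeddings_by_id[i].mean(axis=0) for i in ids])
    means /= np.linalg.norm(means, axis=1, keepdims=True)
    sims = means @ means.T                                # pairwise cosine sims
    iu = np.triu_indices(len(ids), k=1)                   # unique pairs only
    order = np.argsort(sims[iu])[::-1][:n_pairs]          # most similar first
    return [(ids[iu[0][k]], ids[iu[1][k]]) for k in order]
```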
Five morph datasets are provided: WebMorph, OpenCV, FaceMorpher, and AMSL are landmark-based morphs; StyleGAN morphs are GAN-based morphs. AMSL consists of 2175 landmark-based morphs. When using a landmark-based morphing tool, a morph based on two images can be spliced into the background of either the first or the second image. Since in the AMSL dataset both options are not always provided, we only evaluate with identity pairs for which both spliced morphs are provided. We do this to enable a fair comparison of all morphing methods. WebMorph, OpenCV and FaceMorpher consist of full morphs only, i.e. they contain obvious morphing artefacts. ## 5 Evaluation Metrics To measure and compare the performance of our model, we calculate the Morphing Attack Potential (MAP) [48] for \(r=1\) verification attempt and \(c=1\) FR system, which is the same as the Mated Morph Presentation Match Rate, abbreviated MMPMR(\(t\)). We consider morphs based on two identities, in which case the MMPMR(\(t\)) [1] is the proportion of (morphing) attacks for which both contributing identities are considered a match by the FR system when using a threshold \(t\): \[\text{MMPMR}(t)=\frac{1}{M}\sum_{m=1}^{M}\mathbb{1}\left\{\max(d_{1,m},d_{2,m})<t\right\} \tag{16}\] where \(d_{1,m}\) and \(d_{2,m}\) are the dissimilarity scores between the \(m\)-th morph and a probe image of the first and second identity, respectively, and \(M\) is the number of morphed images. We report MMPMR values for nine different FR systems. For each FR system we set \(t\) such that the false non-match rate is minimal while the false match rate \(<0.1\%\) (a code sketch of this metric and threshold choice follows at the end of this section). Higher MMPMR values indicate higher vulnerability to morphing attacks. It would be possible to compute MAP values for \(c>1\), but for WALI morphs we always treat one or two FR systems as white-box systems, so this might lead to unfair comparisons. Instead, we would have to compute different MAP matrices for all morphing techniques excluding one or two FR systems at a time, which would quickly become unwieldy. For this reason we choose to only report the MMPMR. ### Morphing Attack Detection We evaluate morphs generated using WALI with two MAD methods. The first is a single image-based MAD (S-MAD) approach: a Support Vector Machine (SVM) trained with Local Binary Pattern (LBP) features, which learns to detect morphed images based on image texture [49, 50]. The second is a differential image-based (D-MAD) method that is based on Deep Learning (DL) features [51]. We train both MAD methods using the FRGC images we also used to train WALI. While the LBP-based approach can successfully detect WALI morphs, this may simply be due to the similarity of WALI morphs to the FRGC training data. To show that this is the case and that it is insufficient to train with landmark-based morphs, we also train the LBP approach using FRLL and AMSL. We include 20% of the landmark-based morphs (selected randomly) in the training set, due to the class imbalance. Because of the low number of genuine pairs (only one pair per identity) we do not train the D-MAD approach with this dataset. We report the performance of these two MAD methods using the Bona fide Presentation Classification Error Rate (BPCER), the proportion of bona fide images that are incorrectly labelled as morphs, and the Attack Presentation Classification Error Rate (APCER), the proportion of (morphing) attacks that are misidentified as bona fides. Higher APCER values indicate higher vulnerability of an MAD system to morphing attacks, while higher BPCER values mean more bona fide images are wrongly rejected. 
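As referenced above, Eq. (16) and the threshold selection translate directly into code. A minimal NumPy sketch of our own, assuming dissimilarity scores (accept if \(d<t\)); picking \(t\) at the impostor-score quantile approximates "minimal FNMR subject to FMR \(<0.1\%\)":

```python
import numpy as np

def threshold_at_fmr(impostor_scores, max_fmr=0.001):
    # Largest threshold t at which at most a fraction max_fmr of impostor
    # comparisons are (wrongly) accepted, i.e. FMR(t) <= 0.1%.
    return np.quantile(np.asarray(impostor_scores), max_fmr)

def mmpmr(d1, d2, t):
    # Eq. (16): fraction of morphs for which BOTH contributing identities
    # match the morph, i.e. max(d_1m, d_2m) < t.
    d1, d2 = np.asarray(d1), np.asarray(d2)
    return float(np.mean(np.maximum(d1, d2) < t))
```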
## 6 Results In Fig. 4 we show examples of morphs generated using WALI and compare them with landmark, MIPGAN and improved MIPGAN morphs. WALI morphs are more blurry compared to MIPGAN morphs, which to a large extent is due to MIPGAN relying on a StyleGAN model that generates \(1024\times 1024\) images while the WALI morphs are \(128\times 128\)-pixel images. In Fig. 6 we show that the visual quality (from a human perspective) can be improved simply by increasing the WALI model size. We report MMPMR values for one FR system at a time for the case where optimisation was guided by MFN only and for the case where optimisation was guided by two FR systems (MFN+EF, MFN+CF, MFN+AF, MFN+INC, MFN+PN, MFN+VGG), see Table 1. Dlib is not available as a Pytorch implementation, so we did not optimise using this FR system. When WALI is optimised with two FR systems, the resulting morphs are more challenging than either landmark or MIPGAN morphs for both FR systems used for optimisation. There is an interesting difference in behaviour that sets apart ElasticFace and CurricularFace from other FR systems. Comparing WALI morphs optimised with MFN+AF, MFN+INC, MFN+PN, MFN+VGG to landmark- and MIPGAN morphs, we see that the MMPMR is closer to the worst case for all black-box tested FR systems except ElasticFace, CurricularFace, Dlib and COTS. At first glance this could be interpreted to mean that ElasticFace and CurricularFace are generally less vulnerable to GAN-based morphing attacks. However, when WALI morphs are optimised using ElasticFace, the resulting morphs are also closer to the worst case when evaluating with CurricularFace and vice versa. When either of the two is used for optimisation, they are no less vulnerable to GAN-based morphing attacks than other FR systems. Interestingly, Dlib - and to a lesser extent also the COTS FR system - is less vulnerable to MIPGAN and WALI morphs than to landmark morphs. It is also interesting to highlight the inverse relationship between performance on normal images and vulnerability to morphing attacks. Comparing the last two columns illustrates this in theory: the two FR systems with the lowest FNMR also have the highest worst-case MMPMR. In practice the same pattern is shown: the FR systems with lower FNMR are indeed more vulnerable to landmark, MIPGAN and WALI morphs. Table 2 reports the MMPMR and confirms our hypothesis that explicitly considering the goal of morphing _during_ training leads to more challenging morphs. There may be some amount of trade-off between the two goals when using WALI: generating visually convincing images versus successfully manipulating identity information. The following four aspects lead to more challenging morphs: 1. defining a worst-case embedding that we can use to define losses during training and optimisation, 2. explicitly training the model to generate morphs, 3. improving optimisation by splitting it into two phases: before we generate morphs, we select good initial embeddings for each input image, 4. optimising with more than one FR system. WALI does not seem to generalise well to other datasets. This can be seen in Table 3 and Fig. 5. This is to a large extent due to our WALI Generator (7.8 million parameters) not being able to compete with a more powerful Generator such as StyleGAN (28.3 million parameters). 
When applying a colour correction to FRLL images so that they more closely resemble FRGC images, the MMPMR of WALI morphs significantly increases, for example from 30.0% to 11.0% for CurricularFace and from 14.1% to 44.5% for PocketNet, indicating that the lower performance of WALI is to a large extent due to the different type of data. In order to illustrate the effect of approximating a worst case when considering FRLL data, we can apply three of the four improvements listed above to existing generative methods. We show that combining a more powerful StyleGAN Generator with the improved optimisation approach in two phases, as well as optimising with two FR systems, still leads to closer approximations of worst-case morphs. Morphs generated with our improved MIPGAN implementation have higher MMPMR values than all other GAN-based morphs, and also higher MMPMR than AMSL morphs. While the MMPMR for the other three landmark-based methods is higher, those morphs contain very obvious artefacts. Since the MIPGAN optimisation process includes a perception-style loss that encourages visual similarity to both contributing identities, the MIPGAN morphs contain some ghosting artefacts. Because we do not include such a loss during optimisation phase 2, the improved MIPGAN morphs are visually more convincing than MIPGAN morphs and landmark morphs that contain visible ghosting artefacts. Some of the other three landmark-based approaches outperform improved MIPGAN, but contain significant ghosting artefacts that would not fool visual inspection by humans. If a large network such as StyleGAN were explicitly _trained_ to generate morphs, they might become even more challenging. Figure 4: Examples of landmark-based and different GAN-based morphs based on FRGC images. Figure 5: Examples of morphs based on FRLL images. WALI (and other morph methods) are trained on another dataset and applied to FRLL images, which have different lighting and colour balance. WALI may not generalise well to unseen data, mainly because of the simple WALI generator which cannot compete with more powerful GANs. Incorporating StyleGAN in our WALI pipeline results in ‘Improved MIPGAN’, giving visually convincing results. ### S-MAD using LBP We implement an S-MAD approach based on Local Binary Patterns (LBP) followed by a Support Vector Machine (SVM). We compare the ability of this model to detect morphs: once when it was trained using the same training data as WALI and once when using a separate training set. LBP features may be appropriate for detecting WALI-based morphs when the underlying training data is known, but performance decreases significantly when the database is unknown and contains only landmark-based morphs, see Table 4 and Fig. 7. LBP features are not at all suitable for detecting (improved) MIPGAN morphs, which is probably due to the ability of StyleGAN to generate images with texture that is similar to that of real images. The APCER for images generated by a 512\(\times\)512 WALI model trained without FR losses (see the bottom row in Fig. 6 and the last column in Table 4) ranges from 78.8% to 99.8%, showing a similar effect. ### D-MAD using deep-learning-based FR feature differences While this approach seems to be very successful at detecting morphed images that were created using the same algorithm that was used to create the training set, its performance decreases significantly when evaluating (improved) MIPGAN or WALI morphs, see Table 4 and Fig. 7. 
Note that this D-MAD approach can detect images generated by a 512\(\times\)512 WALI model trained without FR losses much more easily than other GAN-based morphs, which makes sense, since these morphs were not optimised using FR systems. If this approach were trained with a separate training set other than FRGC we would expect its performance on MIPGAN or WALI morphs to decrease further. ### Discussion For all FR systems we evaluated, except Dlib, our approach outperforms MIPGAN morphs based on FRGC. For three out of six FR systems tested under black-box assumptions WALI morphs outperform landmark morphs. This shows that it is possible to approximate the theoretical worst case for more than one FR system. As we already mentioned, this does not mean that ElasticFace and CurricularFace are generally less vulnerable to GAN-based morphing attacks. These two FR systems are newer and seem to show different behaviour from the other FR systems we tested. Using WALI to generate morphs is computationally expensive, since optimisation needs to be performed for every morph that is generated. Due to hardware limitations we report results for \(128\times 128\) images. While we did successfully generate larger images - up to \(512\times 512\), compared to MIPGAN morphs that rely on a StyleGAN model that generates \(1024\times 1024\) images - this takes significantly longer and requires more GPU memory, especially during training. However, our results do show that morphs exist that are extremely successful at exploiting the vulnerabilities of (multiple) FR systems. Therefore, the idea of a criminal tweaking their morph in ways to make it more likely to be accepted by multiple FR systems is entirely possible, illustrating the need to focus on quality as well as quantity when generating morphing datasets. We evaluated five different FR systems and showed that in the (theoretical) worst case up to 72-98% of FRGC morphs can trick the FR system. For four out of five FR systems we evaluated, our WALI morphs when optimised with two FR systems are closer to this upper bound than either landmark or MIPGAN morphs. As has been reported before [11], there seems to be an inverse relationship between the performance of FR systems on normal data and their vulnerability to morphing attacks. WALI's improvements are due to having a worst-case embedding as a goal to approximate, improved optimisation in two phases (finding a good initial embedding for each bona fide image before generating morphs), optimising with more than one FR system simultaneously, and including the goal of morphing during training. The first three can be applied to other existing generative methods; we used StyleGAN as an example, leading to an improved MIPGAN approach that produces morphs that are more challenging than other GAN-based morphs. WALI morphs were generated in an adversarial manner and probably exploit the fact that deep-learning-based FR systems are sensitive to certain patterns in images. While such patterns might be imperceptible to humans, they can make the FR systems vulnerable to WALI morphs. 
These patterns may not survive post-processing such as printing & scanning, resizing, etc. Furthermore, there are still artefacts visible to the human eye, as can be seen in Fig. 4 and 6, for example around the mouth or eyes. Visual inspection would probably allow e.g. border guards to detect that the generated morph is not a real image. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & Landmark & MIPGAN & Improved MIPGAN (Ours) & WALI (Ours) _MFN_ & WALI (Ours) _MFN \& ELFace_ & WALI (Ours) _MFN \& CurrFace_ & WALI (Ours) _MFN \& ArcFace_ & WALI (Ours) _MFN \& Inception_ & WALI (Ours) _MFN \& PocketNet_ & WALI (Ours) _MFN \& VGG16_ & Worst Case & FNMR \\ \hline _MobileFaceNet_ & 65.7 & 71.9 & 91.8 & 96.8 & 96.6 & 96.6 & 96.8 & 97.0 & 97.0 & 96.6 & 97.5 & 0.5 \\ \hline _ElasticFace_ & 56.9 & 18.9 & 83.0 & 14.0 & 81.8 & 60.4 & 25.7 & 20.8 & 18.7 & 18.1 & 98.8 & 0.0 \\ \hline _CurricularFace_ & 45.9 & 11.1 & 60.9 & 8.6 & 46.6 & 68.3 & 14.8 & 12.2 & 10.5 & 13.2 & 99.0 & 0.0 \\ \hline _ArcFace_ & 70.7 & 62.9 & 84.5 & 64.6 & 76.2 & 75.6 & 90.4 & 71.7 & 70.2 & 70.2 & 97.9 & 0.2 \\ \hline _Inception_ & 36.8 & 37.0 & 51.1 & 37.6 & 47.2 & 46.4 & 43.7 & 58.0 & 41.2 & 42.5 & 71.8 & 3.4 \\ \hline _PocketNet_ & 34.1 & 34.2 & 49.0 & 48.0 & 49.3 & 48.7 & 51.4 & 51.3 & 63.5 & 50.3 & 84.2 & 3.8 \\ \hline _VGG16_ & 36.4 & 32.7 & 42.1 & 35.4 & 39.4 & 40.1 & 40.1 & 41.1 & 38.5 & 56.2 & 92.0 & 7.6 \\ \hline _Dlib_ & 45.1 & 37.2 & 42.4 & 27.3 & 32.6 & 31.4 & 32.5 & 32.9 & 30.0 & 32.2 & 72.3 & 5.8 \\ \hline _COTS_ & 99.8 & 93.4 & 98.6 & 71.4 & 94.6 & 95.5 & 79.6 & 80.4 & 76.3 & 75.0 & n/a & 0.0 \\ \hline \end{tabular} \end{table} Table 1: MMPMR values for landmark- and GAN-based morphs. The second-to-last column shows the theoretical worst case for each respective FR system. Grey cells indicate evaluation was under white-box assumptions, i.e. this FR system was used during optimisation. The more challenging the morphs, the higher the MMPMR. To show that there is a trade-off between FR performance and vulnerability to morphing attacks, we report the False Non-Match Rate (FNMR, in %) at which the False Match Rate \(<0.1\%\) in the last column. The morphing methods highlighted in blue are closest to the worst case for almost all FR systems. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & MIPGAN & WALI (Ours) with all FR losses & WALI without FR losses & WALI w/o \(\mathcal{L}_{\text{FR}}\) & WALI w/o \(\mathcal{L}_{\text{FR,Morph}}\) & WALI w/o \(\mathcal{L}_{\text{FR,Morph},\alpha}\) \\ \hline _MobileFaceNet_ & 0.9 & 19.2 & 0.3 & **19.6** & 19.3 & 14.4 \\ \hline _ElasticFace_ (black box) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline _CurricularFace_ (black box) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline _ArcFace_ (black box) & 0.7 & **13.0** & 0.1 & 12.0 & 6.9 & 6.6 \\ \hline _Inception_ (black box) & 0.5 & **13.2** & 0.6 & 7.9 & 9.5 & 9.0 \\ \hline _PocketNet_ (black box) & 1.5 & **18.0** & 1.1 & 16.8 & 17.7 & 14.0 \\ \hline _VGG16_ (black box) & 2.0 & **10.7** & 0.4 & 9.5 & 8.0 & 6.0 \\ \hline _Dlib_ (black box) & 5.6 & **10.1** & 0.3 & 8.1 & 3.7 & 7.5 \\ \hline _COTS_ (black box) & 0.0 & 0.1 & 0.0 & **1.5** & **1.5** & 1.4 \\ \hline \end{tabular} \end{table} Table 2: MMPMR for WALI morphs without any optimisation stage. The more challenging the morphs, the higher the MMPMR. 
Our findings therefore show room for improvement for FR systems. We hope that our proposed method WALI can contribute to such an improvement by generating more challenging training data for FR systems. ## 7 Conclusion & Future Work In this work, we showed that generating challenging morphs is possible and necessary to evaluate the robustness of FR systems. Our newly proposed WALI method outperformed existing morphing techniques on FRGC data, and since it provides a way to generate large quantities of difficult morphs, it could contribute to improving FR and MAD systems' performance. We also introduced an improved MIPGAN approach that, due to the powerful underlying StyleGAN Generator, generated challenging morphs on FRLL as well as on FRGC. We showed that if the goal of generating challenging morphs is not explicitly considered during the training of a GAN, then the resulting morphs will be significantly less challenging than when that goal _is_ included during training. Challenges for future research include generating such datasets while also making sure to cover the possible range of morphs by focussing on (visual) quality as well as quantity, for example by investigating the effect of time-consuming manual post-processing. It would be interesting to explore whether GAN networks that can produce images of as high quality as e.g. StyleGAN can also be adapted to explicitly include the goal of generating difficult morphs during training. We showed that optimising towards a worst case leads to more challenging morphs; similar adaptations could be made to Diffusion-based approaches as well. Additionally, further investigation could be carried out on the effect of post-processing techniques on the robustness of FR systems to morphs. Moreover, the effects of training FR systems or MAD methods with large datasets generated with WALI or improved MIPGAN could be further explored in future research. ## 8 Ethics, Broader Impact, and Reproducibility This paper introduces methods to generate morphs, which could potentially be used to apply for passports or other documents that could be shared by two people, for example allowing them to avoid travel restrictions. As long as countries allow applicants to provide their own digital or printed passport photo, this will continue to pose a risk. On the other hand, sharing our morphing generation method will allow researchers to be more aware of potential vulnerabilities, and support the development of countermeasures. Our method can be used to generate large datasets of advanced morphs that can for example be used to train FR systems or to teach human border control staff to better spot morph-related artefacts. We aim to raise awareness of risks posed by morphing; without sharing our method, such vulnerabilities might remain unknown. We also intend to share our code for research purposes only. To aid reproducibility, we have included important information such as hyperparameters in this paper. All data we used is already available to researchers and we plan to release our code for research purposes after publication.
2302.07253
Energy Transformer
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory. Attention is the power-house driving modern deep learning successes, but it lacks clear theoretical foundations. Energy-based models allow a principled approach to discriminative and generative tasks, but the design of the energy functional is not straightforward. At the same time, Dense Associative Memory models or Modern Hopfield Networks have a well-established theoretical foundation, and allow an intuitive design of the energy function. We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function, which is responsible for representing the relationships between the tokens. In this work, we introduce the theoretical foundations of ET, explore its empirical capabilities using the image completion task, and obtain strong quantitative results on the graph anomaly detection and graph classification tasks.
Benjamin Hoover, Yuchen Liang, Bao Pham, Rameswar Panda, Hendrik Strobelt, Duen Horng Chau, Mohammed J. Zaki, Dmitry Krotov
2023-02-14T18:51:22Z
http://arxiv.org/abs/2302.07253v2
# Energy Transformer ###### Abstract Transformers have become the de facto models of choice in machine learning, typically leading to impressive performance on many applications. At the same time, the architectural development in the transformer world is mostly driven by empirical findings, and the theoretical understanding of their architectural building blocks is rather limited. In contrast, Dense Associative Memory models or Modern Hopfield Networks have a well-established theoretical foundation, but have not yet demonstrated truly impressive practical results. We propose a transformer architecture that replaces the sequence of feedforward transformer blocks with a single large Associative Memory model. Our novel architecture, called Energy Transformer (or ET for short), has many of the familiar architectural primitives that are often used in the current generation of transformers. However, it is not identical to the existing architectures. The sequence of transformer layers in ET is purposely designed to minimize a specifically engineered energy function, which is responsible for representing the relationships between the tokens. As a consequence of this computational principle, the attention in ET is different from the conventional attention mechanism. In this work, we introduce the theoretical foundations of ET, explore its empirical capabilities using the image completion task, and obtain strong quantitative results on the graph anomaly detection task. ## 1 Introduction Transformers have become pervasive models in various domains of machine learning, including language, vision, and audio processing. Every transformer block uses four fundamental operations: attention, feed-forward multi-layer perceptron (MLP), residual connection, and layer normalization. Different variations of transformers result from combining these four operations in various ways. For instance, [1] propose to frontload additional attention operations and backload additional MLP layers in a sandwich-like instead of interleaved way, [2] prepend an MLP layer before the attention in each transformer block, [3] use neural architecture search methods to evolve even more sophisticated transformer blocks, and so on. Various methods exist to approximate the attention operation, multiple modifications of the norm operation, and connectivity of the block; see, for example, [4] for a taxonomy of different models. At present, however, the search for new transformer architectures is driven mostly by empirical evaluations, and the theoretical principles behind this growing list of architectural variations are missing. Additionally, the computational role of the four elements remains the subject of discussions. Originally, [5] emphasized attention as the most important part of the transformer block, arguing that the learnable long-range dependencies are more powerful than the local inductive biases of convolutional networks. On the other hand, more recent investigations [6] argue that the entire transformer block is important. The "correct" way to combine the four basic operations inside the block remains unclear, as does an understanding of the core computational function of the entire block and each of its four elements. In a seemingly unrelated line of work, Associative Memory models, also known as Hopfield Networks [7, 8], have been gaining popularity in the machine learning community thanks to theoretical advancements pertaining to their memory storage capacity and novel architectural modifications. 
Specifically, it has been shown that increasing the sharpness of the activation functions can lead to super-linear [9] and even exponential [10] memory storage capacity for these models, which is important for machine learning applications. This new class of Hopfield Networks is called Dense Associative Memories or Modern Hopfield Networks. [11] additionally describe how the attention mechanism in transformers is closely related to a special model of this family with the \(\mathrm{softmax}\) activation function. There are high-level conceptual similarities between transformers and Dense Associative Memories, since both architectures are designed for some form of denoising of the input. Transformers are typically pre-trained on a masked-token task, e.g., in the domain of Natural Language Processing (NLP) certain tokens in the sentence are masked and the model predicts the masked tokens. Dense Associative Memory models are designed for completing incomplete patterns. For instance, a pattern can be the concatenation of an image and its label, and the model can be trained to predict part of the input (the label), which is masked, given the query (the image). They can also be trained in a self-supervised way by predicting the occluded parts of the image, or denoising the image. There are also high-level differences between the two classes of models. Associative Memories are recurrent networks with a global energy function so that the network dynamics converges to a fixed point attractor state corresponding to a local minimum of the energy function. Transformers are typically not described as dynamical systems at all. Rather, they are thought of as feed-forward networks built of the four computational elements discussed above. Even if one thinks about them as dynamical systems with tied weights, e.g., [12], there is no reason to expect that their dynamics converge to a fixed point attractor (see the discussion in [13]). Additionally, a recent study [14] uses a form of Majorization-Minimization algorithms [15] to interpret the forward path in the transformer block as an optimization process. This interpretation requires imposing certain constraints on the operations inside the block, and attempting to find an energy function that describes the constrained block. We take a complementary approach by using intuition developed in Associative Memory models to _start_ with an energy function that is perfectly suited for the problem of interest. The optimization process and the resulting transformer block in our approach are a _consequence_ of this specifically chosen energy function. Concretely, we use the recent theoretical advancements and architectural developments in Dense Associative Memories to design an energy function tailored to route the information between the tokens. The goal of this energy function is to represent the relationships between the semantic contents of tokens describing a given data point (e.g., the relationships between the contents of the image patches in the vision domain, or relationships between the nodes' attributes in the graph domain). The core mathematical idea of our approach is that the sequence of these unusual transformer blocks, which we call the Energy Transformer (ET), minimizes this global energy function. Thus, the sequence of conventional transformer blocks is replaced with a single ET block, which iterates the token representations until they converge to a fixed point attractor state. 
In the image domain, this fixed point corresponds to the completed image with masked tokens replaced by plausible auto-completions of the occluded image patches. In the graph domain, the fixed point reveals the anomaly status of a given node given that node's neighbors; see Figure 1. The energy function in our ET block is designed with the goal to describe the _relationships between the tokens_. Examples of relationships in the image domain are: straight lines tend to continue through multiple patches, given a face with one eye being masked the network should inpaint the missing eye, etc. In the graph domain, these are the relationships between the attributes and the anomaly status of the connected nodes. The optimization procedure during the forward path of ET uses continuous time differential equations, and describes a gradient descent on the specifically chosen energy function. The core mathematical principle of the ET block - the existence of the global energy function - dictates strong constraints on the possible operations, the order in which these operations are executed in the forward path, and the symmetries of the weights in the network. As a corollary of this theoretical principle, the attention mechanism of ET is different from the attention mechanism commonly used in feed-forward transformers [5]. In the following section we introduce the global energy function for the ET block and explain the block's architecture. We then explore the inner workings of the ET network for image completion and qualitatively assess the learned representations (the model is trained on ImageNet-1k in a general pipeline similar to [16]). Finally, we turn to the graph anomaly detection task, which is conceptually similar to the image completion setting but has a record of strong published benchmarks against which our approach can be quantitatively compared. We show that the ET network stands in line with or outperforms the latest benchmarks. Although we focus on the computer vision and anomaly detection domains in this paper, we believe that the computational principles developed can be applied to other exciting domains (e.g., NLP, audio, and video) in which conventional transformers have shown promising results. Figure 1: Overview of the Energy Transformer (ET). Instead of a sequence of conventional transformer blocks, a single recurrent ET block is used. The operation of this block is dictated by the global energy function. The token representations are updated according to a continuous time differential equation with the time-discretized update step \(\alpha=dt/\tau\). On the image domain, images are split into non-overlapping patches that are linearly encoded into tokens with added learnable positional embeddings (POS). Some patches are randomly masked. These tokens are recurrently passed through ET, and each iteration reduces the energy of the set of tokens. The token representations at or near the fixed point are then decoded using the decoder network to obtain the reconstructed image. The network is trained by minimizing the mean squared error loss between the reconstructed image and the original image. On the graph domain, the same general pipeline is used. Each token represents a node, and each node has its own positional encoding. The token representations at or near the fixed point are used for the prediction of the anomaly status of each node. ## 2 Energy Transformer Block We now introduce the theoretical framework of the ET network. 
For clarity of presentation, we use language associated with the image domain. For the graph domain, one should think about "image patches" as nodes on the graph. The overall pipeline is similar to the Vision Transformer networks (ViTs) and is shown in Figure 1. An input image is split into non-overlapping patches. After passing these patches through the encoder and adding the positional information, the semantic content of each patch and its position is encoded in the token \(x_{iA}\). In the following, the indices \(i,j,k=1...D\) are used to denote the token vector's elements, and the indices \(A,B,C=1...N\) are used to enumerate the patches and their corresponding tokens.

It is helpful to think about each image patch as a physical particle, which has a complicated internal state described by a \(D\)-dimensional vector \(\mathbf{x_{A}}\). This internal state describes the identity of the particle (representing the pixels of each patch) and the particle's positional embedding (the patch's location within the image). The ET block is described by a continuous time differential equation that governs the interactions between these particles. Initially, at \(t=1\), the network is given a set containing two groups of particles corresponding to open and masked patches. The "open" particles know their identity and location in the image. The "masked" particles only know where in the image they are located, but are not provided the information about what image patch they represent. The goal of ET's non-linear dynamics is to allow the masked particles to find an identity consistent with their locations and the identities of open particles. This dynamical evolution is designed so that it minimizes a global energy function, and is guaranteed to arrive at a fixed point attractor state. The identities of the masked particles are considered to be revealed when the dynamical trajectory reaches the fixed point.

Thus, the central question is: how can we design an energy function that accurately captures the task that the Energy Transformer needs to solve? The masked particles' search for identity is guided by two pieces of information: the identities of the open particles, and the general knowledge about what patches are in principle possible in the space of all possible images. These two pieces of information are described by two contributions to the ET's energy function: the energy-based attention and the Hopfield Network, respectively, for reasons that will become clear in the next sections. Below we define each element of the ET block in the order they appear in Figure 2.

Figure 2: **Left**: Inside the ET block. The input token \(\mathbf{x}\) passes through a sequence of operations and gets updated to produce the output token \(\mathbf{x^{\prime}}\). The operations inside the ET block are carefully engineered so that the entire network has a global energy function, which decreases with time and is bounded from below. In contrast to conventional transformers, the ET-based analogs of the attention module and the feed-forward MLP module are applied in parallel as opposed to consecutively. **Center**: The cosine similarity between the learned position embedding of each patch and every other patch. In each cell, the brightest patch indicates the cell of consideration. **Right**: 100 selected memories stored in the HN memory matrix, visualized by the decoder as 16x16 RGB image patches. This visualization is unique to our model, as traditional Transformers cannot guarantee image representations in the learned weights.
#### Layer Norm

Each token is represented by a vector \(\mathbf{x}\in R^{D}\). At the same time, most of the operations inside the ET block are defined using a layer-normalized token representation \[g_{i}=\gamma\frac{x_{i}-\bar{x}}{\sqrt{\frac{1}{D}\sum\limits_{j}\left(x_{j}- \bar{x}\right)^{2}+\varepsilon}}+\delta_{i},\quad\text{ where }\quad\bar{x}=\frac{1}{D}\sum \limits_{k=1}^{D}x_{k} \tag{1}\] The scalar \(\gamma\) and the vector \(\delta_{i}\) are learnable parameters, and \(\varepsilon\) is a small regularization constant. Importantly, this operation can be viewed as an activation function for the neurons and can be defined as a partial derivative of the Lagrangian function \[L=D\gamma\sqrt{\frac{1}{D}\sum\limits_{j}\left(x_{j}-\bar{x}\right)^{2}+ \varepsilon}\ +\ \sum\limits_{j}\delta_{j}x_{j},\quad\text{ so that }\quad g_{i}=\frac{ \partial L}{\partial x_{i}} \tag{2}\] See [17, 18, 19] for a detailed discussion of this property.

#### Multi-Head Energy Attention

The first contribution to the ET's energy function is responsible for exchanging information between the particles (patches). Similarly to the conventional attention mechanism, each token generates a pair of queries and keys (ET does not have a separate value matrix; instead, the value matrix is a function of keys and queries). The goal of the energy-based attention is to evolve the tokens in such a way that the keys of the open patches are aligned with the queries of the masked patches in the internal space of the attention operation. Below we use the index \(\alpha=1...Y\) to denote elements of this internal space, and the index \(h=1...H\) to denote different heads of this operation. With these notations the energy-based attention operation is described by the following energy function: \[E^{\text{ATT}}=-\frac{1}{\beta}\sum\limits_{h}\sum\limits_{C}\text{log}\left( \sum\limits_{B\neq C}\text{exp}\left(\beta\sum\limits_{\alpha}K_{\alpha hB}\ Q_{ \alpha hC}\right)\right) \tag{3}\] where the queries and keys tensors are defined as \[K_{\alpha hB} =\sum\limits_{j}W_{\alpha hj}^{K}\ g_{jB},\quad\quad\quad\mathbf{ K}\in R^{Y\times H\times N} \tag{4}\] \[Q_{\alpha hC} =\sum\limits_{j}W_{\alpha hj}^{Q}\ g_{jC},\quad\quad\quad\mathbf{ Q}\in R^{Y\times H\times N}\] and the tensors \(\mathbf{W}^{K}\in R^{Y\times H\times D}\) and \(\mathbf{W}^{Q}\in R^{Y\times H\times D}\) are learnable parameters. From the computational perspective, each patch generates two representations: a query (given the position of the patch and its current content, where in the image should it look for the prompts on how to evolve in time?), and a key (given the current content of the patch and its position, what should be the contents of the patches that attend to it?). The log-sum energy function (3) is minimal when, for every patch in the image, its queries are aligned with the keys of a small number of other patches connected by the attention map. Different heads (index \(h\)) contribute to the energy additively.

#### Hopfield Network Module

The next step of the ET block, which we call the Hopfield Network (HN), is responsible for ensuring that the token representations are consistent with what one expects to see in realistic images. The energy of this sub-block is defined as: \[E^{\text{HN}}=-\frac{1}{2}\sum\limits_{B,\mu}r\Big{(}\sum\limits_{j}\xi_{\mu j }\ g_{jB}\Big{)}^{2},\quad\quad\xi\in R^{K\times D} \tag{5}\] where \(\xi_{\mu j}\) is a set of learnable weights (memories in the Hopfield Network), and \(r(\cdot)\) is an activation function.
Depending on the choice of the activation function, this step can be viewed either as a classical continuous Hopfield Network [8], if the activation function grows slowly (e.g., ReLU), or as a modern continuous Hopfield Network [9, 11, 17], if the activation function is sharply peaked around the memories (e.g., power or softmax). The HN sub-block is analogous to the feed-forward MLP step in the conventional transformer block, but requires the weights of the projection from the token space to the hidden neurons' space to be the same (transposed) matrix as the weights of the subsequent projection from the hidden space to the token space. Thus, the HN module here is an MLP with shared weights that is _applied recurrently_. The energy contribution of this block is low when the token representations are aligned with some rows of the matrix \(\xi\), which represent memories.

#### Dynamics of Token Updates

The forward path of the ET network is described by the continuous time differential equation, which minimizes the sum of the two energies described above \[\tau\frac{dx_{iA}}{dt}=-\frac{\partial E}{\partial g_{iA}},\quad\text{ where }\quad E=E^{\text{ATT}}+E^{\text{HN}} \tag{6}\] Here \(x_{iA}\) is the token representation (input and output from the ET block), and \(g_{iA}\) is its layer-normalized version. The first energy is low when each patch's queries are aligned with the keys of its neighbors. The second energy is low when each patch has content consistent with the general expectations about what an image patch should look like (memory slots of the matrix \(\xi\)). The dynamical system (6) finds a trade-off between these two desirable properties of each token's representation. For numerical evaluations, equation (6) is discretized in time. To demonstrate that the dynamical system (6) minimizes the energy, consider the temporal derivative \[\frac{dE}{dt}=\sum_{i,j,A}\frac{\partial E}{\partial g_{iA}}\ \frac{\partial g_{iA}}{ \partial x_{jA}}\ \frac{dx_{jA}}{dt}=-\frac{1}{\tau}\sum_{i,j,A}\frac{\partial E}{ \partial g_{iA}}\ M_{ij}^{A}\ \frac{\partial E}{\partial g_{jA}}\leq 0 \tag{7}\] The last inequality holds if the symmetric part of the matrix \[M_{ij}^{A}=\frac{\partial g_{iA}}{\partial x_{jA}}=\frac{\partial^{2}L}{ \partial x_{iA}\partial x_{jA}} \tag{8}\] is positive semi-definite (for each value of the index \(A\)). The Lagrangian (2) satisfies this condition.

#### Relationship to Modern Hopfield Networks and Conventional Attention

One of the theoretical contributions of our work is the design of the energy attention mechanism and the corresponding energy function (3). Although heavily inspired by prior work on Modern Hopfield Networks, our approach is fundamentally different from it. Our energy function (3) may look somewhat similar to the energy function of a continuous Hopfield Network with the \(\operatorname{softmax}\) activation function. The main difference, however, is that in order to use Modern Hopfield Networks recurrently (as opposed to applying their update rule only once) the keys must be constant parameters (called memories in the Hopfield language). In contrast, in our energy attention network the keys are dynamical variables that evolve in time together with the queries.
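To make the descent concrete, the following is a minimal, single-head numerical sketch of the energy function (3)+(5) and the discretized update (6). It is an illustration, not the released implementation (see the Reproducibility Statement): the parameter names (`Wk`, `Wq`, `xi`, `gamma`, `delta`), the single head, and the ReLU choice for \(r(\cdot)\) are assumptions of this sketch.

```python
# Minimal single-head sketch of the ET energy descent (eqs. 1, 3, 5, 6).
# Shapes: x is (N, D) tokens; Wk, Wq are (Y, D); xi is (M, D) memories.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def layernorm(x, gamma, delta, eps=1e-5):
    # Eq. (1); equivalently g = dL/dx for the Lagrangian in eq. (2).
    xm = x - x.mean(axis=-1, keepdims=True)
    return gamma * xm / jnp.sqrt((xm ** 2).mean(axis=-1, keepdims=True) + eps) + delta

def energy(g, p, beta=0.125):
    K, Q = g @ p["Wk"].T, g @ p["Wq"].T                           # eq. (4), (N, Y)
    A = beta * K @ Q.T                                            # A[B, C] = beta K_B . Q_C
    A = jnp.where(jnp.eye(A.shape[0], dtype=bool), -jnp.inf, A)   # exclude B = C
    e_att = -(1.0 / beta) * logsumexp(A, axis=0).sum()            # eq. (3)
    e_hn = -0.5 * (jax.nn.relu(g @ p["xi"].T) ** 2).sum()         # eq. (5) with r = ReLU
    return e_att + e_hn

def et_step(x, p, alpha=0.1):
    # Discretized eq. (6): x <- x - (dt / tau) * dE/dg, evaluated at g(x).
    g = layernorm(x, p["gamma"], p["delta"])
    return x - alpha * jax.grad(energy)(g, p)
```

For a small enough step size, iterating `et_step` decreases the energy, mirroring the continuous-time argument of equations (7)-(8).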
To emphasize this further, it is instructive to write explicitly the ET attention contribution to the update dynamics (6). It is given by (for clarity, assume only one head of attention): \[-\frac{\partial E^{\text{ATT}}}{\partial g_{iA}}=\sum_{C\neq A}\sum_{\alpha}\Big{[}W _{\alpha i}^{Q}\ K_{\alpha C}\ \underset{C}{\text{softmax}}\Big{(}\beta\sum_{\gamma}K_{\gamma C}\ Q_{ \gamma A}\Big{)}+W_{\alpha i}^{K}\ Q_{\alpha C}\ \underset{A}{\text{softmax}}\Big{(}\beta\sum_{\gamma}K_{\gamma A}\ Q_{ \gamma C}\Big{)}\Big{]}\] In both terms the \(\operatorname{softmax}\) normalization is done over the token index of the keys, which is indicated by the subscript under each \(\operatorname{softmax}\). The first term in this formula is the conventional attention mechanism [5] with the value matrix equal to \(\mathbf{V}=(\mathbf{W}^{Q})^{T}\mathbf{K}\), i.e., \(V_{iC}=\sum_{\alpha}W_{\alpha i}^{Q}K_{\alpha C}\). The second term is a new contribution that is missing from the original attention mechanism. The presence of this second term is crucial for making sure that the dynamical system (6) minimizes the energy function when applied recurrently. This second term is the main difference between our approach and Modern Hopfield Networks. The same difference applies to the other recent proposals [14].

Lastly, we want to emphasize that our ET block contains two different kinds of Hopfield Networks acting in parallel, see Figure 2. The first one is the energy attention module, which is inspired by, but not identical to, Modern Hopfield Networks. The second one is the "Hopfield Network" module, which can be either a classical or a modern Hopfield Network. These two should not be confused.

## 3 Qualitative Inspection of the ET framework on ImageNet

We have trained the ET network on the masked image completion task using the ImageNet-1k dataset [20]. Each image was broken into non-overlapping patches of 16x16 RGB pixels, which were projected with a single affine encoder into the token space. Half of these tokens were "masked", e.g., by replacing them with a learnable MASK token. A distinct learnable position encoding vector was added to each token. Our ET block then processes all tokens recurrently for \(T\) steps. The token representations after \(T\) steps are passed to a simple linear decoder (consisting of a layer norm and an affine transformation). The loss function is the standard MSE loss on the occluded patches. See more details on the implementation and the hyperparameters in Appendix A.

Examples of occluded/reconstructed images (unseen during training) are shown in Figure 3. In general, our model learns to perform the task very well, capturing the texture in dog fur (col 3) and understanding meaningful boundaries of objects. However, we observe that our single ET block struggles with some global structure, e.g., failing to capture both eyes of the white dog (col 4) and completing the brick pattern irregularly in order to extend the un-occluded borders (last col). We additionally inspect the positional encoding vectors associated with every token (Figure 2): the model learns a locality structure in the image plane that is very similar to that of the original ViT [16]. The position embedding of each image patch has learned high similarity values to other patches in the same row and column, with similarity values higher for neighboring tokens than for distant tokens.

Our network is unique compared to standard ViTs in that the iterative dynamics only _move_ tokens around in the same space, from which the final fixed point representation can be decoded back into the image plane.
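Since the decoder is just a layer norm followed by an affine map, any \(D\)-dimensional quantity that lives in the token space can be rendered as an image patch. A hypothetical sketch of this "decode anything" property follows; `decoder_ln`, `W_dec`, and `b_dec` are illustrative stand-ins for the trained decoder and are assumptions of this sketch.

```python
# Any D-dimensional token-space vector (e.g., a row of the HN memory
# matrix xi) can be passed through the trained decoder and reshaped into
# a 16x16 RGB patch. The decoder names below are illustrative stand-ins.
import jax.numpy as jnp

def decode_to_patch(vec, decoder_ln, W_dec, b_dec, patch=16):
    pixels = decoder_ln(vec) @ W_dec + b_dec   # (3 * patch * patch,) values
    return pixels.reshape(patch, patch, 3)

# e.g., the memories of Figure 5:
# patches = [decode_to_patch(m, decoder_ln, W_dec, b_dec) for m in xi]
```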
This functionality makes it possible to visualize essentially any _token representation_, _weight_, or _gradient of the energy_ directly in the image plane. This feature is highly desirable from the perspective of interpretability, since it makes it possible to track the updates performed by the network directly in the image plane as the computation unfolds in time. In Figure 2 this functionality is used for inspecting the learned weights of the HN module directly in the image plane. According to our theory, these weights should represent basis vectors in the space of all possible image patches. These learned representations look qualitatively similar to the representations typically found in networks trained on image datasets, e.g., [21].

We additionally visualize the gradients of the energy function (which are equal to the token updates, see Equation 6) of both the ATTN block and the HN block, see Figure 4. Early in time, almost all signal to the masked tokens comes from the ATTN block, which routes information from the open patches to the masked ones; no meaningful signal comes from the HN block to the masked patch dynamics. Later in time we observe a different phenomenon: almost all signal to the masked tokens comes from the HN module, while ATTN contributes a blurry and uninformative signal. Thus, the attention layer is crucial early in the network dynamics, feeding signal to masked patches from the visible patches, whereas the HN is crucial later in the dynamics, sharpening the masked patches as the model approaches the final reconstruction. All the qualitative findings presented in this section are in accord with the core computational strategy of the ET block as it was designed theoretically in Section 2.

Figure 3: Reconstruction examples of our Energy Transformer using images from the ImageNet-1k validation set. _Top row:_ input images where 50% of the patches are masked with the learned MASK token. _Middle row:_ output reconstructions after 12 time steps. _Bottom row:_ original images.

## 4 Graph Anomaly Detection

Having built the theoretical foundation of the ET network and gained an intuition about its inner workings through visualizations, we turn to quantitatively evaluating its performance on the graph anomaly detection problem, a task with plenty of strong and recently published baselines. Anomalies are outliers that significantly deviate in their properties from the majority of the samples. Detecting anomalies on graphs has broad applications in cybersecurity [22; 23], fraud detection [24; 25], and social networks [26]. Generally, there are three types of graph anomalies: node anomalies, edge anomalies, and subgraph anomalies. In this work, we focus on node anomaly detection in attributed graphs. This task is perfectly suited for the ET network, since each node's attributes can be encoded in the latent space and treated as a token (with added learnable positional embeddings). The network iterates these representations in time, and the outputs can be used for the node anomaly classification task.

Graph Convolutional Networks (GCN) [27] have been widely used for this task due to their capability of learning high-level representations of graph structures and node attributes [28; 29]. However, vanilla GCNs suffer from the over-smoothing problem [30]. In each layer of the forward pass, an outlier node aggregates information from its neighbors. This averaging makes the features of anomalies less distinguishable from the features of benign nodes.
Our approach does not suffer from this problem, since the routing of the information between the nodes is done through the energy-based attention, which uses a different aggregation procedure depending on whether or not the node is anomalous. In order to cast the anomaly detection task on graphs in the ET framework, consider an undirected graph with \(N\) nodes. Every node has a vector of attributes \(\mathbf{y}_{A}\in R^{F}\), where \(F\) is the number of node features. Additionally, every node has a binary label \(l_{A}\), indicating whether the node is benign or not. We focus on node anomalies and assume that all edges are trusted. The task is to predict the label of the node given the graph structure and the node's features. Since there are far more benign nodes in the graph than anomalous ones, anomaly detection can be regarded as an imbalanced node classification task.

First, the feature vectors for every node are converted to a token representation using a linear embedding \(\mathbf{E}\) and adding a learnable positional embedding \(\lambda_{A}\) \[\mathbf{x}_{A}^{t=1}=\mathbf{E}\mathbf{y}_{A}+\lambda_{A} \tag{9}\] where the superscript \(t=1\) indicates the time of the update of the ET dynamics. This token representation is iterated through the ET block for \(T\) iterations. When the retrieval dynamics becomes stable, we have the final representation for each node \(\mathbf{x}_{A}^{t=T}\) (or, more precisely, \(\mathbf{g}_{A}^{t=T}\), since the outputs are additionally passed through a layer norm operation after the final ET update). This output is concatenated with the initial (layer normalized) token to form the final output of the network \[\mathbf{g}_{A}^{\text{final}}=\mathbf{g}_{A}^{t=1}\mid\mid\mathbf{g}_{A}^{t=T} \tag{10}\] Following [31], the node representation \(\mathbf{g}_{A}^{\text{final}}\) is fed into an MLP with the sigmoid activation function to compute the anomaly probabilities \(p_{A}\). The weighted cross entropy \[\text{Loss}=-\sum_{A}\Big{[}\sigma\;l_{A}\log(p_{A})+(1-l_{A})\log(1-p_{A}) \Big{]} \tag{11}\] is used to train the whole network. Above, \(\sigma\) is the ratio of the regular labels (\(l_{A}\) = 0) to anomalous labels (\(l_{A}\) = 1).

Figure 4: Token representations and gradients are visualized using the decoder at different times during the dynamics. The Energy Attention (ATTN) block contributes general structure information to the masked patches at _earlier_ time steps, whereas the Hopfield Network (HN) significantly sharpens the quality of the masked patches at _later_ time steps.

### Experimental Evaluation

Four datasets are used for the graph anomaly detection experiments. The YelpChi dataset [32] aims at opinion spam detection in Yelp reviews. The Amazon dataset is used to detect anomalous users under the Musical Instruments category on _amazon.com_[33]. The T-Finance and T-Social datasets [31] are used for anomalous account detection in transactions and social networks, respectively. For these four datasets, the graph is treated as a homogeneous graph (i.e., all the edges are of the same type), and a feature vector is associated with each node. The task is to predict the label (anomaly status) of the nodes. For each dataset, either \(1\%\) or \(40\%\) of the nodes are used for training, and the remaining \(99\%\) or \(60\%\) are split \(1:2\) into validation and testing, see Appendix B for details. We compare with state-of-the-art approaches for graph anomaly detection, which include GraphConsis [34], CAREGNN [35], PC-GNN [36] and BWGNN [31].
Additionally, multi-layer perceptrons (MLP) and the Graph Transformer (GT) [37] are included in the baselines for completeness. Following previous work, the macro-F1 score (the unweighted mean of per-class F1 scores) and the Area Under the Curve (AUC) are used as the evaluation metrics on the test datasets [38]. See Appendix B for more details on the training protocols and the hyperparameter choices. The results are reported in Table 1. Our ET network demonstrates very strong results across all the datasets.

\begin{table}
\begin{tabular}{c c|c|c c c c c c c}
\hline\hline
**Metric** & **Dataset** & **Split** & **GraphConsis** & **CAREGNN** & **PC-GNN** & **BWGNN** & **MLP** & **GT** & **ET (Ours)** \\
\hline
\multirow{8}{*}{F1-Macro} & \multirow{2}{*}{Yelp} & \(1\%\) & \(56.8_{\pm 2.8}\) & \(62.1_{\pm 1.3}\) & \(59.8_{\pm 1.4}\) & \(61.1_{\pm 0.4}\) & \(53.9_{\pm 0.2}\) & \(61.7_{\pm 0.4}\) & \(\mathbf{63.0_{\pm 0.6}}\) \\
 & & \(40\%\) & \(58.7_{\pm 2.0}\) & \(63.3_{\pm 0.9}\) & \(63.0_{\pm 2.3}\) & \(71.0_{\pm 0.9}\) & \(57.5_{\pm 0.8}\) & \(68.7_{\pm 0.4}\) & \(\mathbf{71.5_{\pm 0.1}}\) \\
 & \multirow{2}{*}{Amazon} & \(1\%\) & \(68.5_{\pm 3.4}\) & \(68.7_{\pm 1.6}\) & \(79.8_{\pm 5.6}\) & \(\mathbf{90.9_{\pm 0.7}}\) & \(74.6_{\pm 1.2}\) & \(88.6_{\pm 0.5}\) & \(89.3_{\pm 0.7}\) \\
 & & \(40\%\) & \(75.1_{\pm 3.2}\) & \(86.3_{\pm 1.7}\) & \(89.5_{\pm 0.7}\) & \(92.2_{\pm 0.4}\) & \(79.1_{\pm 1.2}\) & \(91.7_{\pm 0.8}\) & \(\mathbf{92.8_{\pm 0.3}}\) \\
 & \multirow{2}{*}{T-Finance} & \(1\%\) & \(71.7\) & \(73.3\) & \(62.0\) & \(84.8\) & \(61.0\) & \(81.5\) & \(\mathbf{85.1_{\pm 1.0}}\) \\
 & & \(40\%\) & \(73.4\) & \(77.5\) & \(63.1\) & \(86.8\) & \(70.5\) & \(83.6\) & \(\mathbf{88.2_{\pm 1.0}}\) \\
 & \multirow{2}{*}{T-Social} & \(1\%\) & \(52.4\) & \(55.8\) & \(51.1\) & \(75.9\) & \(50.0\) & \(64.3\) & \(\mathbf{79.1_{\pm 0.7}}\) \\
 & & \(40\%\) & \(56.5\) & \(56.2\) & \(52.1\) & \(\mathbf{83.9}\) & \(50.3\) & \(68.2\) & \(83.5_{\pm 0.4}\) \\
\hline
\multirow{8}{*}{AUC} & \multirow{2}{*}{Yelp} & \(1\%\) & \(66.4_{\pm 3.4}\) & \(75.0_{\pm 3.8}\) & \(\mathbf{75.4_{\pm 0.9}}\) & \(72.0_{\pm 0.5}\) & \(59.8_{\pm 0.4}\) & \(72.5_{\pm 0.6}\) & \(73.2_{\pm 0.8}\) \\
 & & \(40\%\) & \(69.8_{\pm 3.0}\) & \(76.1_{\pm 2.9}\) & \(79.8_{\pm 0.1}\) & \(84.0_{\pm 0.9}\) & \(66.5_{\pm 1.0}\) & \(81.9_{\pm 0.5}\) & \(\mathbf{84.9_{\pm 0.3}}\) \\
 & \multirow{2}{*}{Amazon} & \(1\%\) & \(74.1_{\pm 3.5}\) & \(88.6_{\pm 3.5}\) & \(90.4_{\pm 2.0}\) & \(89.4_{\pm 0.3}\) & \(83.6_{\pm 1.7}\) & \(89.0_{\pm 1.2}\) & \(\mathbf{91.9_{\pm 1.0}}\) \\
 & & \(40\%\) & \(87.4_{\pm 3.3}\) & \(90.5_{\pm 1.6}\) & \(95.8_{\pm 0.1}\) & \(\mathbf{98.0_{\pm 0.4}}\) & \(89.8_{\pm 1.0}\) & \(95.4_{\pm 0.6}\) & \(97.3_{\pm 0.4}\) \\
 & \multirow{2}{*}{T-Finance} & \(1\%\) & \(90.2\) & \(90.5\) & \(90.7\) & \(91.1\) & \(82.9\) & \(90.0\) & \(\mathbf{92.8_{\pm 1.1}}\) \\
 & & \(40\%\) & \(91.4\) & \(92.1\) & \(91.2\) & \(94.3\) & \(87.1\) & \(88.2\) & \(\mathbf{95.0_{\pm 3.0}}\) \\
 & \multirow{2}{*}{T-Social} & \(1\%\) & \(65.2\) & \(71.2\) & \(59.8\) & \(88.0\) & \(56.3\) & \(81.4\) & \(\mathbf{91.9_{\pm 0.6}}\) \\
 & & \(40\%\) & \(71.2\) & \(71.8\) & \(68.4\) & \(\mathbf{95.2}\) & \(56.9\) & \(82.5\) & \(93.9_{\pm 0.2}\) \\
\hline\hline
\end{tabular}
\end{table} Table 1: Performance of all the methods on the Yelp, Amazon, T-Finance, and T-Social datasets with different training ratios. Following [31], the mean and standard deviation over 5 runs with different train/dev/test splits are reported for our method and the baselines (standard deviations are only included if they are available in the prior work). Best results are in **bold**. Our model is state of the art or near state of the art on every category.

## 5 Discussion and Conclusions

A lot of recent research has been dedicated to understanding the striking analogy between Hopfield Networks and the attention mechanism in transformers. At a high level, the main message of our
work is that the _entire_ transformer block (including the feed-forward MLP, layer normalization, and residual connections) can be viewed as a single large Hopfield Network, not just the attention alone. At a deeper level, we use recent advances in the field of Hopfield Networks to design a novel energy function that is tailored for dynamical information routing between the tokens and for the representation of a large number of relationships between those tokens. When used in the encoder-decoder setting, an appealing feature of our network is that any state, weight, or state update can be mapped directly into the data domain. This provides the possibility to inspect the inner workings of the whole network, contributing to its interpretability. The attention mechanism in our network contains an important extra term compared to conventional attention.

We have tested the ET network on the image completion task (qualitatively) and on node anomaly detection on graphs (quantitatively). The qualitative investigation reveals the close alignment between the theoretical design principles of our network and its empirical computation. The quantitative evaluation demonstrates strong results, which match or exceed the methods recently developed specifically for this task. Although we have only tested ET on two tasks, we intentionally picked two entirely different data domains (images and graphs). We believe that the proposed network will be useful for other tasks and domains and deserves a comprehensive investigation in line with other popular variants of transformers.

## 6 Reproducibility Statement

In the experiments presented in this paper we have taken several steps to ensure the reproducibility of our results. Namely, all the training protocols and implementation details, including the hyperparameter selection, are described in Appendix A, Appendix B, and Appendix F. The model and training code for images can be found here1. The code for image reconstruction is written in JAX [39] with a single entry script to launch the training process, with defaults set to the configuration that produced the models used in this paper. The training script sets a seed that can recreate the exact same training setup as ours, ensuring the exact same weight initialization and random data augmentation, provided the default arguments are used. No additional training data was used beyond the training set (ImageNet-1k 2012), which is publicly available. The code for Graph Anomaly Detection is written in PyTorch. Given the nature of the anomaly detection problem (random splits into training, validation, and testing sets), all our results on graphs are reported with means and standard deviations, describing the typical variability in the performance. The ET architecture can also be built with HAMUX [40], a JAX-based Deep Learning library that builds hierarchical associative memories (of which ET is a particular instance) using energy fundamentals similar to the Attention, LayerNorm, and Hopfield Network introduced in this paper. Like the original Energy Transformer code, HAMUX uses JAX's autograd to perform inference through the network.
Footnote 1: Energy Transformer: [https://github.com/bhoov/energy-transformer-jax](https://github.com/bhoov/energy-transformer-jax)

## Appendix A Details of Training on ImageNet

We trained the ET network on a masked-image completion task on the ImageNet-1k (IN1K) dataset. We treat all images in IN1K as images of shape \(224\times 224\) that are normalized according to standard IN1K practices (mean 0, variance 1 on the channel dimension) and use data augmentations provided by the popular timm library [41] (see Table 2). Following the conventional ViT pipeline [16], we split these images into non-overlapping patches of 16x16 RGB pixels, which are then projected with a single affine encoder into the token dimension \(D\), for a total of \(196\) encoded tokens per image. We proceed to randomly and uniformly assign 100 of these tokens as "occluded"; these are the only tokens considered by the loss function. "Occluded" tokens are designated as follows: of the \(100\) tokens, \(90\) are replaced with a learnable MASK token of dimension \(D\) and \(10\) are left untouched (which we find important for the HN to learn meaningful patch representations). To all tokens we then add a distinct learnable position bias. These tokens are then passed to our Energy Transformer block, which we recur for \(T\) steps (the "depth" of the model in conventional Transformers). At each step, the feedback signal (the sum of the energy gradients from our attention block and HN block) is subtracted from the original token representation with a scalar step size \(\alpha=\frac{dt}{\tau}\), which we treat as a non-learnable hyperparameter in our experiments. The token representations after \(T\) steps are passed to a simple linear decoder (consisting of a layer norm and an affine transformation) to project the representations back into the image plane. We then use the standard MSE loss between the original pixels and the reconstructed pixels for only the \(100\) occluded patches. We allow self-attention, as in the following formula for the energy of multi-headed attention: \[E^{\text{ATT}}=\sum_{h}-\frac{1}{\beta}\sum_{C}\text{log}\left(\sum_{B}\text {exp}\left(\beta\sum_{\alpha}K_{\alpha hB}\ Q_{\alpha hC}\right)\right) \tag{12}\]

We give details of our architectural choices in Table 2. In the main paper we present our Energy Transformer with a configuration similar to the standard base Transformer configuration (e.g., token dimension \(768\), \(12\) heads each with \(Y=64\), softmax's \(\beta=\frac{1}{\sqrt{Y}}\),...), with several considerations learned from the qualitative image evaluations:

* The \(\frac{dt}{\tau}\) (step size) of \(1\) implicitly used in the traditional transformer noticeably degrades our ability to smoothly descend the energy function. We find that a step size of \(0.1\) provides a smoother descent down the energy function and benefits the image reconstruction quality.
* We observe that our MSE loss must include some subset of un-occluded patches in order for the HN to learn meaningful filters.
* Values of \(\beta\) in the energy attention that are too high prevent our model from training. This is possibly due to vanishing gradients in our attention operation from a softmax operation that is too spiky.
* Without gradient clipping, our model fails to train at the learning rates we tried that are higher than 1e-4. We observe that gradient clipping not only helps our model train faster at the trainable learning rates, it also allows us to train at higher learning rates.
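As a concrete illustration of the occlusion scheme described above (90 tokens replaced, 10 untouched, loss on all 100), here is a minimal sketch; the function and argument names are ours, not the released code's.

```python
# Minimal sketch of the occlusion scheme: of 196 patch tokens, 100 are
# marked "occluded" for the loss; 90 of them are replaced by the learnable
# MASK token and 10 are left untouched. Names are illustrative.
import jax
import jax.numpy as jnp

def occlude(tokens, mask_token, key, n_occluded=100, n_replaced=90):
    # tokens: (196, D); mask_token: (D,)
    idx = jax.random.permutation(key, tokens.shape[0])[:n_occluded]
    occluded = tokens.at[idx[:n_replaced]].set(mask_token)
    loss_mask = jnp.zeros(tokens.shape[0], dtype=bool).at[idx].set(True)
    return occluded, loss_mask  # MSE is computed where loss_mask is True
```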
Our architecture and experiments for the image reconstruction task were written in JAX [39] using Flax [42]. This engineering choice means that our architecture definitions are quite lightweight, as we can define the desired energy function of the ET and use JAX's autograd to automatically calculate the desired update. All training code and software will be released upon the paper's acceptance.

### Exploring the Hopfield Memories

A distinctive aspect of our network is that any variable that has a vector index \(i\) of tokens can be mapped into the data domain by applying the decoder network to this variable. This makes it possible to inspect all the weights in the model. For instance, the concept of "memories" is crucial to understanding how Hopfield networks function. The memories within the HN module represent the building blocks of _all possible image patches_ in our data domain, where an encoded image patch is a superposition of a subset of memories. The complete set of memory vectors from the HN module is shown in Figure 5. The same analysis can be applied to the weights of the ET-attention module. In Figure 6, we show all the weights from this module mapped into the image plane.

### Bias Correlations

The relationships between our position embeddings exhibit similar behavior to the position correlations of the original ViT in that they are highly susceptible to the choices of the hyperparameters (Figure 10 of [16]). In particular, we consider the effect of weight decay and of the \(\beta\) parameter that serves as the inverse temperature of the attention operation (see Equation 3). The lower the temperature (i.e., the higher the value of \(\beta\)), the spikier the softmax distribution. By using a lower \(\beta\), we encourage the attention energy to distribute its positional embeddings across a wider range of patches in the model.

### Observing Energy Dynamics

We include as part of our supplemental submission a video showing the dynamics of one of our trained models through time, together with the corresponding energy at every step. From the video, it is clear that the image progressively improves in quality as the energy decreases, up to the point when the token representations are passed to the decoder. At the same time, we found it challenging to find the time constants and the number of training steps such that the energy of each image reaches the energy minimum (fixed point) when the loss is computed. For instance, for the image shown in the video, its quality actually starts to degrade when the dynamics is allowed to run for longer than what was used at training (while the energy is still decreasing). Additionally, when training models at greater depth, the gradients can vanish, since many recurrent applications of the ET block are necessary. We hope to comprehensively investigate these questions/limitations in future work. We include a screenshot of the video in Figure 8 and encourage readers to watch the full video for the dynamics.

\begin{table}
\begin{tabular}{r|c}
\hline
\multicolumn{2}{c}{**Training**} \\
\hline
batch\_size & 768 \\
epochs & 100 \\
lr & 5e-4 \\
warmup\_epochs & 2 \\
start \& end lr & 5e-7 \\
b1, b2 (ADAM) & 0.9, 0.99 \\
weight\_decay & 0.05 \\
grad\_clipping & 1. \\
\end{tabular}
\begin{tabular}{r|c}
\hline
\multicolumn{2}{c}{**Architecture**} \\
\hline
token\_dim & 768 \\
num\_heads & 12 \\
head\_dim \(Y\) & 64 \\
\(\beta\) & 1/8 \\
train\_betas & No \\
step size \(\alpha\) & 0.1 \\
depth & 12 \\
hidden\_dim HN & 3072 \\
bias in HN & None \\
bias in ATT & None \\
bias in LNORM & Yes \\
\end{tabular}
\begin{tabular}{r|c}
\hline
\multicolumn{2}{c}{**Data Augmentation**} \\
\hline
random\_erase & None \\
horizontal\_flip & 0.5 \\
vertical\_flip & 0 \\
color\_jitter & 0.4 \\
scale & (0.08, 1) \\
ratio & (3/4, 4/3) \\
auto\_augment & None \\
\end{tabular}
\end{table} Table 2: Hyperparameter, architecture, and data augmentation choices for ET-base during the ImageNet-1k masked training experiments. Data augmentations are listed as parameters passed to the equivalent timm dataloader functionality.

Figure 5: Visualizing 3025 randomly selected patch memories of the 3072 learned by the weight matrix in the Hopfield Network module (HN) of our model. These memories are vectors of the same dimension \(D\) as the patch tokens, stored as rows in the weight matrix \(\xi\). Each image patch is visualized using the model's trained decoder.

Figure 6: Visualizing the token dimension of the "key" and "query" matrices of the attention as image patches. Each head is represented as a cell on the \(4\times 3\) grid above. We use the trained decoder of our model to visualize each weight.

Figure 7: The cosine similarity between the position biases of patches when the ET-base model is trained under different hyperparameter choices for \(\beta\) (the inverse temperature of the attention energy) and the weight decay. Our ET sees a trend where smoother correlations are observed with smaller \(\beta\) and weight decay.

Figure 8: Screenshot of the accompanying video showcasing the energy dynamics of our model. _Top left:_ the energy of our model over time (with units \(\tau\)) for the dog image highlighted at the _top right_. Each cell of the dog represents the (masked input, reconstructed image at time \(t\), original image). We step through time using a time step of \(dt=0.1\) and record the total energy of our system on the image tokens as the black dot descending the blue energy curve. The dashed vertical black line shows the point in the energy curve where the representations were passed to the loss function when training the model, whereas the horizontal red dashed line shows the "fixed point" at the end of the simulated dynamics (in reality, the energy still descends slightly after that). _Bottom:_ We display 11 other images as (masked image, reconstructed image at time \(t\), original image) aligned with the time step. Each image's energy trajectory will be slightly different (not shown).

## Appendix B Details of ET Training on the Anomaly Detection Task

Graph anomaly detection refers to the process of detecting outliers that deviate significantly from the majority of the samples. Neural-network-based methods are very popular due to their capability of learning sophisticated data representations. DOMINANT [28] utilizes an auto-encoder framework, using a GCN as an encoder and two decoders for structural reconstruction and attribute reconstruction. ALARM [29] aggregates the encoder information from multiple views of the node attributes. Another study [43] proposes a novel loss function to train graph neural networks for anomaly-detectable node representations. In [44], generative adversarial learning is used to detect anomalous nodes, with a novel layer designed to learn the anomaly-aware node representation.
Recently, [31] pointed out that anomalies can lead to a "right-shift" of the spectral energy distribution: the spectral energy concentrates more on the high frequencies. They designed a filter that can better handle this phenomenon. We propose a new anomaly detection model from the perspective of Associative Memory (pattern matching), which does not have the over-smoothing problem often faced by GCNs, and has better model interpretability (outliers should be far from the common pattern). We also notice that Modern Hopfield Networks have been used before for node classification, link prediction, and graph coarsening tasks [45].

### Detailed Model Structure for Graph Anomaly Detection

First, we compute the features that are passed to our energy-based transformer. Each node's features \(\mathbf{y}_{A}\in R^{F}\) are mapped into the token space \(\mathbf{x}_{A}\in R^{D}\) using a linear projection \(\mathbf{E}\). Learnable positional embeddings \(\lambda_{A}\) are added to this token at \(t=1\), \[\mathbf{x}_{A}^{t=1}=\mathbf{E}\mathbf{y}_{A}+\lambda_{A} \tag{13}\] At each time step the input to the ET block is layer normalized: \[\mathbf{g}_{A}^{t}=\text{LayerNorm}(\mathbf{x}_{A}^{t}) \tag{14}\] Let \(\mathbf{W}^{Q}\in R^{Y\times H\times D}\) and \(\mathbf{W}^{K}\in R^{Y\times H\times D}\) be the query and key weight matrices, respectively. Here \(Y\) is the projection dimension in the attention operation, and \(H\) is the number of heads. We define \[\begin{split}K_{\alpha hB}&=\sum_{j}W_{\alpha hj}^ {K}\ g_{jB}\\ Q_{\alpha hC}&=\sum_{j}W_{\alpha hj}^{Q}\ g_{jC} \end{split} \tag{15}\] Letting \(h\) indicate the index of the head, we have \[\Delta x_{iA}^{t}=\sum_{C\in\mathcal{N}_{A}}\sum_{h,\alpha}\Big{[}W_{\alpha hi }^{Q}\ K_{\alpha hC}\ \omega_{CA}^{h}+W_{\alpha hi}^{K}\ Q_{\alpha hC}\ \omega_{AC}^{h}\Big{]}+\sum_{\mu}\xi_{\mu i}\ r \Big{(}\sum_{j}\ \xi_{\mu j}g_{jA}\Big{)} \tag{16}\] where \[\omega_{CA}^{h}=\underset{C}{\text{softmax}}\Big{(}\beta\sum_{\gamma}K_{\gamma hC} \ Q_{\gamma hA}\Big{)} \tag{17}\] Here \(\beta\) controls the temperature of the softmax, \(\mathcal{N}_{A}\) stands for the neighbors of node \(A\) (the set of all nodes connected to node \(A\)), and \(r\) is the ReLU function. The restriction of the attention operation to the neighborhood of a given node is similar to that used in Graph Attention Networks (GAT), see [46]. Finally, we have the residual connection \[\mathbf{x}_{A}^{t+1}=\mathbf{x}_{A}^{t}+\mathbf{\Delta x}_{A}^{t} \tag{18}\] Intuitively, the first term considers the influence (attention score) of the neighbor nodes with respect to the target node, the second term considers the influence of the target node with respect to each of its neighbors, and the third term is the contribution of the Hopfield Network module. It can be shown that the forward pass of our energy-based transformer layer minimizes the following energy function: \[E=-\frac{1}{\beta}\sum_{C}\sum_{h}\text{log}\left(\sum_{B\in\mathcal{N}_{C}} \text{exp}\left(\beta\sum_{\alpha}K_{\alpha hB}\ Q_{\alpha hC}\right)\right)- \frac{1}{2}\sum_{C,\mu}r\Big{(}\sum_{j}\xi_{\mu j}\ g_{jC}\Big{)}^{2} \tag{19}\] This energy function will decrease as the forward pass progresses until it reaches a local minimum. After \(T\) iterations, when the retrieval is stable, we have the final representation for each node \(\mathbf{g}_{A}^{\text{final}}\) as \[\mathbf{g}_{A}^{\text{final}}=\mathbf{g}_{A}^{t=1}\mid\mid\mathbf{g}_{A}^{t=T} \tag{20}\] where \(\mid\mid\) is the concatenation sign.
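A minimal single-head sketch of one update step, equations (14)-(18), is given below; `adj` is a boolean adjacency matrix encoding \(\mathcal{N}_{A}\), and all names (`Wk`, `Wq`, `xi`, `gamma`, `delta`) are illustrative assumptions of this sketch rather than the released code.

```python
# Single-head sketch of the graph ET update, eqs. (14)-(18). adj is a
# boolean (N, N) adjacency matrix (True where an edge exists); assumes
# every node has at least one neighbor. Wk, Wq: (Y, D); xi: (K, D).
import jax
import jax.numpy as jnp

def layernorm(x, gamma, delta, eps=1e-5):                      # eq. (14)
    xm = x - x.mean(axis=-1, keepdims=True)
    return gamma * xm / jnp.sqrt((xm ** 2).mean(axis=-1, keepdims=True) + eps) + delta

def graph_et_step(x, p, adj, beta):
    g = layernorm(x, p["gamma"], p["delta"])
    K, Q = g @ p["Wk"].T, g @ p["Wq"].T                        # eq. (15), (N, Y)
    S = jnp.where(adj, beta * K @ Q.T, -jnp.inf)               # S[C, A] = beta K_C . Q_A on edges
    w = jax.nn.softmax(S, axis=0)                              # omega_{CA}, normalized over C
    attn = (w.T @ K) @ p["Wq"] + (w @ Q) @ p["Wk"]             # the two attention terms of eq. (16)
    hn = jax.nn.relu(g @ p["xi"].T) @ p["xi"]                  # Hopfield term of eq. (16)
    return x + attn + hn                                       # residual update, eq. (18)
```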
Following [31], we treat anomaly detection as a semi-supervised learning task in this work. The node representation \(\mathbf{g}_{A}^{\text{final}}\) is fed to another MLP with the sigmoid function to compute the abnormal probability \(p_{A}\); the weighted log-likelihood is then used to train the network. The loss function is as follows: \[\text{Loss}=-\sum_{A}\Big{[}\sigma\;l_{A}\log(p_{A})+(1-l_{A})\log(1-p_{A}) \Big{]} \tag{21}\] where \(\sigma\) is the ratio of normal labels (\(l_{A}\) = 0) to anomaly labels (\(l_{A}\) = 1).

### Experimental Details

We train all models for 100 epochs using the Adam optimizer with a learning rate of 0.001, and use the model with the best Macro-F1 on the validation set for reporting the final results on the test set. Following [31], we use training ratios of 1% and 40%, respectively (we randomly select 1% or 40% of the nodes of the dataset to train the model, and use the remaining nodes for validation and testing). These remaining nodes are split 1:2 for validation:testing. The statistics of the datasets are listed in Table 3. Of the four datasets used in the experiments, the Amazon and Yelp datasets can be obtained from the DGL library, and T-Finance and T-Social can be obtained from [31]. We report the average performance of 5 runs on the test datasets. The hyperparameters of our model are tuned based on the validation set, selecting the best parameters within 100 epochs. To speed up the training process, for the large graph datasets T-Finance and T-Social, we sample a different subgraph to train on for each epoch (subgraphs have \(5\%\) of the nodes with respect to the whole training data). The hyperparameters include the number of hidden dimensions in the ET-attention \(Y\), the number of neurons \(K\) in the hidden layer within the Hopfield Network module, the number of time iterations \(T\), and the number of heads \(H\). The weights are learned via backpropagation, which includes the embedding projection \(\mathbf{E}\), the positional embedding \(\lambda_{A}\), the softmax inverse temperature parameter \(\beta\), and the ET-attention weight tensors \(\mathbf{W}^{Q}\) and \(\mathbf{W}^{K}\). The optimal hyperparameters used in Table 1 are reported in Table 4. The last row in that table summarizes the range of the hyperparameter search that was performed in our experiments. In general, we have observed that for the small datasets (Yelp, Amazon, T-Finance) one or two applications of our network are sufficient for achieving strong results; for the larger dataset (T-Social) more iterations (3) are necessary. For the even bigger dataset (ImageNet), our network needs about \(12\) iterations.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & \(Y\) & \(K\) & \(T\) & \(H\) \\
\hline
Amazon (40\%) & 128 & 640 & 1 & 2 \\
Amazon (1\%) & 64 & 128 & 1 & 1 \\
Yelp (40\%) & 128 & 256 & 1 & 1 \\
Yelp (1\%) & 128 & 256 & 1 & 1 \\
T-Finance (40\%) & 128 & 256 & 1 & 3 \\
T-Finance (1\%) & 128 & 256 & 1 & 1 \\
T-Social (40\%) & 128 & 256 & 3 & 3 \\
T-Social (1\%) & 128 & 256 & 3 & 3 \\
\hline
Range of hyperparameters & \{64, 128, 256\} & \{2Y, 3Y, 4Y, 5Y\} & \{1,2,3\} & \{1,2,3\} \\
\hline
\end{tabular}
\end{table} Table 4: Hyperparameter choices of our method on all the datasets.
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset & \(|V|\) & \(|E|\) & Anomaly(\%) & Features \\
\hline
Amazon & 11944 & 4398392 & 6.87\% & 25 \\
Yelp & 45954 & 3846979 & 14.53\% & 32 \\
T-Finance & 39357 & 2122543 & 4.58\% & 10 \\
T-Social & 5781065 & 73105508 & 3.01\% & 10 \\
\hline
\end{tabular}
\end{table} Table 3: Summary of all the datasets.

## Appendix C Notations

Table 5 lists all the notations used in this paper.

## Appendix D Ablation Study for Attention and Hopfield Network Modules

As described in the main text, the ET network consists of two modules processing the tokens in parallel: the attention module (ATT) and the Hopfield Network module (HN). The ATT module is responsible for routing the information between the tokens, while the HN module is responsible for reinforcing the token representations to be consistent with the general expectations about the particular data domain. It is interesting to explore the contribution of each of these two subnetworks to the task performed by the network. In this section we ablate the ET architecture by dropping each of the two subnetworks and measuring the impact of the ablation on the performance.

### On Graphs

The results on graphs are reported in Table 6. From this table it is clear that most of the computation on this task is performed by the ATT block, which pools the information about other tokens to the token of interest. When the ATT block is kept but the HN block is removed, the network loses \(1\%\) or less relative to the full ET (occasional improvements of the ablated model compared to the full ET are within the statistical error bars). In contrast, when the ATT module is removed and only the HN is kept, the ET network effectively turns into an MLP with shared weights that is recurrently applied. In this regime the network can only use the features of a given node for that node's anomalous status prediction. This results in a more significant drop in performance, which is about \(5\%\) on average.

### On Images

The ablation results for image reconstruction are shown in Table 7. Each experiment was trained using the same hyperparameter settings as shown in Table 2. After training the model on IN1K, we calculate the average MSE on the reconstructed masked tokens for the validation set (using the same \(50\)% masking ratio used for training) across \(10\) different random seeds for the masking. We make several conclusions from these ablation studies.
\begin{table}
\begin{tabular}{c l}
\hline\hline
**Notation** & **Description** \\
\hline
\(F\) & dimension of node's feature space \\
\(D\) & dimension of token space \\
\(N\) & number of tokens \\
\(Y\) & number of hidden dimensions in the attention \\
\(M\) & number of hidden dimensions in the Hopfield Network \\
\(T\) & number of recurrent time steps \\
\(H\) & number of heads \\
\(k_{h}\) & height of each image patch \\
\(k_{w}\) & width of each image patch \\
\(P\) & number of pixels per image patch (\(3\times k_{h}\times k_{w}\)) \\
\(\mathbf{y}_{A}\) & input feature vector of node \(A\) \\
\(\mathbf{x}_{A}\) & vector representation of token \(A\) \\
\(x_{iA}\) & each element of the vector representation of token \(A\) \\
\(\mathbf{g}_{A}\) & vector representation of token \(A\) after layer norm \\
\(g_{iA}\) & each element of the vector representation of token \(A\) after layer norm \\
\(\mathbf{K}\) & key tensor \\
\(\mathbf{Q}\) & query tensor \\
\(K_{\alpha hB}\) & each element of the key tensor \(\mathbf{K}\) \\
\(Q_{\alpha hC}\) & each element of the query tensor \(\mathbf{Q}\) \\
\(l_{A}\) & label of node \(A\) on the graph \\
\hline\hline
\end{tabular}
\end{table} Table 5: Notations used in the paper.

* We gain several insights regarding the use of "self-attention" in our ET (when a token patch query is allowed to consider itself as a key in the attention weights). When both self-attention and HN are present (ET-Full+Self), there is no noticeable benefit over ET-Full for a token to attend to itself. In fact, preventing the ATTN energy module from attending to itself slightly improves the performance. However, when the HN is removed (ET-NoHN*), we notice that allowing self-attention (ET-NoHN+Self) outperforms the version that prevents self-attention (ET-NoHN).
* On its own, allowing self-attention (ET-NoHN+Self) in the ATTN module performs nearly as well as the full ET at a fraction of the total parameters. However, MSE is a forgiving metric for blurry reconstructions. While ATTN can capture the global structure of the image quite well, it does so at the expense of image sharpness (Figure 4).
* As expected, removal of the ATTN energy module performs the worst, because the HN operates tokenwise and has no way to aggregate token information across the global image without ATTN.

Figure 9 shows qualitative reconstructions by our best performing model (ET-Full) for the IN1K validation images with the _largest_ errors, averaged across all masking seeds. Likewise, Figure 10 shows those with the _lowest_ errors across IN1K validation images and masking seeds. In general, image reconstructions that require ET to produce sharp, high-frequency, and high-contrast lines negatively impact the MSE performance.

\begin{table}
\begin{tabular}{l|c c c|c|c}
\hline\hline
**Model** & Has ATTN? & Allow self-attn? & Has HN? & **\# Params** (in ET block) & **MSE** \\
\hline
ET-Full **(Ours)** & ✓ & ✗ & ✓ & \(3.7\)M & \(\mathbf{0.306_{\pm 0.10}}\) \\
ET-Full+Self & ✓ & ✓ & ✓ & \(3.7\)M & \(0.312_{\pm 0.10}\) \\
ET-NoHN+Self & ✓ & ✓ & ✗ & \(1.3\)M & \(0.343_{\pm 0.10}\) \\
ET-NoHN & ✓ & ✗ & ✗ & \(1.3\)M & \(0.403_{\pm 0.11}\) \\
ET-NoATT & ✗ & ✗ & ✓ & \(2.5\)M & \(0.825_{\pm 0.20}\) \\
\hline\hline
\end{tabular}
\end{table} Table 7: Module ablation tests for the image reconstruction task, reporting the average IN1K validation MSE on masked tokens after 100 epochs. The reported number of parameters excludes the constant number of parameters in the affine encoder and decoder.
\begin{table}
\begin{tabular}{c c|c|c|c|c}
\hline\hline
**Metric** & **Dataset** & **Split** & **w/o HN** & **w/o ATT** & **full model (Ours)** \\
\hline
\multirow{8}{*}{F1-Macro} & \multirow{2}{*}{Yelp} & \(1\%\) & \(62.5_{\pm 0.3}\) & \(57.4_{\pm 0.5}\) (-5.6) & \(\mathbf{63.0_{\pm 0.6}}\) \\
 & & \(40\%\) & \(70.6_{\pm 0.5}\) & \(71.2_{\pm 0.7}\) & \(\mathbf{71.5_{\pm 0.1}}\) \\
 & \multirow{2}{*}{Amazon} & \(1\%\) & \(\mathbf{89.5_{\pm 0.9}}\) & \(87.4_{\pm 1.0}\) & \(89.3_{\pm 0.7}\) \\
 & & \(40\%\) & \(91.7_{\pm 0.5}\) (-1.1) & \(88.7_{\pm 0.3}\) (-4.1) & \(\mathbf{92.8_{\pm 0.3}}\) \\
 & \multirow{2}{*}{T-Finance} & \(1\%\) & \(84.7_{\pm 1.0}\) & \(80.3_{\pm 0.6}\) (-4.8) & \(\mathbf{85.1_{\pm 1.0}}\) \\
 & & \(40\%\) & \(87.4_{\pm 0.7}\) & \(82.3_{\pm 0.8}\) (-5.9) & \(\mathbf{88.2_{\pm 1.0}}\) \\
 & \multirow{2}{*}{T-Social} & \(1\%\) & \(\mathbf{79.8_{\pm 0.6}}\) & \(72.7_{\pm 1.0}\) (-6.4) & \(79.1_{\pm 0.7}\) \\
 & & \(40\%\) & \(82.9_{\pm 1.0}\) & \(78.6_{\pm 1.2}\) (-4.9) & \(\mathbf{83.5_{\pm 0.4}}\) \\
\hline
\multirow{8}{*}{AUC} & \multirow{2}{*}{Yelp} & \(1\%\) & \(72.9_{\pm 0.3}\) & \(67.4_{\pm 0.7}\) (-5.8) & \(\mathbf{73.2_{\pm 0.8}}\) \\
 & & \(40\%\) & \(83.5_{\pm 0.4}\) (-1.4) & \(83.1_{\pm 0.6}\) (-1.8) & \(\mathbf{84.9_{\pm 0.3}}\) \\
 & \multirow{2}{*}{Amazon} & \(1\%\) & \(90.7_{\pm 0.8}\) & \(89.8_{\pm 1.2}\) & \(\mathbf{91.9_{\pm 1.0}}\) \\
 & & \(40\%\) & \(96.8_{\pm 0.6}\) & \(95.7_{\pm 0.5}\) (-1.6) & \(\mathbf{97.3_{\pm 0.4}}\) \\
 & \multirow{2}{*}{T-Finance} & \(1\%\) & \(91.7_{\pm 1.2}\) & \(90.2_{\pm 0.8}\) (-2.6) & \(\mathbf{92.8_{\pm 1.1}}\) \\
 & & \(40\%\) & \(94.3_{\pm 2.6}\) & \(90.2_{\pm 2.1}\) & \(\mathbf{95.0_{\pm 3.0}}\) \\
 & \multirow{2}{*}{T-Social} & \(1\%\) & \(\mathbf{92.2_{\pm 0.8}}\) & \(86.4_{\pm 0.7}\) (-5.5) & \(91.9_{\pm 0.6}\) \\
 & & \(40\%\) & \(93.1_{\pm 0.8}\) & \(88.3_{\pm 1.3}\) (-5.6) & \(\mathbf{93.9_{\pm 0.2}}\) \\
\hline\hline
\end{tabular}
\end{table} Table 6: Ablation study with respect to the ATT block and the HN block. Best results are in **bold**.

## Appendix E ET for Heterogeneous Graphs

In this section, we show the performance of our ET model in the heterogeneous graph case (where more than one type of edge exists in the graph). We first run our ET model on different subgraphs (corresponding to different types of edges), and then aggregate the final representations using one of two methods: max pooling or concatenation. Max pooling picks the largest value (element-wise) across all the subgraph representations, and concatenation concatenates the representations obtained from the different subgraphs. Table 8 shows the comparison between these two variants of our model and BWGNN (heterogeneous case). ET performs better than heterogeneous BWGNN on Amazon, but loses to heterogeneous BWGNN on Yelp. Interestingly, BWGNN in the heterogeneous setting performs worse than BWGNN in the homogeneous setting on Amazon.
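The two aggregation variants can be sketched as follows; `run_et` is an illustrative stand-in for the homogeneous ET of Appendix B applied to one edge type's subgraph, and is an assumption of this sketch.

```python
# Sketch of the two aggregation variants for heterogeneous graphs: run the
# homogeneous ET once per edge type, then combine the per-node outputs.
import jax.numpy as jnp

def heterogeneous_reps(x, params, adjs, run_et, how="max"):
    reps = [run_et(x, params, adj) for adj in adjs]   # one (N, D) array per edge type
    if how == "max":
        return jnp.stack(reps).max(axis=0)            # element-wise max pooling, (N, D)
    return jnp.concatenate(reps, axis=-1)             # concatenation, (N, D * len(adjs))
```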
## Appendix F Graph Classification with ET

As mentioned earlier, GNNs have emerged as a popular approach to handle graph-related tasks due to their effective and automatic extraction of graph structural information. There have been attempts to bring Transformers into the graph domain, but typically only certain key modules of GNN variants, such as feature aggregation, are replaced by the softmax attention [47]. Moreover, the Transformer model has yet to achieve competitive performance on popular leaderboards for graph-level prediction compared to mainstream GNN variants [47]. In this section we explain how ET can be used for graph classification.

Figure 10: Reconstruction examples of the images with the best (lowest) MSE from the IN1k validation set. _Top row_: input images where 50% of the patches are masked with the learned MASK token. _Middle row_: all tokens reconstructed after 12 time steps. _Bottom row_: original images.

Figure 9: Reconstruction examples of the images with the worst MSE from the IN1k validation set. _Top row_: input images where 50% of the patches are masked with the learned MASK token. _Middle row_: all tokens reconstructed after 12 time steps. _Bottom row_: original images.

### Details of the Graph Classification ET Model

Given a graph \(G=(V,A)\), where \(V=\{v_{1},v_{2},\ldots,v_{N}\}\) and \(A\in\{0,1\}^{N\times N}\) is the adjacency matrix of the graph, each feature vector \(\mathbf{x}_{B}\in\mathbb{R}^{F}\) corresponding to node \(v_{B}\) is first projected to the token space \(\tilde{\mathbf{x}}_{B}\in\mathbb{R}^{D}\) via a linear embedding. Then, the CLS token \(\tilde{\mathbf{x}}_{\text{CLS}}\) is concatenated to the set of tokens, resulting in \(\tilde{X}\in\mathbb{R}^{(N+1)\times D}\), and the positional embedding \(\tilde{\lambda}\in\mathbb{R}^{(N+1)\times D}\) is added afterwards. To obtain the positional embedding \(\tilde{\lambda}\), the adjacency matrix \(A\) is first padded in the upper left corner with ones, resulting in \(\tilde{A}\in\{0,1\}^{(N+1)\times(N+1)}\). This particular step provides the CLS token full connectivity with all of the nodes in the graph. The top \(k\) smallest eigenvectors \(\lambda\in\mathbb{R}^{(N+1)\times k}\) are then obtained from the eigenvalue decomposition of the normalized Laplacian matrix \(\tilde{L}\) (with \(\tilde{D}\) the degree matrix of \(\tilde{A}\)) \[\tilde{L}=I-\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} \tag{22}\] and projected to the token space via a linear embedding to form the positional embedding \(\tilde{\lambda}\in\mathbb{R}^{(N+1)\times D}\). Meanwhile, the attention in ET is modified to take in the parameterized adjacency tensor \(\mathbf{\hat{A}}\in\mathbb{R}^{H\times(N+1)\times(N+1)}\), which acts as a weighted 'attention mask' that enables the model to consider the graph structural information. To obtain \(\mathbf{\hat{A}}\), a 2D convolutional layer with \(H\) filters (equal to the number of heads in the attention block), 'SAME' padding, and a stride of 1 is applied to the outer product of \(\tilde{X}\) with itself. The result is then multiplied with \(\tilde{A}\) element-wise (denoted by \(\odot\)) via broadcasting.
\[\mathbf{\hat{A}}=\text{Conv2D}(\tilde{X}\otimes\tilde{X})\odot\tilde{A} \tag{23}\]

Altogether, the resulting energy equation is

\[E^{\text{ATT}}=-\frac{1}{\beta}\sum_{h}\sum_{C}\log\left(\sum_{B\neq C}\exp\left(\beta\,\hat{A}_{hBC}\sum_{\alpha}K_{\alpha hB}\,Q_{\alpha hC}\right)\right) \tag{24}\]

Additionally, in this implementation, the overall model consists of \(S\) vertically stacked ET blocks, where every block uses the same depth \(T\) but has its own LayerNorm. As before, the token representation \(\tilde{X}^{t,\ell}\) at each dynamic step \(t\) of a block \(\ell\) is first layer-normalized; keep in mind that \(\tilde{X}=\tilde{X}^{t=1,\,\ell=1}\) is the initial token representation. \[\mathbf{g}^{t,\ell}=\text{LayerNorm}(\tilde{X}^{t,\,\ell}) \tag{25}\] Following the dynamic equation (6), we inject a small amount of noise \(\epsilon^{t,\,\ell}\), drawn from a zero-mean normal distribution with standard deviation \(0.02\), into the gradient of the energy function to produce \(\tilde{X}^{t+1,\,\ell}\), the new token representation of block \(\ell\). The purpose of this noise injection is to 'robustify' the model and help push it towards a local minimum of the energy function. \[\tilde{X}^{t+1,\,\ell}=\tilde{X}^{t,\,\ell}-\alpha(\nabla_{\mathbf{g}}E^{t,\,\ell}+\epsilon^{t,\,\ell}) \tag{26}\] Once stability is reached in the retrieval dynamics of block \(\ell\), the final representation \(X^{t=T,\,\ell}\) is passed on to the next block \(\ell+1\) and the whole process is repeated. When the final token representation \(\hat{Y}=\tilde{X}^{t=T,\,\ell=S}\) is computed by the last block \(S\), the resulting CLS token \(\hat{Y}_{0}\in\mathbb{R}^{D^{\prime}}\) extracted from \(\hat{Y}\) is used as the predictor for the current graph \(G\).

\begin{table} \begin{tabular}{c c|c|c c|c} \hline \hline **Metric** & **Datasets** & **Split** & **MaxPool** & **Concatenation** & **BWGNN (Heterogeneous)** \\ \hline \multirow{4}{*}{F1-Macro} & \multirow{2}{*}{Yelp} & \(1\%\) & \(61.5_{\pm 0.4}\) & \(61.7_{\pm 0.2}\) & \(\mathbf{67.02}\) \\ & & \(40\%\) & \(70.7_{\pm 0.6}\) & \(71.1_{\pm 0.1}\) & \(\mathbf{76.96}\) \\ \cline{2-6} & \multirow{2}{*}{Amazon} & \(1\%\) & \(\mathbf{88.3_{\pm 1.6}}\) & \(87.4_{\pm 1.4}\) & \(83.83\) \\ & & \(40\%\) & \(\mathbf{92.1_{\pm 0.2}}\) & \(91.8_{\pm 0.2}\) & \(91.72\) \\ \hline \multirow{4}{*}{AUC} & \multirow{2}{*}{Yelp} & \(1\%\) & \(72.2_{\pm 0.5}\) & \(72.8_{\pm 0.2}\) & \(\mathbf{76.95}\) \\ & & \(40\%\) & \(84.3_{\pm 0.4}\) & \(85.1_{\pm 0.1}\) & \(\mathbf{90.54}\) \\ \cline{2-6} & \multirow{2}{*}{Amazon} & \(1\%\) & \(\mathbf{90.8_{\pm 1.4}}\) & \(90.6_{\pm 1.0}\) & \(86.59\) \\ & & \(40\%\) & \(\mathbf{97.5_{\pm 0.1}}\) & \(97.2_{\pm 0.6}\) & \(97.42\) \\ \hline \hline \end{tabular} \end{table} Table 8: ET for anomaly detection in the heterogeneous graph setting. Best results are in **bold**.
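As a concrete illustration of the positional-embedding construction described above, the following is a minimal sketch following Eq. (22) and the smallest-eigenvector convention of the text; the projection kernel `W_pos` and all names are our own illustrative choices, not part of the released code.

```python
import jax.numpy as jnp

def graph_positional_embedding(A, W_pos, k=15):
    """Eigenvector positional embedding for a graph with adjacency A.

    A: [N, N] binary adjacency matrix; W_pos: [k, D] projection kernel.
    Returns a [N+1, D] embedding whose row 0 belongs to the CLS token,
    which gains full connectivity through the padding step.
    """
    N = A.shape[0]
    # Pad the upper-left corner with ones so the CLS token connects to all nodes.
    A_pad = jnp.ones((N + 1, N + 1)).at[1:, 1:].set(A)
    deg = jnp.maximum(A_pad.sum(axis=-1), 1.0)     # guard against isolated nodes
    D_inv_sqrt = jnp.diag(deg ** -0.5)
    L = D_inv_sqrt @ A_pad @ D_inv_sqrt            # Eq. (22)
    _, eigvecs = jnp.linalg.eigh(L)                # eigenvalues in ascending order
    lam = eigvecs[:, :k]                           # k smallest eigenvectors
    return lam @ W_pos                             # project to token space
```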
### Experimental Evaluation

Eight datasets from the TUDataset [48] collection are used for experimentation. NCI1, NCI109, MUTAG, MUTAGENICITY, and FRANKENSTEIN are a common class of graph datasets consisting of small molecules, with class labels representing toxicity or biological activity determined in drug discovery projects [48]. Meanwhile, DD, ENZYMES, and PROTEINS represent macromolecules. The task for both DD and PROTEINS is to classify whether a protein is an enzyme. Lastly, for ENZYMES, the task is to assign enzymes to one of the 6 EC top-level classes, which reflect the catalyzed chemical reaction [48].

We compare ET with the current state-of-the-art approaches for the mentioned datasets, which include WKPI-kmeans [49], WKPI-kcenters [49], DSGCN [50], HGP-SL [51], U2GNN [52], and the Evolution Graph Classifier (EvoG) [53]. Additionally, approaches that are close to the baselines are included to further contrast the performance of our model. Following the 10-fold cross-validation protocol delineated in [48], the accuracy score is used as the evaluation metric and reported in Table 9. In general, we observe that the modified ET demonstrates consistent performance across all datasets, close to the current state-of-the-art approaches. Based on the statistics of the experimental datasets in Table 10, ET performs extremely well when trained on large graph datasets (e.g., MUTAGENICITY and FRANKENSTEIN). However, with respect to the NCI1, NCI109, and DD datasets, there remains an open question as to which graph characteristics (e.g., assortativity and density) impair the performance of the model.

### Experimental Details

In the graph domain, it is common to concatenate the feature vectors of all graphs in a batch together. However, in order for ET to work, we form the batch dimension by separating the feature vectors of the graphs in a given batch, and we pad all graphs to the largest node count so that they share the same number of nodes. Additionally, we set a limit on the number of nodes (400) to prevent out-of-memory errors. Specifically, if a graph has a node count exceeding the limit, only that many nodes are used, so a portion of the graph structural information is ignored as a result. However, it is worth mentioning that such graphs are rare in the experimental datasets.
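A minimal sketch of this padding scheme, with hypothetical function and variable names of our own:

```python
import numpy as np

NODE_LIMIT = 400  # cap used to avoid out-of-memory errors

def pad_graph_batch(feature_list, adj_list):
    """Pad (or truncate) every graph in a batch to a shared node count.

    feature_list: list of [n_i, F] arrays; adj_list: list of [n_i, n_i] arrays.
    Returns dense [batch, N, F] features and [batch, N, N] adjacencies, where
    N is the largest node count in the batch, capped at NODE_LIMIT.
    """
    N = min(max(f.shape[0] for f in feature_list), NODE_LIMIT)
    feats, adjs = [], []
    for X, A in zip(feature_list, adj_list):
        n = min(X.shape[0], N)  # graphs above the cap lose their extra nodes
        Xp = np.zeros((N, X.shape[1]), dtype=X.dtype)
        Xp[:n] = X[:n]
        Ap = np.zeros((N, N), dtype=A.dtype)
        Ap[:n, :n] = A[:n, :n]
        feats.append(Xp)
        adjs.append(Ap)
    return np.stack(feats), np.stack(adjs)
```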
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{8}{c}{**Dataset**} \\ \cline{2-9} & PROTEINS & NCI1 & NCI109 & DD & ENZYMES & MUTAG & MUTAGENICITY & FRANKENSTEIN \\ \hline \hline **WERP** & \(75.5_{\pm 0.4}\) \(\star\)(6.4) & \(\mathbf{87.5_{\pm 0.6}}\) & \(85.9_{\pm 0.4}\) \(\star\)(1.5) & \(82.0_{\pm 0.4}\) \(\star\)(13.7) & - & \(85.8_{\pm 0.2}\) \(\star\)(14.2) & - & - \\ **WKPI** & \(75.2_{\pm 0.4}\) \(\star\)(9.7) & \(84.5_{\pm 0.5}\) \(\star\)(3.0) & \(\mathbf{87.4_{\pm 0.8}}\) & \(80.3_{\pm 0.4}\) \(\star\)(15.4) & - & \(88.3_{\pm 0.4}\) \(\star\)(11.7) & - & - \\ **Spec-GN** & - & \(84.5_{\pm 0.1}\) \(\star\)(2.7) & \(83.6_{\pm 0.8}\) \(\star\)(3.8) & - & \(72.5_{\pm 0.3}\) \(\star\)(5.9) & - & - & - \\ **Norm-GN** & - & \(84.9_{\pm 1.7}\) \(\star\)(2.6) & \(83.5_{\pm 1.3}\) \(\star\)(3.9) & - & \(73.3_{\pm 0.8}\) \(\star\)(5.1) & - & - & - \\ **GWL-WL** & \(75.5_{\pm 0.6}\) \(\star\)(9.1) & - & - & - & \(71.3_{\pm 1.1}\) \(\star\)(7.1) & - & - & \(78.9_{\pm 0.3}\) \\ **HGP-SL** & \(\mathbf{84.9_{\pm 1.6}}\) & \(78.5_{\pm 0.8}\) \(\star\)(9.1) & \(80.7_{\pm 1.2}\) \(\star\)(6.7) & \(81.0_{\pm 1.3}\) \(\star\)(14.7) & \(68.5_{\pm 0.2}\) \(\star\)(9.6) & - & \(82.2_{\pm 0.6}\) & - \\ **DSGCN** & \(77.3_{\pm 0.4}\) \(\star\)(7.6) & - & - & - & \(78.4_{\pm 0.6}\) & - & - & - \\ **U2GNN** & \(80.0_{\pm 0.2}\) \(\star\)(4.9) & - & - & \(\mathbf{80.7_{\pm 1.8}}\) & - & \(88.5_{\pm 1.7}\) \(\star\)(11.5) & - & - \\ **NDP** & \(73.4_{\pm 1.3}\) \(\star\)(11.5) & \(74.2_{\pm 1.7}\) \(\star\)(13.3) & - & \(72.8_{\pm 0.4}\) \(\star\)(22.9) & \(44.5_{\pm 1.7}\) \(\star\)(34.9) & \(87.9_{\pm 1.2}\) \(\star\)(12.1) & \(77.9_{\pm 1.4}\) \(\star\)(4.3) & - \\ **ASAP** & \(74.2_{\pm 0.8}\) \(\star\)(10.7) & \(71.5_{\pm 0.6}\) \(\star\)(16.0) & \(70.1_{\pm 0.4}\) \(\star\)(17.3) & \(76.9_{\pm 0.7}\) \(\star\)(18.8) & - & - & - & \(66.6_{\pm 0.6}\) \(\star\)(12.6) \\ **EvoG** & - & - & - & - & \(55.7\) \(\star\)(22.7) & \(\mathbf{100.0}\) & - & - \\ \hline **ET (Ours)** & \(78.9_{\pm 0.8}\) \(\star\)(6.0) & \(83.6_{\pm 0.2}\) \(\star\)(4.0) & \(82.4_{\pm 0.2}\) \(\star\)(5.0) & \(84.6_{\pm 0.3}\) \(\star\)(11.1) & \(\mathbf{93.8_{\pm 0.4}}\) \(\Delta\)(15.4) & \(99.7\) \(\star\)(0.3) & \(\mathbf{88.3_{\pm 0.2}}\) \(\Delta\)(6.2) & \(\mathbf{99.9_{\pm 0.6}}\) \(\Delta\)(21.0) \\ \hline \hline \end{tabular} \end{table} Table 9: Performance of all the methods on the graph classification datasets, where additional features are used if they exist. Following [48], the mean and standard deviation obtained from 10 runs of 10-fold cross validation are reported for our method and the baselines (standard deviations are only included if they are available in the prior work). If an entry is unavailable in the prior literature it is denoted by '-'; best results are in **bold**. The performance difference between non-baseline approaches (including ours) and the baseline (specified by the gray cell in each column) is indicated by \(\star\)(decrease) and \(\Delta\)(increase) along with the value.

We train all models for 200 epochs using AdamW [54]. The best model is selected based on its performance in the 10-fold cross-validation process delineated in [48]. Since the task is classification, all models are trained with the cross-entropy loss function with no temperature. Additionally, a cosine-annealing learning rate scheduler with warm-up is utilized, where the initial and peak learning rates are set to 5e-8 and 0.001, respectively. The number of warm-up steps is set to 30 epochs, while the batch size is 64 for all datasets with the exception of the DD dataset, which requires a batch size of 256 for a faster training time. The whole experiment is implemented using the JAX [39], Flax [42], Jraph [55], and PyTorch Geometric [56] packages. Lastly, we report the average performance of 10 runs of the 10-fold cross-validation process with random seeding. The hyperparameters of our model are tuned based on cross-validation performance, with model selection within 100 epochs. The optimal hyperparameters are reported in Table 11 and the statistics of the used datasets are reported in Table 10.

## Appendix G Parameter comparison

The energy function enforces symmetries in our model, which means ET has fewer parameters than its ViT counterparts. In particular, ET has no "Value Matrix" \(\mathbf{W}^{V}\) in the attention mechanism, and the HN module has only one of the two matrices in the standard MLP of the traditional Transformer block. We report these differences in Table 12. We take the ET configuration used in this paper, which has an architecture fully comparable to the original ViT-base [16] with patch_size=16, and report the number of parameters against ViT-base and an "ALBERT" version of ViT [13] where a single ViT block is shared across layers. We saw no benefit when including biases in ET, so we also exclude the biases from the total parameter count in the configuration of ViT and ALBERT-ViT. We report both the total number of parameters and the number of parameters per Transformer block.
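The per-block counts in Table 12 can be reproduced with a few lines of arithmetic. The sketch below assumes the standard ViT-Base dimensions (token dimension 768, 12 heads of dimension 64, MLP ratio 4) and, for ET, a Hopfield kernel \(\xi\in\mathbb{R}^{M\times D}\) with \(M=4D\), i.e., the single retained MLP matrix; these dimensional assumptions are ours, chosen because they reproduce the reported numbers exactly.

```python
D, H, Y = 768, 12, 64       # token dim, heads, head dim (ViT-Base)
M = 4 * D                   # hidden dim of the MLP / Hopfield memory

vit_block = 4 * D * D + 2 * D * M   # W^Q, W^K, W^V, W^O + two MLP matrices
et_block = 2 * Y * H * D + M * D    # W^Q, W^K only + single HN kernel xi

print(vit_block)  # 7077888  ~ 7.08M, as in Table 12
print(et_block)   # 3538944  ~ 3.54M, as in Table 12
```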
\begin{table} \begin{tabular}{|c|c c c c c|} \hline **Dataset** & **Graphs** & **Avg. Nodes** & **Avg. Edges** & **Node Attr** & **Classes** \\ \hline MUTAG & 188 & 17.93 & 19.79 & 7 & 2 \\ ENZYMES & 600 & 32.63 & 62.14 & 18 + 3 & 6 \\ PROTEINS & 1113 & 39.06 & 72.82 & 0 + 4 & 2 \\ DD & 1178 & 284.32 & 715.66 & 89 & 2 \\ NCI1 & 4110 & 29.87 & 32.30 & 37 & 2 \\ NCI109 & 4127 & 29.68 & 32.13 & 38 & 2 \\ MUTAGENICITY & 4337 & 30.32 & 30.77 & 14 & 2 \\ FRANKENSTEIN & 4337 & 16.90 & 17.88 & 780 & 2 \\ \hline \end{tabular} \end{table} Table 10: Graph classification dataset statistics and properties (additional node attributes are indicated by ‘+’ where they exist).

\begin{table} \begin{tabular}{l|c} \hline **Training** & \\ \hline batch\_size & 64 \\ batch\_size\({}_{DD}\) & 256 \\ epochs & 200 \\ lr & 1e-3 \\ warmup\_epochs & 30 \\ start \& end lr & 5e-7 \\ b1, b2 (ADAM) & 0.9, 0.99 \\ weight\_decay & 0.05 \\ grad\_clipping & None \\ \hline \end{tabular} \begin{tabular}{l|c} \hline **Architecture** & \\ \hline token\_dim & 128 \\ num\_heads & 12 \\ head\_dim & 64 \\ \(\beta\) & \(\frac{1}{\sqrt{64}}\) \\ train\_betas & No \\ step size \(\alpha\) & 0.01 \\ k eigenvalues & 15 \\ depth & 2 \\ block\_size & 2 \\ kernel\_size & [3, 3] \\ dilation\_size & [1, 1] \\ hidden\_dim HN & 512 \\ bias in HN & None \\ bias in ATT & None \\ bias in LNORM & Yes \\ no. of params & 530,084 \\ no. of params per block & 262,929 \\ \hline \end{tabular} \end{table} Table 11: Hyperparameter and architecture choices for ET during the graph classification training experiments.

## Appendix H Formal Algorithm

We describe the algorithm for the training and inference of ET in Algorithm 1, assuming backpropagation through time using SGD. Symbols not defined in the algorithm itself are reported in Table 5. We define the **Infer** function to operate independently over each item in a batch.
\begin{table} \begin{tabular}{l|c c|c c} \hline \hline **Model** & \multicolumn{2}{c|}{**NParams**} & \multicolumn{2}{c}{**NParams (per block)**} \\ \hline ViT-Base & \(86.28\)M & \(\triangledown 0.00\%\) & \(7.08\)M & \(\triangledown 0.00\%\) \\ ALBERT-ViT-Base & \(8.41\)M & \(\triangledown 90.25\%\) & \(7.08\)M & \(\triangledown 0.00\%\) \\ ET & \(\mathbf{4.87}\)M & \(\triangledown\mathbf{94.36\%}\) & \(\mathbf{3.54}\)M & \(\triangledown\mathbf{50.02\%}\) \\ \hline \hline \end{tabular} \end{table} Table 12: Comparison between the number of parameters in a standard ViT, an ALBERT version of ViT where standard Transformer blocks are shared across layers, and our ET. Comparison is done assuming no biases in any operation.

```
HyperParameters
    α : Energy descent step size
    ε : Learning rate
    p : Token mask probability
    b : Batch size

Parameters
    W^K ∈ R^{Y×H×D}, W^Q ∈ R^{Y×H×D} : Key, Query kernels of the Energy Attention
    ξ ∈ R^{M×D}                      : Kernel of the Hopfield Network
    γ_norm ∈ R, δ_norm ∈ R^D         : Scale, bias of LayerNorm
    mask ∈ R^D                       : Mask token
    δ_pos ∈ R^{N×D}                  : Position bias, added to each token
    W_enc ∈ R^{P×D}, δ_enc ∈ R^D     : Kernel, bias of the affine Encoder
    W_dec ∈ R^{D×P}, δ_dec ∈ R^D     : Kernel, bias of the affine Decoder

Infer
    Inputs: corrupted image tokens X̃ ∈ R^{N×D}
    Add position biases: X̃ ← X̃ + δ_pos
    for timesteps t = 1, ..., T do
        Normalize each token:       g ← LayerNorm(X̃; γ_norm, δ_norm)        g ∈ R^{N×D}
        Calculate energy of tokens: E ← EnergyTransformer(g; W^K, W^Q, ξ)    E ∈ R
        X̃ ← X̃ − α ∇_g E
    return X̃

Train
    Inputs: dataset S_train with elements X ∈ R^{channels×height×width}
    Initialize:
        W^K, W^Q, ξ, mask, W_enc, W_dec, δ_pos ~ N(0, 0.02)
        Set other biases to zero: δ_enc, δ_dec, δ_norm ← 0
        Set LayerNorm scale to one: γ_norm ← 1
    for epoch n = 1, ..., N_epoch do
        S_epoch ← S_train
        for batch B ⊂ S_epoch, B ∈ R^{b×channels×height×width} do
            Convert images into non-overlapping patches: B_patch ← Patchify(B)              B_patch ∈ R^{b×N×P}
            Embed image patches into tokens:             X ← Encode(B_patch; W_enc, δ_enc)  X ∈ R^{b×N×D}
            Replace image tokens randomly by mask:       X̃, I_mask ← Mask(X; mask, p)       X̃ ∈ R^{b×N×D}, I_mask ∈ {0,1}^{b×N}
            Reconstruct tokens with ET:                  X̃ ← Infer(X̃)
            Decode tokens:                               B̂_patch ← Decode(X̃[I_mask]; W_dec, δ_dec)
            Calculate MSE loss on corrupted tokens:      L ← Mean(|B̂_patch[I_mask] − B_patch[I_mask]|²)   L ∈ R
            params ← params − ε ∇_params L
            S_epoch ← S_epoch \ B
    return params
```
**Algorithm 1**: Training and inference pseudocode of ET for the image reconstruction task
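As a complement to Algorithm 1, here is a minimal runnable JAX sketch of the **Infer** loop and one SGD step. The quadratic `energy` is a toy stand-in for the actual ET energy (abbreviated EnergyTransformer in Algorithm 1), and all shapes and initializations are illustrative only.

```python
import jax
import jax.numpy as jnp

def layer_norm(X, gamma, delta):
    # Per-token LayerNorm with learned scale and bias, as in Algorithm 1.
    mu = X.mean(-1, keepdims=True)
    return gamma * (X - mu) / jnp.sqrt(X.var(-1, keepdims=True) + 1e-5) + delta

def energy(params, g):
    # Toy scalar energy standing in for E^ATT + E^HN; NOT the real ET energy.
    return 0.5 * jnp.sum((g @ params["xi"].T) ** 2)

def infer(params, X_tilde, T=12, alpha=0.01):
    # Infer: add position biases, then descend the energy for T time steps.
    X = X_tilde + params["pos"]
    def step(X, _):
        g = layer_norm(X, params["gamma"], params["delta"])
        return X - alpha * jax.grad(energy, argnums=1)(params, g), None
    return jax.lax.scan(step, X, None, length=T)[0]

def loss_fn(params, X, mask_idx):
    # Mask tokens, reconstruct with ET, take the MSE on the masked positions.
    X_tilde = X.at[mask_idx].set(params["mask"])
    return jnp.mean((infer(params, X_tilde)[mask_idx] - X[mask_idx]) ** 2)

@jax.jit
def sgd_step(params, X, mask_idx, eps=1e-3):
    # One step of plain SGD, backpropagating through the T inference steps.
    grads = jax.grad(loss_fn)(params, X, mask_idx)
    return jax.tree_util.tree_map(lambda w, g: w - eps * g, params, grads)

key = jax.random.PRNGKey(0)
N, D, M = 196, 128, 512
params = {"pos": jnp.zeros((N, D)), "gamma": jnp.ones(()), "delta": jnp.zeros(D),
          "mask": jnp.zeros(D), "xi": 0.02 * jax.random.normal(key, (M, D))}
X = jax.random.normal(key, (N, D))
params = sgd_step(params, X, jnp.arange(0, N, 2))  # mask 50% of the tokens
```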
2301.08430
Heat kernel on Ricci shrinkers (II)
This paper is the sequel to our study of heat kernels on Ricci shrinkers in \cite{LW20}. In this paper, we improve many estimates in \cite{LW20} and extend the recent progress of Bamler \cite{Bam20a}. In particular, we drop the compactness and curvature boundedness assumptions and show that the theory of $\mathbb{F}$-convergence holds naturally on any Ricci flows induced by Ricci shrinkers.
Yu Li, Bing Wang
2023-01-20T06:06:38Z
http://arxiv.org/abs/2301.08430v1
# Heat kernel on Ricci shrinkers (II) ###### Abstract This paper is the sequel to our study of heat kernels on Ricci shrinkers in [28]. In this paper, we improve many estimates in [28] and extend the recent progress of Bamler [2]. In particular, we drop the compactness and curvature boundedness assumptions and show that the theory of \(\mathbb{F}\)-convergence holds naturally on any Ricci flows induced by Ricci shrinkers. ###### Contents * 1 Introduction * 2 Preliminaries * 3 Variance, \(H\)-center and Nash entropy * 4 Heat kernel estimates * 5 Parabolic neighborhoods and \(\epsilon\)-regularity theorem * 6 Metric flows and \(\mathbb{F}\)-convergence * A Integral estimates for the conjugate heat kernel ## 1 Introduction A Ricci shrinker \((M^{n},g,f)\) is a complete Riemannian manifold \((M^{n},g)\) coupled with a smooth function \(f\) satisfying \[Rc+\operatorname{Hess}f=\frac{1}{2}g, \tag{1.1}\] where the potential function \(f\) is normalized so that \[R+|\nabla f|^{2}=f. \tag{1.2}\] The study of shrinkers is an essential component of analyzing the singularity formation of solutions to the Ricci flow. For a Ricci flow with a type-I curvature bound, it is proved by Enders-Müller-Topping [19] that any proper blow-up sequence converges smoothly to a nontrivial Ricci shrinker. For general compact Ricci flows, it is proved by Bamler [4] that the finite-time singularities are modeled on Ricci shrinkers containing a singular set, by using the theory of \(\mathbb{F}\)-convergence developed in [2, 3, 4]. In dimension \(2\) or \(3\), all Ricci shrinkers are completely classified (cf. [21][31][33][8], etc.). We know that \(\mathbb{R}^{2},S^{2},\mathbb{R}^{3},S^{3},S^{2}\times\mathbb{R}\) and their quotients form the complete list. In particular, all low-dimensional Ricci shrinkers have bounded and nonnegative sectional curvature. In higher dimensions, the complete classification of Ricci shrinkers seems out of reach. Subject to an additional curvature positivity assumption, some partial classifications are also known (cf. [31][29][26][25][32]). In general, it is still unclear if there exists any Ricci shrinker with unbounded sectional curvature. On the one hand, Ricci shrinkers can be regarded as critical metrics which generalize the classical positive Einstein manifolds. On the other hand, for any Ricci shrinker, there exists an associated self-similar solution to the Ricci flow (cf. Section 2). As a special class of Ricci flows, Ricci shrinkers have many of the known important properties of compact Ricci flows. In [28], many fundamental analytic tools, including the maximum principle, the optimal log-Sobolev constant estimate, the no-local-collapsing theorems, etc., are established for Ricci flows associated with Ricci shrinkers. Many heat kernel estimates, including the differential Harnack inequality and the pseudolocality theorem, are also obtained in [28]. In this paper, we continue to focus on Ricci flows associated with Ricci shrinkers without any curvature assumption. Based on the techniques and results in [28] and [2], we further obtain results including a Gaussian bound on the heat kernel, no-local-collapsing and non-expanding estimates, an \(\epsilon\)-regularity theorem, etc. All those results are stronger than their counterparts in [28]. It is important to note that we make no assumption on the curvature at all. If bounded curvature is assumed on a non-compact manifold, then many results are already known (cf. [5][10]). The pointed Nash entropy (cf.
Definition 3.18) plays an important role in [2]; it first appears in [34, Section 5] and is systematically studied in [23]. In [28], we use Perelman's entropy \(\boldsymbol{\mu}\) (see (2.1)) to characterize the optimal log-Sobolev constant and the local non-collapsing. The pointed Nash entropy, which is always bounded below by \(\boldsymbol{\mu}\), has the advantage of being local in the space-time of Ricci flows. In [23], it is proved that the Nash entropy is Lipschitz. Moreover, the oscillation of the Nash entropy in the spacetime is established in [2]. We generalize the Nash entropy and its fundamental estimates to the Ricci flows associated with Ricci shrinkers; see Theorem 3.23 and Corollary 4.19. **Theorem 1.1**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Then for any \(s<t<1\), the Nash entropy \(\mathcal{N}_{s}^{*}(x,t):=\mathcal{N}_{(x,t)}(t-s)\) is smooth and satisfies the following estimates on \(M\times(s,1)\)._ \[|\nabla\mathcal{N}_{s}^{*}|\leq\sqrt{\frac{n}{2(t-s)}}\quad\text{and}\quad-\frac{n}{2(t-s)}\leq\Box\mathcal{N}_{s}^{*}\leq 0. \tag{1.3}\] The proof of (1.3) is based on an integral estimate of the heat kernel (cf. Theorem 3.16), which was initially obtained in [2] for compact Ricci flows. A key application of Theorem 1.1 is to estimate the local oscillation of the Nash entropy (cf. Corollary 3.25). Using the Nash entropy properties and the heat kernel estimates, we obtain the improved no-local-collapsing and non-expanding results (cf. Theorem 4.2 and Theorem 4.7). **Theorem 1.2** (**No-local-collapsing and non-expanding**).: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. For any \(x\in M\) and \(t<1\),_ \[|B_{t}(x,r)|_{t}\leq C(n)\exp\left(\mathcal{N}_{x,t}(r^{2})\right)r^{n}\] _and if \(R\leq r^{-2}\) on \(B_{t}(x,r)\), then_ \[|B_{t}(x,r)|_{t}\geq c(n)\exp\left(\mathcal{N}_{x,t}(r^{2})\right)r^{n}.\] Since \(\mathcal{N}_{x,t}(r^{2})\leq 0\) (cf. Corollary 3.22), it is clear that Theorem 1.2 provides a uniform volume ratio upper bound, independent of the base point and radius. This clearly improves the known volume upper bounds (cf. [9], [22], [24]). On the other hand, as \(\boldsymbol{\mu}\leq\mathcal{N}_{x,t}(r^{2})\), the non-collapsing estimate in Theorem 1.2 also improves the one in [28]. An important concept introduced in [2] is the \(H\)-center (cf. Definition 3.11). Roughly speaking, an \(H\)-center is a point around which the conjugate heat kernel is concentrated (cf. Proposition 3.13). In addition, for any two conjugate heat kernels, the \(W_{1}\)-Wasserstein distance between them can be roughly measured by the distance between their \(H\)-centers. We prove the existence of an \(H_{n}\)-center, where \(H_{n}=(n-1)\pi^{2}/2+4\), for any conjugate heat kernel, by generalizing the monotonicity of the variance obtained in [2] to our setting (cf. Proposition 3.10, Proposition 3.12). By using these concepts and related techniques, we have the following heat kernel estimates (cf. Theorem 4.9, Theorem 4.15, Theorem 4.16). **Theorem 1.3** (**Heat kernel estimates**).: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker satisfying \(\boldsymbol{\mu}\geq-A\).
Then the following properties hold._ * _There exists a constant_ \(C=C(n,A,\delta)>1\) _such that_ \[\frac{C^{-1}}{(t-s)^{\frac{n}{2}}}\exp\left(-\frac{d_{s}^{2}(x,y)}{C^{-1}(t-s)}\right)\leq H(x,t,y,s)\leq\frac{C}{(t-s)^{\frac{n}{2}}}\exp\left(-\frac{d_{s}^{2}(x,y)}{C(t-s)}\right)\] (1.4) _for any_ \(-\delta^{-1}\leq s<t\leq 1-\delta\) _and_ \(d_{t}(p,x)\leq\delta^{-1}\)_._ * _For any_ \(\epsilon>0\)_, there exists a constant_ \(C=C(n,\epsilon)>0\) _such that_ \[H(x,t,y,s)\leq\frac{C\exp\left(-\mathcal{N}_{(x,t)}(t-s)\right)}{(t-s)^{\frac{n}{2}}}\exp\left(-\frac{d_{s}^{2}(z,y)}{(4+\epsilon)(t-s)}\right),\] (1.5) _for any_ \(s<t<1\) _and any_ \(H_{n}\)_-center_ \((z,s)\) _of_ \((x,t)\)_._ Here, the point \(p\) is a minimum point of \(f\), regarded as the base point of the Ricci shrinker. The Gaussian estimate (1.5) was previously proved in [2] for compact Ricci flows, with \(4+\epsilon\) replaced by \(8+\epsilon\). Our proof uses an iteration argument by showing that if (1.5) fails, one can find a new spacetime point \((x^{\prime},t^{\prime})\) with an \(H_{n}\)-center \((z^{\prime},s)\) such that \(H(x^{\prime},t^{\prime},y,s)\) has a worse bound than (1.5). Eventually, we will arrive at a contradiction if \(t^{\prime}\) is sufficiently close to \(s\). The proof in our case is more involved since we do not have a global heat kernel bound such as (1.5) when \(t\) is close to \(s\), which is always available for compact Ricci flows. Therefore, in the iteration process, we must carefully choose the sequence of spacetime points so that they all fall into a compact set. Then the contradiction comes from the local heat kernel estimate (cf. Corollary 4.12), since locally the scalar curvature is bounded. Once we have the estimate (1.5), the upper bound in (1.4) follows since the distance between \((x,s)\) and \((z,s)\) can be well-controlled. Moreover, the lower bound in (1.4) is already contained in [28] in a different guise. We also obtain the gradient estimate of the heat kernel; see Theorem 4.6. By the monotonicity of the \(W_{1}\)-Wasserstein distance between two conjugate heat kernels (cf. Proposition 3.7), it is natural to consider new \(P^{*}\)-parabolic neighborhoods in the spacetime of the Ricci flow, as pointed out in [2] (cf. Definition 5.1, (5.1), (5.2)). Comparing the \(P^{*}\)-parabolic neighborhoods with the conventional ones, we have the following result (cf. Proposition 5.7, Proposition 5.9, Proposition 5.10, Proposition 5.13). **Theorem 1.4**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker satisfying \(\boldsymbol{\mu}\geq-A\). Then the following properties hold._ 1. _Given_ \(\delta\in(0,1)\)_,_ \(t_{0}\in(-\infty,1)\)_,_ \(T^{\pm}\geq 0\) _and_ \(S\geq 0\)_, there exists a constant_ \(C=C(n,A,\delta)>1\) _such that_ \[P^{*}(p,t_{0};S,-T^{-},T^{+})\subset Q(p,t_{0};\sqrt{2}S+C,-T^{-},T^{+})\subset P^{*}(p,t_{0};\sqrt{2}S+2C,-T^{-},T^{+})\] _provided that_ \(t_{0}-T^{-}\geq-\delta^{-1}\)_._ 2. _There exists a constant_ \(\rho=\rho(n,A)\in(0,1)\) _satisfying the following property. Given_ \((x_{0},t_{0})\in M\times(-\infty,1)\) _and_ \(r>0\)_, suppose that_ \(R\leq r^{-2}\) _on_ \(P(x_{0},t_{0};r,-(\rho r)^{2},(\rho r)^{2})\)_. Then_ \[P(x_{0},t_{0};\rho r)\subset P^{*}(x_{0},t_{0};r,-(\rho r)^{2},(\rho r)^{2})\quad\text{and}\quad P^{*}(x_{0},t_{0};\rho r)\subset P(x_{0},t_{0};r,-(\rho r)^{2},(\rho r)^{2}).\] The proof of Theorem 1.4 involves distance distortion estimates, both globally with respect to \(p\) and locally under scalar curvature control.
Moreover, one needs to locate the \(H_{n}\)-center of \((p,t_{0})\) or \((x_{0},t_{0})\). Notice that, if \(t_{0}+T^{+}<1\), Theorem 1.4 implies that any \(P^{*}(p,t_{0};S,-T^{-},T^{+})\) is precompact, i.e., its closure is compact. By using the estimates of the Nash entropy and \(P^{*}\)-neighborhoods, one has the following \(\epsilon\)-regularity theorem (cf. Theorem 5.15), which is proved in [2] for compact Ricci flows. Here, \(r_{\rm Rm}\) is the spacetime curvature radius, whose definition can be found in Definition 5.14. **Theorem 1.5** (\(\epsilon\)**-regularity theorem)**.: _There exists a small constant \(\epsilon=\epsilon(n)>0\) satisfying the following property._ _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Given \((x,t)\in M\times(-\infty,1)\) and \(r>0\), suppose that \(\mathcal{N}_{(x,t)}(r^{2})\geq-\epsilon\); then \(r_{\rm Rm}(x,t)\geq\epsilon r\)._ Based on the results and techniques generalized (or slightly improved) from [2], we can generalize the theory of metric flows and \(\mathbb{F}\)-convergence in [3] and [4] from compact Ricci flows to the setting of Ricci flows associated with or induced by Ricci shrinkers (cf. Definition 2.2). In particular, a pointed Ricci flow induced by a Ricci shrinker can be regarded as a metric flow pair in the sense of [3, Definition 5.1]. Therefore, any sequence of pointed Ricci flows induced by Ricci shrinkers with \(\boldsymbol{\mu}\geq-A\) will, after taking a subsequence, \(\mathbb{F}\)-converge to a limit metric flow admitting concrete structure theorems (cf. Theorem 6.10, Theorem 6.12). As an application of the theory of \(\mathbb{F}\)-convergence, we have the following two-sided pseudolocality theorem. Notice that the forward pseudolocality theorem is proved in [28, Theorem 24]. Thus, to obtain a two-sided pseudolocality, it suffices to obtain a backward pseudolocality, which is proved in Theorem 6.21. **Theorem 1.6** (**Two-sided pseudolocality theorem**).: _For any \(\alpha>0\), there is an \(\epsilon(n,\alpha)>0\) such that the following holds._ _Let \((M^{n},g(t))_{t<1}\) be a Ricci flow associated with a Ricci shrinker. Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), if_ \[|B_{t_{0}}(x_{0},r)|\geq\alpha r^{n},\qquad|Rm|\leq(\alpha r)^{-2}\quad on\quad B_{t_{0}}(x_{0},r),\] _then_ \[|Rm|\leq(\epsilon r)^{-2}\quad on\quad P(x_{0},t_{0};(1-\alpha)r,-(\epsilon r)^{2},(\epsilon r)^{2}).\] Another application of the \(\mathbb{F}\)-convergence is the following integral estimate of the curvature, which originates from the estimate of Cheeger-Naber [12]. For more details, see Theorem 6.23 and Corollary 6.24. **Theorem 1.7**.: _Let \((M^{n},g,f,p)\) be a Ricci shrinker in \(\mathcal{M}(A)\). Then_ \[\int_{d(p,\cdot)\leq r}|Rm|^{2-\epsilon}\,dV\leq\int_{d(p,\cdot)\leq r}r_{\rm Rm}^{-4+2\epsilon}\,dV\leq Cr^{n+2\epsilon-2},\] \[\int_{d(p,\cdot)\geq 1}\frac{|Rm|^{2-\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq\int_{d(p,\cdot)\geq 1}\frac{r_{\rm Rm}^{-4+2\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq C\] _for any \(\epsilon>0\) and \(r\geq 1\), where \(r_{\rm Rm}(\cdot)=r_{\rm Rm}(\cdot,0)\) and \(C=C(n,A,\epsilon)\)._ This paper is organized as follows. Section 2 discusses some properties of Ricci flows associated with Ricci shrinkers, including the existence of cutoff functions and maximum principles. In Section 3, we prove some estimates and properties regarding the variance, \(H\)-centers and the Nash entropy. Section 4 focuses on various estimates of the heat kernel.
In Section 5, we prove the theorems about the parabolic neighborhoods and the \(\epsilon\)-regularity theorem. In the last section, we generalize the theory of \(\mathbb{F}\)-convergence to our setting and prove some applications to Ricci shrinkers. **Acknowledgements**: Yu Li is supported by YSBR-001, NSFC-12201597 and research funds from USTC (University of Science and Technology of China) and CAS (Chinese Academy of Sciences). Bing Wang is supported by YSBR-001, NSFC-11971452, NSFC-12026251 and a research fund from USTC. ## 2 Preliminaries For any Ricci shrinker \((M^{n},g,f)\), the scalar curvature \(R\geq 0\) by [14, Corollary 2.5], and \(R>0\) unless \((M^{n},g)\) is isometric to the Gaussian soliton \((\mathbb{R}^{n},g_{E})\), by the strong maximum principle. With the normalization (1.2), the entropy is defined as \[\mathbf{\mu}=\mathbf{\mu}(g)\coloneqq\log\int\frac{e^{-f}}{(4\pi)^{n/2}}\,dV. \tag{2.1}\] Notice that \(e^{\mathbf{\mu}}\) is uniformly comparable to the volume of the unit ball \(B(p,1)\) (cf. [24, Lemma 2.5]). It was proved in [28, Theorem 1] that \(\mathbf{\mu}\) is the optimal log-Sobolev constant for all scales. Following [27], we have the following definition. **Definition 2.1**.: _Let \(\mathcal{M}(A)\) be the family of Ricci shrinkers \((M^{n},g,f)\) satisfying_ \[\mathbf{\mu}(g)\geq-A. \tag{2.2}\] Recall that any Ricci shrinker \((M^{n},g,f)\) can be considered as a self-similar solution to the Ricci flow. Let \(\psi^{t}:M\to M\) be the family of diffeomorphisms generated by \(\dfrac{1}{1-t}\nabla f\) with \(\psi^{0}=\mathrm{id}\). In other words, we have \[\frac{\partial}{\partial t}\psi^{t}(x)=\frac{1}{1-t}\nabla f\left(\psi^{t}(x)\right). \tag{2.3}\] It is well known that the rescaled pull-back metric \(g(t)\coloneqq(1-t)(\psi^{t})^{*}g\) satisfies the Ricci flow equation for any \(-\infty<t<1\), \[\partial_{t}g=-2Rc_{g(t)}\quad\text{and}\quad g(0)=g. \tag{2.4}\] Sometimes we encounter Ricci flows obtained from the above Ricci flow through time-shifting and rescaling. We emphasize whether there exist extra time-shifting and rescaling by the following definition. **Definition 2.2**.: _For any Ricci shrinker, the Ricci flow defined in (2.4) is called the **associated Ricci flow**. Any Ricci flow obtained from the associated Ricci flow via time-shifting and rescaling is called a **Ricci flow induced by a Ricci shrinker**._ Clearly, a Ricci flow associated with a Ricci shrinker must be a Ricci flow induced by a Ricci shrinker, but the reverse is generally not true. In this article, if not mentioned explicitly, the associated Ricci flow is the default one. Next, we recall that the function \(F(x,t):=\bar{\tau}f(x,t)\), where \(\bar{\tau}:=1-t\) and \(f(x,t):=(\psi^{t})^{*}f\), satisfies the following identities (see [28, Section 2] for proofs): \[\partial_{t}f=|\nabla f|^{2}, \tag{2.5}\] \[\partial_{t}F=-\bar{\tau}R,\] (2.6) \[\bar{\tau}R+\Delta F=\frac{n}{2},\] (2.7) \[\bar{\tau}^{2}R+|\nabla F|^{2}=F,\] (2.8) \[\Box F=-\frac{n}{2}. \tag{2.9}\] Here, we define \(\Box:=\partial_{t}-\Delta_{t}\) and have dropped the subscript \(g(t)\) or \(t\) if there is no confusion. Based on these identities, we have the following estimates of \(F\).
**Lemma 2.3** (Lemma 1 of [28]).: _There exists a point \(p\in M\) where \(F\) attains its infimum and \(F\) satisfies the quadratic growth estimate_ \[\frac{1}{4}\left(d_{t}(x,p)-5n\bar{\tau}-4\right)_{+}^{2}\leq F(x,t)\leq\frac{ 1}{4}\left(d_{t}(x,p)+\sqrt{2n\bar{\tau}}\right)^{2} \tag{2.10}\] _for all \(x\in M\) and \(t<1\), where \(a_{+}:=\max\{0,a\}\)._ Thanks to (2.10), \(F(x,t)\) grows like \(d_{t}^{2}(x,p)/4\) and hence one can obtain a family of cutoff functions by composing \(F\) with a cutoff function on \(\mathbb{R}\). More precisely, we fix a function \(\eta\in C^{\infty}([0,\infty))\) such that \(0\leq\eta\leq 1\), \(\eta=1\) on \([0,1]\) and \(\eta=0\) on \([2,\infty)\). Furthermore, \(-C\leq\eta^{\prime}/\eta^{\frac{1}{2}}\leq 0\) and \(|\eta^{\prime\prime}|\leq C\) for a universal constant \(C>0\). For each \(r\geq 1\), we define \[\phi^{r}:=\eta\left(\frac{F}{r}\right). \tag{2.11}\] Then \(\phi^{r}\) is a smooth function on \(M\times(-\infty,1)\). The following estimates of \(\phi^{r}\) are proved in [28, Lemma 3]: \[(\phi^{r})^{-1}|\nabla\phi^{r}|^{2} \leq Cr^{-1}, \tag{2.12}\] \[|\phi^{r}_{t}| \leq C\bar{\tau}^{-1},\] (2.13) \[|\Delta\phi^{r}| \leq C(\bar{\tau}^{-1}+r^{-1}),\] (2.14) \[|\Box\phi^{r}| \leq Cr^{-1}, \tag{2.15}\] where the constant \(C\) depends only on the dimension \(n\). For later applications, we recall the following volume estimate proved in [28, Lemma 2]. **Lemma 2.4**.: _There exists a constant \(C=C(n)>0\) such that for any Ricci shrinker \((M^{n},g,f)\) with \(p\in M\) a minimum point of \(f\),_ \[|B_{t}(p,r)|_{t}\leq Cr^{n}.\] Next, we recall the following version of the maximum principle on Ricci shrinkers, which is proved in [28, Theorem 6] and will be frequently used. **Theorem 2.5** (Maximum principle on Ricci shrinkers I).: _Let \((M,g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Given any closed interval \([a,b]\subset(-\infty,1)\) and a function \(u\) which satisfies \(\Box u\leq 0\) on \(M\times[a,b]\), suppose that_ \[\int_{a}^{b}\int_{M}u_{+}^{2}(x,t)e^{-2f(x,t)}\,dV_{t}(x)\,dt<\infty. \tag{2.16}\] _If \(u(\cdot,a)\leq c\), then \(u(\cdot,b)\leq c\)._ We also need the following version of the maximum principle, which is proved in [18, Theorem 12.14] for Ricci flows with bounded curvature. Notice that if \(X\equiv 0\), Theorem 2.6 follows from Theorem 2.5. **Theorem 2.6** (Maximum principle on Ricci shrinkers II).: _Let \((M,g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Given any closed interval \([a,b]\subset(-\infty,1)\) and a function \(u\) which satisfies_ \[Lu:=\Box u-\langle\nabla u,X(t)\rangle\leq 0\] _on \(M\times[a,b]\), suppose that \(X(t)\) is a bounded vector field on \(M\times[a,b]\) and_ \[u(x,t)\leq Ke^{kf(x,t)} \tag{2.17}\] _on \(M\times[a,b]\) for some constants \(K>0\) and \(k<1\). If \(u(\cdot,a)\leq c\), then \(u(\cdot,b)\leq c\)._ Proof.: We first construct a barrier function \[\phi(x,t):=Ke^{B(t-a)+(1-\epsilon)f(x,t)},\] where \(1-\epsilon>k\) and \(B\) is a constant determined later. **Claim:** There exists a constant \(B>0\) such that \[L\phi\geq\phi. 
\tag{2.18}\] _Proof of Claim_: By direct computations, we have \[L\phi =\phi\left(B+(1-\epsilon)f_{t}-(1-\epsilon)^{2}|\nabla f|^{2}-(1-\epsilon)\Delta f-(1-\epsilon)\langle\nabla f,X\rangle\right)\] \[=\phi\left(B+\epsilon(1-\epsilon)|\nabla f|^{2}-\frac{n(1-\epsilon)}{2\bar{\tau}}+(1-\epsilon)R-(1-\epsilon)\langle\nabla f,X\rangle\right)\] \[\geq\phi\left(B+\epsilon(1-\epsilon)|\nabla f|^{2}-C_{1}|\nabla f|-\frac{n(1-\epsilon)}{2(1-b)}\right),\] where we have used (2.5), (2.7) and the assumption that \(|X|\leq C_{1}\). Therefore, (2.18) holds if we choose \[B=\frac{C_{1}^{2}}{4\epsilon(1-\epsilon)}+\frac{n(1-\epsilon)}{2(1-b)}+1.\] Now, we assume \(c=0\) by considering \(u-c\) instead of \(u\). To complete the proof, we only need to verify that for any \(\delta>0\), \(u\leq\delta\phi\) on \(M\times[a,b]\). Otherwise, there exists \((x^{\prime},t^{\prime})\in M\times[a,b]\) such that \((u-\delta\phi)\left(x^{\prime},t^{\prime}\right)>0\). Due to the estimate (2.17) and our definition of \(\phi\), we know that \((u-\delta\phi)\left(x,t\right)\longrightarrow-\infty\) as \(d_{t}(x,p)\longrightarrow+\infty\) uniformly in \(t\), i.e., \(u-\delta\phi<0\) for \(d_{t}(x,p)\) large enough, independent of \(t\). Moreover, \((u-\delta\phi)\left(x,a\right)<0\) for all \(x\in M\). Consequently, there exists \((x^{\prime\prime},t^{\prime\prime})\in M\times(a,t^{\prime})\) such that \((u-\delta\phi)\left(x,t\right)\leq 0\) for all \((x,t)\in M\times[a,t^{\prime\prime}]\) and \((u-\delta\phi)\left(x^{\prime\prime},t^{\prime\prime}\right)=0\). At \((x^{\prime\prime},t^{\prime\prime})\), we compute \[0\leq L\left(u-\delta\phi\right)\leq-\delta\phi<0,\] which is a contradiction. In sum, our proof is complete. ## 3 Variance, \(H\)-center and Nash entropy Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. It is proved in [28, Theorem 7] that there exists a positive heat kernel function \(H(x,t,y,s)\) for \(x,y\in M\) and \(s<t<1\). More precisely, \[\Box H(\cdot,\cdot,y,s)=0,\quad\lim_{t\searrow s}H(\cdot,t,y,s)=\delta_{y}\] and \[\Box^{*}H(x,t,\cdot,\cdot)=0,\quad\lim_{s\nearrow t}H(x,t,\cdot,s)=\delta_{x},\] where \(\Box:=\partial_{t}-\Delta\) and \(\Box^{*}:=-\partial_{t}-\Delta+R\). Furthermore, the heat kernel \(H\) satisfies the semigroup property \[H(x,t,y,s)=\int_{M}H(x,t,z,\rho)H(z,\rho,y,s)\,dV_{\rho}(z),\quad\forall\ x,y\in M,\ \rho\in(s,t)\subset(-\infty,1), \tag{3.1}\] and the following integral relationships \[\int_{M}H(x,t,y,s)\,dV_{t}(x)\leq 1, \tag{3.2}\] \[\int_{M}H(x,t,y,s)\,dV_{s}(y)=1. \tag{3.3}\] For any \((x,t)\in M\times(-\infty,1)\), we define the conjugate heat kernel measure \(v_{x,t,s}\) by \(dv_{x,t,s}(y)=H(x,t,y,s)\,dV_{s}(y)\). It follows immediately from (3.3) that \(v_{x,t,s}\) is a probability measure on \(M\). In particular, \(v_{x,t,t}=\delta_{x}\). With the help of the heat kernel, one can solve for the (conjugate) heat solution with given initial data. More precisely, it follows from [28, Lemma 5, Lemma 6] that **Theorem 3.1**.: _Suppose \([a,b]\subset(-\infty,1)\) and \(u_{a}\) is a bounded function on the time slice \((M,g(a))\). Then_ \[u(x,t)\coloneqq\int_{M}H(x,t,y,a)u_{a}(y)\,dV_{a}(y),\quad\forall\ t\in[a,b] \tag{3.4}\] _is the unique bounded heat solution with the initial value \(u_{a}\). Similarly, suppose \(w_{b}\) is an integrable function on the time slice \((M,g(b))\).
Then_ \[w(y,s)\coloneqq\int_{M}H(x,b,y,s)w_{b}(x)\,dV_{b}(x) \tag{3.5}\] _is the unique conjugate heat solution with initial value \(w_{b}\) such that_ \[\sup_{s\in[a,b]}\int|w|\,dV_{s}<\infty. \tag{3.6}\] Next, we recall the following gradient estimate, which slightly strengthens [28, Corollary 1]. **Lemma 3.2**.: _Let \(u\) be a bounded heat solution on \(M\times[a,b]\) such that \(\sup_{M}|\nabla u(\cdot,a)|<\infty\). Then_ 1. _We have_ \[\sup_{M}|\nabla u(\cdot,b)|\leq\sup_{M}|\nabla u(\cdot,a)|.\] (3.7) 2. _Assume_ \(w\) _is a nonnegative conjugate heat solution on_ \(M\times[a,b]\) _such that_ \[\sup_{t\in[a,b]}\int_{M}w\,dV_{t}<\infty,\] (3.8) _then we have_ \[2\int_{a}^{b}\int_{M}|\mathrm{Hess}\ u|^{2}w\,dV_{t}dt=\int_{M}| \nabla u|^{2}w\,dV\bigg{|}_{b}^{a}<\infty.\] (3.9) Proof.: (i) From \(\Box u=0\) and direct computation, we have \[\Box|\nabla u|^{2}=-2|\mathrm{Hess}\ u|^{2}\leq 0. \tag{3.10}\] Therefore, (3.7) follows from Theorem 2.5 provided that \[\int_{a}^{b}\int_{M}|\nabla u|^{2}e^{-2f}\,dV_{t}dt<\infty. \tag{3.11}\] Now, we fix \(r\gg 1\) and multiply both sides of \(\Box u=0\) by \(u(\phi^{r})^{2}e^{-2f}\). By integrating on \(M\times[a,b]\), we obtain \[\frac{1}{2}\int_{M}u^{2}(\phi^{r})^{2}e^{-2f}\,dV\bigg{|}_{a}^{b}\] \[-\int_{a}^{b}\int_{M}u^{2}\phi^{r}\phi^{r}_{t}e^{-2f}\,dV_{t}dt+ \int_{a}^{b}\int_{M}u^{2}(\phi^{r})^{2}f_{t}e^{-2f}\,dV_{t}dt+\frac{1}{2}\int_ {a}^{b}\int_{M}u^{2}(\phi^{r})^{2}Re^{-2f}\,dV_{t}dt\] \[=\int_{a}^{b}\int_{M}\left\{-|\nabla(u\phi^{r})|^{2}+|\nabla\phi^ {r}|^{2}u^{2}+\langle\nabla u^{2},\nabla f\rangle(\phi^{r})^{2}\right\}e^{-2 f}dV_{t}dt\] \[=\int_{a}^{b}\int_{M}\left\{-|\nabla(u\phi^{r})|^{2}+|\nabla\phi^ {r}|^{2}u^{2}+(2|\nabla f|^{2}-\Delta f)u^{2}(\phi^{r})^{2}-2u^{2}\phi^{r} \langle\nabla\phi^{r},\nabla f\rangle\right\}e^{-2f}dV_{t}dt.\] Since \(R\geq 0\) and \(f_{t}=|\nabla f|^{2}\) by (2.5), we have \[\int_{a}^{b}\int_{M}|\nabla(u\phi^{r})|^{2}e^{-2f}dV_{t}dt+\frac{ 1}{2}\int_{M}u^{2}(\phi^{r})^{2}e^{-2f}\,dV\bigg{|}_{a}^{b}\] \[\leq\int_{a}^{b}\int_{M}\left\{|\nabla\phi^{r}|^{2}u^{2}+(|\nabla f |^{2}-\Delta f)u^{2}(\phi^{r})^{2}+u^{2}\phi^{r}(\phi^{r}_{t}-2\langle\nabla \phi^{r},\nabla f\rangle)\right\}e^{-2f}dV_{t}dt\] \[=\int_{a}^{b}\int_{M}\left\{|\nabla\phi^{r}|^{2}u^{2}+\frac{1}{1- t}\left(f-\frac{n}{2}\right)u^{2}(\phi^{r})^{2}+u^{2}\phi^{r}(\phi^{r}_{t}-2 \langle\nabla\phi^{r},\nabla f\rangle)\right\}e^{-2f}dV_{t}dt,\] where we have used the identity \(\Delta f-|\nabla f|^{2}=\bar{\tau}(f-n/2)\) from (2.7) and (2.8). Since \(u\) is bounded on \(M\times[a,b]\) and \(|\nabla f|^{2}\leq f/(1-b)\), it follows from (2.12), (2.13), Lemma 2.3 and Lemma 2.4 that by letting \(r\to+\infty\), \[\int_{a}^{b}\int_{M}|\nabla u|^{2}e^{-2f}dV_{t}dt\leq\left.\frac{1}{2}\int_{M}u ^{2}e^{-2f}\,dV\right|_{b}^{a}+\int_{a}^{b}\int_{M}\frac{1}{1-t}\Big{(}f-\frac {n}{2}\Big{)}u^{2}e^{-2f}dV_{t}dt<\infty\] and hence (3.11) holds. (ii) Fix \(r\gg 1\) and \(\epsilon\ll 1\). 
We calculate \[\partial_{t}\int_{M}|\nabla u|^{2}\phi^{r}w\,dV =\int_{M}\left\{\Box(|\nabla u|^{2}\phi^{r})w-(|\nabla u|^{2}\phi^{r})\Box^{*}w\right\}dV\] \[=\int_{M}\left\{|\nabla u|^{2}\Box\phi^{r}+\phi^{r}\Box|\nabla u|^{2}-2\langle\nabla|\nabla u|^{2},\nabla\phi^{r}\rangle\right\}w\,dV\] \[\leq\int_{M}\left\{|\nabla u|^{2}\Box\phi^{r}-2|\text{Hess }u|^{2}\phi^{r}+4|\text{Hess }u||\nabla u||\nabla\phi^{r}|\right\}w\,dV\] \[\leq\int_{M}\left\{|\nabla u|^{2}|\Box\phi^{r}|-(2-\epsilon^{2})|\text{Hess }u|^{2}\phi^{r}+4\epsilon^{-2}|\nabla u|^{2}|\nabla\phi^{r}|^{2}(\phi^{r})^{-1}\right\}w\,dV. \tag{3.12}\] Since \(|\nabla u|\) is uniformly bounded by (3.7), it follows from (3.12), (2.12) and (2.15) that \[2\int_{a}^{b}\int_{M}|\text{Hess }u|^{2}w\,dV_{t}dt\leq\left.\int_{M}|\nabla u|^{2}w\,dV\right|_{b}^{a}\] if we let \(r\to+\infty\) and \(\epsilon\to 0\). The other inequality can be proved similarly, and hence (3.9) holds. Next, we prove **Proposition 3.3**.: _For any \([a,b]\subset(-\infty,1)\), suppose \(u\) and \(w\) are two smooth functions on \(M\times[a,b]\) satisfying \(\Box u=\Box^{*}w=0\). Then, the identity_ \[\int_{M}uw\,dV_{a}=\int_{M}uw\,dV_{b} \tag{3.13}\] _holds under one of the following additional assumptions:_ * \(\sup_{t\in[a,b]}\int_{M}|wu|\,dV_{t}+\int_{a}^{b}\int_{M}|w||\nabla u|\,dV_{t}dt<\infty\)_._ * \(\sup_{t\in[a,b]}\int_{M}|wu|\,dV_{t}+\int_{a}^{b}\int_{M}|u||\nabla w|\,dV_{t}dt<\infty\)_._ Proof.: (i) We take \(r\gg 1\) and calculate \[\partial_{t}\int_{M}wu\phi^{r}\,dV =\int_{M}\left\{w\Box(u\phi^{r})-(u\phi^{r})\Box^{*}w\right\}dV\] \[=\int_{M}w\left\{u\Box\phi^{r}+\phi^{r}\Box u-2\langle\nabla u,\nabla\phi^{r}\rangle\right\}dV\] \[=\int_{M}w\left\{u\Box\phi^{r}-2\langle\nabla u,\nabla\phi^{r}\rangle\right\}dV.\] By using (2.12) and (2.15), we conclude \[\left|\int_{M}wu\phi^{r}\,dV\Big{|}_{a}^{b}\right|\leq C(r^{-1}+r^{-\frac{1}{2}})\int_{a}^{b}\int_{M}|w|(|u|+|\nabla u|)\,dV_{t}dt.\] By taking \(r\to\infty\), we arrive at (3.13). (ii) Similarly, we have \[\partial_{t}\int_{M}uw\phi^{r}\,dV =\int_{M}\left\{(\Box u)w\phi^{r}-u\Box^{*}(w\phi^{r})\right\}dV\] \[=\int_{M}u\left\{-(\Box^{*}w)\phi^{r}+w(\Delta\phi^{r}+\phi_{t}^{r})+2\langle\nabla w,\nabla\phi^{r}\rangle\right\}\,dV\] \[=\int_{M}u\left\{w(\Delta\phi^{r}+\phi_{t}^{r})+2\langle\nabla w,\nabla\phi^{r}\rangle\right\}\,dV.\] Therefore, by (2.12), (2.13) and (2.14), we have \[\left|\int_{M}wu\phi^{r}\,dV\Big{|}_{a}^{b}\right|\leq C(r^{-1}+r^{-\frac{1}{2}}+(1-a)^{-1})\iint_{K_{r}}|u|(|w|+|\nabla w|)\,dV_{t}dt,\] where \(K_{r}:=\{r\leq F(x,t)\leq 2r,\,a\leq t\leq b\}\). Consequently, by our assumption, (3.13) holds as \(r\to\infty\). **Remark 3.4**.: _Suppose \(\Box u=\Box^{*}w=0\)._ * _If_ \(\sup_{M}|\nabla u(\cdot,a)|+\sup_{M\times[a,b]}|u|+\sup_{t\in[a,b]}\int_{M}|w|\,dV_{t}<\infty\)_, then assumption (i) holds by (_3.7_). If_ \(\sup_{M\times[a,b]}|u|+\sup_{t\in[a,b]}\int_{M}|w|\,dV_{t}<\infty\) _and_ \(u\) _is positive, then_ \(|\nabla u|\leq C/\sqrt{t-a}\) _by_ _[_28_, Lemma 18]__. Therefore, (_3.13_) also holds by taking the limit for_ \(t\searrow a\)_._ * _If_ \(\sup_{M\times[a,b]}|u|+\sup_{t\in[a,b]}\int_{M}|w|\,dV_{t}<\infty\) _and_ \(w(\cdot,b)\) _is a nonnegative function with compact support, then assumption (ii) holds.
Indeed, it follows from [28, Lemma 9] that \(\int_{a}^{b}\int_{M}\frac{|\nabla w|^{2}}{w}\,dV_{t}dt<\infty\) and hence \[\int_{a}^{b}\int_{M}|u||\nabla w|\,dV_{t}dt\leq C(\sup_{M\times[a,b]}|u|)\left(\int_{a}^{b}\int_{M}\frac{|\nabla w|^{2}}{w}\,dV_{t}dt\right)^{\frac{1}{2}}\left(\int_{a}^{b}\int_{M}w\,dV_{t}dt\right)^{\frac{1}{2}}<\infty.\] For later applications, we prove the following estimate of the heat kernel. **Lemma 3.5**.: _For any \(y\in M\) and \(s<t<1\), we set \(u(x,t):=H(x,t,y,s)\) and \(\widetilde{w}(x,t):=(4\pi\bar{\tau})^{-\frac{n}{2}}e^{-f(x,t)}\). Then_ \[\int_{M}u(x,t)\widetilde{w}(x,t)\,dV_{t}(x)=\widetilde{w}(y,s). \tag{3.14}\] Proof.: It is clear from the definition of \(\widetilde{w}\) that \(\Box^{*}\widetilde{w}=0\); see [28, Equation (28)]. Moreover, for any \([a,b]\subset(s,t]\), the assumption (ii) of Proposition 3.3 holds since \[\int_{a}^{b}\int_{M}u|\nabla\widetilde{w}|\,dV_{t}dt \leq\int_{a}^{b}\int_{M}u\widetilde{w}(1+|\nabla f|)\,dV_{t}dt\] \[\leq (4\pi(1-b))^{-\frac{n}{2}}\int_{a}^{b}\int_{M}u(1+(1-b)^{-\frac{1}{2}}f^{\frac{1}{2}})e^{-f}\,dV_{t}dt\] \[\leq C\int_{a}^{b}\int u\,dV_{t}dt\leq C(b-a),\] where we have used (2.8) and (3.2), and the constant \(C\) depends only on \(n\) and \(b\). By choosing \(b=t\) and letting \(a\searrow s\), the proof of Proposition 3.3 yields \[\int_{M}u\widetilde{w}\phi^{r}\,dV_{t}-\widetilde{w}(y,s)\phi^{r}(y,s)=o(r),\] where \(o(r)\to 0\) as \(r\to\infty\). Therefore, we immediately obtain (3.14) by letting \(r\to\infty\). Next, we recall the definition of the \(W_{1}\)-Wasserstein distance. **Definition 3.6**.: _Let \((X,d)\) be a complete metric space and \(\mu_{1},\mu_{2}\) two probability measures on \(X\). Then the \(W_{1}\)-Wasserstein distance between \(\mu_{1}\) and \(\mu_{2}\) is defined by_ \[d_{W_{1}}(\mu_{1},\mu_{2}):=\sup_{f}\left(\int f\,d\mu_{1}-\int f\,d\mu_{2}\right)\] _where the supremum is taken over all bounded \(1\)-Lipschitz functions \(f\). We also use \(d^{t}_{W_{1}}\) to denote the \(W_{1}\)-distance with respect to \(g(t)\)._ We prove the following monotonicity of the Wasserstein distance, as in [2, Lemma 2.7]. **Proposition 3.7**.: _Let \((M^{n},g(t))_{t<1}\) be a Ricci flow associated with a Ricci shrinker. For \([a,b]\subset(-\infty,1)\), let \(w_{1},w_{2}\in C^{\infty}(M\times[a,b])\) be two nonnegative conjugate heat solutions such that \(\int_{M}w_{i}\,dV_{t}=1\) for any \(t\in[a,b]\) and \(i=1,2\). We define the probability measures with \(d\mu_{i,t}=w_{i}(\cdot,t)\,dV_{t}\), \(i=1,2\). Then_ \[d^{t}_{W_{1}}(\mu_{1,t},\mu_{2,t})\] _is increasing for \(t\in[a,b]\). In particular, if \(t_{1}\leq t_{2}<1\), then for any \(x_{1},x_{2}\in M\) and \(t\leq t_{1}\),_ \[d^{t}_{W_{1}}(v_{x_{1},t_{1},t},v_{x_{2},t_{2},t})\] _is increasing and_ \[d^{t}_{W_{1}}(v_{x_{1},t_{1},t},v_{x_{2},t_{2},t})\leq d_{t_{1}}(x_{1},x_{2}).\] Proof.: Let \(t_{1}\leq t_{2}\) with \(t_{1},t_{2}\in[a,b]\), and consider a bounded function \(u_{1}\in C^{\infty}(M)\) with \(\sup_{M}|\nabla u_{1}(\cdot,t_{1})|\leq 1\). Suppose \(u\) is the unique bounded heat solution on \(M\times[t_{1},t_{2}]\) starting from \(u_{1}\). Then it follows from Lemma 3.2 (i) that \[\sup_{M}|\nabla u(\cdot,t)|\leq 1\] for any \(t\in[t_{1},t_{2}]\).
Clearly, we have \[\int_{M}u\,d\mu_{1,t_{1}}-\int_{M}u\,d\mu_{2,t_{1}} =\int_{M}u(x,t_{1})w_{1}(x,t_{1})\,dV_{t_{1}}(x)-\int_{M}u(x,t_{1})w_{2}(x,t_{1})\,dV_{t_{1}}(x)\] \[=\int_{M}u(x,t_{2})w_{1}(x,t_{2})\,dV_{t_{2}}(x)-\int_{M}u(x,t_{2})w_{2}(x,t_{2})\,dV_{t_{2}}(x)\] \[=\int_{M}u\,d\mu_{1,t_{2}}-\int_{M}u\,d\mu_{2,t_{2}}\leq d_{W_{1}}^{t_{2}}(\mu_{1,t_{2}},\mu_{2,t_{2}}).\] Here, we have used [28, Proposition 1] for the second equality. By taking the supremum over all such \(u_{1}\), one obtains \[d_{W_{1}}^{t_{1}}(\mu_{1,t_{1}},\mu_{2,t_{1}})\leq d_{W_{1}}^{t_{2}}(\mu_{1,t_{2}},\mu_{2,t_{2}}).\] Next, we recall the following definition from [2, Definition 3.1]. **Definition 3.8** (Variance).: _The variance between two probability measures \(\mu_{1},\mu_{2}\) on a Riemannian manifold \((M,g)\) is defined as_ \[\mathrm{Var}(\mu_{1},\mu_{2}):=\int_{M}\int_{M}d^{2}(x_{1},x_{2})d\mu_{1}(x_{1})d\mu_{2}(x_{2}).\] _In the case \(\mu_{1}=\mu_{2}=\mu\), we write_ \[\mathrm{Var}(\mu)=\mathrm{Var}(\mu,\mu)=\int_{M}\int_{M}d^{2}(x_{1},x_{2})d\mu(x_{1})d\mu(x_{2}).\] _We also define \(\mathrm{Var}_{t}\) as the variance with respect to the metric \(g(t)\)._ For some basic properties of the variance, we refer the readers to [2, Lemma 3.2]. Next, we prove the following results, which originate from [2, Corollary 3.7, Corollary 3.8]. Before that, we first prove the following maximum principle on the product manifold (cf. [1][7] for related surveys). **Theorem 3.9** (Maximum principle on the product).: _Let \((M^{n},g(t))_{t<1}\) be a Ricci flow associated with a Ricci shrinker. Given any closed interval \([a,b]\subset(-\infty,1)\) and a function \(u\) on \(M\times M\times[a,b]\) such that_ \[(\partial_{t}-\Delta_{x}-\Delta_{y})u(x,y,t)\leq 0. \tag{3.15}\] _Suppose that_ \[\int_{a}^{b}\int_{M\times M}u_{+}^{2}(x,y,t)e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt<\infty. \tag{3.16}\] _If \(u(\cdot,a)\leq c\), then \(u(\cdot,b)\leq c\)._ Proof.: The proof follows almost verbatim from [28, Theorem 6], except that we multiply (3.15) by \(u_{+}(x,y,t)(\phi^{r}(x)\phi^{r}(y))^{2}e^{-2f(x,t)-2f(y,t)}\) and do the integration. Since no other new ingredient is needed, we omit the details here. **Proposition 3.10**.: _Under the same assumptions as in Proposition 3.7, if we further assume \(w_{1}(\cdot,b)\) and \(w_{2}(\cdot,b)\) have compact supports, then_ \[\operatorname{Var}_{t}(\mu_{1,t},\mu_{2,t})+H_{n}t\] _is increasing for \(t\in[a,b]\), where \(H_{n}:=(n-1)\pi^{2}/2+4\). Moreover, for any \(x_{1},x_{2}\in M\),_ \[\operatorname{Var}_{t}(v_{x_{1},b,t},v_{x_{2},b,t})+H_{n}t\] _is increasing for \(t\leq b\). In particular,_ \[\operatorname{Var}_{t}(v_{x_{1},b,t},v_{x_{2},b,t})\leq d_{b}^{2}(x_{1},x_{2})+H_{n}(b-t)\quad\text{and}\quad\operatorname{Var}_{t}(v_{x,b,t})\leq H_{n}(b-t).\] Proof.: For any \([c,d]\subset[a,b]\), we let \(u\in C^{0}(M\times M\times[c,b])\cap C^{\infty}(M\times M\times(c,b])\) be the solution to the following heat equation \[(\partial_{t}-\Delta_{x}-\Delta_{y})u=-H_{n},\quad u(\cdot,c)=d_{c}^{2}.\] Indeed, by the existence of the heat kernel, one may define \[u(x,y,t):=\int_{M}\int_{M}H(x,t,z,c)H(y,t,w,c)d_{c}^{2}(z,w)\,dV_{c}(z)dV_{c}(w)-H_{n}(t-c). \tag{3.17}\] We first show (3.17) is well-defined.
In fact, it is clear that \[\int_{M}\int_{M}H(x,t,z,c)H(y,t,w,c)d_{c}^{2}(z,w)\,dV_{c}(z)dV_{c}(w)\] \[\leq 2\left(\int_{M}H(x,t,z,c)d_{c}^{2}(z,p)\,dV_{c}(z)+\int_{M}H(y,t,w,c)d_{c}^{2}(w,p)\,dV_{c}(w)\right) \tag{3.18}\] and the convergence of the last two integrals follows from [28, Corollary 5]. On the other hand, it follows from [2, Theorem 3.5] that \[(\partial_{t}-\Delta_{x}-\Delta_{y})d_{t}^{2}(x,y)\geq-H_{n}. \tag{3.19}\] Combining (3.17) and (3.19), we claim that \(u(x,y,t)\leq d_{t}^{2}(x,y)\) for any \(t\in[c,b]\). Indeed, this follows from the maximum principle Theorem 3.9 as long as the condition (3.16) is satisfied. First, notice that \[\int_{c}^{b}\int_{M\times M}d_{t}^{4}(x,y)e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt\] \[\leq 8\int_{c}^{b}\int_{M\times M}\left(d_{t}^{4}(x,p)+d_{t}^{4}(y,p)\right)e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt. \tag{3.20}\] From Lemma 2.3 and Lemma 2.4, it is clear that (3.20) is bounded. In addition, it follows from (3.18) that \[\int_{c}^{b}\int_{M\times M}\left(u(x,y,t)+H_{n}(t-c)\right)^{2}e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt\] \[\leq 8\int_{c}^{b}\int_{M\times M}\left(\int_{M}H(x,t,z,c)d_{c}^{2}(z,p)\,dV_{c}(z)\right)^{2}e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt\] \[+8\int_{c}^{b}\int_{M\times M}\left(\int_{M}H(y,t,w,c)d_{c}^{2}(w,p)\,dV_{c}(w)\right)^{2}e^{-2f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{t}(y)\,dt\] \[\leq 8\int_{c}^{b}\int_{M\times M}\int_{M}H(x,t,z,c)d_{c}^{4}(z,p)e^{-2f(x,t)-2f(y,t)}\,dV_{c}(z)dV_{t}(x)dV_{t}(y)\,dt\] \[+8\int_{c}^{b}\int_{M\times M}\int_{M}H(y,t,w,c)d_{c}^{4}(w,p)e^{-2f(x,t)-2f(y,t)}\,dV_{c}(w)dV_{t}(x)dV_{t}(y)\,dt, \tag{3.21}\] where we have used the Cauchy-Schwarz inequality for the last inequality. From Lemma 3.5, we obtain \[\int_{c}^{b}\int_{M\times M}\int_{M}H(x,t,z,c)d_{c}^{4}(z,p)e^{-2f(x,t)-2f(y,t)}\,dV_{c}(z)dV_{t}(x)dV_{t}(y)\,dt\] \[\leq \int_{c}^{b}\int_{M}\int_{M}\int_{M}H(x,t,z,c)d_{c}^{4}(z,p)e^{-f(x,t)-2f(y,t)}\,dV_{t}(x)dV_{c}(z)dV_{t}(y)\,dt\] \[\leq \int_{c}^{b}\int_{M}\int_{M}\left(\frac{1-t}{1-c}\right)^{\frac{n}{2}}d_{c}^{4}(z,p)e^{-f(z,c)-2f(y,t)}\,dV_{c}(z)dV_{t}(y)\,dt\] \[\leq \int_{c}^{b}\int_{M}\int_{M}d_{c}^{4}(z,p)e^{-f(z,c)-2f(y,t)}\,dV_{c}(z)dV_{t}(y)\,dt<\infty\] by Lemma 2.3 and Lemma 2.4. Similarly, the second term in (3.21) is also bounded. Therefore, we have proved that \(u(x,y,t)\leq d_{t}^{2}(x,y)\) for any \(t\in[c,b]\). Since, by our assumption, \(w_{1}(\cdot,b)\) and \(w_{2}(\cdot,b)\) have compact supports, it follows from [28, Lemma 8, Lemma 9] that \[w_{i}(x,t)\leq C\widetilde{w}(x,t) \tag{3.22}\] for any \(c\leq t\leq b\) and \[\int_{c}^{b}\int_{M}\frac{|\nabla w_{i}|^{2}}{w_{i}}\,dV_{t}dt\leq C \tag{3.23}\] for some constant \(C>0\). Next, we set \(w_{1}=w_{1}(x,t)\), \(w_{2}=w_{2}(y,t)\), \(\phi_{x}^{r}=\phi^{r}(x)\) and \(\phi_{y}^{r}=\phi^{r}(y)\), then we compute \[\partial_{t}\int_{M}\int_{M}uw_{1}w_{2}\phi_{x}^{r}\phi_{y}^{r}\,dV_{t}(x)dV_{t}(y)\] \[= \int_{M}\int_{M}(\partial_{t}-\Delta_{x}-\Delta_{y})(u\phi_{x}^{r}\phi_{y}^{r})w_{1}w_{2}\,dV_{t}(x)dV_{t}(y)\] \[= \int_{M}\int_{M}\left(-H_{n}\phi_{x}^{r}\phi_{y}^{r}-u(\Delta_{x}\phi_{x}^{r}\phi_{y}^{r}+\Delta_{y}\phi_{y}^{r}\phi_{x}^{r})\right)w_{1}w_{2}\,dV_{t}(x)dV_{t}(y)\] \[+2\int_{M}\int_{M}(\Delta_{x}\phi_{x}^{r}+\langle\nabla\phi_{x}^{r},\nabla w_{1}\rangle)u\phi_{y}^{r}w_{2}+(\Delta_{y}\phi_{y}^{r}+\langle\nabla\phi_{y}^{r},\nabla w_{2}\rangle)u\phi_{y}^{r}w_{1}\,\,dV_{t}(x)dV_{t}(y). 
\tag{3.24}\] From (3.23), we have \[\left(\int_{c}^{b}\int_{M}\int_{M}|\nabla\phi_{x}^{r}||\nabla w_{1} ||u|\phi_{y}^{r}w_{2}\,dV_{t}(x)dV_{t}(y)dt\right)^{2}\] \[\leq \left(\int_{c}^{b}\int_{M}\int_{M}|\nabla\phi_{x}^{r}|^{2}u^{2}( \phi_{y}^{r})^{2}w_{1}w_{2}\,dV_{t}(x)dV_{t}(y)dt\right)\left(\int_{c}^{b}\int_ {M}\frac{|\nabla w_{1}|^{2}}{w_{1}}\,dV_{t}dt\right)\] \[\leq C\int_{c}^{b}\int_{M}\int_{M}|\nabla\phi_{x}^{r}|^{2}u^{2}w_{1}w_ {2}\,dV_{t}(x)dV_{t}(y)dt. \tag{3.25}\] Similarly, we have \[\left(\int_{c}^{b}\int_{M}\int_{M}|\nabla\phi_{y}^{r}||\nabla w_{2 }||u|\phi_{x}^{r}w_{1}\,dV_{t}(x)dV_{t}(y)dt\right)^{2}\] \[\leq C\int_{c}^{b}\int_{M}\int_{M}|\nabla\phi_{y}^{r}|^{2}u^{2}w_{1} w_{2}\,dV_{t}(x)dV_{t}(y)dt. \tag{3.26}\] Combining (3.22), (3.24), (3.25), (3.26) and the fact that \(-H_{n}(t-c)\leq u\leq d_{t}^{2}(x,y)\), we conclude by letting \(r\to\infty\) that \[\int_{M}\int_{M}uw_{1}w_{2}\,dV_{d}(x)dV_{d}(y)-\int_{M}\int_{M}uw_{1}w_{2}\,dV _{c}(x)dV_{c}(y)=-H_{n}(d-c). \tag{3.27}\] Since \(u\leq d_{t}^{2}(x,y)\), it follows from (3.27) and the definition of the variance that \[\operatorname{Var}_{d}(\mu_{1,d},\mu_{2,d})+H_{n}d\geq\operatorname{Var}_{c}( \mu_{1,c},\mu_{2,c})+H_{n}c.\] Now, we assume \(w_{i}=H(x_{i},b,\cdot,\cdot)\) for \(i=1,2\). Then it follows from [28, Lemma 23] that \[\int_{a}^{b-\epsilon}\int_{M}\frac{|\nabla w_{i}|^{2}}{w_{i}}\,dV_{t}dt\leq C \log\epsilon^{-1}. \tag{3.28}\] Therefore, one can use the same arguments as above, thanks to (3.28) and [28, Corollary 5], to conclude that (3.27) still holds if \([c,d]\subset[a,b-\epsilon]\). Since \(\epsilon\) is arbitrary, we immediately show that \[\operatorname{Var}_{t}(v_{x_{1},b;t},v_{x_{2},b;t})+H_{n}t\] is increasing for any \(t\leq b\). Next, we recall the definition of \(H\)-center, where the conjugate heat kernel measure is concentrated. **Definition 3.11** (\(H\)-center).: _Given a constant \(H>0\), a point \((z,t)\in M\times(-\infty,1)\) is called an \(H\)-center of \((x_{0},t_{0})\in M\times(-\infty,1)\) if \(t\leq t_{0}\) and_ \[\operatorname{Var}_{t}(\delta_{z},v_{x_{0},t_{0};t})\leq H(t_{0}-t).\] _In particular, we have_ \[d_{W_{1}}^{t}(\delta_{z},v_{x_{0},t_{0};t})\leq\sqrt{H(t_{0}-t)}. \tag{3.29}\] From Proposition 3.10, the following result is immediate; see [2, Proposition 3.12]. **Proposition 3.12**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(t\leq t_{0}\) there is at least one point \(z\in M\) such that \((z,t)\) is an \(H_{n}\)-center of \((x_{0},t_{0})\) and for any two such points \(z_{1},z_{2}\in M\) we have \(d_{t}(z_{1},z_{2})\leq 2\sqrt{H_{n}(t_{0}-t)}\)._ The following result ensures that the conjugate heat kernel measure is concentrated around an \(H\)-center; see [2, Proposition 3.13]. **Proposition 3.13**.: _If \((z,t)\) is an \(H\)-center of \((x_{0},t_{0})\), then for any \(L>0\),_ \[v_{x_{0},t_{0};t}\left(B_{t}(z,\sqrt{LH(t_{0}-t)})\right)\geq 1-\frac{1}{L}. \tag{3.30}\] Combining the above Proposition with [28, Theorem 14], we obtain the following integral bound for the conjugate heat kernel; see also [2, Theorem 3.14]. 
**Proposition 3.14**.: _If \((z,t)\) is an \(H_{n}\)-center of \((x_{0},t_{0})\), then for all \(r\geq 0\) and \(\epsilon>0\) we have_ \[v_{x_{0},t_{0};t}(M\setminus B_{t}(z,r))\leq C(n,\epsilon)\exp\bigg{(}-\frac{r^{2}}{(4+ \epsilon)(t_{0}-t)}\bigg{)}.\] Proof.: We apply [28, Theorem 14] for \(A=M\setminus B_{t}(z,r)\), \(B=B_{t}(z,\sqrt{2H_{n}(t_{0}-t)})\) and \(\sigma=\epsilon/8\) to obtain \[v_{x_{0},t_{0};t}(M\setminus B_{t}(z,r))\leq v_{x_{0},t_{0};t}^{-\frac{8}{\epsilon}}(B_{t}(z,\sqrt{2H_{n}(t_{0}-t)})) \exp\left(-\frac{\big{(}r-\sqrt{2H_{n}(t_{0}-t)}\big{)}_{+}^{2}}{(4+\epsilon/ 2)(t_{0}-t)}\right)\] \[\leq C(n,\epsilon)\exp\bigg{(}-\frac{r^{2}}{(4+\epsilon)(t_{0}-t)} \bigg{)},\] where we have used (3.30) for \(L=2\) and \(H=H_{n}\). In order to obtain the estimates on the Nash entropy, we first generalize the improved gradient estimate [2, Theorem 4.1] to our setting. We define the following antiderivative of the \(1\)-dimensional heat kernel: \[\Phi(x)=\int_{-\infty}^{x}(4\pi)^{-1/2}e^{-r^{2}/4}\;dt. \tag{3.31}\] Notice that \(\Phi_{t}(x):=\Phi(t^{-1/2}x)\) is a solution to the \(1\)-dimensional heat equation with initial condition \(\chi_{\{0,\infty\}}\). **Theorem 3.15**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Given \([a,b]\subset(-\infty,1)\) and a solution \(u\in C^{\infty}(M\times[a,b])\) to the heat equation \(\Box u=0\) and a constant \(T\geq 0\), suppose that \(u\) only takes values in \((0,1)\) and \(|\nabla(\Phi_{T}^{-1}(u(\cdot,a)))|\leq 1\) if \(T>0\). Then \(|\nabla(\Phi_{T+t-a}^{-1}(u(\cdot,t)))|\leq 1\) for all \(t\in[a,b]\)._ Proof.: We may assume that \(u\) takes values in \((\epsilon,1-\epsilon)\). Indeed, we can consider \((1-2\epsilon)u+\epsilon\) instead and let \(\epsilon\searrow 0\). With the extra assumption, it follows from [28, Lemma 18] that \[|\nabla u|\leq\frac{C_{1}}{\sqrt{t-a}} \tag{3.32}\] on \(M\times(a,b]\). It is clear from the definition of \(\Phi_{t}\) that \(\sup_{M}|\nabla(\Phi_{T}^{-1}(u(\cdot,a+\epsilon)))|\to 0\) if \(T\searrow 0\). Therefore, we only need to prove the case for \(T>0\) and then let \(T\searrow 0\) and \(\epsilon\searrow 0\). Now, we set \(u(x,t)=\Phi_{T+t-a}\circ h(x,t)\). It follows from the definition of \(\Phi_{t}\) that \[|h|\leq C_{2} \tag{3.33}\] on \(M\times[a,b]\). Moreover, since \(|\nabla h(\cdot,a)|\leq 1\), it follows from (3.33) and Lemma 3.2(i) that \[|\nabla h|\leq C_{3} \tag{3.34}\] on \(M\times[a,b]\). By direct computation, see [2, Theorem 4.1] for details, we have \[\Box|\nabla h|^{2}=-2|\text{Hess }h|^{2}-\frac{1}{T+t-a}\langle\nabla h^{2}, \nabla|\nabla h|^{2}\rangle+\frac{1}{2(T+t-a)}(1-|\nabla h|^{2})|\nabla h|^{2}. \tag{3.35}\] Therefore, if we set \(v=(|\nabla h|^{2}-1)_{+}\), then it follows from (3.35) that \[\Box v+\frac{1}{T+t-a}\langle\nabla h^{2},\nabla v\rangle\leq 0.\] Since \(|\nabla h^{2}|\) and \(v\) are uniformly bounded on \(M\times[a,b]\) by (3.33) and (3.34), it follows from Theorem 2.6 that \(v\leq 0\) on \(M\times[a,b]\). In other words, \(|\nabla h|\leq 1\) on \(M\times[a,b]\). Thus the proof is complete. With the help of Theorem 3.15, one can follow verbatim as [2, Proposition 4.2] and [30, Proposition 3.4] to obtain the following estimate. **Theorem 3.16**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker and \([s,t]\subset(-\infty,1)\). 
Then for any \(x\in M\), \(1\leq p<\infty\) and measurable subset \(X\subset M\), we have_ \[(t-s)^{\frac{p}{2}}\int_{X}\left(\frac{|\nabla_{x}H(x,t,\cdot,s)|}{H(x,t,\cdot,s)}\right)^{p}\,dv\leq C(n,p)v\,(X)\left(-\log\left(\frac{v\,(X)}{2}\right) \right)^{\frac{p}{2}},\] _where \(dv=H(x,t,\cdot,s)\,dV_{s}\) is the conjugate heat kernel measure. Moreover, for any \(x\in M\) and \(w\in T_{x}M\) with \(|w|_{t}=1\), there holds that_ \[(t-s)\int_{M}\left(\frac{\partial_{w}H(x,t,\cdot,s)}{H(x,t,\cdot,s)}\right)^{ 2}\,dv\leq\frac{1}{2}. \tag{3.36}\] _In particular, we have_ \[(t-s)\int_{M}\left|\frac{\nabla_{x}H(x,t,\cdot,s)}{H(x,t,\cdot,s)}\right|^{2} \,dv\leq\frac{n}{2}. \tag{3.37}\] Another application of Theorem 3.15 is the following \(L^{p}\)-Poincare inequality; see [2, Theorem 11.1]. **Theorem 3.17** (\(L^{p}\)-Poincare inequality).: _Let \((M^{n},g(t))_{t<1}\) be a Ricci flow associated with a Ricci shrinker. Then for \(p\geq 1\) and any \([s,t]\subset(-\infty,1)\) we have_ \[\int_{M}u^{p}\,dv_{s}\leq C(p)(t-s)^{\frac{p}{2}}\int_{M}|\nabla u|^{p}\,dv_{ s},\] _for any \(u\in W^{1,p}(M,dv_{s})\) with \(\int_{M}u\,dv_{s}=0\). Here, \(dv_{s}(y)=H(x,t,y,s)dV_{s}(y)\). One may choose \(C(1)=\sqrt{\pi}\) and \(C(2)=2\)._ Proof.: The proof for \(p\neq 2\) follows verbatim from [2, Theorem 11.1]. Only the last statement for \(p=2\) needs to be proved. It follows from [28, Theorem 13] that the probability measure \(dv_{s}\) satisfies the log-Sobolev inequality with the constant \(\frac{1}{2(t-s)}\). It is a standard fact that the log-Sobolev condition implies the Poincare inequality; see [35, Theorem 22.17]. Next, we recall the definitions of the Nash entropy and \(\mathcal{W}\)-entropy based at \((x_{0},t_{0})\). **Definition 3.18**.: _Given a Ricci flow \((M^{n},g(t))_{t<1}\) associated with a Ricci shrinker and a point \((x_{0},t_{0})\in M\times(-\infty,1)\), let_ \[dv=dv_{x_{0},t_{0};t}(x)=(4\pi\tau)^{-\frac{n}{2}}e^{-b(x,t)}\,dV_{t}=H(x_{0}, t_{0},x,t)\,dV_{t}\] _where \(\tau=t_{0}-t\). Then Perelman's \(\mathcal{W}\)-entropy and the Nash entropy based at \((x_{0},t_{0})\) are respectively defined as_ \[\mathcal{W}_{(x_{0},t_{0})}(\tau) =\int_{M}\left(\tau(2\Delta b-|\nabla b|^{2}+R)+b-n\right)\,dv, \tag{3.38}\] \[\mathcal{N}_{(x_{0},t_{0})}(\tau) =\int_{M}b\,dv-\frac{n}{2}. \tag{3.39}\] Now, we prove some basic properties of \(\mathcal{N}\) and \(\mathcal{W}\). **Proposition 3.19**.: _The following properties hold with Definition 3.18._ * \(\mathcal{W}_{(x_{0},t_{0})}(0)=0\) _and for any_ \(\tau_{0}>0\)_,_ \[\mathcal{W}_{(x_{0},t_{0})}(\tau_{0})=-2\int_{0}^{\tau_{0}}\tau\int_{M}\left| Rc+\mathrm{Hess}\,\,b-\frac{g}{2\tau}\right|^{2}\,dvd\tau.\] (3.40) _In particular,_ \(\mathcal{W}_{(x_{0},t_{0})}(\tau)\) _is nonpositive and decreasing._ * \(\mathcal{N}_{(x_{0},t_{0})}(0)=0\) _and for any_ \(\tau_{0}>0\)_,_ \[\mathcal{N}_{(x_{0},t_{0})}(\tau_{0})=\frac{1}{\tau_{0}}\int_{0}^{\tau_{0}} \mathcal{W}_{(x_{0},t_{0})}(\tau)\,d\tau\geq\mathcal{W}_{(x_{0},t_{0})}(\tau _{0}).\] (3.41) * _For any_ \(0<\tau_{1}\leq\tau_{2}\)_,_ \[\mathcal{N}_{(x_{0},t_{0})}(\tau_{1})-\frac{n}{2}\log\left(\frac{\tau_{2}}{ \tau_{1}}\right)\leq\mathcal{N}_{(x_{0},t_{0})}(\tau_{2})\leq\mathcal{N}_{(x_{ 0},t_{0})}(\tau_{1}).\] (3.42) Proof.: Given \((x_{0},t_{0})\) and \(\tau\), we first prove that \(\mathcal{N}_{(x_{0},t_{0})}(\tau)\) and \(\mathcal{W}_{(x_{0},t_{0})}(\tau)\) are well-defined. In the following, all constants \(C_{i}>1\) depend on \((x_{0},t_{0})\), \(\tau\) and the given Ricci shrinker. 
It follows from [28, Theorem 19] that for any \(r\geq 1\), \[\int_{d_{t}(x_{0},x)\geq r\sqrt{\tau}}dv_{t}(x)\leq C_{1}e^{-\frac{r^{2}}{8}}. \tag{3.43}\] Therefore, there exists \(C_{2}>1\) such that \[\int_{d_{t}(p,x)\geq r}dv_{t}(x)\leq C_{2}e^{-\frac{r^{2}}{C_{2}}} \tag{3.44}\] if \(r\geq C_{2}\). In addition, it follows from [28, Theorem 15, Formula (203)] that \[\boldsymbol{\mu}\leq b(x,t)\leq-3\boldsymbol{\mu}+\frac{d_{t_{0}}^{2}(x_{0}, x)}{3\tau}+\frac{4\tau}{3(1-t_{0})^{2}}F(x,t_{0}). \tag{3.45}\] From (3.45) and Lemma 2.3, there exists \(C_{3}>1\) such that \[-C_{3}\leq b(x,t)\leq C_{3}(1+F(x,t_{0})). \tag{3.46}\] Since \(F\) is decreasing with respect to \(t\) by (2.6), it follows from (3.46) and Lemma 2.3 that \[b(x,t)\leq C_{3}(1+F(x,t))\leq C_{4}(1+d_{t}^{2}(p,x)).\] for some \(C_{4}\geq C_{3}\). Consequently, we obtain \[|b(x,t)|\leq C_{4}(1+d_{t}^{2}(p,x)). \tag{3.47}\] Combining (3.44) and (3.47), we can estimate \[\int_{M}|b(x,t)|\,dv_{t}(x)\leq C_{4}+C_{4}\int_{M}d_{t}^{2}(p,x)\,dv_{t}(x)\] \[= C_{4}+C_{4}\int_{d_{t}(p,x)\leq C_{2}}d_{t}^{2}(p,x)\,dv_{t}(x)+ C_{4}\sum_{k=1}^{\infty}\int_{2^{k-1}C_{2}\leq d_{t}(p,x)\leq 2^{k}C_{2}}d_{t}^{2} (p,x)\,dv_{t}(x)\] \[\leq C_{4}+C_{4}C_{2}^{2}+C_{4}\sum_{k=1}^{\infty}(2^{k}C_{2})^{2}C_{2} e^{-2^{2k-2}C_{2}}<\infty. \tag{3.48}\] Therefore, it follows from the definition (3.39) that \(\mathcal{N}_{(x_{0},t_{0})}(\tau)\) is finite. Now, the fact that \(\mathcal{W}_{(x_{0},t_{0})}(\tau)\) is well-defined follows from Perelman's differential Harnack inequality [28, Theorem 21]. (a): The identity (3.40) follows from [28, Remark 6]. Notice that the integral in (3.40) is always finite by [28, Lemma 30]. In particular, \(\mathcal{W}_{(x_{0},t_{0})}(0)=\lim_{\tau\searrow 0}\mathcal{W}_{(x_{0},t_{0})}( \tau)=0\). (b): We fix \(r\gg 1\) and compute \[\partial_{\tau}\left(\tau\int_{M}b\phi^{r}\,dv\right)-\frac{n}{2}\] \[= \int_{M}b\phi^{r}\,dv-\tau\int_{M}\Box(b\phi^{r})\,dv-\frac{n}{2}\] \[= \int_{M}\left(\tau(2\Delta b-|\nabla b|^{2}+R)\phi^{r}+b\phi^{r} +\tau b\Box\phi^{r}-2\tau\langle\nabla b,\nabla\phi^{r}\rangle-\frac{n}{2}(1 +\phi^{r})\right)\,dv, \tag{3.49}\] where we have used the fact that \(\Box b=-2\Delta b+|\nabla b|^{2}-R+\frac{n}{2\tau}\). For \(\tau_{0}>0\), we integrate (3.49) from \(0\) to \(\tau_{0}\) and obtain \[\tau_{0}\left(\int_{M}b\phi^{r}\,dv-\frac{n}{2}\right)\] \[= \int_{0}^{\tau_{0}}\int_{M}\left(\tau(2\Delta b-|\nabla b|^{2}+R) \phi^{r}+b\phi^{r}+\tau b\Box\phi^{r}-2\tau\langle\nabla b,\nabla\phi^{r} \rangle-\frac{n}{2}(1+\phi^{r})\right)\,dv\tau, \tag{3.50}\] where we have used (3.43) and (3.47). On the one hand, it follows from (2.15), (3.43) and (3.47) that \[\lim_{r\to\infty}\int_{0}^{\tau_{0}}\int_{M}\tau|b|\Box\phi^{r}|\,dvd\tau=0. \tag{3.51}\] On the other hand, we estimate \[\int_{0}^{\tau_{0}}\int_{M}\tau|\nabla b||\nabla\phi^{r}|\,dvd\tau \leq Cr^{-\frac{1}{2}}\tau_{0}^{2}\left(\int_{0}^{\tau_{0}}\int_{M}\tau^{2}| \nabla b|^{2}\,dvd\tau\right)^{2}. \tag{3.52}\] Since the last integral is finite by [28, Lemma 25], it follows from (3.52) that \[\lim_{r\to\infty}\int_{0}^{\tau_{0}}\int_{M}\tau|\nabla b||\nabla \phi^{r}|\,dvd\tau=0. \tag{3.53}\] Combining (3.50), (3.51) and (3.53), if we let \(r\to\infty\), then \[\tau_{0}\left(\int_{M}b\,dv-\frac{n}{2}\right)=\int_{0}^{\tau_{0} }\int_{M}\left(\tau(2\Delta b-|\nabla b|^{2}+R)+b-n\right)\,dvd\tau,\] which is exactly (3.41). Notice that the last inequality in (3.41) follows from the fact that \(\mathcal{W}_{(x_{0},t_{0})}(\tau)\) is decreasing. 
Moreover, it follows from (3.41) and \(\mathcal{W}_{(x_{0},t_{0})}(0)=0\) that \(\mathcal{N}_{(x_{0},t_{0})}(0)=0\). (c): The inequality (3.42) follows exactly the same as [2, Proposition 5.2 (5.7)] and we omit the proof. **Corollary 3.20**.: _Under the same assumptions, we have_ \[\int_{M}(|\nabla b|^{2}+R)dv\leq\frac{n}{2\tau}. \tag{3.54}\] \[\int_{M}\left(b-\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2} \right)^{2}\,dv\leq n. \tag{3.55}\] Proof.: From the fact that \(\mathcal{N}_{(x_{0},t_{0})}(\tau)\geq\mathcal{W}_{(x_{0},t_{0})}(\tau)\), we conclude that \[\lim_{r\to\infty}\int_{M}(2\Delta b-|\nabla b|^{2})\phi^{r}+R\, dv\leq\frac{n}{2\tau}, \tag{3.56}\] where we have used the differential Harnack inequality [28, Theorem 21]. From integration by parts, we have \[\int_{M}(2\Delta b-|\nabla b|^{2})\phi^{r}\,dv=\int_{M}|\nabla b |^{2}\phi^{r}-2\langle\nabla b,\nabla\phi^{r}\rangle\,dv. \tag{3.57}\] In addition, we can estimate \[2\int_{M}|\nabla b||\nabla\phi^{r}|\,dv\leq\int_{M}|\nabla b|| \nabla\phi^{r}|\,dv\leq\int_{M}\epsilon|\nabla b|^{2}\phi^{r}+\epsilon^{-1} \frac{|\nabla\phi^{r}|^{2}}{\phi^{r}}\,dv \tag{3.58}\] Therefore, it follows from (2.12), (3.56), (3.57) and (3.58) that \[\int_{M}(1-\epsilon)|\nabla b|^{2}+R\,dv\leq\frac{n}{2\tau}.\] By letting \(\epsilon\searrow 0\), we obtain (3.54). Now, it follows from the Poincare inequality Theorem 3.17 and (3.54) that \[\int_{M}\left(b-\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2}\right)^{2}\,dv \leq 2\tau\int|\nabla b|^{2}\,dv\leq n\] and (3.55) is proved. **Remark 3.21**.: _From the proof of (3.54), \(\mathcal{W}\) can be rewritten as_ \[\mathcal{W}_{(x_{0},t_{0})}(\tau)=\int_{M}\left(\tau(|\nabla b|^{2}+R)+b-n \right)\,dv,\] _which agrees with the original definition of Perelman [34, Formula (3.1)]._ **Corollary 3.22**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker \((M^{n},g,f)\in\mathcal{M}(A)\), then_ \[0\geq\mathcal{N}_{(x_{0},t_{0})}(\tau)\geq\mathcal{W}_{(x_{0},t _{0})}(\tau)\geq\mu\geq-A \tag{3.59}\] _for any \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(\tau>0\). In particular, given a Ricci shrinker, the Nash entropy is always uniformly bounded._ Proof.: For fixed \((x_{0},t_{0})\) and \(\tau>0\), it follows from [28, Theorem 20] that \(b\) increases quadratically. Therefore, it is easy to see the function \(u\), defined by \(u^{2}=(4\pi\tau)^{-\frac{n}{2}}e^{-b}\), belongs to \(W_{*}^{1,2}(M)\) defined in [28, (92)]. From [28, Theorem 1], we immediately conclude that \[\mathcal{W}_{(x_{0},t_{0})}(\tau)\geq\mu(g(t_{0}-\tau),\tau)\geq \mu\geq-A.\] Following [2], we use the notation \[\mathcal{N}_{s}^{*}(x,t):=\mathcal{N}_{(x,t)}(t-s).\] Similar to [2, Theorem 5.9], we have **Theorem 3.23**.: _Let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker. Then for any \(s<t<1\), the following properties hold._ * \(\mathcal{N}_{s}^{*}\) _is a Lipschitz function with Lipschitz constant_ \(\sqrt{\frac{n}{2(t-s)}}\)_._ * _In the distribution sense, we have_ \[-\frac{n}{2(t-s)}\leq\Box N_{s}^{*}\leq 0.\] (3.60) Proof.: Without loss of generality, we assume \(s=0\) and consider \(t\in(0,1)\). We first define the following modified Nash entropy: \[\mathcal{N}^{r}=\mathcal{N}^{r}(x,t):=\int b\phi^{r}\,dv-\frac{n}{2}, \tag{3.61}\] where, as before, \(b=b_{(x,t)}(y,0)=-\frac{n}{2}\log(4\pi t)-\log H(x,t,y,0)\) and \(dv=H(x,t,y,0)\,dV_{0}(y)\). **Claim**: \(\mathcal{N}^{r}\) converges to \(\mathcal{N}_{0}^{*}\) in \(C^{0}_{\text{loc}}\) on \(M\times(0,1)\), as \(r\to\infty\). 
_Proof of Claim_: Given a spacetime compact set \(K\subset M\times(0,1)\), all constants \(C_{i}>1\) below depends only \(K\) and the Ricci shrinker. Similar to (3.44), there exists \(C_{1}>1\) such that \[\int_{d_{0}(p,y)\geq r}dv(y)\leq C_{1}e^{-\frac{r^{2}}{C_{1}}} \tag{3.62}\] for any \(r\geq C_{1}\). From the same argument leading to (3.47), we have \[|b_{(x,t)}(y,0)|\leq C_{2}(1+d_{0}^{2}(p,y)). \tag{3.63}\] Combining (3.62), (3.63) and the fact that \(\text{supp}(\phi^{r})\cap M\times\{0\}\subset\{C_{3}r\leq d_{0}^{2}(p,\cdot) \leq C_{4}r\}\), it is easy to show as (3.48) that \[\lim_{r\to\infty}\int_{M}|b_{(x,t)}(y,0)|(1-\phi^{r}(y))\,dv(y)=0 \tag{3.64}\] uniformly for \((x,t)\in K\). From (3.64), the Claim is proved. Next, for any vector \(w\in T_{x}M\) with \(|w|_{t}=1\) we compute \[\partial_{w}\mathcal{N}^{r}(x,t) =\int_{M}\left\{(\partial_{w}b)H\phi^{r}+b(\partial_{w}H)\phi^{r} \right\}dV_{0}\] \[=\int_{M}\left\{-(\partial_{w}H)\phi^{r}+b(\partial_{w}H)\phi^{ r}\right\}dV_{0}=:I+II, \tag{3.65}\] where \(H=H(x,t,y,0)\). Notice that \[\int_{M}H\phi^{r}\,dV_{0}=\int_{M}H(x,t,y,0)\phi^{r}(y)\,dV_{0}(y)\] is the heat solution starting from \(\phi^{r}\). Therefore, it follows from (3.7) and (2.12) that \[|I|\leq\left|\nabla_{x}\int_{M}H(x,t,y,0)\phi^{r}(y)\,dV_{0}(y)\right|\leq Cr ^{-\frac{1}{2}}. \tag{3.66}\] Next, we estimate \[II=\int_{M}b\frac{\partial_{w}H}{H}\phi^{r}\,dv=\int_{M}\left(b- \mathcal{N}_{0}^{*}-\frac{n}{2}\right)\frac{\partial_{w}H}{H}\phi^{r}\,dv- \left(\mathcal{N}_{0}^{*}+\frac{n}{2}\right)I.\] Therefore, we have \[|II|\leq \left(\int_{M}\left(b-\mathcal{N}_{0}^{*}-\frac{n}{2}\right)^{2}\,dv \right)^{\frac{1}{2}}\left(\int_{M}\left(\frac{\partial_{w}H}{H}\right)^{2}dv \right)^{\frac{1}{2}}+Cr^{-\frac{1}{2}}\leq\sqrt{\frac{n}{2t}}+Cr^{-\frac{1}{2 }}, \tag{3.67}\] where we have used (3.36), (3.55) and (3.66). Combining (3.65), (3.66) and (3.67), we obtain \[\left|\nabla_{x}\mathcal{N}^{r}(x,t)\right|\leq\sqrt{\frac{n}{2t}}+Cr^{-\frac {1}{2}}. \tag{3.68}\] Since \(\mathcal{N}^{r}\) converges to \(\mathcal{N}_{0}^{*}\) locally uniformly by the Claim, we immediately conclude from (3.68) that \(\mathcal{N}_{0}^{*}\) is \(\sqrt{\frac{n}{2t}}\)-Lipschitz. Next, by direct computation, we have \[\Box\mathcal{N}^{r}(x,t)=\int_{M}\left|\frac{\nabla_{x}H}{H}\right|^{2}\phi^{ r}\,dv-\frac{n}{2t}\int_{M}\phi^{r}\,dv. \tag{3.69}\] Combining (3.37), (3.69) and the Claim, it follows immediately that \[-\frac{n}{2t}\leq\Box\mathcal{N}_{0}^{*}\leq 0\] in the distribution sense. **Remark 3.24**.: _Later, we will show that the conclusions in Theorem 3.23 hold in the classical sense once we know the decay of the conjugate heat kernel; see Corollary 4.19._ As an application of Theorem 3.23, we prove the following oscillation of the Nash entropy. **Corollary 3.25**.: _For any \(x_{1},x_{2}\in M\) and \(s<t^{*}\leq t_{1},t_{2}<1\), we have_ \[\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\leq\sqrt{ \frac{n}{2(t^{*}-s)}}d_{W_{1}}^{*}\left(v_{x_{1},t_{1};x^{*}},v_{x_{2},t_{2}; r^{*}}\right)+\frac{n}{2}\log\left(\frac{t_{2}-s}{t^{*}-s}\right). \tag{3.70}\] _In particular, if \(s<t^{*}=t_{2}\leq t_{1}<1\), then_ \[\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\leq\sqrt{ \frac{n}{2(t_{2}-s)}}d_{W_{1}}^{*2}\left(v_{x_{1},t_{1};t_{2}},\delta_{x_{2}} \right). 
\tag{3.71}\] _If we further assume \((x_{2},t_{2})\) is an \(H_{n}\)-center of \((x_{1},t_{1})\), then_ \[\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\leq\sqrt{ \frac{nH_{n}(t_{1}-t_{2})}{2(t_{2}-s)}}. \tag{3.72}\] Proof.: The proof follows verbatim from [2, Corollary 5.11]. The only difference is that we consider \(\mathcal{N}^{r}\) as defined in (3.61) instead and let \(r\to\infty\) ## 4 Heat kernel estimates Throughout this section, we assume \((M^{n},g(t))_{t<1}\) is the Ricci flow associated with a Ricci shrinker in \(\mathcal{M}(A)\). First, we recall the following no-local-collapsing theorem proved in [28, Theorem 22]. **Theorem 4.1**.: _For any \(x\in M\) and \(t<1\), if \(R\leq r^{-2}\) on \(B_{t}(x,r)\), then_ \[|B_{t}(x,r)|_{t}\geq ce^{\mu}r^{n} \tag{4.1}\] _for some constant \(c=c(n)>0\)._ One can improve (4.1) by using the Nash entropy. Based on the Lipschitz property of the Nash entropy, we can follow the same proof of [2, Theorem 6.1] to obtain the following result. Notice that by (3.59), (4.2) is stronger than (4.1) **Theorem 4.2**.: _For any \(x\in M\) and \(t<1\), if \(R\leq r^{-2}\) on \(B_{t}(x,r)\), then_ \[|B_{t}(x,r)|_{t}\geq c\exp\left(\mathcal{N}_{x,t}(r^{2})\right)r^{n} \tag{4.2}\] _for some constant \(c=c(n)>0\)._ By using (3.55), we also have the following volume estimate around an \(H_{n}\)-center by following the same proof of [2, Theorem 6.2]. **Theorem 4.3**.: _For any \(x\in M\) and \(t<1\), if \((z,t-r^{2})\) is an \(H_{n}\)-center of \((x,t)\), then_ \[|B_{t-r^{2}}(z,r)|_{t-r^{2}}\geq c\exp\left(\mathcal{N}_{x,t}(r^{2})\right)r^ {n} \tag{4.3}\] _for some constant \(c=c(n)>0\) and any \(r\geq 0\)._ Next, we recall the following upper bound estimate of the heat kernel proved in [28, Theorem 15], which has already been used in the last section. **Theorem 4.4**.: _For any \(x,y\in M\) and \(s<t<1\), we have_ \[H(x,t,y,s)\leq\frac{e^{-\mu}}{(4\pi(t-s))^{\frac{n}{2}}}. \tag{4.4}\] Instead of using the entropy \(\mu\), one can include the Nash entropy and obtain the following result; see [2, Theorem 7.1]. **Theorem 4.5**.: _For any \(x,y\in M\) and \(s<t<1\), we have_ \[H(x,t,y,s)\leq\frac{C(n)}{(t-s)^{\frac{n}{2}}}\exp\left(-\mathcal{N}_{x,t}(t- s)\right). \tag{4.5}\] Proof.: The proof follows almost the same as [2, Theorem 7.1]. The main idea is to improve the bound \(Z\) of the estimate \[H(x,t,y,s)\leq\frac{Z}{(t-s)^{\frac{n}{2}}}\exp\left(-\mathcal{N}_{x,t}(t-s) \right).\] Notice that such \(Z\) always exists by (4.4) and (3.59), which may depend on the Ricci shrinker. Thanks to (3.70) and (4.3), we can follow the same argument as in [2, Theorem 7.1] to improve \(Z\) to be \(Z/2\), if \(Z\geq\tilde{Z}(n)\) With the help of Theorem 3.16, Corollary 3.25 and Theorem 4.5, we obtain the following gradient estimate of the heat kernel as [2, Theorem 7.5], which improves [28, Lemma 18]. **Theorem 4.6**.: _For any \(x,y\in M\) and \(s<t<1\), then_ \[\frac{|\nabla_{x}H|(x,t,y,s)}{H(x,t,y,s)}\leq\sqrt{\frac{C}{t-s}} \sqrt{\log\left(\frac{C\exp\left(-\mathcal{N}_{x,t}(t-s)\right)}{(t-s)^{\frac {n}{2}}H(x,t,y,s)}\right)} \tag{4.6}\] _for some constant \(C=C(n)>0\)._ With the gradient estimate (4.6), one obtains the following non-expanding estimate as [2, Theorem 8.1]. Notice that (4.7) generalizes the global volume estimate Lemma 2.4. **Theorem 4.7**.: _For any \(x\in M\), \(t<1\) and \(r\geq 0\), we have_ \[|B_{t}(x,r)|_{t}\leq C(n)\exp\left(\mathcal{N}_{x,t}(r^{2}) \right)r^{n}\leq C(n)r^{n}. 
\tag{4.7}\] Before we prove more refined heat kernel estimates, we first prove a series of lemmas. **Lemma 4.8** (Distance comparison).: _For any \(\delta\in(0,1)\), there exists a constant \(L_{1}=L_{1}(n,\delta)>1\) such that_ \[d_{t}(x,p)\leq d_{s}(x,p)+L_{1}\leq\frac{L_{1}}{\sqrt{1-t}}(d_{ t}(x,p)+1)+L_{1}^{2} \tag{4.8}\] _for any \(x\in M\) and \(-\delta^{-1}\leq s\leq t<1\)._ Proof.: From (2.6) and (2.8), we have \[-\frac{F}{1-t}\leq\partial_{t}F=-(1-t)R\leq 0.\] Therefore, for any \(x\in M\), \[\frac{1-t}{1-s}F(x,s)\leq F(x,t)\leq F(x,s). \tag{4.9}\] Consequently, (4.8) follows from the combination of Lemma 2.3 and (4.9). As an application of the distance comparison, we have the following lower bound of the heat kernel. **Theorem 4.9**.: _For any \(K>1\), \(\delta\in(0,1)\) and \(A>0\), there exists a constant \(C=C(n,K,\delta,A)>1\) satisfying the following property._ _Suppose \(-\delta^{-1}\leq s<t\leq 1-\delta\) and \(d_{t}(x,p)\leq K\), then_ \[H(x,t,y,s)\geq\frac{C^{-1}}{(t-s)^{\frac{n}{2}}}\exp\left(- \frac{d_{s}^{2}(x,y)}{C^{-1}(t-s)}\right). \tag{4.10}\] Proof.: In the proof, all constants \(C_{i}>1\) depend on \(n,K,\delta\) and \(A\). It follows from [28, Formula (203)] that \[H(x,t,y,s)\geq\frac{C_{1}^{-1}}{(t-s)^{\frac{s}{2}}}\exp\left(- \frac{d_{t}^{2}(x,y)}{3(t-s)}-\frac{4(t-s)}{3(1-t)^{2}}F(y,t)\right). \tag{4.11}\] From (4.8), we have \[d_{t}^{2}(x,y)\leq 2(d_{t}^{2}(x,p)+d_{t}^{2}(p,y))\leq C_{2}(d_{s}^{2}(x,p) +d_{s}^{2}(p,y)+1)\leq C_{3}(d_{s}^{2}(x,y)+1), \tag{4.12}\] where we have used \(d_{s}(x,p)\leq C(d_{t}(x,p)+1)\leq C(K+1)\) by (4.8). In addition, since \(F\) is decreasing with respect to \(t\), \[F(y,t)\leq F(y,s)\leq C_{4}(d_{s}^{2}(p,y)+1)\leq C_{5}(d_{s}^{2} (x,y)+1), \tag{4.13}\] by Lemma 2.3. Combining (4.11), (4.12) and (4.13), it is easy to see (4.10) holds for some \(C\). **Lemma 4.10**.: _For any \(K>1\), \(\delta\in(0,1)\) and \(A>0\), there exist constants \(L_{2}=L_{2}(n,K,\delta,A)>1\) and \(L_{3}=L_{3}(n,\delta,A)>1\) satisfying the following property._ _Suppose \(-\delta^{-1}\leq s<t\leq 1-\delta\) and \(d_{t}(p,x)\leq K\), then for any \(H_{n}\)-center \((z,s)\) of \((x,t)\), we have_ \[d_{s}(x,z)\leq L_{2}\sqrt{t-s} \tag{4.14}\] _and_ \[d_{s}(z,p)\leq d_{t}(x,p)+L_{3}\sqrt{t-s}. \tag{4.15}\] Proof.: Since \(d_{t}(p,x)\leq K\), it follows from [28, Theorem 19] that \[v_{x,r,s}\left(M\setminus B_{s}(x,r\sqrt{t-s})\right)\leq C_{2} \exp\left(-\frac{r^{2}}{5}\right) \tag{4.16}\] for any \(r\geq 1\) and \(C_{2}=C_{2}(n,K,\delta,A)>0\). On the other hand, by Proposition 3.14, we have \[v_{x,r,s}\left(M\setminus B_{s}(z,r\sqrt{t-s})\right)\leq C(n) \exp\left(-\frac{r^{2}}{5}\right) \tag{4.17}\] for any \(r\geq 0\). Combining (4.16) and (4.17), (4.14) follows immediately. If we assume \((z^{\prime},s)\) to be an \(H_{n}\)-center of \((p,t)\), then (4.14) indicates that \[d_{s}(z^{\prime},p)\leq C_{3}(n,\delta,A)\sqrt{t-s}. \tag{4.18}\] Then it follows from Proposition 3.7 and (3.29) that \[d_{s}(z,p)\leq d_{s}(z,z^{\prime})+d_{s}(z^{\prime},p)\] \[\leq d_{W_{1}}^{s}(\delta_{z},\delta_{z^{\prime}})+C_{3}\sqrt{t-s}\] \[\leq d_{W_{1}}^{s}(v_{x,r,s},v_{p,r,s})+d_{W_{1}}^{s}(\delta_{z},v_ {x,r,s})+d_{W_{1}}^{s}(\delta_{z^{\prime}},v_{p,r,s})+C_{3}\sqrt{t-s}\] \[\leq d_{t}(x,p)+2\sqrt{H_{n}(t-s)}+C_{3}\sqrt{t-s}.\] Therefore, (4.15) holds for \(L_{3}=C_{3}+2\sqrt{H_{n}}\). Next, we prove the following rough heat kernel estimate. 
**Proposition 4.11**.: _For any \(K>1\), \(\delta\in(0,1)\) and \(A>0\), there exists a constant \(L_{4}=L_{4}(n,K,\delta,A)>1\) satisfying the following property._ _Suppose \(-\delta^{-1}\leq s<t\leq 1-\delta\) and \(d_{s}(x,p)+d_{s}(y,p)\leq K\), then_ \[H(x,t,y,s)\leq\frac{L_{4}}{(t-s)^{\frac{n}{2}}}\exp\left(-\frac{d_{s}^{2}(x,y )}{L_{4}(t-s)}\right). \tag{4.19}\] Proof.: Without loss of generality, we assume \(s=0\). In the proof, all constants \(C_{i}\) depend on \(n,K,\delta\) and \(A\). Given \(0<t\leq 1-\delta\) and \(x,y\in M\) with \(d_{0}(x,p)+d_{0}(y,p)\leq K\), we set \(d:=d_{0}(x,y)\). It follows from Lemma 4.8 that \[d_{l}(x,p)+d_{l}(y,p)\leq C_{1} \tag{4.20}\] for any \(l\in[0,t]\). Therefore, it follows from the local distance distortion estimate [28, Theorem 18] that there exists \(C_{2}>1\) such that if \(d\geq C_{2}\sqrt{t}\), \[C_{2}^{-1}d\leq d_{l}(x,y)\leq C_{2}d \tag{4.21}\] for any \(l\in[0,t]\). Notice that if \(d\leq C_{2}\sqrt{t}\), (4.19) follows immediately from (4.4). Consequently, we may assume \(d\geq C_{2}\sqrt{t}\) and hence (4.21) holds. For any \(l\in[0,t/2]\), we apply [28, Theorem 14] for sets \(B_{l}(x,\sqrt{t})\), \(B_{l}(y,\sqrt{t})\) and parameter \(\sigma=1\) to obtain \[v_{x,t,l}\left(B_{l}(x,\sqrt{t})\right)v_{x,t,l}\left(B_{l}(y, \sqrt{t})\right)\leq\exp\left(-\frac{(d_{l}(x,y)-2\sqrt{t})_{+}^{2}}{16t} \right)\leq C_{3}\exp\left(-\frac{d^{2}}{C_{3}t}\right) \tag{4.22}\] for some \(C_{3}>1\), where we have used (4.21). In addition, for any \(l\in[0,t/2]\) and \(d_{l}(x,z)\leq\sqrt{t}\), it follows from [28, Theorem 18] that \(d_{l}(x,z)\leq C_{4}\sqrt{t}\). Therefore, it follows from [28, Theorem 17] that \[H(x,t,z,l)\geq C_{5}^{-1}t^{-\frac{n}{2}}\] and hence \[v_{x,t,l}\left(B_{l}(x,\sqrt{t})\right)\geq C_{5}^{-1}t^{-\frac{ n}{2}}|B_{l}(x,\sqrt{t})|\geq C_{6}^{-1}, \tag{4.23}\] where we have used the fact that \(R\) is bounded on \(B_{l}(x,\sqrt{t})\) and the no-local-collapsing Theorem 4.1. Combining (4.22) and (4.23), we obtain for any \(l\in[0,t/2]\) that \[\int_{B_{l}(y,\sqrt{t})}H(x,t,z,l)\,dV_{l}(z)\leq C_{7}\exp\left(- \frac{d^{2}}{C_{3}t}\right). \tag{4.24}\] In light of (4.4), for any \(l\in[0,t/2]\), the above inequality implies that \[\int_{B_{l}(y,\sqrt{t})}H^{2}(x,t,z,l)\,dV_{l}(z)\leq\frac{C_{8}}{t ^{\frac{6}{2}}}\exp\left(-\frac{d^{2}}{C_{3}t}\right). \tag{4.25}\] Integrating \(l\) from \(0\) to \(t/2\), we have \[\int_{0}^{\frac{t}{2}}\int_{B_{l}(y,\sqrt{t})}H^{2}(x,t,z,l)\,dV_{ l}(z)ds\leq\frac{C_{9}}{t^{\frac{6}{2}-1}}\exp\left(-\frac{d^{2}}{C_{3}t} \right). \tag{4.26}\] Consequently, the desired heat kernel estimate (4.19) follows from (4.26) and a parabolic mean value inequality [6, Lemma 4.2]. Here, [6, Lemma 4.2] can be applied in our setting since the key ingredient is the existence of a nice local cutoff function, which is constructed in [6, Theorem 1.3] (see also Proposition 5.12). Once the existence of the local cutoff function is guaranteed, one can follow verbatim the proof of [6, Lemma 4.2] to obtain the mean value inequality. We immediately obtain the following result by combining Lemma 4.10 and Proposition 4.11. 
**Corollary 4.12**.: _For any \(K>1\), \(\delta\in(0,1)\) and \(A>0\), there exists a constant \(L_{5}=L_{5}(n,K,\delta,A)>1\) satisfying the following property._ _Suppose \(-\delta^{-1}\leq s<t\leq 1-\delta\) and \(d_{t}(x,p)+d_{s}(y,p)\leq K\), then for any \(H_{n}\)-center \((z,s)\) of \((x,t)\), we have_ \[H(x,t,y,s)\leq\frac{L_{5}}{(t-s)^{\frac{6}{2}}}\exp\left(-\frac{ d_{s}^{2}(z,y)}{L_{5}(t-s)}\right). \tag{4.27}\] Proof.: From Lemma 2.3 and (4.8), we have \(d_{s}(x,p)+d_{s}(y,p)\leq C\) for some \(C=C(n,K,\delta)>0\). Then (4.27) follows from (4.14) and (4.19). Next, we prove the following technical result. **Lemma 4.13**.: _There exists a positive constant \(\bar{Q}=\bar{Q}(n)>0\) satisfying the following property._ _Suppose \(x,y\in M\), \(T\in(0,1)\) and there exists an \(H_{n}\)-center \((z,0)\) of \((x,T)\) such that_ \[H(x,T,y,0)\geq Q\frac{\exp\left(-N_{0}^{*}(x,T)\right)}{T^{\frac {6}{2}}}\exp\left(-\frac{d_{0}^{2}(z,y)}{QT}\right) \tag{4.28}\] _for some \(Q\geq\bar{Q}\). Then for any \(H_{n}\)-center \((z^{\prime},T_{1})\) of \((x,T)\), there exist a point \(x_{1}\in M\) and an \(H_{n}\)-center \((z_{1},0)\) of \((x_{1},T_{1})\) such that_ \[d_{T_{1}}(x_{1},z^{\prime})\leq\frac{10}{\sqrt{\bar{Q}}}d_{0}(z,y) \tag{4.29}\] _and_ \[H(x_{1},T_{1},y,0)\geq Q_{1}\frac{\exp\left(-N_{0}^{*}(x_{1},T_{1 })\right)}{T_{1}^{\frac{6}{2}}}\exp\left(-\frac{d_{0}^{2}(z_{1},y)}{Q_{1}T_{1} }\right), \tag{4.30}\] _where \(T_{1}=T/8\) and \(Q_{1}=2Q\)._ Proof.: In the proof, all constants \(C_{i}>1\) depend only on \(n\). We set \[d:=d_{0}(z,y),\quad a:=H(x,T,y,0),\quad v_{t}:=v_{x,T,t},\quad\text{and}\quad V:= \left\{w\in M\mid H(w,T_{1},y,0)\geq\frac{a}{2}\right\}.\] Notice that by (4.28) and (4.5), we have \[C_{1}\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)}{T^{\frac{n}{2}}}\geq a \geq\sqrt{Q}\cdot\sqrt{Q}\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)}{T^ {\frac{n}{2}}}\exp\left(-\frac{d^{2}}{QT}\right).\] Thus if \(\bar{Q}\) is sufficiently large, we have \(Q>\bar{Q}>C_{1}^{2}\) and derive from the above inequality that \[\frac{d^{2}}{T}\geq\frac{Q\log Q}{2}. \tag{4.31}\] It follows from the semigroup property (3.1) that \[a= \int_{M}H(w,T_{1},y,0)\,dv_{T_{1}}(w)\] \[= \int_{M\setminus V}H(w,T_{1},y,0)\,dv_{T_{1}}(w)+\int_{V}H(w,T_{ 1},y,0)\,dv_{T_{1}}(w)\] \[\leq \frac{a}{2}v_{T_{1}}(M\setminus V)+\int_{V}H(w,T_{1},y,0)\,dv_{ T_{1}}(w)\] \[\leq \frac{a}{2}+C_{1}T_{1}^{-\frac{n}{2}}\int_{V}\exp\left(-\mathcal{ N}_{0}^{*}(w,T_{1})\right)\,dv_{T_{1}}(w), \tag{4.32}\] where we have used (4.5) for the last inequality. Moreover, it follows from (3.72) and the Lipschitz property of \(\mathcal{N}_{0}^{*}\) that \[-\mathcal{N}_{0}^{*}(z^{\prime},T_{1})\leq-\mathcal{N}_{0}^{*}(x,T)+C_{2}\, \sqrt{\frac{T-T_{1}}{T_{1}}}\leq-\mathcal{N}_{0}^{*}(x,T)+C_{3} \tag{4.33}\] Figure 1: Find a new point with improved lower bound \[-\mathcal{N}_{0}^{*}(w,T_{1})\leq-\mathcal{N}_{0}^{*}(z^{\prime},T_{1})+\sqrt{ \frac{n}{2T_{1}}}d_{T_{1}}(w,z^{\prime})\leq-\mathcal{N}_{0}^{*}(x,T)+C_{4}T^{- \frac{1}{2}}d_{T_{1}}(w,z^{\prime})+C_{4}. \tag{4.34}\] Now, we define \(B:=B_{T_{1}}(z^{\prime},10Q^{-\frac{1}{2}}d)\). 
Then it follows from (4.34)that \[\int_{V}\exp\left(-\mathcal{N}_{0}^{*}(w,T_{1})\right)\,dv_{T_{1} }(w)\] \[\leq e^{C_{4}}\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)\int_{V}e^{ C_{4}T^{-\frac{1}{2}}d_{T_{1}}(w,z^{\prime})}\,dv_{T_{1}}(w)\] \[\leq e^{C_{4}}\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)\left(e^{10 C_{4}T^{-\frac{1}{2}}Q^{-\frac{1}{2}}d}v_{T_{1}}(V\cap B)+\int_{M\setminus B}e^{C_{4}T^{- \frac{1}{2}}d_{T_{1}}(w,z^{\prime})}\,dv_{T_{1}}(w)\right). \tag{4.35}\] For a small constant \(\beta>0\) to be determined later, it follows from Proposition 3.14 that \[\int_{M\setminus B}e^{C_{4}T^{-\frac{1}{2}}d_{T_{1}}(w,z^{\prime} )}\,dv_{T_{1}}(w)\] \[= \sum_{k=1}^{\infty}\int_{2^{k-1}(10Q^{-\frac{1}{2}}d)\leq d_{T_{ 1}}(w,z^{\prime})\leq 2^{k}(10Q^{-\frac{1}{2}}d)}e^{C_{4}T^{-\frac{1}{2}}d_{T_{1}}(w,z^{ \prime})}\,dv_{T_{1}}(w)\] \[\leq \sum_{k=1}^{\infty}e^{C_{4}2^{k}T^{-\frac{1}{2}}10Q^{-\frac{1}{2} }d}\int_{d_{T_{1}}(w,z^{\prime})\geq 2^{k-1}(10Q^{-\frac{1}{2}}d)}\,dv_{T_{1}}(w)\] \[\leq C_{5}\sum_{k=1}^{\infty}\exp\left(C_{4}2^{k}T^{-\frac{1}{2}}10Q^ {-\frac{1}{2}}d-\frac{(2^{k-1}10Q^{-\frac{1}{2}}d)^{2}}{5(T-T_{1})}\right)\] \[\leq C_{5}\sum_{k=1}^{\infty}\exp\left(-\frac{(2^{k-1}10Q^{-\frac{1}{2 }}d)^{2}}{5T}+C_{6}\right)\leq C_{7}\exp\left(-\frac{20d^{2}}{QT}\right), \tag{4.36}\] where we have used the fact that \(\exp\left(-\frac{20d^{2}}{QT}\right)\leq Q^{-10}\ll 1\) by (4.31). Combining (4.28), (4.32), (4.35) and (4.36), we have \[QT^{-\frac{n}{2}}\exp\left(-\frac{d^{2}}{QT}\right)\leq C_{8}T_{1}^{-\frac{n} {2}}\left(e^{10C_{4}T^{-\frac{1}{2}}Q^{-\frac{1}{2}}d}v_{T_{1}}(V\cap B)+\exp \left(-\frac{20d^{2}}{QT}\right)\right). \tag{4.37}\] Since \(Q\) is large, by (4.31) we have \[Q\exp\left(\frac{19d^{2}}{QT}\right)\geq Q^{\frac{21}{2}}\geq 2C_{8}8^{\frac{n} {2}}.\] Then it is not hard to see from (4.37) that \(v_{T_{1}}(V\cap B)>0\). Thus there exists a point \(x_{1}\in V\cap B\) which satisfies (4.29). Then we take an \(H_{n}\)-center \((z_{1},0)\) of \((x_{1},T_{1})\). The point selecting process is illustrated in Figure 1. It follows from Proposition 3.7 and (3.29) that \[d_{0}(z,z_{1})= d_{W_{1}}^{0}(\delta_{z},\delta_{z_{1}})\] \[\leq d_{W_{1}}^{0}(v_{x,T;0},v_{x_{1},T;1;0})+d_{W_{1}}^{0}(\delta_{z},v _{x,T;0})+d_{W_{1}}^{0}(\delta_{z_{1}},v_{x_{1},T;0})\] \[\leq d_{W_{1}}^{T_{1}}(v_{x,T;T_{1}},\delta_{x_{1}})+\sqrt{H_{n}T}+ \sqrt{H_{n}T_{1}}\] \[\leq d_{W_{1}}^{T_{1}}(v_{x,T;T_{1}},\delta_{z^{\prime}})+d_{T_{1}}(z^ {\prime},x_{1})+\sqrt{H_{n}T}+\sqrt{H_{n}T_{1}}\] \[\leq \sqrt{H_{n}(T-T_{1})}+\sqrt{H_{n}T}+\sqrt{H_{n}T_{1}}+10Q^{-\frac {1}{2}}d\] \[\leq 3\sqrt{H_{n}T}+10Q^{-\frac{1}{2}}d. \tag{4.38}\] Therefore, we conclude \[d_{0}(z_{1},y)\geq d-d_{0}(z,z_{1})\geq(1-10Q^{-\frac{1}{2}})d-3 \sqrt{H_{n}T} \tag{4.39}\] and hence \[d_{0}^{2}(z_{1},y)\geq\frac{(1-10Q^{-\frac{1}{2}})^{2}}{2}d^{2}- 9H_{n}T. \tag{4.40}\] Since \(x_{1}\in V\), from the definition of \(V\) and (4.28) we have \[H(x_{1},T_{1},y,0)\geq\frac{a}{2}\geq Q\frac{\exp\left(-\mathcal{ N}_{0}^{*}(x,T)\right)}{2T^{\frac{a}{2}}}\exp\left(-\frac{d^{2}}{QT}\right), \tag{4.41}\] which enables us to claim \[Q\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)}{2T^{\frac{a}{2 }}}\exp\left(-\frac{d^{2}}{QT}\right)\geq Q_{1}\frac{\exp\left(-\mathcal{N}_{ 0}^{*}(x_{1},T_{1})\right)}{T_{1}^{\frac{a}{2}}}\exp\left(-\frac{d_{0}^{2}(z_{ 1},y)}{Q_{1}T_{1}}\right). \tag{4.42}\] Indeed, it follows from (4.34) that \[-\mathcal{N}_{0}^{*}(x_{1},T_{1})\leq-\mathcal{N}_{0}^{*}(x,T)+ C_{9}(1+d(QT)^{-\frac{1}{2}}). 
\tag{4.43}\] On the other hand, by (4.40) we have \[\exp\left(\frac{d_{0}^{2}(z_{1},y)}{Q_{1}T_{1}}-\frac{d^{2}}{QT}- C_{9}\sqrt{\frac{d^{2}}{QT}}-C_{9}\right)\] \[\geq \exp\left(\frac{1}{QT}\left(2(1-10Q^{-\frac{1}{2}})^{2}-0.9\right) d^{2}-\frac{36H_{n}}{Q}-C_{10}\right)\] \[\geq \exp\left(\frac{d^{2}}{QT}-\frac{36H_{n}}{Q}-C_{10}\right)\geq \sqrt{Q}\exp\left(-\frac{36H_{n}}{Q}-C_{10}\right)\geq 4\cdot 8^{\frac{a}{2}}, \tag{4.44}\] where we have used (4.31) for the last inequality. As \(\bar{Q}\) is sufficiently large, it is clear that (4.42) follows from the combination of (4.43) and (4.44). Consequently, we obtain (4.30). **Proposition 4.14**.: _For any \(x,y\in M\) and \(t\in(0,1)\),_ \[H(x,t,y,0)\leq\bar{Q}\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,t)\right)}{t^{\frac{ n}{2}}}\exp\left(-\frac{d_{0}^{2}(z,y)}{\bar{Q}t}\right) \tag{4.45}\] _where \((z,0)\) is any \(H_{n}\)-center of \((x,t)\) and \(\bar{Q}\) is the same constant in Proposition 4.13._ Proof.: Suppose otherwise, there exist \(x,y\in M\), \(T\in(0,1)\) and an \(H_{n}\)-center \((z,0)\) of \((x,T)\) such that \[H(x,T,y,0)\geq\bar{Q}\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)}{T^{ \frac{n}{2}}}\exp\left(-\frac{d_{0}^{2}(z,y)}{\bar{Q}T}\right). \tag{4.46}\] Now, we define \(Q_{k}=2^{k}\bar{Q}\) and \(T_{k}:=8^{-k}T\) for \(k\in\mathbb{N}\). If we set \(x_{0}=x\) and \(z_{0}=z\), then we claim there are sequences \(x_{k},z_{k}^{\prime}\) and \(z_{k}\) satisfying 1. \((z_{k}^{\prime},T_{k})\) is an \(H_{n}\)-center of \((x_{k-1},T_{k-1})\). 2. \((z_{k},0)\) is an \(H_{n}\)-center of \((x_{k},T_{k})\). 3. \(d_{T_{k}}(x_{k},z_{k}^{\prime})\leq 10Q_{k-1}^{-\frac{1}{2}}d_{0}(z_{k-1},y)\). 4. \(d_{0}(z_{k},z_{k-1})\leq 3\sqrt{H_{n}T_{k-1}}+10Q_{k-1}^{-\frac{1}{2}}d_{0}(z_{k -1},y)\). 5. We have the heat kernel estimate \[H(x_{k},T_{k},y,0)\geq Q_{k}\frac{\exp\left(-\mathcal{N}_{0}^{*}(x_{k},T_{k}) \right)}{T_{k}^{\frac{n}{2}}}\exp\left(-\frac{d_{0}^{2}(z_{k},y)}{Q_{k}T_{k}} \right).\] (4.47) The existence of \(x_{k},z_{k}^{\prime}\) and \(z_{k}\) satisfying (a)-(e) is obtained by Lemma 4.13 and an inductive argument. Notice that (d) is guaranteed by (4.38). **Claim**: \(b_{k}:=d_{T_{k}}(x_{k},p)\) is uniformly bounded. _Proof of the Claim_: We set \(d_{k}:=d_{0}(z_{k},y)\) for \(k\in\mathbb{N}\). It follows from (d) that \[d_{k}\leq d_{k-1}+d_{0}(z_{k},z_{k-1})\leq\left(1+10Q_{k-1}^{-\frac{1}{2}} \right)d_{k-1}+3\sqrt{H_{n}T_{k-1}}. \tag{4.48}\] Therefore, it is easy to derive from (4.48) and the definitions of \(Q_{k}\) and \(T_{k}\) that \[d_{k}\leq K_{1}<\infty \tag{4.49}\] for some constant \(K_{1}\) depending on \(d_{0}(z,y),T,\bar{Q}\) and \(n\). From (c) and (4.49), we have \[d_{T_{k}}(x_{k},z_{k}^{\prime})\leq 10Q_{k-1}^{-\frac{1}{2}}d_{k-1}\leq 10K_{1} Q_{k-1}^{-\frac{1}{2}}. \tag{4.50}\] Moreover, since \((z_{k}^{\prime},T_{k})\) is an \(H_{n}\)-center of \((x_{k-1},T_{k-1})\), it follows from (4.15) that \[d_{T_{k}}(z_{k}^{\prime},p)\leq d_{T_{k-1}}(x_{k-1},p)+L_{3}\sqrt{T_{k-1}-T_{k }}\leq b_{k-1}+L_{3}T_{k-1}^{\frac{1}{2}}, \tag{4.51}\] where \(L_{3}=L_{3}(n,\delta,A)>0\) for some fixed constant \(\delta\in(0,1)\) with \(T\leq 1-\delta\). Combining (4.50) and (4.51), we obtain \[b_{k}\leq b_{k-1}+10K_{1}Q_{k-1}^{-\frac{1}{2}}+L_{3}T_{k-1}^{\frac{1}{2}}. \tag{4.52}\] From (4.52), it is clear that \(b_{k}\) is uniformly bounded, and the Claim is proved. Thanks to the Claim, we can apply Corollary 4.12 to obtain an upper bound of heat kernel, which contradicts the lower bound (4.47) when \(k\) is sufficiently large. 
Now, we state the main theorem of this section regarding the heat kernel upper bound, which generalizes and slightly improves [2, Theorem 7.2]. **Theorem 4.15**.: \((M^{n},g(t))_{t<1}\) _is the Ricci flow associated with a Ricci shrinker. For any \(\epsilon>0\), there exists a constant \(C=C(n,\epsilon)>0\) such that_ \[H(x,t,y,s)\leq\frac{C\exp\left(-\mathcal{N}_{(x,t)}(t-s)\right)}{(t-s)^{\frac {n}{2}}}\exp\left(-\frac{d_{s}^{2}(z,y)}{(4+\epsilon)(t-s)}\right), \tag{4.53}\] _for any \(s<t<1\) and any \(H_{n}\)-center \((z,s)\) of \((x,t)\)._ Proof.: Without loss of generality, we assume \(s=0\). The proof is a modification of the proof of Lemma 4.13 and all constants \(C_{i}>1\) depend on \(n\) and \(\epsilon\). Suppose otherwise, there exist \(x,y\in M\), \(T\in(0,1)\), \(\epsilon>0\) and an \(H_{n}\)-center \((z,0)\) of \((x,T)\) such that \[H(x,T,y,0)\geq Q\frac{\exp\left(-\mathcal{N}_{0}^{*}(x,T)\right)}{T^{\frac{n }{2}}}\exp\left(-\frac{d_{0}^{2}(z,y)}{(4+\epsilon)T}\right), \tag{4.54}\] where \(Q\) is a large constant determined later. We also set \(\theta\in(0,1)\) as a small parameter and \(\theta_{1}^{3}=\theta\). Define \[d :=d_{0}(z,y),\quad a:=H(x,T,y,0),\quad v_{t}:=v_{x,T,t},\quad T_{ \theta}:=\theta T,\] \[V :=\left\{w\in M\mid H(w,T_{\theta},y,0)\geq\frac{a}{2}\right\}.\] From (4.54) and (4.5), we have \[\exp\left(\frac{d^{2}}{T}\right)\geq\left\{C_{1}^{-1}Q\right\}^{4+\epsilon}. \tag{4.55}\] Now, we assume \((z^{\prime},T_{\theta})\) is an \(H_{n}\)-center of \((x,T)\) and set \(B:=B_{T_{\theta}}(z^{\prime},(1-\theta_{1})d)\). Similar to (4.34), we have \[-\mathcal{N}_{0}^{*}(w,T_{\theta})\leq-\mathcal{N}_{0}^{*}(x,T)+C_{2}\theta^{ -\frac{1}{2}}T^{-\frac{1}{2}}d_{T_{\theta}}(w,z^{\prime})+C_{2}\theta^{-\frac {1}{2}}. \tag{4.56}\] By the same argument as (4.36), we apply Proposition 3.14 for \(\epsilon/4\) to obtain if \(\theta<\bar{\theta}(\epsilon)\), \[\int_{M\setminus B}e^{C_{2}\theta^{-\frac{1}{2}}T^{-\frac{1}{2}}d \tau_{\theta}(w,z^{\prime})}\,dv_{\tau_{\theta}}(w)\] \[\leq C_{3}\sum_{k=1}^{\infty}\exp\left(C_{2}2^{k}\theta^{-\frac{1}{2}} T^{-\frac{1}{2}}(1-\theta_{1})d-\frac{(2^{k-1}(1-\theta_{1})d)^{2}}{(4+\epsilon/4)(1 -\theta)T}\right)\] \[\leq C_{3}\sum_{k=1}^{\infty}\exp\left(-\frac{(2^{k-1}(1-\theta_{1})d) ^{2}}{(4+\epsilon/3)(1-\theta)T}+C_{4}\theta^{-1}\right)\] \[\leq C_{5}\exp\left(-\frac{((1-\theta_{1})d)^{2}}{(4+\epsilon/3)(1- \theta)T}+C_{4}\theta^{-1}\right). \tag{4.57}\] Similar to (4.37), we obtain \[QT^{-\frac{\theta}{2}}\exp\left(-\frac{d^{2}}{(4+\epsilon)T} \right)\leq C_{6}T_{\theta}^{-\frac{\theta}{2}}\left(e^{C_{2}\theta^{-\frac{1} {2}}T^{-\frac{1}{2}}(1-\theta_{1})d}v_{\tau_{\theta}}(V\cap B)+e^{C_{6}\theta^ {-1}}\exp\left(-\frac{((1-\theta_{1})d)^{2}}{(4+\epsilon/3)(1-\theta)T}\right) \right). \tag{4.58}\] We claim that \(v_{T_{\theta}}(V\cap B)>0\). Indeed, it follows from (4.55) that \[Q\exp\left(\left(\frac{(1-\theta_{1})^{2}}{(4+\epsilon/3)(1- \theta)}-\frac{1}{(4+\epsilon)}\right)\frac{d^{2}}{T}\right)\geq Q\exp\left( \frac{c(\epsilon)d^{2}}{T}\right)\geq Q^{1+c(\epsilon)}C_{1}^{-c(\epsilon)} \geq 2C_{6}\theta^{-\frac{\theta}{2}}e^{C_{6}\theta^{-1}},\] where \(c(\epsilon)>0\) depends only on \(\epsilon>0\) and we choose \(\theta<\bar{\theta}(\epsilon)\) and \(Q\) sufficiently large. Therefore, the claim follows from (4.58). We choose a point \(x_{1}\in V\cap B\) and an \(H_{n}\)-center \((z_{1},0)\) of \((x_{1},T_{\theta})\). 
Similar to (4.38), we have \[d_{0}(z,z_{1})\leq 3\sqrt{H_{n}T}+(1-\theta_{1})d \tag{4.59}\] and hence \[d_{0}(z_{1},y)\geq\theta_{1}d-3\sqrt{H_{n}T}. \tag{4.60}\] Moreover, as (4.43), we have by (4.56), \[-\mathcal{N}_{0}^{*}(x_{1},T_{\theta})\leq-\mathcal{N}_{0}^{*}(x,T)+C_{2}\theta^{-\frac{1}{2}}(T^{-\frac{1}{2}}d+1). \tag{4.61}\] Now, by virtue of Proposition 4.14 and the definition of \(V\), we have \[\bar{Q}\exp\left(-\frac{d_{0}^{2}(z_{1},y)}{\bar{Q}T_{\theta}} \right)\geq H(x_{1},T_{\theta},y,0)\cdot T^{\frac{n}{2}}\cdot\exp\left( \mathcal{N}_{0}^{*}(x_{1},T_{\theta})\right)\geq\frac{1}{2}Q\exp\left(-\frac{ d^{2}}{(4+\epsilon)T}\right). \tag{4.62}\] Since \(d_{0}^{2}(z_{1},y)\geq\theta_{1}^{2}d^{2}/2-9H_{n}T\) from (4.60), it follows from (4.61) and (4.62) that \[Q\exp\left(\left(\frac{1}{2\theta_{1}\bar{Q}}-\frac{1}{(4+ \epsilon)}-1\right)\frac{d^{2}}{T}-C_{7}\theta^{-1}\right)\] \[\leq Q\exp\left(\left(\frac{1}{2\theta_{1}\bar{Q}}-\frac{1}{(4+ \epsilon)}\right)\frac{d^{2}}{T}-C_{2}\theta^{-\frac{1}{2}}T^{-\frac{1}{2}}d \right)\leq 2\bar{Q}\theta^{-\frac{\theta}{2}}\exp\left(\frac{9H_{n}}{\bar{Q} \theta}\right), \tag{4.63}\] provided that \(\theta\leq\bar{\theta}(\epsilon,\bar{Q})\). However, (4.63) is impossible by (4.55) if \(Q\) is sufficiently large. In sum, we obtain a contradiction and (4.53) holds. Combining Lemma 4.10 and Theorem 4.15, we have the following estimate, which improves [28, Theorem 20]. **Theorem 4.16**.: _For any \(K>1\), \(\delta\in(0,1)\) and \(A>0\), there exists a constant \(C=C(n,K,\delta,A)>1\) satisfying the following property._ _Suppose \(-\delta^{-1}\leq s<t\leq 1-\delta\) and \(d_{t}(x,p)\leq K\), then_ \[H(x,t,y,s)\leq\frac{C}{(t-s)^{\frac{\alpha}{2}}}\exp\left(-\frac{d_{s}^{2}(x, y)}{C(t-s)}\right). \tag{4.64}\] **Remark 4.17**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\), if we set \(H(x_{0},t_{0},y,s)=(4\pi(t_{0}-s))^{-\frac{\alpha}{2}}e^{-b(y,s)}\), then it follows from Theorem 4.9 and Theorem 4.16 that \(b(y,s)\) increases quadratically._ Combining (4.64) and the standard regularity theory of the parabolic equation (cf. [20]), we have the following derivative estimate of higher orders. **Corollary 4.18**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(s_{0}<t_{0}\), there exists a small parabolic neighborhood \(P=B_{t_{0}}(x_{0},r)\times[t_{0}-r^{2},t_{0}+r^{2}]\) such that for any \(m_{1},m_{2}\in\mathbb{N}\)_ \[|\partial_{t}^{m_{1}}\nabla_{x}^{m_{2}}H(x,t,y,s_{0})|\leq\frac{1}{r^{2m_{1}+ m_{2}}}\cdot\frac{Q}{(t_{0}-s_{0})^{\frac{\alpha}{2}}}\cdot\exp\left(-\frac{d_{s_ {0}}^{2}(x_{0},y)}{Q(t_{0}-s_{0})}\right) \tag{4.65}\] _for some constant \(Q>1\) and any \((x,t)\in P\) and \(y\in M\)._ Note that when \((y,s_{0})\) is fixed, \(H(x,t,y,s_{0})\) is a heat solution. The scale \(r\) in the above Corollary is small constant much less than the curvature radius at \((x_{0},t_{0})\). Then inequality (4.65) can be obtained by dominated convergence theorem. It indicates that one can take differentiation under the integral sign if the integrand involves the heat kernel in many cases. As an application, we can follow the same proof as in Theorem 3.23 to estimate \(|\nabla\mathcal{N}_{s}^{*}|\) and \(\Box\mathcal{N}_{s}^{*}\) without using \(\phi^{r}\). 
Therefore, one obtains **Corollary 4.19**.: _The Nash entropy \(\mathcal{N}_{s}^{*}(x,t)\) is smooth on \(M\times(s,1)\) satisfying_ \[|\nabla\mathcal{N}_{s}^{*}|\leq\sqrt{\frac{n}{2(t-s)}}\quad\text{and}\quad- \frac{n}{2(t-s)}\leq\Box\mathcal{N}_{s}^{*}\leq 0\] _in the classical sense._ We end this section by proving the following hypercontractivity; see [2, Theorem 12.1]. **Theorem 4.20**.: _Suppose that \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(0<\tau_{1}<\tau_{2}\). Let \(u\in C^{2}(M\times[t_{0}-\tau_{2},t_{0}-\tau_{1}])\) be a nonnegative function satisfying \(\Box u\leq 0\) and having at most polynomial spatial growth in the sense that_ \[|u(x,t)|\leq m(d_{t}^{m}(p,x)+1) \tag{4.66}\] _for some \(m\in\mathbb{N}\). If \(1<q_{0}\leq p_{0}<\infty\) with_ \[\frac{\tau_{2}}{\tau_{1}}\geq\frac{p_{0}-1}{q_{0}-1},\] _then for \(dv_{t}:=dv_{x_{0},t_{0};r}\),_ \[\left(\int_{M}u^{p_{0}}dv_{t_{0}-\tau_{1}}\right)^{1/p_{0}}\leq\left(\int_{M}u ^{q_{0}}dv_{t_{0}-\tau_{2}}\right)^{1/q_{0}}. \tag{4.67}\] Proof.: Without loss of generality, we assume \(t_{0}=0\). We set \(p=p(t)=1+\tau_{2}(q_{0}-1)|t|^{-1}\) for \(t<0\). Notice that \(p(-\tau_{2})=q_{0}\) and \(p(-\tau_{1})\geq p_{0}\) by our assumption. When \(t<0\), direct calculation shows that \[\partial_{t}\int_{M}u^{p}\phi^{r}\,dv_{t}-\int_{M}u^{p}\Box\phi^{r }dv_{t}\] \[= \int_{M}\left(\dot{p}u^{p}\log u+pu^{p-1}\Box u-p(p-1)|\nabla u|^{ 2}u^{p-2}\right)\phi^{r}-2\langle\nabla u^{p},\nabla\phi^{r}\rangle\,dv_{t}\] \[\leq \frac{\dot{p}}{p}\int_{M}\phi^{r}u^{p}\log u^{p}\,dv_{t}-\frac{p- 1}{p}\int_{M}\frac{|\nabla u^{p}|^{2}}{u^{p}}\phi^{r}\,dv_{t}-2\int_{M}\langle \nabla u^{p},\nabla\phi^{r}\rangle\,dv_{t}. \tag{4.68}\] Moreover, we have for any \(\epsilon\ll 1\), \[2\int_{M}|\nabla u^{p}||\nabla\phi^{r}|\,dv_{t}\leq\epsilon\int_{M}\frac{| \nabla u^{p}|^{2}}{u^{p}}\phi^{r}\,dv_{t}+\epsilon^{-1}\int_{M}u^{p}|\nabla \phi^{r}|^{2}\,dv_{t}. \tag{4.69}\] Combining (4.68) and (4.69), we have \[\partial_{t}\left(\int_{M}u^{p}\phi^{r}\,dv_{t}\right)^{\frac{1}{ p}}\] \[= \frac{1}{p}\left(\int_{M}u^{p}\phi^{r}\,dv_{t}\right)^{\frac{1}{ p}-1}\left(\partial_{t}\int_{M}u^{p}\phi^{r}\,dv_{t}-\frac{\dot{p}}{p}\left( \int_{M}u^{p}\phi^{r}\,dv_{t}\right)\log\left(\int_{M}u^{p}\phi^{r}\,dv_{t} \right)\right)\] \[\leq \frac{1}{p}\left(\int_{M}u^{p}\phi^{r}\,dv_{t}\right)^{\frac{1}{ p}-1}\left(\frac{\dot{p}}{p}\int_{M}\phi^{r}u^{p}\log u^{p}\,dv_{t}-\frac{\dot{p}}{p} \left(\int_{M}u^{p}\phi^{r}\,dv_{t}\right)\log\left(\int_{M}u^{p}\phi^{r}\,dv_ {t}\right)\right)\] \[+\frac{1}{p}\left(\int_{M}u^{p}\phi^{r}\,dv_{t}\right)^{\frac{1}{ p}-1}\left(\left(\epsilon-\frac{p-1}{p}\right)\int_{M}\frac{|\nabla u^{p}|^{2}}{u^{p }}\phi^{r}\,dv_{t}+\epsilon^{-1}\int_{M}u^{p}\left\{|\nabla\phi^{r}|^{2}+\Box \phi^{r}\right\}dv_{t}\right). \tag{4.70}\] We integrate (4.70) from \(-\tau_{2}\) to \(-\tau_{1}\), let \(r\to\infty\) and then let \(\epsilon\to 0\). By Theorem 4.4, (4.66), (2.12) and (2.15), we obtain \[\left(\int_{M}u^{p}\,dv_{t}\right)^{\frac{1}{p}}\right|_{-\tau_{ 2}}^{-\tau_{1}}\] \[\leq \int_{-\tau_{2}}^{-\tau_{1}}\frac{1}{p}\left(\int_{M}u^{p}\,dv_{ t}\right)^{\frac{1}{p}-1}\left(\frac{\dot{p}}{p}\int_{M}u^{p}\log u^{p}\,dv_{t}- \frac{\dot{p}}{p}\left(\int_{M}u^{p}\,dv_{t}\right)\log\left(\int_{M}u^{p}\,dv _{t}\right)\right)\,dt\] \[+\int_{-\tau_{2}}^{-\tau_{1}}\frac{1}{p}\left(\int_{M}u^{p}\,dv_{ t}\right)^{\frac{1}{p}-1}\left(-\frac{p-1}{p}\int_{M}\frac{|\nabla u^{p}|^{2}}{u^{p }}\,dv_{t}\right)\,dt. 
\tag{4.71}\] Note that the log-Sobolev inequality [28, Theorem 13] implies that \[\frac{\dot{p}}{p}\int_{M}u^{p}\log u^{p}\,dv_{t}-\frac{\dot{p}}{p}\left(\int_{ M}u^{p}\,dv_{t}\right)\log\left(\int_{M}u^{p}\,dv_{t}\right)\leq\frac{\dot{p}}{p}|t| \int_{M}\frac{|\nabla u^{p}|^{2}}{u^{p}}\,dv_{t}=\frac{p-1}{p}\int_{M}\frac{| \nabla u^{p}|^{2}}{u^{p}}\,dv_{t}.\] Therefore, it follows from (4.71) that \[\left(\int_{M}u^{p_{0}}\,dv_{-\tau_{1}}\right)^{\frac{1}{p_{0}}}\leq\left(\int _{M}u^{p(-\tau_{1})}\,dv_{-\tau_{1}}\right)^{\frac{1}{p(-\tau_{1})}}\leq\left( \int_{M}u^{q_{0}}\,dv_{-\tau_{2}}\right)^{\frac{1}{q_{0}}}\] and the proof is complete. **Remark 4.21**.: _If \(u\in C^{2}(M\times[t_{0}-\tau_{2},t_{0}-\tau_{1}])\) and satisfies \(\Box u=0\) and (4.66), then_ \[\left(\int_{M}|u|^{p_{0}}dv_{t_{0}-\tau_{1}}\right)^{1/p_{0}}\leq \left(\int_{M}|u|^{q_{0}}dv_{t_{0}-\tau_{2}}\right)^{1/q_{0}}.\] _Indeed, one can apply (4.67) to \(\sqrt{u^{2}+\epsilon}\) since \(\Box\sqrt{u^{2}+\epsilon}\leq 0\), and let \(\epsilon\to 0\)._ ## 5 Parabolic neighborhoods and \(\epsilon\)-regularity theorem In this section, we assume \((M^{n},g(t))_{t<1}\) is the Ricci flow associated with a Ricci shrinker in \(\mathcal{M}(A)\). Given \((x_{0},t_{0})\in M\times(-\infty,1)\), we first recall the conventional parabolic neighborhoods are defined by \[P(x_{0},t_{0};S,-T^{-},T^{+}):= B_{t_{0}}(x_{0},S)\times([t_{0}-T^{-},t_{0}+T^{+}]\cap(- \infty,1)) \tag{5.1}\] \[Q(x_{0},t_{0};S,-T^{-},T^{+}):= \{d_{t}(x,x_{0})\leq S,\;t\in[t_{0}-T^{-},t_{0}+T^{+}]\cap(- \infty,1)\} \tag{5.2}\] for any \(S,T^{\pm}\geq 0\). Based on the monotonicity of \(W_{1}\)-distance in Proposition 3.7, we follow [2] to define the following new parabolic neighborhoods. **Definition 5.1** (\(P^{*}\)-parabolic neighborhoods).: _Suppose that \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(S,T^{\pm}\geq 0\). The \(P^{*}\)-parabolic neighborhood \(P^{*}(x_{0},t_{0};S,-T^{-},T^{+})\subset M\times(-\infty,1)\) is defined as the set of points \((x,t)\in M\times(-\infty,1)\) with \(t\in[t_{0}-T^{-},t_{0}+T^{+}]\) and_ \[d^{t_{0}-T^{-}}_{W_{1}}(v_{x_{0},t_{0};t_{0}-T^{-}},v_{x,t_{0}-T^{-}})<S.\] _For any \(r>0\), we also define_ \[P^{*}(x_{0},t_{0};r):= P^{*}(x_{0},t_{0};r,-r^{2},r^{2})\] \[P^{*+}(x_{0},t_{0};r):= P^{*}(x_{0},t_{0};r,0,r^{2})\] \[P^{*-}(x_{0},t_{0};r):= P^{*}(x_{0},t_{0};r,-r^{2},0).\] _Similar definitions are also made for \(P^{\pm}\)._ Some basic properties of \(P^{*}\)-parabolic neighborhoods can be found in [2, Proposition 9.4, Corollary 9.6]. We state the following containment result from [2, Proposition 9.4 (d)]. **Lemma 5.2**.: _If \(A_{1},A_{2},T^{\pm}_{1},T^{\pm}_{2}\geq 0\) and \((x_{1},t_{1})\in P^{*}(x_{2},t_{2};A_{2},-T^{-}_{2},T^{+}_{2})\), then_ \[P^{*}(x_{1},t_{1};A_{1},-T^{-}_{1},T^{+}_{1})\subset P^{*}(x_{2},t_{2};A_{1}+A_{2},-(T^{-}_{1}+T^{-}_{2}),T^{+} _{1}+T^{+}_{2}).\] We immediately have the following result from the distance comparison Lemma 4.8. 
**Lemma 5.3**.: _Given \(\delta\in(0,1)\), \(t_{0}\in(-\infty,1)\), \(T^{\pm}\geq 0\) and \(S\geq 0\), there exists a constant \(C=C(n,A,\delta)>1\) such that_ \[P(p,t_{0};S,-T^{-},T^{+})\subset Q(p,t_{0};C(S+1),-T^{-},T^{+})\] \[Q(p,t_{0};S,-T^{-},T^{+})\subset P(p,t_{0};C(S+1),-T^{-},T^{+})\] _provided that \(-\delta^{-1}\leq t_{0}-T^{-}\leq t_{0}+T^{+}\leq 1-\delta\)._ In order to investigate the relation between \(P^{*}\)-parabolic neighborhoods and conventional ones, we first prove **Proposition 5.4**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), suppose \(R(x_{0},t)\leq r^{-2}\) for any \(t\in[t_{0}-r^{2},t_{0}]\). Then_ \[d_{W_{1}}^{t_{0}-r^{2}}(v_{x_{0},t_{0};t_{0}-r^{2}},\delta_{x_{0}})\leq C(n,A)r. \tag{5.3}\] Proof.: It follows from [28, Theorem 16] that \[H(x_{0},t_{0},x_{0},t_{0}-r^{2})\geq\frac{1}{(4\pi r^{2})^{\frac{n}{2}}}\exp \left(-l_{(x_{0},t_{0})}(x_{0},t_{0}-r^{2})\right), \tag{5.4}\] From the definition of \(l_{(x_{0},t_{0})}(x_{0},t_{0}-r^{2})\) we have \[l_{(x_{0},t_{0})}(x_{0},t_{0}-r^{2})\leq\frac{1}{2r}\int_{t_{0}-r^{2}}^{t_{0}} \sqrt{t_{0}-s}\,R(x_{0},s)\,ds\leq\frac{1}{3}. \tag{5.5}\] Combining (4.53) for \(\epsilon=1\), (5.4) and (5.5), it is clear that \[d_{t_{0}-r^{2}}^{2}(x_{0},z)\leq C_{1}r^{2}\] for some constant \(C_{1}=C_{1}(n,A)\), where \((z,t_{0}-r^{2})\) is an \(H_{n}\)-center of \((x_{0},t_{0})\). Therefore, \[d_{W_{1}}^{t_{0}-r^{2}}(v_{x_{0},t_{0};t_{0}-r^{2}},\delta_{x_{0}})\leq d_{W_{ 1}}^{t_{0}-r^{2}}(v_{x_{0},t_{0};t_{0}-r^{2}},\delta_{z})+d_{t_{0}-r^{2}}(x_{0 },z)\leq C_{2}r,\] where we have used (3.29) and \(C_{2}:=\sqrt{H_{n}}+\sqrt{C_{1}}\). **Remark 5.5**.: _From the proof, we conclude that (5.3) also holds for a constant \(C=C(n,A,\alpha)\) if we assume_ \[R(x_{0},t)\leq\frac{\alpha}{r^{2}(t_{0}-t)}\] _for some \(\alpha>0\) and any \(t\in[t_{0}-r^{2},t_{0}]\)._ **Corollary 5.6**.: _For any \(s_{0}<t_{0}<1\), we have_ \[d_{W_{1}}^{s_{0}}(v_{p,t_{0};s_{0}},\delta_{p})\leq C(n,A)\sqrt{t_{0}-s_{0}}. \tag{5.6}\] Proof.: From the self-similarity of the flow, we know that \[R(p,t)=\frac{R(p,0)}{1-t}\leq\frac{n}{2(1-t)}\leq\frac{n}{2(t_{0}-t)}\] for any \(t<t_{0}\). Therefore, the conclusion follows from Proposition 5.4 and Remark 5.5. **Proposition 5.7**.: _Given \(\delta\in(0,1)\), \(t_{0}\in(-\infty,1)\), \(T^{\pm}\geq 0\) and \(S\geq 0\), there exists a constant \(C=C(n,A,\delta)>1\) such that_ \[Q(p,t_{0};S,-T^{-},T^{+})\subset P^{*}(p,t_{0};S+C,-T^{-},T^{+})\] _provided that \(t_{0}-T^{-}\geq-\delta^{-1}\)._ Proof.: For any \((x,t)\in Q(p,t_{0};S,-T^{-},T^{+})\), we have \[d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{x,t;t_{0}-T^{-}})\] \[\leq d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{x,t;t_{0}-T^{-} })+d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{p,t;t_{0}-T^{-}})\] \[\leq d_{t}(x,p)+d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{p,t _{0}-T^{-}})\leq S+d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{p,t;t_{0}- T^{-}}), \tag{5.7}\] where we have used Proposition 3.10. In addition, it follows from Corollary 5.6 that \[d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{p,t;t_{0}-T^{-}})\leq d_{W_ {1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},\delta_{p})+d_{W_{1}}^{t_{0}-T^{-}}( v_{p,t;t_{0}-T^{-}},\delta_{p})\leq C(n,A,\delta). \tag{5.8}\] Therefore, the conclusion follows from (5.7) and (5.8). Next, we recall the following version of the local distance distortion estimate, which can be proved almost exactly as [28, Theorem 18]; see also [16, Section 4.3], [17, Theorem 3.1] and [6, Theorem 1.1]. 
**Lemma 5.8**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), suppose \(R\leq r^{-2}\) on \(P^{-}(x_{0},t_{0};r)\)\((\)resp. \(P(x_{0},t_{0};r)\)\()\). Then_ \[\rho_{1}d_{t}(x,x_{0})\leq d_{t_{0}}(x,x_{0})\leq\rho_{1}^{-1}d_{t}(x,x_{0})\] _if \(d_{t_{0}}(x,x_{0})\leq\rho_{1}r\) and \(t\in[t_{0}-(\rho_{1}r)^{2},t_{0})\)\((\)resp. \(t\in[t_{0}-(\rho_{1}r)^{2},t_{0}+(\rho_{1}r)^{2}]\cap(-\infty,1))\), where \(\rho_{1}=\rho_{1}(n,A)\in(0,1)\). In particular,_ \[P^{-}(x_{0},t_{0};\rho_{1}^{2}r)\subset Q^{-}(x_{0},t_{0};\rho_{1}r)\subset P ^{-}(x_{0},t_{0};r) \tag{5.9}\] \[\left(\text{resp.}\quad P(x_{0},t_{0};\rho_{1}^{2}r)\subset Q(x_{0},t_{0};\rho _{1}r)\subset P(x_{0},t_{0};r)\right). \tag{5.10}\] Thanks to Proposition 5.4 and Lemma 5.8, we have the following result. **Proposition 5.9**.: _There exists a constant \(\rho_{2}=\rho_{2}(n,A)\in(0,1)\) satisfying the following property. Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), suppose that \(R\leq r^{-2}\) on \(P(x_{0},t_{0};r,-(\rho_{2}r)^{2})\)\((\)resp. \(P(x_{0},t_{0};r,-(\rho_{2}r)^{2},(\rho_{2}r)^{2})\)\()\). Then_ \[P^{-}(x_{0},t_{0};\rho_{2}r)\subset P^{*}(x_{0},t_{0};r,-(\rho_{2}r)^{2},0) \tag{5.11}\] \[\left(\text{resp.}\quad P(x_{0},t_{0};\rho_{2}r)\subset P^{*}(x_{0},t_{0};r,-( \rho_{2}r)^{2},(\rho_{2}r)^{2})\right). \tag{5.12}\] Proof.: In the proof, all constants \(C_{i}>1\) depend on \(n\) and \(A\) and \(\rho_{1}\) is from Lemma 5.8. We only prove (5.11), and the proof of (5.12) is similar. Moreover, we set \(0<\tau\ll 1\) to be determined later. For any \((y,s)\in P^{-}(x_{0},t_{0};\tau r)\), it follows from Lemma 5.8 that \[d_{t}(y,x_{0})\leq C_{1}\tau r \tag{5.13}\] for any \(t\in[t_{0}-(\tau r)^{2},s]\). In particular, \(R(y,t)\leq r^{-2}\) for any \(t\in[t_{0}-(\tau r)^{2},s]\). Therefore, it follows from Proposition 5.4 that \[d_{W_{1}}^{t_{0}-(\tau r)^{2}}(v_{y,s,t_{0}-(\tau r)^{2}},\delta_{ y})\leq C_{2}\tau r. \tag{5.14}\] It follows from (5.13) and (5.14) that \[d_{W_{1}}^{t_{0}-(\tau r)^{2}}(v_{y,s,t_{0}-(\tau r)^{2}},v_{x_{0 },t_{0};t_{0}-(\tau r)^{2}})\] \[\leq d_{W_{1}}^{t_{0}-(\tau r)^{2}}(v_{x_{0},t_{0};t_{0}-(\tau r)^{2} },\delta_{x_{0}})+d_{W_{1}}^{t_{0}-(\tau r)^{2}}(v_{y,s,t_{0}-(\tau r)^{2}}, \delta_{y})+d_{t_{0}-(\tau r)^{2}}(y,x_{0})\leq C_{3}\tau r<r,\] if \(\tau\) is sufficiently small. From this, it is immediate that (5.11) holds for small \(\rho_{2}\). Now, we prove **Proposition 5.10**.: _Given \(\delta\in(0,1)\), \(t_{0}\in(-\infty,1)\), \(T^{\pm}\geq 0\) and \(S\geq 0\), there exists a constant \(C=C(n,A,\delta)>1\) such that_ \[P^{*}(p,t_{0};S,-T^{-},T^{+})\subset Q(p,t_{0};\sqrt{2}S+C,-T^{- },T^{+}) \tag{5.15}\] _provided that \(t_{0}-T^{-}\geq-\delta^{-1}\). In particular, it implies that \(P^{*}(p,t_{0};S,-T^{-},T^{+})\) is precompact in \(M\times(-\infty,1)\) if \(t_{0}+T^{+}<1\)._ Proof.: In the proof, all constants \(C_{i}>1\) depend on \(n,A\) and \(\delta\). It follows from (5.6) that \[d_{W_{1}}^{t_{0}-T^{-}}(\delta_{p},v_{p,t_{0};t_{0}-T^{-}})\leq C_{1}. \tag{5.16}\] For any \((x_{1},t_{1})\in P^{*}(p,t_{0};S,-T^{-},T^{+})\), we assume \((z,t_{0}-T^{-})\) to be an \(H_{n}\)-center of \((x_{1},t_{1})\). 
By (5.16) and the definition of the \(P^{*}\)-parabolic neighborhood, we have \[d_{t_{0}-T^{-}}(p,z)\leq d_{W_{1}}^{t_{0}-T^{-}}(\delta_{p},v_{p,t_{0};t_{0}-T^{-}})+d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},\delta_{z})\] \[\leq d_{W_{1}}^{t_{0}-T^{-}}(\delta_{z},v_{x_{1},t_{1};t_{0}-T^{-}})+d_{W_{1}}^{t_{0}-T^{-}}(v_{p,t_{0};t_{0}-T^{-}},v_{x_{1},t_{1};t_{0}-T^{-}})+C_{1}\] \[\leq S+C_{2}, \tag{5.17}\] where the first term in the second line is bounded by (3.29), since \((z,t_{0}-T^{-})\) is an \(H_{n}\)-center of \((x_{1},t_{1})\). Set \(v_{t}=v_{x_{1},t_{1};t}\) and compute \[\partial_{t}\int_{M}\phi^{r}\,dv_{t}=\int_{M}\Box\phi^{r}\,dv_{t}.\] By (2.15), we have \[\phi^{r}(x_{1},t_{1})\geq\int_{M}\phi^{r}\,dv_{t_{0}-T^{-}}-C(n)r^{-1}(t_{1}-t_{0}+T^{-})\geq\int_{F\leq r}1\,dv_{t_{0}-T^{-}}-C_{3}r^{-1}.\] Note that \(\phi^{r}=1\) if \(F\leq r\) and \(r\) is large. In light of (5.17) and Lemma 2.3, the set \(\{F\leq r\}\) contains a large geodesic ball centered at \(z\). Thus by Proposition 3.13, the above inequality implies that \[\phi^{r}(x_{1},t_{1})\geq\int_{F\leq r}1\,dv_{t_{0}-T^{-}}-C_{3}r^{-1}\geq\frac{1}{2} \tag{5.18}\] if \(2\sqrt{r}=S+C_{4}\). Since \(\phi^{r}\) is supported on \(\{F\leq 2r\}\), we conclude from (5.18) that \[F(x_{1},t_{1})\leq 2r=\frac{(S+C_{4})^{2}}{2}.\] From Lemma 2.3 and Lemma 4.8, we immediately conclude that \[d_{t_{1}}(p,x_{1})\leq\sqrt{2}S+C_{5}.\] Now, the last conclusion follows from (5.15) and Lemma 5.3. **Corollary 5.11**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(S,T^{\pm}\geq 0\), \(P^{*}(x_{0},t_{0};S,-T^{-},T^{+})\) is precompact in \(M\times(-\infty,1)\) if \(t_{0}+T^{+}<1\)._ Proof.: It is clear that \((x_{0},t_{0})\in P^{*}(p,t_{0};S^{\prime},-1,0)\) for some large \(S^{\prime}>0\). Therefore, it follows from Lemma 5.2 that \[P^{*}(x_{0},t_{0};S,-T^{-},T^{+})\subset P^{*}(p,t_{0};S+S^{\prime},-(1+T^{-}),T^{+}).\] Hence, the conclusion follows from Proposition 5.10. Next, we recall the following existence result for local cutoff functions from [6, Theorem 1.3]. **Proposition 5.12**.: _Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), there exists a constant \(\rho_{3}=\rho_{3}(n,A)\in(0,1)\) satisfying the following property._ _Suppose \(R\leq r^{-2}\) on \(P(x_{0},t_{0};r,-\tau,0)\) with \(0<\tau\leq(\rho_{3}r)^{2}\). Then there exists a function \(\varphi\in C^{\infty}(M\times[t_{0}-\tau,t_{0}])\) with the following properties:_
* (a) \(0\leq\varphi<1\) _on_ \(M\times[t_{0}-\tau,t_{0}]\)_._
* (b) \(\varphi>\rho_{3}\) _on_ \(P(x_{0},t_{0};\rho_{3}r,-\tau,0)\)_._
* (c) \(\varphi=0\) _outside_ \(P(x_{0},t_{0};r,-\tau,0)\)_._
* (d) \(|\nabla\varphi|\leq r^{-1}\) _and_ \(|\partial_{t}\varphi|+|\Delta\varphi|\leq r^{-2}\)_._
* (e) \(\Box\varphi\leq 0\) _on_ \(M\times[t_{0}-\tau,t_{0}]\)_._
Proof.: We sketch the proof for the readers' convenience. In [6, Theorem 1.3], \(\varphi\) is constructed as the smoothing of \(\psi^{2}\), where \[\psi(x,t):=c_{1}\max\{K(x,t)-c,0\}\] for some constants \(c,c_{1}>0\) on \(U\times[t_{0}-\tau,t_{0}]\) for some open set \(U\subset B_{t_{0}}(x_{0},r)\), where \(\psi=0\) on \(\partial U\times[t_{0}-\tau,t_{0}]\) and can be extended to be \(0\) outside \(U\times[t_{0}-\tau,t_{0}]\). Here, \(K(x,t)=H(x,t,y,s)\) for some appropriate \((y,s)\) such that \(K(x_{0},t_{0})\geq(4\pi(t_{0}-s))^{-\frac{n}{2}}e^{-n/2}\) and \(t_{0}-s\) is sufficiently small. The estimates (a)-(d) follow from [28, Lemma 20]. From the definitions of \(\psi\) and \(\varphi\), it is clear that (e) also holds. Next, we prove **Proposition 5.13**.: _There exists a constant \(\rho_{4}=\rho_{4}(n,A)\in(0,1)\) satisfying the following property.
Given \((x_{0},t_{0})\in M\times(-\infty,1)\) and \(r>0\), suppose that \(R\leq r^{-2}\) on \(P(x_{0},t_{0};r,-(\rho_{4}r)^{2},0)\) (resp. \(P(x_{0},t_{0};r,-(\rho_{4}r)^{2},(\rho_{4}r)^{2})\)). Then_ \[P^{*-}(x_{0},t_{0};\rho_{4}r)\subset P(x_{0},t_{0};r,-(\rho_{4}r)^{2},0) \tag{5.19}\] \[\left(\text{resp.}\quad P^{*}(x_{0},t_{0};\rho_{4}r)\subset P(x_{0},t_{0};r,-(\rho_{4}r)^{2},(\rho_{4}r)^{2})\right). \tag{5.20}\] Proof.: In the proof, all positive constants \(C_{i}>1\) depend only on \(n\) and \(A\). Moreover, we set \(0<\tau\ll 1\) to be determined later. For any \((y,s)\in P^{*-}(x_{0},t_{0};\tau r)\), we assume \((z,t_{0}-(\tau r)^{2})\) to be its \(H_{n}\)-center. From Proposition 5.4, we have \[d_{t_{0}-(\tau r)^{2}}(z,x_{0})\] \[\leq d_{W_{1}}^{t_{0}-(\tau r)^{2}}(\delta_{x_{0}},v_{x_{0},t_{0};t_{0}-(\tau r)^{2}})+d_{W_{1}}^{t_{0}-(\tau r)^{2}}(v_{y,s;t_{0}-(\tau r)^{2}},v_{x_{0},t_{0};t_{0}-(\tau r)^{2}})+d_{W_{1}}^{t_{0}-(\tau r)^{2}}(\delta_{z},v_{y,s;t_{0}-(\tau r)^{2}})\] \[\leq C_{1}\tau r. \tag{5.21}\] We assume \(\tau<\rho_{3}\) and consider the cutoff function \(\varphi\) constructed in Proposition 5.12, applied with \(\tau\) there replaced by \((\tau r)^{2}\). If we set \(v_{t}=v_{y,s;t}\), then by direct computation, \[\partial_{t}\int_{M}\varphi\,dv_{t}=\int_{M}\Box\varphi\,dv_{t}\geq-r^{-2},\] where we have used Proposition 5.12 (d). By integration, we have \[\varphi(y,s)\geq\int_{M}\varphi\,dv_{t_{0}-(\tau r)^{2}}-\tau. \tag{5.22}\] Notice that \(\varphi>\rho_{3}\) on \(P(x_{0},t_{0};\rho_{3}r,-(\tau r)^{2},0)\). Combining this fact with (5.21) and Proposition 3.13, we conclude that if \(\tau\) is sufficiently small, \[\varphi(y,s)\geq\int_{M}\varphi\,dv_{t_{0}-(\tau r)^{2}}-\tau\geq\frac{\rho_{3}}{2}>0.\] On the other hand, since \(\varphi=0\) outside \(P(x_{0},t_{0};r,-(\tau r)^{2},0)\), we conclude that \[d_{t_{0}}(x_{0},y)\leq r\] and hence (5.19) holds. Next, we recall the definition of the curvature radius. **Definition 5.14** (Curvature radius).: _For any \((x,t)\in M\times(-\infty,1)\), the curvature radii at \((x,t)\) are defined as_ \[r_{\mathrm{Rm}}(x,t) :=\sup\left\{r>0\mid|Rm|\leq r^{-2}\quad\text{on}\quad P(x,t;r)\right\},\] \[r_{\mathrm{Rm}}^{-}(x,t) :=\sup\left\{r>0\mid|Rm|\leq r^{-2}\quad\text{on}\quad P^{-}(x,t;r)\right\},\] \[r_{\mathrm{Rm}}^{s}(x,t) :=\sup\left\{r>0\mid|Rm|\leq r^{-2}\quad\text{on}\quad B_{t}(x,r)\right\}.\] It is clear from the definition that \(r_{\mathrm{Rm}}(x,t)\leq r_{\mathrm{Rm}}^{-}(x,t)\leq r_{\mathrm{Rm}}^{s}(x,t)\). In addition, it follows from Theorem 4.1 and the pseudolocality theorem [28, Theorem 24] on Ricci shrinkers that there exists a constant \(C=C(n,A)>1\) such that \[r_{\mathrm{Rm}}^{-}(x,t)\leq Cr_{\mathrm{Rm}}(x,t). \tag{5.23}\] We are in a position to obtain the following \(\epsilon\)-regularity theorem; see [2, Theorem 10.2]. **Theorem 5.15** (\(\epsilon\)-regularity).: _There exists a small constant \(\epsilon=\epsilon(n)>0\) satisfying the following property. Given \((x,t)\in M\times(-\infty,1)\) and \(r>0\), suppose that \(\mathcal{N}_{(x,t)}(r^{2})\geq-\epsilon\). Then \(r_{\mathrm{Rm}}(x,t)\geq\epsilon r\)._ Proof.: We only sketch the proof as the details can be found in [2, Theorem 10.2]. The key step is a point-picking argument in the spacetime with respect to the curvature radius \(r_{\mathrm{Rm}}\).
More precisely, one needs to show that for any \(A>0\) with \(10Ar_{\mathrm{Rm}}(x,t)\leq 1/2\), there exists a point \((x^{\prime},t^{\prime})\in P^{*-}(x,t;10Ar_{\mathrm{Rm}}(x,t))\) such that \(r_{\mathrm{Rm}}(x^{\prime},t^{\prime})\leq r_{\mathrm{Rm}}(x,t)\) and \(r_{\mathrm{Rm}}\geq r_{\mathrm{Rm}}(x^{\prime},t^{\prime})/10\) on \(P^{*-}(x^{\prime},t^{\prime};Ar_{\mathrm{Rm}}(x^{\prime},t^{\prime}))\). Otherwise, one can iteratively pick a sequence of spacetime points \((x_{i},t_{i})\) in a compact set of \(M\times(-\infty,1)\) satisfying \(r_{\mathrm{Rm}}(x_{i},t_{i})\to 0\). In light of Lemma 5.2, all \((x_{i},t_{i})\) fall into a given \(P^{*-}\)-parabolic neighborhood, which is precompact by Corollary 5.11. Since the curvature radius of \((x_{i},t_{i})\) shrinks by a definite portion in each step, the bounded geometry of a compact set implies that the process must terminate in finitely many steps, say at \((x_{k},t_{k})=(x^{\prime},t^{\prime})\). This choice of \((x^{\prime},t^{\prime})\) guarantees that its curvature radius is almost maximal in a spacetime neighborhood. Notice that similar point-picking arguments can be found in [34, Theorem 10.1] and [16, Proposition 3.43]. If the \(\epsilon\)-regularity theorem fails, we could obtain a sequence of pointed Ricci flows such that \(r_{\mathrm{Rm}}=1\) at the base points after the point-picking and appropriate rescalings. Since nearby points have curvature radii uniformly bounded from below, the sequence converges smoothly to a limit Ricci flow, which is the Euclidean spacetime by the assumption on the Nash entropy. Therefore, the normalization \(r_{\mathrm{Rm}}=1\) must be violated in the limit and we obtain a contradiction. Using the \(\epsilon\)-regularity theorem, one immediately has the following gap property, following the same proof as [28, Theorem 3]. **Corollary 5.16**.: _Suppose \((M^{n},g,f,p)\) is a non-flat Ricci shrinker. Then_ \[\mathcal{N}_{(p,0)}(\epsilon^{-2})<-\epsilon,\] _where \(\epsilon\) is the same constant as in Theorem 5.15._ Proof.: Suppose, on the contrary, that \(\mathcal{N}_{(p,0)}(\epsilon^{-2})\geq-\epsilon\) for a non-flat Ricci shrinker \((M,g)\). Then it follows from Theorem 5.15, applied with \(r=\epsilon^{-1}\), that \(r_{\mathsf{Rm}}(p,0)\geq 1\). In particular, it implies that \(|Rm(p,t)|\leq 1\) for any \(t\in[0,1)\). By the self-similarity of the flow, we have \(|Rm|(p,0)=|Rm|(p,t)(1-t)\leq 1-t\) for any \(t\in[0,1)\), and hence \(|Rm|(p,0)=0\) by letting \(t\to 1\). This contradicts the fact that \(R>0\) for non-flat Ricci shrinkers. We conclude this section by stating the following two results, whose proofs are more or less standard. See [2, Theorem 10.3, Theorem 10.4]. **Theorem 5.17**.: _For any \(\epsilon>0\) there is a \(\delta=\delta(\epsilon)>0\) such that the following holds. Given \((x,t)\in M\times(-\infty,1)\) and \(r>0\), if \(\mathcal{N}_{x,t}(r^{2})\geq-\delta\), then_ \[|Rm|\leq\epsilon r^{-2}\qquad\text{on}\quad P(x,t;\epsilon^{-1}r,-(1-\epsilon)r^{2},\epsilon^{-1}r^{2}).\] _Moreover, we have \(\mathcal{N}_{t-r^{2}}^{*}\geq-\epsilon\) on \(P(x,t;\epsilon^{-1}r,-(1-\epsilon)r^{2})\)._ **Theorem 5.18**.: _For any \(\epsilon>0\) and \(Y<\infty\) there is a \(\delta=\delta(\epsilon,Y)>0\) such that the following holds. Given \((x,t)\in M\times(-\infty,1)\) and \(r>0\), suppose that \(|Rm|\leq r^{-2}\) on \(P^{-}(x,t;r)\) and \(\mathcal{N}_{x,t}(r^{2})\geq-Y\). Then \(\mathcal{N}_{x,t}(\delta r^{2})\geq-\epsilon\)._
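Taken together, Theorems 5.17 and 5.18 can be viewed as an equivalence: for Ricci flows induced by Ricci shrinkers, and up to adjusting scales and constants, an almost-maximal pointed Nash entropy and an almost-Euclidean local curvature bound (given a rough entropy lower bound) imply one another.

## 6 Metric flows and \(\mathbb{F}\)-convergence

In previous sections, we have generalized (or slightly improved) the theorems and tools in [2].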
Notice that these results also hold for Ricci flows induced by Ricci shrinkers (cf. Definition 2.2) since most of them are scaling-invariant. In a few cases, one needs to modify the assumptions correspondingly. For instance, the conditions in Theorem 4.9 and Theorem 4.16 need to be changed to \(-\delta^{-1}\lambda\leq t<s\leq(1-\delta)\lambda\) and \(d_{t}(x,p)\leq K\lambda^{1/2}\), if the Ricci flow associated with a Ricci shrinker is parabolically rescaled by \(\lambda>0\). Based on these results and techniques, one can generalize the theory of \(\mathbb{F}\)-convergence in [3] and [4] from compact Ricci flows to the setting of Ricci flows induced by Ricci shrinkers. Notice that the results in [3] and [4] are already generalized by Bamler to Ricci flows with complete time-slices and bounded curvature on compact time-intervals (cf. [5]). In [5, Appendix A], some issues in the non-compact case are addressed and can be resolved similarly in the setting of Ricci shrinkers by the results and techniques developed in previous sections. For instance, by Theorem 4.9 and Theorem 4.16, it is known that the conjugate heat kernel decays exponentially and the function \(b\) induced by the conjugate heat kernel increases quadratically (cf. Remark 4.17). Therefore, the weak splitting maps (cf. [4, Definition 5.6]) constructed in [4, Section 10] have at most quadratic spatial growth. Moreover, it follows from [4, Proposition 12.1, Remark 12.3] that one can construct a bounded strong splitting map with bounded gradient from a given weak splitting map. At various places in [4], one also needs to consider integral \(\int u\phi^{r}\) instead of \(\int u\), and take the limit for \(r\to\infty\) after all the estimates (e.g., \(u=\Box|\omega|\) in [4, Lemma 17.37]). This technique has already appeared multiple times in previous sections. As a showcase, we generalize the integral estimates in [4, Section 6] to Ricci flows associated with Ricci shrinkers in Appendix A. These estimates are frequently used in [4] and are of independent interest. Now, we recall the following definition of the metric flow from [3, Definition 3.2]. **Definition 6.1** (Metric flow).: _Let \(I\subset\mathbb{R}\) be a subset. A metric flow over \(I\) is a tuple of the form_ \[(\mathcal{X},\mathfrak{t},(d_{t})_{t\in I},(v_{x;s})_{x\in\mathcal{X},s\in I,s \leq\mathfrak{t}(x)})\] _with the following properties:_ 1. \(\mathcal{X}\) _is a set consisting of points._ 2. \(\mathfrak{t}:\mathcal{X}\to I\) _is a map called time-function. Its level sets_ \(\mathcal{X}_{t}:=\mathfrak{t}^{-1}(t)\) _are called time-slices and the preimages_ \(\mathcal{X}_{I^{\prime}}:=\mathfrak{t}^{-1}(I^{\prime})\)_,_ \(I^{\prime}\subset I\)_, are called time-slabs._ 3. \((\mathcal{X}_{t},d_{t})\) _is a complete and separable metric space for all_ \(t\in I\)_._ 4. \(v_{x;s}\) _is a probability measure on_ \(\mathcal{X}_{s}\) _for all_ \(x\in\mathcal{X}\)_,_ \(s\in I\)_,_ \(s\leq\mathfrak{t}(x)\)_. For any_ \(x\in\mathcal{X}\) _the family_ \((v_{x;s})_{s\in I,s\leq\mathfrak{t}(x)}\) _is called the conjugate heat kernel at_ \(x\)_._ 5. \(v_{x;\mathfrak{t}(x)}=\delta_{x}\) _for all_ \(x\in\mathcal{X}\)_._ 6. _For all_ \(s,t\in I\)_,_ \(s<t\)_,_ \(T\geq 0\) _and any measurable function_ \(u_{s}:\mathcal{X}_{s}\to[0,1]\) _with the property that if_ \(T>0\)_, then_ \(u_{s}=\Phi\circ f_{s}\) _for some_ \(T^{-1/2}\)_-Lipschitz function_ \(f_{s}:\mathcal{X}_{s}\to\mathbb{R}\) _(if_ \(T=0\)_, then there is no additional assumption on_ \(u_{s}\)_), the following is true. 
The function_ \[u_{t}:\mathcal{X}_{t}\longrightarrow\mathbb{R},\qquad x\longmapsto\int_{\mathcal{X}_{s}}u_{s}\,dv_{x;s}\] _is either constant or of the form_ \(u_{t}=\Phi\circ f_{t}\)_, where_ \(f_{t}:\mathcal{X}_{t}\to\mathbb{R}\) _is_ \((t-s+T)^{-1/2}\)_-Lipschitz. Here,_ \(\Phi\) _is given by (_3.31_)._ 7. _For any_ \(t_{1},t_{2},t_{3}\in I\)_,_ \(t_{1}\leq t_{2}\leq t_{3}\)_,_ \(x\in\mathcal{X}_{t_{3}}\) _we have the reproduction formula_ \[v_{x;t_{1}}=\int_{\mathcal{X}_{t_{2}}}v_{\cdot;t_{1}}dv_{x;t_{2}},\] _meaning that for any Borel set_ \(S\subset\mathcal{X}_{t_{1}}\)__ \[v_{x;t_{1}}(S)=\int_{\mathcal{X}_{t_{2}}}v_{y;t_{1}}(S)dv_{x;t_{2}}(y).\] Given a metric flow \(\mathcal{X}\) over \(I\), we recall the following definitions from [3, Definition 3.20, 3.30]. **Definition 6.2** (Conjugate heat flow).: _A family of probability measures \((\mu_{t}\in\mathcal{P}(\mathcal{X}_{t}))_{t\in I^{\prime}}\) over \(I^{\prime}\subset I\) is called a conjugate heat flow if for all \(s,t\in I^{\prime}\), \(s\leq t\) we have_ \[\mu_{s}=\int_{\mathcal{X}_{t}}v_{x;s}\,d\mu_{t}(x).\] **Definition 6.3** (\(H\)-Concentration).: _Given a constant \(H>0\), a metric flow \(\mathcal{X}\) is called \(H\)-concentrated if for any \(s\leq t\), \(s,t\in I\), \(x_{1},x_{2}\in\mathcal{X}_{t}\)_ \[\operatorname{Var}(v_{x_{1};s},v_{x_{2};s})\leq d_{t}^{2}(x_{1},x_{2})+H(t-s).\] Next, we recall the definition of the metric flow pair from [3, Definition 5.1, 5.2]. Roughly speaking, two metric flow pairs are equivalent if they agree in the metric measure sense almost everywhere. **Definition 6.4** (Metric flow pair).: _A pair \((\mathcal{X},(\mu_{t})_{t\in I^{\prime}})\) is called a metric flow pair over \(I\subset\mathbb{R}\) if:_ 1. \(I^{\prime}\subset I\) _with_ \(|I\setminus I^{\prime}|=0\)_._ 2. \(\mathcal{X}\) _is a metric flow over_ \(I^{\prime}\)_._ 3. \((\mu_{t})_{t\in I^{\prime}}\) _is a conjugate heat flow on_ \(\mathcal{X}\) _with_ \(\text{supp}\,\mu_{t}=\mathcal{X}_{t}\) _for all_ \(t\in I^{\prime}\)_._ _If \(J\subset I^{\prime}\), then we say that \((\mathcal{X},(\mu_{t})_{t\in I^{\prime}})\) is fully defined over \(J\). We denote by \(\mathbb{F}_{I}^{J}\) the set of equivalence classes of metric flow pairs over \(I\) that are fully defined over \(J\). Here, two metric flow pairs \((\mathcal{X}^{i},(\mu^{i}_{t})_{t\in I^{\prime,i}})\), \(i=1,2\), that are fully defined over \(J\) are equivalent if there exist a subset \(I^{\prime}\subset I^{\prime,1}\cap I^{\prime,2}\) with \(|I^{\prime,1}\setminus I^{\prime}|=|I^{\prime,2}\setminus I^{\prime}|=0\) and \(J\subset I^{\prime}\), and an isometry \(\phi:\mathcal{X}^{1}_{I^{\prime}}\to\mathcal{X}^{2}_{I^{\prime}}\) (cf. [3, Definition 3.13]) such that \((\phi_{t})_{*}\mu^{1}_{t}=\mu^{2}_{t}\) for all \(t\in I^{\prime}\)._ We will only consider \(I:=(-\infty,0]\) for simplicity. Then for any pointed Ricci flow \((M^{n},g(t),x_{0})_{t\in I}\) induced by a Ricci shrinker, one can define \((\mathcal{X},(\mu_{t})_{t\in I})\) as follows. \[\left(\mathcal{X}:=M\times(I\setminus\{0\})\sqcup\{x_{0}\}\times\{0\},\quad\mathfrak{t}:=\text{proj}_{I},\quad(d_{t})_{t\in I},\quad(v_{x,t;s})_{(x,t)\in M\times I,\,s\in I,\,s\leq t},\quad\mu_{t}:=v_{x_{0},0;t}\right). \tag{6.1}\] Then we have **Proposition 6.5**.: _The pair \((\mathcal{X},(\mu_{t})_{t\in I})\) defined in (6.1) is an \(H_{n}\)-concentrated metric flow pair that is fully defined over \(I\)._ Proof.: The conditions (1)-(5) in the definition of the metric flow can be easily checked. Condition (6) follows from (3.15) and (7) from the semigroup property (3.1).
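Explicitly, the semigroup property (3.1) for the conjugate heat kernel states that for \(t_{1}<t_{2}<t_{3}\), \[H(x,t_{3},z,t_{1})=\int_{M}H(x,t_{3},y,t_{2})H(y,t_{2},z,t_{1})\,dV_{t_{2}}(y);\] integrating this identity over \(z\in S\) for a Borel set \(S\subset\mathcal{X}_{t_{1}}\) yields precisely the reproduction formula in Definition 6.1 (7) for the kernels \(v_{x,t;s}\).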
The metric flow is \(H_{n}\)-concentrated by Proposition 3.10. Next, we recall the definition of a correspondence between metric flows; see [3, Definition 5.4]. **Definition 6.6** (Correspondence).: _Let \((\mathcal{X}^{i},(\mu^{i}_{t})_{t\in I^{\prime,i}})\) be metric flow pairs over \(I\), indexed by some \(i\in\mathcal{I}\). A correspondence between these metric flows over \(I^{\prime\prime}\) is a pair of the form_ \[\mathfrak{C}:=((Z_{t},d^{Z}_{t})_{t\in I^{\prime\prime}},(\varphi^{i}_{t})_{t\in I^{\prime\prime,i},\,i\in\mathcal{I}}),\] _where:_ 1. \((Z_{t},d^{Z}_{t})\) _is a metric space for any_ \(t\in I^{\prime\prime}\)_._ 2. \(I^{\prime\prime,i}\subset I^{\prime\prime}\cap I^{\prime,i}\) _for any_ \(i\in\mathcal{I}\)_._ 3. \(\varphi^{i}_{t}:(\mathcal{X}^{i}_{t},d^{i}_{t})\to(Z_{t},d^{Z}_{t})\) _is an isometric embedding for any_ \(i\in\mathcal{I}\) _and_ \(t\in I^{\prime\prime,i}\)_._ _If \(J\subset I^{\prime\prime,i}\) for all \(i\in\mathcal{I}\), we say that \(\mathfrak{C}\) is fully defined over \(J\)._ Given a correspondence, one can define the \(\mathbb{F}\)-distance; see [3, Definition 5.6, 5.8]. **Definition 6.7** (\(\mathbb{F}\)-distance within correspondence).: _We define the \(\mathbb{F}\)-distance between two metric flow pairs within \(\mathfrak{C}\) (uniform over \(J\)),_ \[d_{\mathbb{F}}^{\mathfrak{C},J}((\mathcal{X}^{1},(\mu_{t}^{1})_{t\in I^{\prime,1}}),(\mathcal{X}^{2},(\mu_{t}^{2})_{t\in I^{\prime,2}})),\] _to be the infimum over all \(r>0\) with the property that there is a measurable subset \(E\subset I^{\prime\prime}\) with_ \[J\subset I^{\prime\prime}\setminus E\subset I^{\prime\prime,1}\cap I^{\prime\prime,2}\] _and a family of couplings \((q_{t})_{t\in I^{\prime\prime}\setminus E}\) between \(\mu_{t}^{1},\mu_{t}^{2}\) such that:_ 1. \(|E|\leq r^{2}\)_._ 2. _For all_ \(s,t\in I^{\prime\prime}\setminus E\)_,_ \(s\leq t\)_, we have_ \[\int_{\mathcal{X}^{1}_{t}\times\mathcal{X}^{2}_{t}}d_{W_{1}}^{Z_{s}}((\varphi_{s}^{1})_{*}\nu_{x^{1};s}^{1},(\varphi_{s}^{2})_{*}\nu_{x^{2};s}^{2})dq_{t}(x^{1},x^{2})\leq r.\] Notice that (2) above implies that for any \(t\in I^{\prime\prime}\setminus E\), \[d_{GW_{1}}((\mathcal{X}^{1}_{t},d^{1}_{t},\mu^{1}_{t}),(\mathcal{X}^{2}_{t},d^{2}_{t},\mu^{2}_{t}))\leq d_{W_{1}}^{Z_{t}}((\varphi_{t}^{1})_{*}\mu^{1}_{t},(\varphi_{t}^{2})_{*}\mu^{2}_{t})\leq r. \tag{6.2}\] Here, \(d_{GW_{1}}\) denotes the Gromov-\(W_{1}\)-Wasserstein distance; see [3, Definition 2.11] for the precise definition. **Definition 6.8** (\(\mathbb{F}\)-distance).: _The \(\mathbb{F}\)-distance between two metric flow pairs (uniform over \(J\)),_ \[d_{\mathbb{F}}^{J}((\mathcal{X}^{1},(\mu_{t}^{1})_{t\in I^{\prime,1}}),(\mathcal{X}^{2},(\mu_{t}^{2})_{t\in I^{\prime,2}})),\] _is defined as the infimum of_ \[d_{\mathbb{F}}^{\mathfrak{C},J}((\mathcal{X}^{1},(\mu_{t}^{1})_{t\in I^{\prime,1}}),(\mathcal{X}^{2},(\mu_{t}^{2})_{t\in I^{\prime,2}})),\] _over all correspondences \(\mathfrak{C}\) between \(\mathcal{X}^{1},\mathcal{X}^{2}\) over \(I^{\prime\prime}\) that are fully defined over \(J\)._ With all these definitions, it can be proved (cf. [3, Theorem 5.13, 5.26]) that \((\mathbb{F}^{J}_{I},d^{J}_{\mathbb{F}})\) is a complete metric space, with possibly infinite distances. In addition, \(\mathbb{F}\)-convergence implies \(\mathbb{F}\)-convergence within a correspondence; see [3, Theorem 6.12].
More precisely, **Theorem 6.9**.: _Let \((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I^{\prime,i}})\), \(i\in\mathbb{N}\cup\{\infty\}\), be metric flow pairs over \(I\) that are fully defined over some \(J\subset I\). Suppose that for any compact subinterval \(I_{0}\subset I\)_ \[d_{\mathbb{F}}^{J\cap I_{0}}((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I_{0}\cap I^{\prime,i}}),(\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I_{0}\cap I^{\prime,\infty}}))\to 0.\] _Then there is a correspondence \(\mathfrak{C}\) between the metric flows \(\mathcal{X}^{i}\), \(i\in\mathbb{N}\cup\{\infty\}\), over \(I\) such that_ \[(\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I^{\prime,i}})\xrightarrow[i\to\infty]{\mathbb{F},\mathfrak{C},J}(\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I^{\prime,\infty}})\] _on compact time intervals, in the sense that_ \[d_{\mathbb{F}}^{\mathfrak{C},J\cap I_{0}}((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I_{0}\cap I^{\prime,i}}),(\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I_{0}\cap I^{\prime,\infty}}))\to 0\] _for any compact subinterval \(I_{0}\subset I\)._ For a sequence of Ricci flows \((M^{n}_{i},g_{i}(t),x_{i})_{t\in I}\) induced by Ricci shrinkers, one can use the \(\mathbb{F}\)-compactness theorem for metric flow pairs [3, Corollary 7.5, Theorem 7.6] to obtain the following result. **Theorem 6.10** (\(\mathbb{F}\)-compactness).: _Let \((M^{n}_{i},g_{i}(t),x_{i})_{t\in I}\) be a sequence of pointed Ricci flows induced by Ricci shrinkers with the corresponding metric flow pairs \((\mathcal{X}^{i},(\mu^{i}_{t})_{t\in I})\) as described in (6.1)._ _After passing to a subsequence, there exists an \(H_{n}\)-concentrated metric flow pair \((\mathcal{X}^{\infty},(\mu^{\infty}_{t})_{t\in I})\) for which \(\mathcal{X}^{\infty}\) is future continuous in the sense of [3, Definition 4.25] such that the following holds. There is a correspondence \(\mathfrak{C}\) between the metric flows \(\mathcal{X}^{i}\), \(i\in\mathbb{N}\cup\{\infty\}\), over \(I\) such that on compact time-intervals_ \[(\mathcal{X}^{i},(\mu^{i}_{t})_{t\in I})\xrightarrow[i\to\infty]{\mathbb{F},\mathfrak{C}}(\mathcal{X}^{\infty},(\mu^{\infty}_{t})_{t\in I}). \tag{6.3}\] _Moreover, the convergence (6.3) is uniform over any compact \(J\subset I\) that only contains times at which \(\mathcal{X}^{\infty}\) is continuous, see [3, Definition 4.25]. Notice that \(\mathcal{X}^{\infty}\) is continuous everywhere except possibly at a countable set of times, by [3, Corollary 4.35]._ We sketch the main ideas and steps of Theorem 6.10 modulo all technical details. 1. One needs a characterization of compactness for subsets of \((\mathbb{M},d_{GW_{1}})\), the set of isometry classes of all metric measure spaces \((X,d,\mu)\), where \(\mu\in\mathcal{P}(X)\) with \(\operatorname{supp}\mu=X\) and \(d_{GW_{1}}\) denotes the Gromov-\(W_{1}\)-Wasserstein distance (cf. [3, Definition 2.11]). Let \(\mathbb{M}_{r}(V,b)\subset\mathbb{M}\) be the subset consisting of \((X,d,\mu)\) satisfying \[\operatorname{Var}(\mu)\leq Vr^{2}\quad\text{and}\quad\mu\left(\{x\in X\mid\mu(D(x,\epsilon r))<b(\epsilon)\}\right)\leq\epsilon,\quad\forall\epsilon\in(0,1].\] (6.4) Here, \(V,r\) are two positive constants and \(b:(0,1]\to(0,1]\) is a function. Moreover, \(D(x,\epsilon r)\) denotes a closed ball with center \(x\) and radius \(\epsilon r\). It is proved in [3, Theorem 2.27] that \(\mathbb{M}_{r}(V,b)\) is compact. 2. Consider the metric flow pair \((\mathcal{X},(\mu_{t})_{t\in I})\) defined in (6.1).
It is clear by \(H_{n}\)-concentration that \(\operatorname{Var}(\mu_{t})\leq H_{n}|t|\). It can be proved (cf. [3, Proposition 4.1] with \(\tau=\frac{\epsilon^{2}}{8H_{n}}\)) that for any \(t<0\), \[(\mathcal{X}_{t},d_{t},\mu_{t})\in\mathbb{M}_{r}(V,b),\] (6.5) where \(V=1/8\), \(r=\sqrt{8H_{n}|t|}\), \(b(\epsilon)=\Phi(\epsilon^{-2}\sqrt{8H_{n}})/2\) and \(\Phi\) is given by (3.31). The proof of (6.5) uses Definition 6.1 (6)(7) in an essential way. Therefore, for any \(t\leq 0\), \((\mathcal{X}^{i}_{t},d^{i}_{t},\mu^{i}_{t})\) subconverges in \(GW_{1}\) to a limit metric measure space. 3. To compare different time-slices of \((\mathcal{X},(\mu_{t})_{t\in I})\), one considers the function \[D(t):=\int_{\mathcal{X}_{t}}\int_{\mathcal{X}_{t}}d_{t}\,d\mu_{t}d\mu_{t}\] (6.6) for \(t\in I\). It is not hard to prove (cf. [3, Lemma 4.7]) that for any \(s\leq t\in I\), \[-\sqrt{H_{n}(t-s)}\leq D(t)-D(s)\leq\sqrt{\operatorname{Var}(\mu_{t})-\operatorname{Var}(\mu_{s})+H_{n}(t-s)}+2\sqrt{H_{n}(t-s)}.\] (6.7) It follows immediately from (6.7) that \(D(t)\) is continuous on the complement of a countable subset of \(I\). In addition, it is proved (cf. [3, Theorem 4.31]) that for any \(t_{0}\leq 0\), the continuity of \(D(t)\) at \(t_{0}\) is equivalent to the statement that \((\mathcal{X}_{t},d_{t},\mu_{t})\) is continuous at \(t_{0}\) in the \(GW_{1}\) sense. In this case, one can construct an isometric embedding of \((\mathcal{X}_{t},d_{t})\) and \((\mathcal{X}_{t_{0}},d_{t_{0}})\) into a metric space \((Z_{t},d_{t}^{Z})\) with an explicit coupling \(q_{t}\) between \(\mu_{t}\) and \(\mu_{t_{0}}\) for \(t\) close to \(t_{0}\). Therefore, one concludes that the metric flow \((\mathcal{X}_{t},\mu_{t})\) is continuous on \(I\) except at a countable set of times. 4. For the sequence \((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I})\) in (6.3), we consider the limit \[D^{\infty}(t):=\lim_{i\to\infty}D^{i}(t),\] (6.8) which exists for any \(t\in I\) after taking a subsequence. Indeed, by (6.7), we may assume that \(D^{\infty}(t)\) exists for \(t\in I\cap\mathbb{Q}\) and \(D^{\infty}(t)-D^{\infty}(s)\geq-\sqrt{H_{n}(t-s)}\) for any \(s,t\in I\cap\mathbb{Q}\) with \(s\leq t\), after taking a subsequence if necessary. Therefore, there exists a countable set \(S\subset I\) such that \(D^{\infty}\) is continuous on \(I\setminus S\), by extending the definition of \(D^{\infty}\). Moreover, (6.8) holds for any \(t\in I\setminus S\). Now, (6.8) also holds for \(t\in S\), by further taking a subsequence. The \(\mathbb{F}\)-convergence of \((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I})\) can be constructed as follows. We assume \(D^{\infty}(t)\) is continuous on \(I\setminus S\) for a countable set \(S\). For a large \(k>0\), we take a compact set \(I_{1}\subset[-k,0]\setminus S\) so that \(|[-k,0]\setminus I_{1}|\) is small. Then \(I_{1}\) is finitely covered by compact intervals \(I_{t_{i}}\) centered at \(t_{i}\in I_{1}\) such that \(|I_{t_{i}}|\) and the oscillations of all \(D^{i}\) and \(D^{\infty}\) on each \(I_{t_{i}}\) are sufficiently small. By steps 1 and 2 above, one can construct a correspondence \(\mathfrak{C}_{0}\) that is fully defined on the finite set \(I_{0}:=\{t_{i}\}\) between the \(\mathcal{X}^{i}\), so that \[d_{\mathbb{F}}^{\mathfrak{C}_{0},I_{0}}((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I_{0}}),(\mathcal{X}^{j},(\mu_{t}^{j})_{t\in I_{0}}))<\epsilon\] (6.9) for any \(\epsilon>0\), if \(i\), \(j\) are sufficiently large.
Then by using the small oscillation of \(D^{i}\) on \(I_{t_{i}}\), one can extend the correspondence \(\mathfrak{C}_{0}\) to \(\mathfrak{C}_{1}\) over \(I_{1}\) so that \((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I_{1}})\) forms a Cauchy sequence over \(I_{1}\) in the sense of (6.9) with respect to \(d_{\mathbb{F}}^{\mathfrak{C}_{1},I_{1}}\) (cf. [3, Lemma 7.24]). By letting \(k\to\infty\) and taking a diagonal sequence, we obtain from the completeness of \((\mathbb{F}_{I},d_{\mathbb{F}})\) a limit metric flow pair \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I\setminus S})\), which can be extended to all \(t\in I\) by the future completion (cf. [3, Section 4.4]) so that \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) is future continuous for \(t\in I\). Notice that Definition 6.1 (1)-(7) for \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) are inherited from \((\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I})\). In addition, one can construct a correspondence \(\mathfrak{C}\) so that \[(\mathcal{X}^{i},(\mu_{t}^{i})_{t\in I})\xrightarrow[i\to\infty]{\mathbb{F},\mathfrak{C}}(\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\] on compact time intervals and the convergence is uniform over the set on which \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) is continuous. Moreover, \((\mathcal{X}^{\infty}_{t},d_{t}^{\infty},\mu_{t}^{\infty})\in\mathbb{M}_{r}(V,b)\) with the same parameters as in (6.5), and \(\operatorname{Var}(\mu_{t}^{\infty})\leq H_{n}|t|\) for any \(t\in I\). Notice that \(\mathcal{X}^{\infty}_{0}\) consists of a single point, from which \(\mu_{t}^{\infty}\) is the conjugate heat measure. **Remark 6.11**.: _In [3, Theorem 7.4], a general compactness theorem for a subset \(\mathbb{F}_{I}^{J}(H,V,b,r)\subset\mathbb{F}_{I}^{J}\) is proved by the same method as described above._ It follows from [3, Theorem 8.2, 8.4] that the time-slices \((\mathcal{X}^{\infty}_{t},d^{\infty}_{t})\), \(t\in I\), of the limit metric flow pair \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) obtained in (6.3) are length spaces. In general, further geometric information contained in \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) is scarce. However, if the \((M_{i}^{n},g_{i}(t))\) are induced by Ricci shrinkers in \(\mathcal{M}(A)\), then, in particular, their Nash entropies are uniformly bounded by Corollary 3.22. In this case, one obtains a much more concrete structure theorem regarding the limit metric flow obtained in (6.3); see [4, Theorem 2.3, 2.4, 2.5, 2.6, 2.46] and [3, Theorem 9.31]. **Theorem 6.12**.: _Let \((M_{i}^{n},g_{i}(t),x_{i})_{t\in I}\) be a sequence of pointed Ricci flows induced by Ricci shrinkers in \(\mathcal{M}(A)\) and \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) the limit metric flow pair obtained in Theorem 6.10. Then the following properties hold._ 1. _There exists a decomposition_ \[\mathcal{X}^{\infty}_{0}=\{x_{\infty}\},\quad\mathcal{X}^{\infty}_{t<0}=\mathcal{R}\sqcup\mathcal{S},\] (6.10) _such that_ \(\mathcal{R}\) _is given by an_ \(n\)_-dimensional Ricci flow spacetime_ \((\mathcal{R},\mathfrak{t},\partial_{\mathfrak{t}}^{\infty},g^{\infty})\)_, in the sense of_ _[_3_, Definition 9.1]_ _and_ \(\dim_{\mathcal{M}^{*}}(\mathcal{S})\leq n-2\)_, where_ \(\dim_{\mathcal{M}^{*}}\) _denotes the_ \(*\)_-Minkowski dimension in_ _[_3_, Definition 3.42]__._ _Moreover,_ \(\mu_{t}^{\infty}(\mathcal{S}_{t})=0\) _for any_ \(t<0\)_._ 2.
_Every tangent flow_ \((\mathcal{X}^{\prime},(v^{\prime}_{x_{\max};t})_{t\leq 0})\) _at every point_ \(x\in\mathcal{X}^{\infty}\) _is a metric soliton in the sense of_ _[_3_, Definition 3.57]__. Moreover,_ \(\mathcal{X}^{\prime}\) _is the Gaussian soliton iff_ \(x\in\mathcal{R}\)_. If_ \(x\in\mathcal{S}\)_, the singular set of_ \((\mathcal{X}^{\prime},(v^{\prime}_{x_{\max};t})_{t\leq 0})\) _on each time-slice_ \(t<0\) _has Minkowski dimension at most_ \(n-4\)_. In particular, if_ \(n=3\)_, the metric soliton is a smooth Ricci flow associated with a_ \(3\)_-dimensional Ricci shrinker. If_ \(n=4\)_, each slice of the metric soliton is a smooth Ricci shrinker orbifold with isolated singularities._ 3. \(\mathcal{R}_{t}=\mathcal{R}\cap\mathcal{X}^{\infty}_{t}\) _is open such that the restriction of_ \(d_{t}\) _on_ \(\mathcal{R}_{t}\) _agrees with the length metric of_ \(g_{t}\)_._ 4. _The convergence (_6.3_) is smooth on_ \(\mathcal{R}\)_, in the following sense. There exists an increasing sequence_ \(U_{1}\subset U_{2}\subset\ldots\subset\mathcal{R}\) _of open subsets with_ \(\bigcup_{i=1}^{\infty}U_{i}=\mathcal{R}\)_, open subsets_ \(V_{i}\subset M_{i}\times I\)_, time-preserving diffeomorphisms_ \(\phi_{i}:U_{i}\to V_{i}\) _and a sequence_ \(\epsilon_{i}\to 0\) _such that the following holds:_ 1. _We have_ \[\|\phi_{i}^{*}g^{i}-g^{\infty}\|_{\mathcal{C}^{[\epsilon_{i}^{-1}]}(U_{i})} \leq\epsilon_{i},\] \[\|\phi_{i}^{*}\partial_{\mathfrak{t}}^{i}-\partial_{\mathfrak{t}}^{\infty}\|_{\mathcal{C}^{[\epsilon_{i}^{-1}]}(U_{i})} \leq\epsilon_{i},\] \[\|w^{i}\circ\phi_{i}-w^{\infty}\|_{\mathcal{C}^{[\epsilon_{i}^{-1}]}(U_{i})} \leq\epsilon_{i},\] _where_ \(g^{i}\) _is the spacetime metric induced by_ \(g_{i}(t)\)_, and_ \(w^{i}\) _is the conjugate heat kernel defined by_ \(d\mu^{i}=w^{i}dg^{i}\)_,_ \(i\in\mathbb{N}\cup\{\infty\}\)_._ 2. _Let_ \(y_{\infty}\in\mathcal{R}\) _and_ \(y_{i}\in M_{i}\times(-\infty,0)\)_. Then_ \(y_{i}\) _converges to_ \(y_{\infty}\) _within_ \(\mathfrak{C}\) _(cf._ _[_3_, Definition 6.18]__) if and only if_ \(y_{i}\in V_{i}\) _for large_ \(i\) _and_ \(\phi_{i}^{-1}(y_{i})\to y_{\infty}\) _in_ \(\mathcal{R}\)_._ 3. _If the convergence (_6.3_) is uniform at some time_ \(t\in I\)_, then for any compact subset_ \(K\subset\mathcal{R}_{t}\) _and for the same subsequence we have_ \[\sup_{x\in K\cap U_{i}}d_{t}^{Z}(\varphi_{t}^{i}(\phi_{i}(x)),\varphi_{t}^{\infty}(x))\longrightarrow 0.\] Theorem 6.12 is a flow version of the Cheeger-Colding theory (cf. [11], [12] and [13]). Its proof shares a similar strategy with its elliptic counterparts. Many concepts also have counterparts: for example, the tangent flow corresponds to the tangent space, and the metric soliton corresponds to the metric cone. We recall their definitions; see [3, Definition 6.55, 3.57]. **Definition 6.13** (Tangent flow).: _Let \(\mathcal{X}\) be a metric flow over \(I\) and \(x_{0}\in\mathcal{X}_{t_{0}}\) a point._
We say that a metric flow pair \((\mathcal{X}^{\prime},(v^{\prime}_{x_{\max},t^{\prime}})_{t\in I})\) is a tangent flow of \(\mathcal{X}\) at \(x_{0}\) if there is a sequence of scales \(\lambda_{k}>0\) with \(\lambda_{k}\to\infty\) such that for any \(T>0\) the parabolic rescalings_ \[\big{(}\mathcal{X}^{-t_{0},\lambda_{k}}_{[-T,0]},(v^{-t_{0},\lambda_{k}}_{x_{ 0},t})_{\lambda_{k}^{-2}t+t_{0}\in I^{\prime},t\in[-T,0]}\big{)}\] \(\mathbb{F}\)_-converge to \((\mathcal{X}^{\prime}_{[-T,0]},(v^{\prime}_{x_{\max},t})_{t\in[-T,0]})\)._ **Definition 6.14** (Metric soliton).: _A metric flow pair \((\mathcal{X},(\mu_{t})_{t\in I})\) is called a metric soliton if there is a tuple_ \[(X,d,\mu,(v^{\prime}_{x,t})_{x\in X;t\leq 0})\] _and a map \(\phi:\mathcal{X}\to X\) such that the following holds:_ 1. _For any_ \(t\in I\)_, the map_ \(\phi_{t}:(\mathcal{X}_{t},d_{t},\mu_{t})\to(X,\sqrt{t}d,\mu)\) _is an isometry between metric measure spaces._ 2. _For any_ \(x\in\mathcal{X}_{t}\)_,_ \(s\in I\) _with_ \(s\leq t\)_, we have_ \((\phi_{s})_{*}v_{x;s}=v^{\prime}_{\phi_{t}(x);\log(s/t)}\)_._ Roughly speaking, a metric soliton is a metric flow pair induced by a metric measure space in a shrinking way. In general, a tangent flow of a metric flow may not be a metric soliton. In the setting of Theorem 6.12, every tangent flow of \((\mathcal{X}^{\infty},(\mu^{\infty}_{t})_{t\in I})\) is also an \(\mathbb{F}\)-limit of a sequence of Ricci flows induced by Ricci shrinkers in \(\mathcal{M}(A)\) (cf. [3, Theorem 6.58]). Notice that the limit metric flow \((\mathcal{X}^{\infty},(\mu^{\infty}_{t})_{t\in I})\) in (6.3) always admits a regular-singular decomposition \[\mathcal{X}^{\infty}_{t<0}=\mathcal{R}\sqcup\mathcal{S},\] so that \(\mathcal{R}\) is given by a Ricci flow spacetime (cf. [3, Definition 9.1]). The key point is to control the size of the singular part in the appropriate sense. To avoid the distance distortion at different time-slices, one can redefine the Hausdorff and Minkowski dimensions (denoted by \(\mathcal{H}^{*}\) and \(\mathcal{M}^{*}\) respectively) by using the \(P^{*}\)-parabolic balls instead of the conventional ones; see [3, Definition 3.41, 3.42]. One can control the size of \(\mathcal{S}\) quantitatively. Let \((M^{n},g(t))_{t\in I}\) be the Ricci flow induced by a Ricci shrinker in \(\mathcal{M}(A)\). We fix a point \((x_{0},t_{0})\in M\times I\) and define \(\tau=t_{0}-t\) and \(H(x_{0},t_{0},\cdot,\cdot)=(4\pi\tau)^{-\frac{n}{2}}e^{-b}\). We next recall the following definitions from [4, Definition 5.1, 5.5, 5.6, 5.7], which indicate the extent to which the local geometry around \((x_{0},t_{0})\) is a Ricci shrinker, Ricci flat space or splitting off an \(\mathbb{R}^{k}\). **Definition 6.15** (Almost self-similarity).: _Let \((M^{n},g(t))_{t\in I}\) be the Ricci flow induced by a Ricci shrinker. 
The point \((x_{0},t_{0})\in M\times I\) is called \((\epsilon,r)\)-selfsimilar if the following holds:_ \[\int_{t_{0}-\epsilon^{-1}r^{2}}^{t_{0}-\epsilon r^{2}}\int_{M} \tau\Big{|}Rc+\nabla^{2}b-\frac{1}{2\tau}g\Big{|}^{2}dv_{x_{0},t_{0};t}dt\leq\epsilon,\] \[\int_{M}\big{|}\tau(2\triangle b-|\nabla b|^{2}+R)+b-n-\mathcal{ N}_{x_{0},t_{0}}(r^{2})\big{|}dv_{x_{0},t_{0};t}\leq\epsilon,\quad\forall t \in[t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r^{2}].\] **Definition 6.16** (Almost static).: _The point \((x_{0},t_{0})\) is called \((\epsilon,r)\)-static if the following holds:_ \[r^{2}\int_{t_{0}-\epsilon^{-1}r^{2}}^{t_{0}-\epsilon r^{2}}\int_{ M}|Rc|^{2}dv_{x_{0},t_{0};t}dt\leq\epsilon,\] \[r^{2}\int_{M}R\,dv_{x_{0},t_{0};t}\leq\epsilon,\quad\forall t\in[ t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r^{2}].\] **Definition 6.17** (Weak splitting).: \((x_{0},t_{0})\) _is called weakly \((k,\epsilon,r)\)-split if there exists a vector-valued function \(\vec{y}=(y_{1},\ldots,y_{k}):M\times[t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r ^{2}]\to\mathbb{R}^{k}\) with the following properties for all \(i,j=1,\ldots,k\):_ 1. _We have_ \[r^{-1}\int_{t_{0}-\epsilon^{-1}r^{2}}^{t_{0}-\epsilon r^{2}}\int_{M}|\Box y_ {i}|dv_{x_{0},t_{0};t}dt\leq\epsilon.\] 2. _We have_ \[r^{-2}\int_{t_{0}-\epsilon^{-1}r^{2}}^{t_{0}-\epsilon r^{2}}\int_{M}|\nabla y _{i}\cdot\nabla y_{j}-\delta_{ij}|dv_{x_{0},t_{0};t}dt\leq\epsilon.\] **Definition 6.18** (Strong splitting).: \((x_{0},t_{0})\) _is called strongly \((k,\epsilon,r)\)-split if there exists a vector-valued function \(\vec{y}=(y_{1},\ldots,y_{k}):M\times[t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r ^{2}]\to\mathbb{R}^{k}\) with the following properties for all \(i,j=1,\ldots,k\):_ 1. \(y_{i}\) _solves the heat equation_ \(\Box y_{i}=0\) _on_ \(M\times[t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r^{2}]\)_._ 2. _We have_ \[r^{-2}\int_{t_{0}-\epsilon^{-1}r^{2}}^{t_{0}-\epsilon r^{2}}\int_{M}|\nabla y _{i}\cdot\nabla y_{j}-\delta_{ij}|dv_{x_{0},t_{0};t}dt\leq\epsilon.\] 3. _For all_ \(t\in[t_{0}-\epsilon^{-1}r^{2},t_{0}-\epsilon r^{2}]\) _we have_ \[\int_{M}y_{i}\,dv_{x_{0}t_{0};t}=0.\] It can be proved (cf. [4, Proposition 12.1]) that if \((x_{0},t_{0})\) is weakly \((k,\epsilon,r)\)-split, then it is strongly \((k,\delta(\epsilon),r)\)-split. With these definitions, one can consider the following quantitative stratification. **Definition 6.19**.: _For \(\epsilon>0\) and \(0<r_{1}<r_{2}\leq\infty\) the effective strata_ \[\widetilde{\mathcal{S}}^{\epsilon,0}_{r_{1},r_{2}}\subset\widetilde{\mathcal{ S}}^{\epsilon,1}_{r_{1},r_{2}}\subset\widetilde{\mathcal{S}}^{\epsilon,2}_{r_{1},r_{2}}\subset\ldots\subset\widetilde{\mathcal{S}}^{\epsilon,n+2}_{r_{1},r_{2 }}\subset M\times I\] _are defined as follows: \((x^{\prime},t^{\prime})\in\widetilde{\mathcal{S}}^{\epsilon,k}_{r_{1},r_{2}}\) if and only for all \(r^{\prime}\in(r_{1},r_{2})\) none of the following two properties hold:_ 1. \((x^{\prime},t^{\prime})\) _is_ \((\epsilon,r^{\prime})\)_-selfsimilar and weakly_ \((k+1,\epsilon,r^{\prime})\)_-split._ 2. 
\((x^{\prime},t^{\prime})\) _is_ \((\epsilon,r^{\prime})\)_-selfsimilar,_ \((\epsilon,r^{\prime})\)_-static and weakly_ \((k-1,\epsilon,r^{\prime})\)_-split._ By a delicate choice of the covering by \(P^{*}\)-parabolic balls, it can be proved, see [4, Proposition 11.2], that for any \(0<\sigma<\epsilon\), there are points \((x_{1},t_{1}),\ldots,(x_{N},t_{N})\in\widetilde{\mathcal{S}}_{\sigma r, \epsilon r}^{\epsilon,k}\cap P^{*}(x_{0},t_{0};r)\) with \(N\leq C(A,\epsilon)\sigma^{-k-\epsilon}\) and \[\widetilde{\mathcal{S}}_{\sigma r,\epsilon r}^{\epsilon,k}\cap P^{*}(x_{0},t _{0};r)\subset\bigcup_{i=1}^{N}P^{*}(x_{i},t_{i};\sigma r). \tag{6.11}\] Notice that (6.11) can be regarded as a parabolic version of the covering in [12] by Cheeger and Naber. On the complement of \(\widetilde{\mathcal{S}}_{\sigma r,\epsilon r}^{\epsilon,n-2}\), the following \(\epsilon\)-regularity theorem is proved (cf. [4, Proposition 17.1]), which can be viewed as a parabolic analogue of Cheeger-Naber's codimension \(4\) theorem in [13]. Roughly speaking, one needs to rule out the tangent flows which are Ricci-flat, and split off an \(\mathbb{R}^{n-3}\). **Proposition 6.20**.: _There exists a constant \(\epsilon=\epsilon(n,A)>0\) such that the following holds. Let \((M^{n},g(t))_{t\in I}\) be the Ricci flow induced by a Ricci shrinker in \(\mathcal{M}(A)\). Suppose that \((x_{0},t_{0})\) is strongly \((n-1,\epsilon,r)\)-split or strongly \((n-3,\epsilon,r)\)-split and \((\epsilon,r)\)-static. Then \(r_{\mathrm{Rm}}(x_{0},t_{0})\geq\epsilon r\)._ There are many implications of Proposition 6.20. Notice that one has the following decomposition: \[\mathcal{X}_{t<0}^{\infty}=\mathcal{R}^{*}\sqcup\mathcal{S}^{*},\] where \(\mathcal{R}^{*}\subset\mathcal{R}\) is the set of points where the convergence (6.3) is smooth as defined in [3, Section 9.4]. Since \(\mathcal{S}\subset\mathcal{S}^{*}\), one can obtain the estimate of \(*\)-Minkowski dimension of \(\mathcal{S}\) by that of \(\mathcal{S}^{*}\) from (6.11) and Proposition 6.20 (cf. [4, Theorem 15.28 (a)]). Moreover, it can be proved that \(\mathcal{S}^{*}\cap\mathcal{X}_{t}^{\infty}\) has measure \(0\) for any \(t<0\) (cf. [4, Theorem 15.28 (b)]). Therefore, Theorem 6.12 (1) is obtained. Since \(\mathcal{S}\) has measure \(0\) on each time-slice, one can extend the definition of the Nash entropy on \(\mathcal{X}^{\infty}\). Therefore, the Nash entropy at the base point \(x^{\prime}\) of any tangent flow \((\mathcal{X}^{\prime},(v_{x^{\prime};t})_{t\in I})\) of \((\mathcal{X}^{\infty},(\mu_{t}^{\infty})_{t\in I})\) is a constant. By the relation between the Nash entropy and the almost self-similarity (cf. [4, Proposition 7.1]), one concludes that \((\mathcal{X}^{\prime},(v_{x^{\prime};t})_{t\in I})\) is a metric soliton since its regular part admits an incomplete Ricci shrinker and the tangent flow itself is determined by its regular part due to the high codimension of the singular part (cf. [4, Theorem 15.60, 15.69]). Moreover, the singular set on each time-slice of \((\mathcal{X}^{\prime},(v_{x^{\prime};t})_{t\in I})\) has Minkowski dimension \(\leq n-4\) (cf. [4, Theorem 2.16]). Furthermore, the fact that \(x\in\mathcal{R}\) iff \(\mathcal{X}^{\prime}\) is the Gaussian soliton follows from the \(\epsilon\)-regularity theorem 5.15 and the convergence of the Nash entropies under (6.3) (cf. [4, Theorem 2.11, 2.14]). 
Notice that if \(n=4\), each time-slice of \((\mathcal{X}^{\prime},(v_{x^{\prime};t})_{t\in I})\) is a smooth orbifold with isolated singularities, since each tangent flow at any singular point of \((\mathcal{X}^{\prime},(v_{x^{\prime};t})_{t\in I})\) is a flat cone (cf. [4, Theorem 2.46]). Therefore, we obtain Theorem 6.12 (2). For Theorem 6.12 (3), the inequality \(d_{t}\leq d_{g_{t}}\) is clear. The opposite inequality is proved by showing that any \(u\in C^{0}(\mathcal{R}_{t})\) that is \(1\)-Lipschitz with respect to \(d_{g_{t}}\) is also \(1\)-Lipschitz with respect to \(d_{t}\) (cf. [4, Theorem 15.28 (c)]). The argument uses the high codimension of \(\mathcal{S}\), the fact that \(\mathcal{X}^{\infty}\) is future continuous at \(t\), and the fact that \(\mathcal{R}=\mathcal{R}^{*}\), which can be proved by using the \(\epsilon\)-regularity theorem and the convergence of the Nash entropies (cf. [4, Corollary 15.47]). Once we know \(\mathcal{R}=\mathcal{R}^{*}\), the diffeomorphisms in Theorem 6.12 (4) can be obtained by patching all local conventional Ricci flows into a Ricci flow spacetime by a center of mass construction (cf. [3, Theorem 9.31]). Notice that similar constructions are well-known for the Cheeger-Gromov convergence (cf. Remark 7.7 of [24] and references therein). All assertions of Theorem 6.12 (4) are proved by smooth convergence. Therefore, Theorem 6.12 (4) is obtained. As an application of the theory of \(\mathbb{F}\)-convergence, we have the following backward pseudolocality theorem; see [4, Theorem 2.47]. Earlier backward pseudolocality results can be found in [34, Corollary 11.6(b)], [15, Lemma 4.2], [16, Theorem 4.7] and [6, Theorem 1.5]. **Theorem 6.21** (Backward pseudolocality theorem).: _For any \(n\in\mathbb{N}\) and \(\alpha>0\) there is an \(\epsilon(n,\alpha)>0\) such that the following holds._ _Let \((M^{n},g(t))_{t\in I}\) be a Ricci flow induced by a Ricci shrinker. Given \((x_{0},t_{0})\in M\times I\) and \(r>0\), if_ \[|B_{t_{0}}(x_{0},r)|\geq\alpha r^{n},\qquad|Rm|\leq(\alpha r)^{-2}\quad\text{on}\quad B_{t_{0}}(x_{0},r),\] _then_ \[|Rm|\leq(\epsilon r)^{-2}\quad\text{on}\quad P(x_{0},t_{0};(1-\alpha)r,-(\epsilon r)^{2},0).\] Combining the above theorem with the forward pseudolocality (cf. Theorem 24 of [28]), we arrive at the two-sided pseudolocality. Thus Theorem 1.6 is proved. Combining Theorem 6.21 and (5.23), we have **Corollary 6.22** (Comparison of the curvature radii).: _There exists a constant \(C(n,A)>1\) such that the following holds._ _Let \((M^{n},g(t))_{t\in I^{\prime}}\) be a Ricci flow induced by a Ricci shrinker in \(\mathcal{M}(A)\). Then for any \((x,t)\in M\times I^{\prime}\),_ \[r_{\rm Rm}(x,t)\leq r_{\rm Rm}^{-}(x,t)\leq r_{\rm Rm}^{s}(x,t)\leq Cr_{\rm Rm}(x,t).\] Another application is the following integral estimate using the quantitative stratification; see [4, Theorem 2.28]. **Theorem 6.23**.: _Let \((M^{n},g(t))_{t<1}\) be a Ricci flow associated with a Ricci shrinker in \(\mathcal{M}(A)\). Then for any \((x_{0},t_{0})\in M\times(-\infty,1)\), \(r>0\) and \(\epsilon>0\),_ \[\int_{[t_{0}-r^{2},t_{0}+r^{2}]\cap(-\infty,1)}\int_{P^{*}(x_{0},t_{0};r)\cap M\times\{t\}}|Rm|^{2-\epsilon}\,dV_{t}dt\] \[\leq\int_{[t_{0}-r^{2},t_{0}+r^{2}]\cap(-\infty,1)}\int_{P^{*}(x_{0},t_{0};r)\cap M\times\{t\}}r_{\rm Rm}^{-4+2\epsilon}\,dV_{t}dt\leq C(n,A,\epsilon)r^{n-2+2\epsilon}. \tag{6.12}\]
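As a consistency check, we note that the exponent in (6.12) matches parabolic rescaling: under \(g\mapsto\lambda^{2}g\) with the corresponding time rescaling \(t\mapsto\lambda^{2}t\), one has \(|Rm|\mapsto\lambda^{-2}|Rm|\), \(r_{\rm Rm}\mapsto\lambda r_{\rm Rm}\) and \(dV_{t}\,dt\mapsto\lambda^{n+2}\,dV_{t}\,dt\), so that both sides of (6.12) scale by the same factor \[\lambda^{n+2}\cdot\lambda^{-2(2-\epsilon)}=\lambda^{n-2+2\epsilon}.\]

As a corollary, we prove

**Corollary 6.24**.: _Let \((M^{n},g,f,p)\) be a Ricci shrinker in \(\mathcal{M}(A)\)._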
Then_ \[\int_{d(p,\cdot)\leq r}|Rm|^{2-\epsilon}\,dV\leq\int_{d(p,\cdot)\leq r}r_{\rm Rm}^{-4+2\epsilon}\,dV\leq Cr^{n+2\epsilon-2}, \tag{6.13}\] \[\int_{d(p,\cdot)\geq 1}\frac{|Rm|^{2-\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq\int_{d(p,\cdot)\geq 1}\frac{r_{\rm Rm}^{-4+2\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq C \tag{6.14}\] _for any \(\epsilon>0\) and \(r\geq 1\), where \(r_{\rm Rm}(\cdot)=r_{\rm Rm}(\cdot,0)\) and \(C=C(n,A,\epsilon)\)._ Proof.: We consider the Ricci flow \((M,g(t))_{t<1}\) associated with the given Ricci shrinker. It follows from Proposition 5.7 that \[Q(p,0;1,0,1)\subset P^{*}(p,0;C_{1},0,1)\subset P^{*}(p,0;C_{1})\] for some constant \(C_{1}=C_{1}(n,A)>1\). Therefore, it follows from Theorem 6.23, applied with \((x_{0},t_{0})=(p,0)\) and \(r=C_{1}\), that \[\int_{0}^{1}\int_{d_{t}(p,\cdot)<1}|Rm|^{2-\epsilon}\,dV_{t}dt\leq C(n,A,\epsilon). \tag{6.15}\] Since \(g(t)=(1-t)(\psi^{t})^{*}g\) and \(\psi^{t}\) is defined by (2.3), we have \(d_{t}(x,p)=\sqrt{1-t}\,d(x^{t},p)\) and \(|Rm|(x,t)=|Rm|(x^{t})/(1-t)\), where \(x^{t}=\psi^{t}(x)\). Therefore, we have \[\int_{d_{t}(p,x)<1}|Rm|^{2-\epsilon}(x,t)\,dV_{t}(x)=(1-t)^{\frac{n}{2}-2+\epsilon}\int_{d(x,p)<\frac{1}{\sqrt{1-t}}}|Rm|^{2-\epsilon}(x)\,dV(x). \tag{6.16}\] By a change of variable with \(t=1-r^{-2}\), it follows from (6.15) and (6.16) that \[\int_{1}^{\infty}r^{1-2\epsilon-n}m(r)\,dr\leq C(n,A,\epsilon), \tag{6.17}\] where \[m(r):=\int_{d(\cdot,p)<r}|Rm|^{2-\epsilon}\,dV.\] We claim that there exists a sequence \(r_{i}\to\infty\) such that \[\lim_{i\to\infty}\frac{m(r_{i})}{r_{i}^{n+2\epsilon-2}}=0. \tag{6.18}\] Otherwise, there exists a constant \(\delta>0\) such that \(m(r)\geq\delta r^{n+2\epsilon-2}\) for sufficiently large \(r\), so that \(r^{1-2\epsilon-n}m(r)\geq\delta r^{-1}\), which contradicts (6.17). Writing the left-hand side below as a Riemann-Stieltjes integral and integrating by parts from \(1\) to \(r_{i}\), we obtain from (6.17) that \[\int_{1\leq d(p,\cdot)\leq r_{i}}\frac{|Rm|^{2-\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq C(n,A,\epsilon)+m(r_{i})r_{i}^{2-2\epsilon-n}.\] By letting \(i\to\infty\), we have from (6.18) that \[\int_{d(p,\cdot)\geq 1}\frac{|Rm|^{2-\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq C(n,A,\epsilon). \tag{6.19}\] In addition, we have for any \(r\geq 1\), \[r^{2-2\epsilon-n}\int_{1\leq d(p,\cdot)\leq r}|Rm|^{2-\epsilon}\,dV\leq\int_{1\leq d(p,\cdot)\leq r}\frac{|Rm|^{2-\epsilon}}{d^{n+2\epsilon-2}(p,\cdot)}\,dV\leq C(n,A,\epsilon).\] Therefore, for any \(r\geq 1\), \[\int_{d(p,\cdot)\leq r}|Rm|^{2-\epsilon}\,dV\leq C(n,A,\epsilon)r^{n+2\epsilon-2}, \tag{6.20}\] since \(m(1)\) is bounded, as follows from (6.17) and the monotonicity of \(m\). In sum, the inequalities involving \(|Rm|\) in (6.13) and (6.14) are proved. Notice that for any \((x_{0},t_{0})\in M\times(-\infty,1)\), it follows from the definition (2.3) of \(\psi^{t}\) that \[\psi^{\theta(t)}\circ\psi^{t_{0}}=\psi^{t},\] where \(\theta(t):=\frac{t-t_{0}}{1-t_{0}}\). Therefore, for any \(t<1\), \[g(t)=(1-t)(\psi^{t})^{*}g=(1-t_{0})(1-\theta(t))(\psi^{t_{0}})^{*}(\psi^{\theta(t)})^{*}g=(1-t_{0})(\psi^{t_{0}})^{*}g(\theta(t)).\] Therefore, \[r_{\rm Rm}(x_{0},t_{0})=\sqrt{1-t_{0}}\,r_{\rm Rm}(x_{0}^{t_{0}},0).\] Now, the conclusion regarding \(r_{\rm Rm}\) can be proved similarly. We end this section by proving a gap property for the volume ratio at infinity. **Corollary 6.25**.: _Let \((M^{n},g,f,p)\) be a Ricci shrinker in \(\mathcal{M}(A)\). Suppose_ \[\liminf_{r\to\infty}\frac{|B(p,r)|}{r^{n}}=0.
\tag{6.21}\] _Then_ \[|B(p,r)|\leq Cr^{n-2+\epsilon} \tag{6.22}\] _for any \(r\geq 1\) and some \(C=C(n,A,\epsilon)\)._ Proof.: We claim that \(r_{\rm Rm}(x)<2\) for any \(x\). Indeed, if \(r_{\rm Rm}(x)\geq 2\), we have \[|Rm|(y,t)<1\] for any \(y\in B(x,1)\) and \(t<1\). By the same argument as in [28, Corollary 9], we obtain that \[\psi^{t}\left(B(x,1)\right)\subset B\left(p,\frac{c_{1}}{\sqrt{1-t}}\right)\setminus B\left(p,\frac{c_{2}}{\sqrt{1-t}}\right) \tag{6.23}\] for \(c_{1}>c_{2}>0\), if \(t\) is sufficiently close to \(1\). From the standard distance distortion estimate and Theorem 4.1, we obtain that \[|\psi^{t}\left(B(x,1)\right)|\geq c_{3}(1-t)^{-\frac{n}{2}}. \tag{6.24}\] However, (6.23) and (6.24) contradict (6.21). Thus the desired inequality (6.22) follows immediately from (6.13). ## Appendix A Integral estimates for the conjugate heat kernel In this appendix, we generalize some integral estimates regarding the conjugate heat kernel from [4, Section 6] to Ricci flows associated with Ricci shrinkers. These estimates also hold for Ricci flows induced by Ricci shrinkers since they are scaling-invariant. Throughout this appendix, let \((M^{n},g(t))_{t<1}\) be the Ricci flow associated with a Ricci shrinker in \(\mathcal{M}(A)\). We fix a spacetime point \((x_{0},t_{0})\in M\times(-\infty,1)\) and set \(dv_{t}=dv_{x_{0},t_{0};t}\) and \(\tau=t_{0}-t\). Moreover, we define \(w=w(x,t)\) and \(b=b(x,t)\) by \(w=H(x_{0},t_{0},x,t)=(4\pi(t_{0}-t))^{-\frac{n}{2}}e^{-b}\). **Lemma A.1**.: _There exists a constant \(C=C(n,A)>1\) such that for any \(0<\tau_{0}<\tau_{1}\),_ \[\int_{t_{0}-\tau_{1}}^{t_{0}-\tau_{0}}\int_{M}\left\{|Rc|^{2}+|\nabla^{2}b|^{2}\right\}dv_{t}\,dt\leq C\tau_{0}^{-1}\left(1+\log\frac{\tau_{1}}{\tau_{0}}\right).\] (A.1) Proof.: Without loss of generality, we assume \(t_{0}=0\). From Corollary 3.20, we have for any \(\tau>0\) that \[\int_{M}\left(|\nabla b|^{2}+R\right)dv_{-\tau}\leq\frac{n}{2\tau}.\] (A.2) Direct calculation shows that \[\partial_{t}\int_{M}Rw\phi^{r}\,dV_{t} =\int_{M}\left\{(\Box R)w\phi^{r}-R\Box^{*}(w\phi^{r})\right\}dV_{t}\] \[=\int_{M}2|Rc|^{2}w\phi^{r}+R\left\{w(\Delta\phi^{r}+\phi_{t}^{r})+2(\nabla w,\nabla\phi^{r})\right\}\,dV_{t}\] \[=\int_{M}\left\{2|Rc|^{2}\phi^{r}+R(\Delta\phi^{r}+\phi_{t}^{r})-2R(\nabla b,\nabla\phi^{r})\right\}\,dv_{t}.\] Integrating the above equation from \(-\tau_{1}\) to \(-\tau_{0}\), we obtain \[\int_{-\tau_{1}}^{-\tau_{0}}\int_{M}2|Rc|^{2}\phi^{r}\,dv_{t}dt\] \[\leq\int_{M}R\phi^{r}\,dv_{-\tau_{0}}+\int_{-\tau_{1}}^{-\tau_{0}}\int_{M}\left\{R(|\Delta\phi^{r}|+|\phi_{t}^{r}|)+R^{2}|\nabla\phi^{r}|+|\nabla b|^{2}|\nabla\phi^{r}|\right\}dv_{t}dt.\] (A.3) From (2.8) and Lemma 2.3, \(R\) increases at most quadratically. Combining (2.12), (2.13), (2.14) and (A.2), it follows that the last integral in (A.3) tends to \(0\) as \(r\to\infty\).
Therefore, we obtain \[\int_{-\tau_{1}}^{-\tau_{0}}\int_{M}|Rc|^{2}\,dv_{t}dt\leq\frac{1 }{2}\int_{M}R\,dv_{-\tau_{0}}\leq\frac{n}{4\tau_{0}}.\] (A.4) On the other hand, it follows from (3.40) and Corollary 3.22 that \[\int_{-\tau_{1}}^{-\tau_{0}}2\tau\int_{M}\left|Rc+\nabla^{2}b- \frac{g}{2\tau}\right|^{2}\,dv_{t}dt\leq-\mathcal{W}_{(x_{0},t_{0})}(\tau_{1}) \leq A.\] (A.5) By virtue of the elementary identity \((x-y)^{2}\geq x^{2}/2-y^{2}\), it follows from (A.5) that \[\tau_{0}\int_{-\tau_{1}}^{-\tau_{0}}\int_{M}|Rc+\nabla^{2}b|^{2}\, dv_{t}dt\leq\int_{-\tau_{1}}^{-\tau_{0}}\tau\int_{M}|Rc+\nabla^{2}b|^{2}\,dv_{t}dt \leq A+\frac{n}{2}\log\frac{\tau_{1}}{\tau_{0}}.\] (A.6) Combining (A.4) and (A.6), the conclusion follows immediately. **Lemma A.2**.: _There exists a constant \(C=C(n,A)>1\) such that the following estimates hold for any \(t<t_{0}\) and \(0\leq s\leq 1/4\)._ \[\int_{M}\left\{1+|b|+\tau(|\Delta b|+|\nabla b|^{2}+R)\right\}e^ {sb}\,dv_{t}\leq C.\] (A.7) Proof.: We compute \[\frac{d}{ds}\int_{M}e^{sb}\,dv_{t}=\int_{M}be^{sb}\,dv_{t}.\] (A.8) Here, the differentiation under the integral sign is allowed by Theorem 4.16 and Remark 4.17. By the differential Harnack inequality [28, Theorem 21], we calculate \[\int_{M}be^{sb}\,dv_{t} \leq\int_{M}(\tau(-2\Delta b+|\nabla b|^{2}-R)+n)e^{sb}\,dv_{t}\] \[=\int_{M}(\tau((2s-1)|\nabla b|^{2}-R)+n)e^{sb}\,dv_{t}\leq n\int _{M}e^{sb}\,dv_{t},\] (A.9) where the integration by parts in the equality can be justified similarly as in Remark 3.21. Combining (A.8) and (A.9), we obtain \[\int_{M}e^{sb}\,dv_{t}\leq e^{ns}.\] On the other hand, it follows from Theorem 4.4 that \(b\geq-A\). Therefore, it follows from (A.9) and the above inequality that \[\int_{M}\left\{|b|+\tau(|\nabla b|^{2}+R)\right\}e^{sb}\,dv_{t} \leq C(n,A).\] (A.10) Applying the differential Harnack inequality and the integration by parts again, we obtain \[\int_{M}2\tau|\Delta b|e^{sb}\,dv_{t} \leq\int_{M}\left\{|u|+\tau(|\nabla b|^{2}+R)+|b|+n\right\}e^{sb }\,dv_{t}\] \[=\int_{M}\left\{-u+\tau(|\nabla b|^{2}+R)+|b|+n\right\}e^{sb}\,dv _{t}\] \[\leq\int_{M}\left\{2s\tau|\nabla b|^{2}+2|b|+2n\right\}e^{sb}\,dv _{t}\leq C(n,A),\] where \(u=\tau(2\Delta b-|\nabla b|^{2}+R)+b-n\leq 0\). It is clear that (A.7) follows from the combination of (A.10) and the above inequality. **Lemma A.3**.: _There exists a constant \(C=C(n)>1\) such that the following estimates hold for any \(t<t_{0}\) and \(0\leq s\leq 1/4\)._ \[\int_{M}|\nabla b|^{4}\phi^{r}e^{sb}\,dv_{t}\leq C\int_{M}|\nabla^{2}b|^{2}e^{ sb}\,dv_{t}.\] (A.11) Proof.: In the proof, all constants \(C>1\) depend only on \(n\), which may be different line by line. 
We compute for \(s\leq 1/4\) that \[\int_{M}|\nabla b|^{4}\phi^{r}e^{sb}\,dv_{t}\] \[= (4\pi\tau)^{-\frac{n}{2}}\int_{M}|\nabla b|^{4}\phi^{r}e^{(s-1)b}\,dV_{t}\] \[= (4\pi\tau)^{-\frac{n}{2}}(s-1)^{-1}\int_{M}|\nabla b|^{2}\langle\nabla b,\nabla e^{(s-1)b}\rangle\phi^{r}\,dV_{t}\] \[= (4\pi\tau)^{-\frac{n}{2}}(1-s)^{-1}\int_{M}\left(2\nabla^{2}b(\nabla b,\nabla b)+|\nabla b|^{2}\Delta b\right)\phi^{r}e^{(s-1)b}\,dV_{t}+Z\] \[\leq C(4\pi\tau)^{-\frac{n}{2}}(1-s)^{-1}\int_{M}|\nabla^{2}b||\nabla b|^{2}\phi^{r}e^{(s-1)b}\,dV_{t}+Z\] \[\leq \frac{1}{4}\int_{M}|\nabla b|^{4}\phi^{r}e^{sb}\,dv_{t}+C\int_{M}|\nabla^{2}b|^{2}\phi^{r}e^{sb}\,dv_{t}+Z,\] (A.12) where the remainder \[Z:=(1-s)^{-1}\int_{M}|\nabla b|^{2}\langle\nabla b,\nabla\phi^{r}\rangle e^{sb}\,dv_{t}\leq 2\int_{M}|\nabla b|^{3}|\nabla\phi^{r}|e^{sb}\,dv_{t}\] \[\leq\frac{1}{4}\int_{M}|\nabla b|^{4}\phi^{r}e^{sb}\,dv_{t}+4\int_{M}|\nabla b|^{2}|\nabla\phi^{r}|^{2}(\phi^{r})^{-1}e^{sb}\,dv_{t}.\] (A.13) Applying Lemma A.2 and (2.12), we conclude from (A.12) and (A.13) that \[\int_{M}|\nabla b|^{4}e^{sb}\phi^{r}\,dv_{t}\leq C\int_{M}|\nabla^{2}b|^{2}e^{sb}\phi^{r}\,dv_{t}+\epsilon(r)\] (A.14) where \(\epsilon(r)\to 0\) as \(r\to\infty\). Thus we arrive at (A.11) by letting \(r\to\infty\) in the above inequality. The main result of this section is the following spacetime integral estimate. **Proposition A.4**.: _There exists a constant \(C=C(n,A)>1\) and \(\bar{s}=\bar{s}(n)<1\) such that the following estimate holds for any \(\tau_{0}>0\), \(0<\theta<1/2\) and \(s\leq\bar{s}\)._ \[\int_{t_{0}-\tau_{0}}^{t_{0}-\theta\tau_{0}}\int_{M}\tau(|Rc|^{2}+|\nabla^{2}b|^{2}+|\nabla b|^{4})e^{sb}\,dv_{t}dt\leq C\log\theta^{-1}.\] (A.15) Proof.: In the proof, all constants \(C\) depend only on \(n\), and all constants \(C^{\prime}\) depend on \(n\) and \(A\). Moreover, we use \(\epsilon(r)\) to denote a function independent of \(t\) such that \(\epsilon(r)\to 0\) if \(r\to\infty\). These terms may differ from line to line. Without loss of generality, we assume \(t_{0}=0\). We set \(u=\tau(2\Delta b-|\nabla b|^{2}+R)+b-n\leq 0\).
Recall that from [34], we have the celebrated identity \[\Box^{*}(uw)=-2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}w.\] Moreover, we have \[\Box b=-2\Delta b+|\nabla b|^{2}-R+\frac{n}{2\tau}=\tau^{-1}(b-u-n/2).\] Direct computation shows that \[\partial_{t}\int_{M}uwe^{sb}\phi^{r}\,dV_{t}\] \[=\int_{M}\left\{(\Box e^{sb}\phi^{r})uw-e^{sb}\phi^{r}\Box^{*}(uw)\right\}dV_{t}\] \[=\int_{M}\left\{\left((\Box e^{sb})\phi^{r}+e^{sb}(\Box\phi^{r})-2\langle\nabla\phi^{r},\nabla e^{sb}\rangle\right)uw+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}we^{sb}\phi^{r}\right\}dV_{t}\] \[=\int_{M}\left\{\left((s\Box b-s^{2}|\nabla b|^{2})e^{sb}\phi^{r}+e^{sb}(\Box\phi^{r})-2\langle\nabla\phi^{r},\nabla e^{sb}\rangle\right)u+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}e^{sb}\phi^{r}\right\}wdV_{t},\] \[=\int_{M}\left\{\left((s\tau^{-1}(b-u-n/2)-s^{2}|\nabla b|^{2})\phi^{r}+\Box\phi^{r}-2s\langle\nabla\phi^{r},\nabla b\rangle\right)u+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}\phi^{r}\right\}e^{sb}dv_{t}.\] It follows that \[\partial_{t}\int_{M}uwe^{sb}\phi^{r}\,dV_{t}\] \[\geq\int_{M}\left\{(s\tau^{-1}(b-u-n/2))u\phi^{r}+\left(\Box\phi^{r}-2s\langle\nabla\phi^{r},\nabla b\rangle\right)u+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}\phi^{r}\right\}e^{sb}dv_{t}\] \[\geq\int_{M}\left\{-Cs\tau^{-1}(u^{2}+b^{2}+1)\phi^{r}+\left(\Box\phi^{r}-2s\langle\nabla\phi^{r},\nabla b\rangle\right)u+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}\phi^{r}\right\}e^{sb}dv_{t}\] \[\geq\int_{M}\left\{-Cs\left(\tau((\Delta b)^{2}+|\nabla b|^{4}+R^{2})+\tau^{-1}(b^{2}+1)\right)+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}\right\}\phi^{r}e^{sb}dv_{t}+X_{t},\] (A.16) where \[X_{t}:=\int_{M}\left(\Box\phi^{r}-2s\langle\nabla\phi^{r},\nabla b\rangle\right)ue^{sb}dv_{t}.\] Define \(X^{\prime}:=\int_{-\tau_{0}}^{-\theta\tau_{0}}|X_{t}|\,dt\). Then it follows from Lemma A.2 and inequalities (2.12) to (2.15) that for any positive \(\delta\) we have \[|X^{\prime}|\leq \int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}(|\Box\phi^{r}|+2s|\nabla\phi^{r}||\nabla b|)|u|e^{sb}\,dv_{t}dt\] \[\leq \epsilon(r)+\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\left(\delta^{-1}|\nabla\phi^{r}|^{2}(\phi^{r})^{-1}\tau|\nabla b|^{2}+\delta\tau^{-1}u^{2}\phi^{r}\right)e^{sb}\,dv_{t}dt\] \[\leq \epsilon(r)+\delta\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\tau^{-1}u^{2}\phi^{r}e^{sb}\,dv_{t}dt.\] (A.17) It is clear from the definition of \(u\) that \(u^{2}\leq C\left(\tau^{2}(|\Delta b|^{2}+|\nabla b|^{4}+R^{2})+b^{2}+1\right)\).
In addition, since \(-A\leq b\leq-\tau(2\Delta b-|\nabla b|^{2}+R)+n\), we have \[b^{2}\leq C^{\prime}\left(\tau^{2}(|\Delta b|^{2}+|\nabla b|^{4}+R^{2})+1\right).\] (A.18) Combining these facts with (A.14), we may choose \(\delta\) in (A.17) sufficiently small such that \[|X^{\prime}|\leq\epsilon(r)+\frac{1}{10}\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\tau(|\nabla^{2}b|^{2}+|Rc|^{2})\phi^{r}e^{sb}\,dv_{t}dt+\log\theta^{-1}.\] (A.19) Similarly, we compute \[\partial_{t}\int_{M}\tau Re^{sb}w\phi^{r}\,dV_{t}\] \[= \int_{M}\left\{\Box(\tau Re^{sb})\phi^{r}w-\tau Re^{sb}\Box^{*}(\phi^{r}w)\right\}dV_{t}\] \[= \int_{M}\left\{\left(\Box(\tau R)e^{sb}+\tau R\Box e^{sb}-2\tau\langle\nabla R,\nabla e^{sb}\rangle\right)\phi^{r}w+\tau Re^{sb}\left((\Delta\phi^{r}+\phi_{t}^{r})w+2\langle\nabla w,\nabla\phi^{r}\rangle\right)\right\}dV_{t}\] \[= \int_{M}\left\{\left(\Box(\tau R)e^{sb}+\tau R\Box e^{sb}\right)\phi^{r}w+2\tau R\left(\Delta e^{sb}-\langle\nabla e^{sb},\nabla b\rangle\right)\phi^{r}w\right\}dV_{t}+Y_{t}\] \[= \int_{M}\left\{2\tau|Rc|^{2}-R+\tau R\left(s\Box b+(s^{2}-2s)|\nabla b|^{2}+2s\Delta b\right)\right\}\phi^{r}e^{sb}dv_{t}+Y_{t}\] \[\geq \int_{M}\left\{2\tau|Rc|^{2}-R-Cs\left(\tau(R^{2}+|\nabla b|^{4}+(\Delta b)^{2})+R\right)\right\}\phi^{r}e^{sb}dv_{t}+Y_{t},\] (A.20) where \[Y_{t}:=\int_{M}\tau Re^{sb}\left((\Delta\phi^{r}+\phi_{t}^{r})w+2\langle\nabla w,\nabla\phi^{r}\rangle+2s\langle\nabla b,\nabla\phi^{r}\rangle\right)\,dV_{t}\] \[=\int_{M}\tau R\left(\Delta\phi^{r}+\phi_{t}^{r}+(2s-2)\langle\nabla b,\nabla\phi^{r}\rangle\right)e^{sb}\,dv_{t}.\] We define similarly \(Y^{\prime}:=\int_{-\tau_{0}}^{-\theta\tau_{0}}|Y_{t}|\,dt\). Then it follows from Lemma A.2 and inequalities (2.12) to (2.15) that \[|Y^{\prime}|\leq \epsilon(r)+C\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\left(\delta^{-1}|\nabla\phi^{r}|^{2}(\phi^{r})^{-1}\tau|\nabla b|^{2}+\delta\tau R^{2}\phi^{r}\right)e^{sb}\,dv_{t}dt\] \[\leq \epsilon(r)+C\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\delta\tau R^{2}\phi^{r}e^{sb}\,dv_{t}dt\] \[\leq \epsilon(r)+\frac{1}{10}\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\tau(|\nabla^{2}b|^{2}+|Rc|^{2})\phi^{r}e^{sb}\,dv_{t}dt,\] (A.21) for \(\delta\) sufficiently small. Combining (A.16) and (A.20), we obtain \[\partial_{t}\int_{M}(\tau R+u)e^{sb}\phi^{r}\,dv_{t}\] \[\geq \int_{M}\left\{-Cs\left(\tau(|\nabla b|^{4}+|\nabla^{2}b|^{2}+R^{2})+\tau^{-1}+R\right)+2\tau\left|Rc+\nabla^{2}b-\frac{g}{2\tau}\right|^{2}+2\tau|Rc|^{2}-R\right\}\phi^{r}e^{sb}dv_{t}\] \[+X_{t}+Y_{t}\] \[\geq \int_{M}\left\{-Cs\left(\tau(|\nabla b|^{4}+|\nabla^{2}b|^{2}+R^{2})\right)+\tau(|\nabla^{2}b|^{2}+|Rc|^{2})\right\}\phi^{r}e^{sb}dv_{t}+X_{t}+Y_{t}-C^{\prime}\tau^{-1},\] (A.22) where we have used Lemma A.2. If \(s\) is sufficiently small, it follows from (A.22) and (A.14) that \[\partial_{t}\int_{M}(\tau R+u)e^{sb}\phi^{r}\,dv_{t}\] \[\geq \frac{1}{2}\int_{M}\tau(|\nabla^{2}b|^{2}+|Rc|^{2})e^{sb}\phi^{r}\,dv_{t}+X_{t}+Y_{t}-C^{\prime}\tau^{-1}+\epsilon(r).\] (A.23) By integration from \(-\tau_{0}\) to \(-\theta\tau_{0}\), we obtain from (A.23), Lemma A.2, (A.19) and (A.21) that \[\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\tau(|\nabla^{2}b|^{2}+|Rc|^{2})e^{sb}\phi^{r}\,dv_{t}dt\leq C^{\prime}\log\theta^{-1}+\epsilon(r).\] Letting \(r\to\infty\), we obtain \[\int_{-\tau_{0}}^{-\theta\tau_{0}}\int_{M}\tau(|\nabla^{2}b|^{2}+|Rc|^{2})e^{sb}\,dv_{t}dt\leq C^{\prime}\log\theta^{-1}.\] (A.24) Thus the inequality (A.15) follows from the combination of (A.24) and Lemma A.3.
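For completeness, the last step can be spelled out (our annotation, not part of the original proof): Lemma A.3 is applied at each time slice and the result is then integrated in time.

```latex
% By Lemma A.3 (applied at each time t) and (A.24),
\int_{-\tau_0}^{-\theta\tau_0}\!\!\int_M \tau|\nabla b|^4 e^{sb}\,dv_t\,dt
  \;\leq\; C\!\int_{-\tau_0}^{-\theta\tau_0}\!\!\int_M \tau|\nabla^2 b|^2 e^{sb}\,dv_t\,dt
  \;\leq\; C\,C'\log\theta^{-1},
% which, added to (A.24) itself, gives (A.15) after renaming constants.
```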
2303.11718
Inducing or suppressing the anisotropy in multilayers based on CoFeB
Controlling the uniaxial magnetic anisotropy is of practical interest to a wide variety of applications. We study Co$_{40}$Fe$_{40}$B$_{20}$ single films grown on various crystalline orientations of LiNbO$_3$ substrates and on oxidized silicon. We identify the annealing conditions that are appropriate to induce or suppress uniaxial anisotropy. Anisotropy fields can be increased by annealing up to 11 mT when using substrates with anisotropic surfaces. They can be decreased to below 1 mT when using isotropic surfaces. In the first case, the observed increase of the anisotropy originates from the biaxial strain in the film caused by the anisotropic thermal contraction of the substrate when back at room temperature after strain relaxation during annealing. In the second case, anisotropy is progressively removed by applying successive orthogonal fields that are assumed to progressively suppress any chemical ordering within the magnetic film. The method can be applied to CoFeB/Ru/CoFeB synthetic antiferromagnets but the tuning of the anisotropy comes with a decrease of the interlayer exchange coupling and a drastic change of the exchange stiffness.
R. L. Seeger, F. Millo, A. Mouhoub, G. de Loubens, A. Solignac, T. Devolder
2023-03-21T10:10:22Z
http://arxiv.org/abs/2303.11718v1
# Inducing or suppressing the anisotropy in multilayers based on CoFeB ###### Abstract Controlling the uniaxial magnetic anisotropy is of practical interest to a wide variety of applications. We study Co\({}_{40}\)Fe\({}_{40}\)B\({}_{20}\) single films grown on various crystalline orientations of LiNbO\({}_{3}\) substrates and on oxidized silicon. We identify the annealing conditions that are appropriate to induce or suppress uniaxial anisotropy. Anisotropy fields can be increased by annealing up to 11 mT when using substrates with anisotropic surfaces. They can be decreased to below 1 mT when using isotropic surfaces. In the first case, the observed increase of the anisotropy originates from the biaxial strain in the film caused by the anisotropic thermal contraction of the substrate when back at room temperature after strain relaxation during annealing. In the second case, anisotropy is progressively removed by applying successive orthogonal fields that are assumed to progressively suppress any chemical ordering within the magnetic film. The method can be applied to CoFeB/Ru/CoFeB synthetic antiferromagnets but the tuning of the anisotropy comes with a decrease of the interlayer exchange coupling and a drastic change of the exchange stiffness. ## I Introduction The control of the anisotropy of a given magnetic material is very often required in the applications of magnetism [1]. The amorphous metallic CoFeB films are widely used in spintronics, both when very soft properties are desired, such as in flux guides [2], and, in contrast, when a well-defined uniaxial anisotropy is wanted, as in the free layers of magnetoresistive field sensors [1]. Depending on the targeted applications, the same material platform can sometimes even be used with opposite requirements for anisotropy. This is the case of artificial multiferroics composed of ferromagnetic films and piezoelectric layers. When meant, for instance, for energy harvesting, they require a well-defined anisotropy [3], while for racetrack applications isotropic properties are welcome [4]. Tailoring the uniaxial anisotropy (_both inducing and suppressing it_) is thus an important challenge of technological interest. Various knobs can be employed to tune the magnetic anisotropy. Interface engineering can be used in ultrathin films [5; 6]. In bulk materials one can rely either (i) on some sort of chemical ordering [7] or (ii) on the induction of anisotropic strain in magnetostrictive materials [8]. In the first case, one generally saturates the magnetization using a strong magnetic field, and then provides thermal energy (hence atomic mobility) to let the structure of the material evolve towards a new state compatible with the desired magnetization orientation [9]. In metallic glasses like CoFeB, the anisotropy is related to some degree of alignment of the Boron atoms within the material, and this can be effectively tuned and reoriented by in-field annealing [7]. For the same reason, magnetic anisotropy can already be induced during deposition if done under an applied field [10]. The second case applies to magnetostrictive materials only. There, if an appropriate choice of the substrate influences the growth (e.g., epitaxy or strain relaxation), the resulting anisotropic strain leads to magnetic anisotropy [8].
This elastic coupling between the magnetic film and the substrate is systematically desired in SAW-FMR devices [11; 12; 13] and magneto-acoustics [14] when one harnesses the interaction between a surface acoustic wave (SAW) hosted by a piezoelectric substrate and the ferromagnetic resonance (FMR) of the magnetic film. Note that this situation fundamentally entails a dilemma when isotropic properties (meaning often stress-free layers) are desired in addition to a tight elastic coupling between film and substrate. This dilemma is significant in SAW-FMR of synthetic antiferromagnets (SAFs) since in this case a vanishing anisotropy is required for resonant coupling between the SAWs and the spin waves [15]. Unfortunately, it is difficult to obtain quasi-isotropic SAFs: one is typically left with uniaxial anisotropy fields \(\mu_{0}H_{\mathrm{k}}\) that remain above a couple of mT [16; 17; 18; 19]. In this paper, we study how to tailor (increase or suppress) the magnetic anisotropy of magnetostrictive layers grown on piezoelectric substrates. We develop our method on Co\({}_{40}\)Fe\({}_{40}\)B\({}_{20}\) single layer films on LiNbO\({}_{3}\) single crystals that are well suited to rf acoustic waves. We show that our method is applicable to SAFs. The paper is organized as follows. We initially quantify the uniaxial anisotropy in CoFeB and show how to control it through appropriate annealing and substrate choice. The surface orientation of LiNbO\({}_{3}\) strongly impacts how the annealing alters the anisotropy of the magnetic material. A well-designed procedure can lead to quasi-isotropic CoFeB layers and can be extended to CoFeB/Ru/CoFeB SAFs. However, spin wave spectroscopy experiments show that the tailoring of the anisotropy of the SAF comes together with an evolution of the exchange stiffness and of the interlayer exchange coupling. ## II Experiments ### Films Fig. 1 depicts our material systems. The magnetic stacks are: Ta(6, buffer)/CoFeB(34)/Ru(0.4)/Ta(3, cap) (short name: single CoFeB) and Ta(6, buffer)/CoFeB(17)/Ru(0.7)/CoFeB(17)/Ru(0.4)/Ta(3, cap) (short name: CoFeB/Ru/CoFeB) SAF. All thicknesses are given in nm. The CoFeB layer was deposited from a Co\({}_{40}\)Fe\({}_{40}\)B\({}_{20}\) (at. %) target. The deposition is done at room temperature by dc-magnetron sputtering at an argon pressure of \(5\times 10^{-3}\) mbar and a base pressure below \(10^{-7}\) mbar. No intentional magnetic field is applied during growth. The thickness of the Ru(0.7) spacer of the SAF is chosen to maximize the interlayer exchange coupling [20]. ### Substrates The depositions were done on several substrates ranging from naturally oxidized silicon wafers (referred to as Si/SiO\({}_{\text{x}}\)) to LiNbO\({}_{3}\) single crystals of various surface orientations (Z-, Y- and Y128-cut) [21]. Since the properties of the magnetic materials can be impacted by the stress induced by the underlying substrate [22], we cast the substrates into two categories. The first category gathers the substrates whose surface expands in a quasi-isotropic manner upon annealing. For the second category the thermal expansion is anisotropic at the surface, as illustrated in Fig. 1(c). ### Post-growth annealing conditions In order to unravel the respective roles of substrate-induced stress, applied field, and Boron diffusion in the annealing-induced evolution of the magnetic properties, we have annealed our material systems using 4 different procedures: * (i) without any applied magnetic field.
* (ii) with a 70 mT field applied in a given direction, in a single step. We will see that the field direction (i.e., along or orthogonal to the initial anisotropy axis) does not influence the final result. * (iii) with a 70 mT field applied in 2 successive steps: the sample is first annealed with a field oriented along some randomly chosen in-plane direction, then along an orthogonal direction. * (iv) in a 30 mT field rotating at 5 rpm in the sample plane. The annealing temperature \(T\) ranges from 100 to 200 \({}^{\circ}\)C, above which systematic crystallization is expected for our Boron content [23; 24; 25; 26]. The annealing time is set to 4 min on a hot plate for (i), (ii) and (iii). The annealing in rotating field (iv) is done in vacuum for 10 h. In all cases the field is applied while ramping up and down the temperature and is strong enough to saturate the magnetization. Figure 1: Schematic illustration of the samples consisting of a single CoFeB layer (a) and a CoFeB/Ru/CoFeB SAF (b). All thicknesses are given in nm. (c) Studied substrates. The red arrows indicate the direction of the ferroelectric order parameter (i.e., crystalline direction \(\mathrm{Z^{+}_{LiNbO_{3}}}\)). ### Magnetic characterizations The magnetic characterization of samples was performed by vibrating sample magnetometry (VSM) and Vector Network Analyzer ferromagnetic resonance [27] (VNA-FMR). An in-plane applied field \(\mu_{0}H_{\text{ap}}\) was used, and its direction \(\theta\) [see Fig. 2(a)] was varied to access the sample's magnetic anisotropy. The resonance spectra (FMR absorption signal) are obtained by measuring the field dependence of the VNA transmission parameter \(||S_{21}(H_{\text{ap}})||-||S_{21}(H=0)||\), as plotted in Fig. 2(b). The resonance frequencies (FMR for the single CoFeB, or acoustical and optical resonances of the SAF) \(f_{\text{res}}\) are defined from the maxima of absorption. The \(\theta\)-dependence of the FMR of single CoFeB films was analyzed in the macrospin approximation using numerical energy minimization and subsequent application of the Smit-Beljers equation [28]. A fitting procedure allowed us to extract independently the values of the uniaxial anisotropy field \(H_{\text{k}}\), the orientation of the easy axis and the saturation magnetization \(M_{\text{s}}\). Fig. 2(c) illustrates this procedure when applied to a single CoFeB film grown on a Y128-cut substrate in the as-grown state. The orientation of the easy axis and the uniaxial character of the anisotropy are systematically consistent with the hysteresis loops. The \(\theta\) and \(H\) dependence of the acoustical \(f_{\text{acou}}\) and optical \(f_{\text{opt}}\) resonances of the SAF were analyzed in the full micromagnetic framework [29] following the method described in ref. [20]. There, it was shown that the competition between the interlayer coupling \(J\) and the intralayer exchange stiffness \(A_{\text{ex}}\) results in the existence of a gradient of the magnetization orientation in the growth direction. This gradient renders the curvature of \(f_{\text{opt}}\) near \(H=0\) very sensitive to the ratio of \(A_{\text{ex}}\) and \(J\), which can thus be deduced reliably. A fitting procedure of \(f_{\text{acou}}(\theta)\) can then be used to extract the anisotropy fields, assumed to be exactly the same for the two magnetic layers of the SAF.
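To illustrate the macrospin analysis just described, the following is a minimal numerical sketch (ours, not the authors' code). For each in-plane field angle it minimizes the free energy of a film with in-plane uniaxial anisotropy and evaluates the resonance frequency from the standard in-plane Kittel expression that follows from the Smit-Beljers equation; all parameter values are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

MU0 = 4e-7 * np.pi     # vacuum permeability (T m / A)
GAMMA = 28.0e9         # gyromagnetic ratio / (2 pi) in Hz/T, assuming g ~ 2

def f_res(theta_H, B_app, B_k=3.0e-3, mu0_Ms=1.70, theta_e=np.pi / 2):
    """Macrospin FMR frequency (Hz) of an in-plane magnetized film with
    in-plane uniaxial anisotropy.  theta_H: field angle, theta_e: easy-axis
    angle (rad); B_app, B_k, mu0_Ms in tesla.  Values are placeholders."""
    Ms = mu0_Ms / MU0
    # in-plane free-energy density (J/m^3): Zeeman + uniaxial anisotropy,
    # with Ku = Ms * B_k / 2 so that the anisotropy field equals B_k
    def energy(phi):
        return (-Ms * B_app * np.cos(phi - theta_H)
                + 0.5 * Ms * B_k * np.sin(phi - theta_e) ** 2)
    phi0 = minimize_scalar(energy, bounds=(-np.pi, np.pi),
                           method="bounded").x
    # standard in-plane Kittel form obtained from the Smit-Beljers relation
    B_phi = B_app * np.cos(phi0 - theta_H) + B_k * np.cos(2 * (phi0 - theta_e))
    B_out = (B_app * np.cos(phi0 - theta_H) + mu0_Ms
             + B_k * np.cos(phi0 - theta_e) ** 2)
    return GAMMA * np.sqrt(B_phi * B_out)

# angular dependence at fixed applied field, as in the insets of Fig. 3
for deg in (0, 45, 90):
    print(f"{deg:3d} deg: {f_res(np.deg2rad(deg), B_app=10e-3) / 1e9:.2f} GHz")
```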
## III Evolution of the magnetic anisotropy upon annealing ### Results The main features of the evolution of the magnetic anisotropy upon annealing are illustrated in Fig. 3 and compiled in Table 1. An annealing temperature above 100\({}^{\circ}\)C appeared necessary to observe an evolution of the magnetic properties. The atomic mobility within the CoFeB films is likely insufficient below this temperature. For larger annealing temperatures, the magnetic anisotropy evolves in very different ways depending on whether the substrate has an isotropic or an anisotropic surface, and also on the applied field. When working on substrates with anisotropic surfaces (Y-cut and Y128-cut LiNbO\({}_{3}\)), the annealing substantially increases the anisotropy, see Fig. 3(a). The inset compares, for instance, the angular dependence of the FMR of a CoFeB film on a Y-cut LiNbO\({}_{3}\) substrate before and after a 200 \({}^{\circ}\)C field-free annealing. Annealing increases the anisotropy field \(\mu_{0}H_{\text{k}}\) from 3.05 mT to 9.00 mT. This comes with a reorientation of the hard axis towards the X\({}_{\text{LiNbO}_{3}}\) axis (i.e., \(\theta=0\) deg. in our convention). The Y128-cut samples follow a similar trend but with a lower increase of the anisotropy. As soon as the anisotropy increases, the hard axis also reorients towards the X\({}_{\text{LiNbO}_{3}}\) axis and the easy axis towards the in-plane projection of the Z\({}_{\text{LiNbO}_{3}}\) axis. When working on these substrates with anisotropic surfaces, the magnetic field applied (or not) during the annealing has a minor influence on the evolution of the magnetic anisotropy. Conversely, the anisotropy can be reduced for the films grown on the substrates with quasi-isotropic surfaces: Z-cut LiNbO\({}_{3}\) and oxidized silicon. The inset in Fig. 3(b) shows, for instance, the angular dependence of the FMR before and after a 2-step annealing procedure in the case of a Z-cut LiNbO\({}_{3}\) substrate. This 2-step annealing lowers the anisotropy down to 0.6 mT. Notably, the rate of decrease of the anisotropy depends strongly on the field sequence used during annealing, and the hard axis systematically ends perpendicular to the field applied during the last annealing step. Figure 3: (a) Dependence of the anisotropy field \(H_{\text{k}}\) on the annealing temperature \(T\), as measured for single CoFeB grown on Y-cut LiNbO\({}_{3}\), as an example of the effect of annealing on a substrate with an anisotropic surface. (b) Representative \(H_{\text{k}}\) values for samples subjected to various annealing procedures with \(T=200\)\({}^{\circ}\)C, as measured for single CoFeB grown on Z-cut LiNbO\({}_{3}\), a substrate with a quasi-isotropic surface. Insets in (a) and (b) show representative \(\theta\)-dependences of \(f_{\text{res}}\) before and after annealing. The line is a fit to the experimental data. ### Physical origins of the evolution of anisotropy The previous results can be discussed by considering two thermodynamic phenomena: (i) the interplay between magneto-elasticity and anisotropic strain and (ii) the chemical ordering within the magnetic material. We recall that for the annealing temperatures studied here, no crystallization of the CoFeB layer is expected. Let us first discuss the magneto-elastic scenario. #### ii.2.1 Magneto-elastic scenario When annealing at \(T_{\rm a}>100^{\circ}\)C, the atoms within the CoFeB film acquire some mobility, as observed in other soft magnetic materials [7].
Being amorphous, the glassy CoFeB film slowly flows, such that after a sufficient delay it reaches a relaxed (stress-free) state at the annealing temperature. Cooling (defining \(RT-T_{\rm a}=\delta T<0\)) to room temperature (RT) suddenly quenches any atomic mobility, while triggering a thermal contraction. Since the CoFeB film is clamped by the much thicker substrate, the in-plane strain \(\epsilon\) of the substrate is imposed on the magnetic film. The natural contraction of a hypothetically free-standing CoFeB film would be isotropic (its thermal expansion coefficient \(\beta\) is isotropic). That of the LiNbO\({}_{3}\) substrate is not: the thermal expansion coefficient in the \(\rm X_{LiNbO_{3}}\) direction is stronger than in the other direction of the substrate plane [22]. As a result, the strain \(\overline{\epsilon}\) within the CoFeB at RT is biaxial and more compressive in the \(\rm X_{LiNbO_{3}}\) direction. Defining \(x\) and \(y\) as the two directions of the surface of the substrate with \(x\parallel\rm X_{LiNbO_{3}}\) [see Fig. 1(c)], we have: \[\epsilon_{xx}=\beta_{x}\delta T<0\ {\rm and}\ \epsilon_{yy}=\beta_{y}\delta T<0 \tag{1}\] Whatever the substrate cut, the largest deformation is always along \(\rm X_{LiNbO_{3}}\), see ref. [22]. We have \(\beta_{x}>\beta_{y}\) for both the Y-cut case and the Y128-cut case. As the CoFeB has essentially a free surface, its stress is purely biaxial, such that there is no shear strain (i.e., \(\epsilon_{xy}=0\)). This biaxial strain generates a magnetic anisotropy of CoFeB. Indeed, for an in-plane magnetized film, the magneto-elastic energy is \(E_{\rm me}=B_{1}(m_{x}^{2}\epsilon_{xx}+m_{y}^{2}\epsilon_{yy})+B_{2}m_{x}m_{y}\epsilon_{xy}\), where \(B_{1}\) and \(B_{2}\) are the usual magneto-elastic coefficients. In amorphous materials, they reduce to \(-\frac{3}{2}\lambda E_{\rm Young}\), which amounts to \(-7.6\) MJ/m\({}^{3}\) with the magnetostriction coefficient \(\lambda=27\) ppm for \(\rm Co_{40}Fe_{40}B_{20}\) from ref. [30] and the Young's modulus \(E_{\rm Young}=187\) GPa from ref. [31]. Note that \(\lambda>0\), meaning a tensile strain lowers the energy. The CoFeB film being more compressed in the \(x\)-direction than in other directions, \(\rm X_{LiNbO_{3}}\) will become the hard axis. Using the conservation of the magnetization norm and \(\epsilon_{xy}=0\), we can rewrite this energy in the form of an effective uniaxial anisotropy: \[E_{me}=B_{1}m_{x}^{2}(\epsilon_{xx}-\epsilon_{yy}) \tag{2}\] with a magneto-elastic effective anisotropy field of: \[\mu_{0}H_{k}^{\rm mel}=\frac{2B_{1}}{M_{s}}(\beta_{x}-\beta_{y})\delta T \tag{3}\] which is predicted to be linear in the annealing temperature; this bears some similarity with the experimental results [see Fig. 3(a)] above \(T_{\rm a}=100^{\circ}\)C. Using the data of Table 2 with \(\delta T=-180\) K, Eq. 3 predicts \(\mu_{0}H_{k}^{\rm mel}=5.5\) mT for the Y128-cut case and \(\mu_{0}H_{k}^{\rm mel}=13.7\) mT for the Y-cut case, with hard axes along \(\rm X_{LiNbO_{3}}\) in both cases. This magneto-elastic contribution dominates any other contribution to the uniaxial anisotropy, including the one present in the as-grown state and those possibly related to the magnetic field applied during the annealing. This correlates with our finding on the minor influence of the field applied during the annealing of Y-cut and Y128 samples. Note that the predicted values of the anisotropy field \(H_{k}^{\rm mel}\) are slightly larger than our experimental findings.
This may indicate that the stress release is incomplete during the annealing, or that the magnetostriction coefficient of the literature [30] is overestimated. Another conclusion of our study concerns the evolution of the magnetization \(M_{\rm s}\) upon annealing (Table 1). There is little evolution for the Si/SiO\({}_{\rm x}\) case, but a substantial increase otherwise. For our nominal Boron concentration, bulk crystallization or stress-induced bulk crystallization are not supposed to occur at our annealing temperatures [32] and can therefore not be invoked for the observed increase of the magnetization. However, some compression-induced diffusion of the Boron atoms out of the magnetic films may start to occur, thereby reducing the Boron concentration. The corresponding densification is generally associated with an increase of the magnetization [33]. We indeed observe that the increase of the magnetization after the 200\({}^{\circ}\)C annealings seems to correlate with the amount of compression (see Tables 1 and 2). This anisotropic strain-induced scenario is effective only for the substrates with anisotropic thermal expansion like Y-cut and Y128-cut LiNbO\({}_{3}\). Another scenario must thus be invoked for the Z-cut of LiNbO\({}_{3}\) and the oxidized silicon cases. \begin{table} \begin{tabular}{|c|c|c|} \hline Surface & \(\beta\) along first in-plane direction & \(\beta\) along perpendicular in-plane direction \\ \hline Y-cut & along \(\rm X_{LiNbO_{3}}\): \(\beta_{1}=1.5\) & along \(\rm Z_{LiNbO_{3}}\): \(\beta_{3}=0.7\) \\ \hline Y128-cut & along \(\rm X_{LiNbO_{3}}\): \(\beta_{1}=1.5\) & \(\beta_{1}\sin^{2}(128)+\beta_{3}\cos^{2}(128)=1.2\) \\ \hline Z-cut & along \(\rm X_{LiNbO_{3}}\): \(\beta_{1}=1.5\) & along \(\rm Y_{LiNbO_{3}}\): \(\beta_{1}=1.5\) \\ \hline Si[001] & along [100]: 0.468 & along [010]: 0.468 \\ \hline \end{tabular} \end{table} Table 2: Thermal expansion coefficients in the two directions of the surface plane, in units of \(10^{-5}/^{\circ}\)C. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Sub. & Thermal & \(\mu_{0}H_{k}\) (mT) & Hard axis & \(\mu_{0}M_{s}\) (T) \\ & treatment & \(\pm 5\%\) & & \(\pm 0.01\) \\ \hline \multicolumn{5}{|l|}{Substrates with quasi-isotropic surface:} \\ \hline SiOx & as grown & 3.0 & variable & 1.70 \\ & \(200^{\circ}\)C, H=0 & 3.1 & variable & 1.72 \\ & \(200^{\circ}\)C, rotH & 1.5 & variable & 1.71 \\ & \(200^{\circ}\)C 2-step H & 0.9 & \(\perp\) to last field & 1.76 \\ \hline Z & as grown & 2.2 & variable & 1.70 \\ & \(200^{\circ}\)C, H=0 & 2.8 & variable & 1.71 \\ & \(200^{\circ}\)C, rotH & 2.4 & variable & 1.82 \\ & \(200^{\circ}\)C 2-step H & 0.6 & \(\perp\) to last field & 1.70 \\ \hline \multicolumn{5}{|l|}{Substrates with anisotropic surface:} \\ \hline Y128 & as grown & 3.1 & variable & 1.77 \\ & \(200^{\circ}\)C, H=0 & 4.3 & \(\parallel\rm X_{LiNbO_{3}}\) & 1.87 \\ \hline Y & as grown & 3.0 & variable & 1.70 \\ & \(200^{\circ}\)C, H=0 & 9.0 & \(\parallel\rm X_{LiNbO_{3}}\) & 2.00 \\ & \(200^{\circ}\)C, rotH & 6.4 & \(\parallel\rm X_{LiNbO_{3}}\) & 2.00 \\ & \(200^{\circ}\)C 2-step H & 11.4 & \(\parallel\rm X_{LiNbO_{3}}\) & 2.00 \\ \hline \end{tabular} \end{table} Table 1: Summary of the material parameters in single CoFeB subjected to different thermal treatments. “Variable” means that the value of the anisotropy field and its orientation depend on the sample position within the deposition machine; the evolution of \(H_{k}\) is however taken for a consistent sample position.
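Before turning to the second scenario, the numerical predictions of Eq. (3) quoted above can be reproduced in a few lines. One caveat: the text does not state which magnetization value enters the prediction; inserting the post-anneal values of Table 1 (an assumption on our part) recovers the quoted numbers.

```python
import numpy as np

MU0 = 4e-7 * np.pi    # vacuum permeability (T m / A)
B1 = -7.6e6           # magneto-elastic coefficient (J/m^3), from the text
dT = -180.0           # temperature change on cooling from 200 C to RT (K)

# (beta_x - beta_y) from Table 2 (in 1/K); mu0*Ms (T) assumed to be the
# post-anneal values of Table 1, which reproduce the quoted predictions
cases = {"Y-cut":    (1.5e-5 - 0.7e-5, 2.00),
         "Y128-cut": (1.5e-5 - 1.2e-5, 1.87)}

for name, (dbeta, mu0_Ms) in cases.items():
    Ms = mu0_Ms / MU0                  # saturation magnetization (A/m)
    mu0_Hk = 2 * B1 * dbeta * dT / Ms  # Eq. (3); result directly in tesla
    print(f"{name}: mu0 * Hk_mel = {1e3 * mu0_Hk:.1f} mT")
# -> about 13.8 mT (Y-cut) and 5.5 mT (Y128-cut), matching the quoted
#    13.7 and 5.5 mT up to rounding
```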
#### iii.2.2 Chemical order scenario In many magnetic alloys, it is routinely observed that annealing in a magnetic field induces a preferred direction of magnetization [5; 9]. A plausible model often invoked to explain this mechanism is the migration of atoms on a local scale in such a way as to favor magnetization in a given direction. At annealing temperatures leading to some atomic mobility, some atom pairs orient themselves relative to the direction of magnetization set by the field, so as to decrease their magnetic anisotropy energy. Upon cooling to a temperature where atomic diffusion gets quenched, the anisotropy axis remains along the direction it has acquired during annealing. Metalloids like Boron play an important role in this process thanks to their high mobility and chemical interaction with transition metals [7]. In metallic glasses like CoFeB, the applied field drives an anisotropic distribution of atom pairs among Co, Fe and B, and this mechanism is active at the time scales used for annealing at the temperatures considered here [7]. This process is generally used to induce anisotropy, using a _fixed_ field orientation and a _long_ annealing. However, it is important to realize that the material evolution is a thermodynamic process: during our first annealing step, the field-induced (energy-minimization-driven) chemical ordering process competes with a temperature-induced (random, entropy-driven) disordering trend. This competition leads to a slow formation of a uniaxial anisotropy. During the subsequent annealing step, the magnetic field orientation is different. As a result, the entropy-driven and energy-driven thermodynamic forces both tend to destroy the previously favored chemical order, and therefore they act together to reduce the anisotropy. Because the two thermodynamic forces at play act in the same direction, this destruction of the previously set anisotropy is a fast process. The building-up of any uniaxial anisotropy along a new field direction is a much slower process. In practice, it is not seen at the time scales used in our annealings. This explains how one can progressively reduce the anisotropy by applying successive orthogonal fields during annealing when there is no magneto-elastic contribution at play. Each annealing step with a new field direction statistically breaks pair alignments, which results in a progressive randomization of the chemical ordering within the magnetic film and, thus, a decrease of the anisotropy. The effect of annealing under a rotating field is qualitatively similar to that of a two-step annealing, but their relative efficiency depends on the characteristic time scales of atomic diffusion versus the field rotation period. To summarize the discussion so far, the findings presented in Fig. 3 indicate that, depending on the substrate, different contributions to the uniaxial anisotropy of CoFeB may arise. For substrates whose surface has an anisotropic thermal expansion (Y-cut and Y128-cut LiNbO\({}_{3}\)), the anisotropy is controlled by anisotropic strain, while for substrates with quasi-isotropic surfaces (Si/SiO\({}_{\rm x}\) and Z-cut LiNbO\({}_{3}\)), the anisotropy is controlled by the chemical ordering favored or broken by the sequence of applied magnetic fields. ## IV Applicability to synthetic antiferromagnets It is important to investigate whether the conclusions previously established for single CoFeB films can be extended to multilayers.
In particular, let us see if one can obtain isotropic CoFeB/Ru/CoFeB synthetic antiferromagnets (SAFs) when grown on quasi-isotropic substrates. Fig. 4 shows the results for a SAF grown on Si/SiO\({}_{\rm x}\) in the as-grown state and after annealing in a rotating field. The two lowest order spin wave modes can be detected, the acoustical (\(f_{\rm acou}\)) and the optical (\(f_{\rm opt}\)) modes. The value of \(f_{\rm acou}\) at low field is known to be very sensitive to the anisotropy field [34]. The \(\theta\)-dependence of \(f_{\rm acou}\) [Fig. 4(a)] clearly indicates that the annealing procedure defined for the single layer films also succeeded in suppressing the anisotropy of the SAF grown on Si/SiO\({}_{\rm x}\). As shall be explained later, we can only give a semi-quantitative measurement of the anisotropy field \(\mu_{0}H_{\rm k}\) of the SAF, but its reduction (see Table 3) is almost complete. The same trend is observed when the growth is performed on Z-cut LiNbO\({}_{3}\) (see Table 3). However, this quasi-suppression of the anisotropy is accompanied by an evolution of the other magnetic properties. This can be seen by comparing the frequencies of the experimental and simulated modes using the methodology defined in ref. [20]. Indeed, the value of \(f_{\rm opt}\) at \(H=0\) is essentially set by the interlayer coupling \(J\). Annealing obviously reduces it [compare Fig. 4(b) and (c)]. Besides, the curvature of \(f_{\rm opt}\) versus \(H\) is very sensitive to the ratio \(\frac{A_{\rm ex}}{J}\): annealing clearly strongly affects this ratio. The values of \(A_{\rm ex}\) and \(J\) that best fit the experimental data for Si/SiO\({}_{\rm x}\) and Z-cut LiNbO\({}_{3}\) substrates are listed in Table 3. The experiment-to-micromagnetics agreement is excellent except in the small field region for the acoustical spin wave, where micromagnetics systematically underestimates the frequency of the acoustical mode [Fig. 4(b) and (c)]. The same difficulty arises when attempting to account for the \(\theta\)-dependence of \(f_{\rm acou}\) with micromagnetic simulations, as shown in Fig. 4(a). The reason for this disagreement was not identified, but we believe that it may arise from a gradient of the magnetic properties in the growth direction which is not taken into account in the simulations. For this reason, we can only give a semi-quantitative measurement of the anisotropy field \(\mu_{0}H_{\mathrm{k}}\). The anisotropy values in Table 3 are deduced from the sole value of \(f_{\mathrm{acou}}\) at zero field. Upon annealing, \(J\) weakens from -1.5 mJ/m\({}^{2}\) to -0.9 mJ/m\({}^{2}\), as observed for our SAF grown on Si/SiO\({}_{\mathrm{x}}\). The atomic mobility enabled by the annealing probably reduces the sharpness of the interfaces of the Ru spacer, thereby decreasing \(J\). The evolution of the local order within the CoFeB material is also evident from the evolution of its exchange stiffness \(A_{\mathrm{ex}}\), which undergoes a very substantial increase from 14.5 pJ/m to 28.3 pJ/m upon annealing. It is noteworthy that the as-grown value of \(A_{\mathrm{ex}}\) is comparable to literature values in the amorphous state for our composition, which are found to be [35; 36; 20] in the range from 10 to 14 pJ/m. The exchange stiffness is also known to increase substantially when the layer becomes either crystalline or simply denser [33]. ## V Conclusion We have studied the impact of annealing on the magnetic properties of CoFeB films and synthetic antiferromagnets.
We identified the different contributions to the uniaxial anisotropy in CoFeB single films by performing various in-field thermal treatments for films grown on different substrates. The anisotropy field of CoFeB can be increased when the annealing is performed on samples grown on substrates whose surfaces have an anisotropic thermal expansion. In this case, the likely scenario is a full stress relaxation occurring during the annealing, followed by the creation of a biaxial strain in the CoFeB upon cooling, which induces a strong magneto-elastic anisotropy. Anisotropy fields up to 11 mT can be induced when extremely anisotropic substrates like Y-cut LiNbO\({}_{3}\) are used. Conversely, the anisotropy field can be decreased to below 1 mT when using substrates whose surface is quasi-isotropic. In this case, the anisotropy is controlled by the history of the magnetic field applied during annealing. In particular, sequences of orthogonal fields are very efficient in suppressing the anisotropy. This method was applied to obtain isotropic CoFeB/Ru/CoFeB synthetic antiferromagnets; however, the annealing also affects the exchange interactions within the stack. ###### Acknowledgements. We acknowledge discussions with S. Margueron and A. Bartasyte. This work was supported by a public grant overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (Labex NanoSaclay, reference: ANR-10-LABX-0035, project SPICY). R. L. S and F. M. acknowledge the French National Research Agency (ANR) under Contract No. ANR-20-CE24-0025 (MAXSAW). Figure 4: Effect of annealing in a rotating field for CoFeB/Ru/CoFeB SAFs grown on Si/SiO\({}_{\mathrm{x}}\). (a) \(\theta\)-dependence of \(f_{\mathrm{res}}\) before and after annealing. Symbols are experimental data. The lines show the calculated dependencies from micromagnetic simulations [29] using magnetic parameters fitted from the broadband VNA-FMR characterization of the acoustical and optical modes shown in (b) in the as-grown state and in (c) after annealing. Insets show color maps of the distance \(\chi\) between the experimental and simulated spin wave frequencies used to determine \(A_{\mathrm{ex}}\) and \(J\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Substrate & \(\mu_{0}H_{\mathrm{k}}\) (mT) & \(\mu_{0}M_{\mathrm{s}}\) (T) & \(J\) (mJ/m\({}^{2}\)) & \(A_{\mathrm{ex}}\) (pJ/m) \\ \hline SiO\({}_{\mathrm{x}}\), as grown & 4.4 & 1.70 & -1.5 & 14.5 \\ \hline SiO\({}_{\mathrm{x}}\), 200\({}^{\circ}\)C & 0.8 & 1.71 & -0.9 & 28.3 \\ \hline \(\mathrm{Z_{LiNbO_{3}}}\), as grown & 3.8 & 1.70 & -1.7 & 15.5 \\ \hline \(\mathrm{Z_{LiNbO_{3}}}\), 200\({}^{\circ}\)C & 0.5 & 1.70 & -1.1 & 21 \\ \hline \end{tabular} \end{table} Table 3: Material parameters of the synthetic antiferromagnet before and after annealing in a rotating field.
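As a rough order-of-magnitude annotation of Table 3 (ours, not from the paper), the coupling strengths can be converted into interlayer exchange fields. This assumes the common convention that the areal coupling energy is \(E=-J\,\mathbf{m}_{1}\cdot\mathbf{m}_{2}\), so that the field acting on each layer is \(\mu_{0}H_{J}=|J|/(M_{\mathrm{s}}t)\) with \(t=17\) nm per layer.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability (T m / A)
t = 17e-9                 # thickness of each CoFeB layer of the SAF (m)

# (J in J/m^2, mu0*Ms in T) from Table 3, Si/SiOx substrate
for label, J, mu0_Ms in [("as grown", -1.5e-3, 1.70),
                         ("annealed", -0.9e-3, 1.71)]:
    Ms = mu0_Ms / MU0                 # saturation magnetization (A/m)
    mu0_HJ = abs(J) / (Ms * t)        # interlayer exchange field (T)
    print(f"{label}: mu0*H_J = {1e3 * mu0_HJ:.0f} mT")
# -> roughly 65 mT as grown and 39 mT after annealing: the coupling field
#    stays far above the ~1 mT residual anisotropy field
```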
2310.11620
Enhancing modified treatment policy effect estimation with weighted energy distance
The effects of continuous treatments are often characterized through the average dose response function, which is challenging to estimate from observational data due to confounding and positivity violations. Modified treatment policies (MTPs) are an alternative approach that aim to assess the effect of a modification to observed treatment values and work under relaxed assumptions. Estimators for MTPs generally focus on estimating the conditional density of treatment given covariates and using it to construct weights. However, weighting using conditional density models has well-documented challenges. Further, MTPs with larger treatment modifications have stronger confounding and no tools exist to help choose an appropriate modification magnitude. This paper investigates the role of weights for MTPs showing that to control confounding, weights should balance the weighted data to an unobserved hypothetical target population, that can be characterized with observed data. Leveraging this insight, we present a versatile set of tools to enhance estimation for MTPs. We introduce a distance that measures imbalance of covariate distributions under the MTP and use it to develop new weighting methods and tools to aid in the estimation of MTPs. We illustrate our methods through an example studying the effect of mechanical power of ventilation on in-hospital mortality.
Ziren Jiang, Jared D. Huling
2023-10-17T23:12:08Z
http://arxiv.org/abs/2310.11620v1
# Enhancing modified treatment policy effect estimation with weighted energy distance ###### Abstract The effects of continuous treatments are often characterized through the average dose response function, which is challenging to estimate from observational data due to confounding and positivity violations. Modified treatment policies (MTPs) are an alternative approach that aim to assess the effect of a modification to observed treatment values and work under relaxed assumptions. Estimators for MTPs generally focus on estimating the conditional density of treatment given covariates and using it to construct weights. However, weighting using conditional density models has well-documented challenges. Further, MTPs with larger treatment modifications have stronger confounding and no tools exist to help choose an appropriate modification magnitude. This paper investigates the role of weights for MTPs showing that to control confounding, weights should balance the weighted data to an unobserved hypothetical target population, that can be characterized with observed data. Leveraging this insight, we present a versatile set of tools to enhance estimation for MTPs. We introduce a distance that measures imbalance of covariate distributions under the MTP and use it to develop new weighting methods and tools to aid in the estimation of MTPs. We illustrate our methods through an example studying the effect of mechanical power of ventilation on in-hospital mortality. _Keywords:_ Continuous Treatments, Covariate Balance, Balancing weights, Causal Inference, Observational Studies ## 1 Introduction Understanding the causal effects of changes in doses or otherwise continuous values of a treatment is important in many scientific disciplines. A typical approach to quantifying the causal effect of varying a continuous treatment is to estimate the causal average dose response function (ADRF), which is the expected potential outcome as a function of all possible or likely treatment values. In our motivating example, the treatment is the mechanical power of a ventilator applied to patients in intensive care who have acute respiratory distress syndrome. In this setting, the ADRF is the expected in-hospital mortality as a function of the power of ventilation. While the ADRF provides an intuitive way to evaluate the causal effect of a continuous treatment, its estimation requires one to assume that every individual could hypothetically receive all possible values of the treatment, which is often untenable in practice as some values may be clinically unlikely or impossible for subsets of the population. Moreover, confounding is exacerbated in causal ADRF estimation because the characteristics of subjects with a higher treatment value may differ substantively from those with a lower value. Additionally, the performance of causal ADRF estimators is often poor (i.e., cannot attain root-\(n\) consistency nonparametrically), even with flexible, doubly robust methods (Kennedy et al., 2017). Alternative definitions of causal effects in this setting have been introduced in part to help alleviate some of these problems while still providing clinically useful results. Diaz and van der Laan (2013) proposed the concept of stochastic interventions, which estimate the effect of assigning each subject's treatment based on a random draw from a given distribution that depends on the subject's own characteristics.
Haneuse and Rotnitzky (2013) further proposed modified treatment policies (MTPs), which generalize the idea of stochastic interventions. Each subject's counterfactual treatment under a given MTP is defined as a function of their baseline characteristics and their _observed_ value of the treatment without the MTP (the "natural" value of the treatment); the MTP thus imagines a slight manipulation of each individual's treatment value away from its actual value. The estimand is then defined as the expected potential outcome under the specified manipulation. For example, in our motivating application, one could estimate the effect of slightly reducing or increasing the mechanical power of ventilation, compared with standard of care, on in-hospital mortality among patients in the intensive care unit (ICU) who have acute respiratory distress (Neto et al., 2018). Although an MTP cannot be implemented precisely in practice, it can help policymakers evaluate hypotheses about the treatment and generate practical interventions that can later be tested via experiments. Another advantage of MTPs is that, by imagining counterfactual worlds in which the treatment is modified only slightly from its observed values, the resulting counterfactual world is not substantially different from reality, making positivity hold by construction. Diaz et al. (2021) generalized MTPs to longitudinal data and proposed efficient estimators for the causal effect. Diaz and Hejazi (2020) used stochastic interventions for causal mediation analysis. MTPs are thus an attractive tool for causal inference as they can be quite general and can be used in a wide variety of settings. Confounding is a major hurdle in causal inference from observational studies, both for standard estimands and for MTPs. Weighting approaches to confounding control re-weight each subject to balance the distribution of the covariates and do not require the use of outcome information. Traditionally, weights are generated through inverse-probability weighting (IPW), which models the treatment assignment mechanism given covariates and inverts it. However, the performance of IPW relies critically on the choice of the propensity score model, and model misspecification can result in severe bias (Kang and Schafer, 2007). Alternative weighting methods have been proposed to mitigate this issue in the setting of estimating standard causal effects of discrete-valued treatments by using weights that encourage (Imai and Ratkovic, 2014) or enforce (Hainmueller, 2012) balance of pre-specified moments of covariates. Particularly relevant to our article is the work by Huling and Mak (2020), who consider weights that minimize the energy distance between the weighted empirical distribution of the intervention groups and the joint population, aiming to balance the full joint distributions of covariates. This nonparametric distributional balancing approach has been shown to work empirically well with no need for careful modeling decisions. Several challenging issues remain unresolved for weighting methods for MTPs. Existing weighting methods for MTPs have focused on the estimation of the conditional density of treatment given the covariates, also known as the generalized propensity score (GPS). Diaz and van der Laan (2013) and Haneuse and Rotnitzky (2013) both proposed weighting estimators based on estimation of the GPS. These methods show great promise, but their success hinges on accurately specifying the conditional density model.
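To make the preceding point concrete, here is a minimal sketch (ours, purely illustrative) of the classical GPS-ratio weight construction for a simple shift policy \(q(\mathbf{x},a)=a+\delta\): the weight is the conditional treatment density evaluated at \(a-\delta\) divided by the density at \(a\). The homoscedastic Gaussian model below is exactly the kind of specification whose misspecification drives the fragility just discussed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

def gps_shift_weights(X, A, delta):
    """Density-ratio weights for the shift MTP q(x, a) = a + delta:
    w_i = g(A_i - delta | X_i) / g(A_i | X_i), where g is the conditional
    density of treatment given covariates (the GPS).  For illustration, g
    is modeled as a homoscedastic Gaussian fit by least squares; any
    misspecification of this model propagates directly into the weights."""
    fit = LinearRegression().fit(X, A)
    resid = A - fit.predict(X)              # A_i - m(X_i)
    s = resid.std()
    return norm.pdf(resid - delta, scale=s) / norm.pdf(resid, scale=s)

# toy usage: weights average near 1 when the density model is correct
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
A = X @ np.array([0.4, -0.2, 0.1]) + rng.normal(size=1000)
w = gps_shift_weights(X, A, delta=-0.5)
print(round(w.mean(), 3), round(w.max(), 2))
```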
Yet, estimating a conditional density is highly challenging, especially with an increasing number of covariates (Huling et al., 2023). Even slight deviation of the estimated density function from the true function can impact the performance of the IPW-style estimators of the ADRF (Naimi et al., 2014); these issues persist in the estimation of MTPs. Moreover, the impact of weights on finite sample performance is not yet fully understood, and no diagnostic tools are available to assess the performance of a given set of weights or to understand whether a given set of weights is likely to yield an unconfounded comparison for a given dataset. As MTPs involve investigating the effect of a shift of treatment values from their observed values, larger shifts are likely to yield analyses with more confounding and with more potential for positivity violations. Yet no tools exist that can assess what range of shifts in a class of MTPs is "safe" (i.e., has confounding that can be fully adjusted using a given weighting method) for a given application. Clear guidance is thus needed for practitioners to determine an appropriate magnitude of the MTP shift. In this work, we introduce distance-based tools that provide solutions to the aforementioned issues. By examining the role of the weights in the finite sample error of a generic weighting estimator, we show that confounding bias depends on the distributional distance between the observed population (the real world) and the population induced by a given MTP (the counterfactual world). Towards this aim, we first extend the identification result for the modified treatment policy from Haneuse and Rotnitzky (2013) to clarify the form of the target counterfactual population for an MTP and provide a clear mathematical expression of it. Even though the target population is purely hypothetical, we show its distribution can be estimated using the observed data, a fact used extensively in our methods. We then present a novel error decomposition to show that the estimation error of a weighted estimate of the causal effect of an MTP is directly related to the imbalance between the weighted empirical distribution (referring to the empirical distribution of the weighted observed sample) and the targeted empirical distribution induced by an MTP. We provide a measure of this distance with a modification of the energy distance (Szekely and Rizzo, 2013), a computationally simple measure of the distance between two multivariate distributions. It has been used to measure distributional imbalance in the context of categorical treatments (Huling and Mak, 2020) and to quantify dependence between continuous treatments and confounders (Huling et al., 2023). We propose a set of energy distance-based tools to enhance the estimation of MTP effects. Specifically, our methods provide 1) an assessment of a feasible range of shifts for a class of MTPs for arbitrary weighting methods for a given dataset and 2) a comparison of different weighting methods for MTPs. Finally, we introduce energy balancing weights, which optimally reduce the distributional imbalance to the target population, providing a new robust weighting approach for MTP estimation. The remainder of this article is as follows. In Section 2, we describe the MTP framework and propose a novel error decomposition to reveal the relationship between the estimation bias and distributional imbalance. Section 3 introduces our modification of the energy distance.
In Section 4, we propose energy-distance-based methods for evaluating the feasibility of an MTP and for evaluating different weighting methods. In Section 5, we propose new weights that directly balance data to the target distribution. In Section 6, we introduce an augmented estimator based on the energy balancing weights and show its asymptotic normality in estimating the MTP effect. We also propose a multiplier bootstrap method that is more computationally efficient in estimating the uncertainty of the augmented estimator and provide statistical guarantees for its use. We illustrate the robust performance of our proposed estimators through extensive numerical experiments in Section 7. In Section 8, we illustrate the use of our proposed set of methods through a real example of the mechanical power of ventilation. ## 2 Setup ### Causal framework for modified treatment policies (MTPs) Consider data \(\{\mathbf{X}_{i},A_{i},Y_{i}\}_{i=1}^{n}\) collected from an observational study, where \(\mathbf{X}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{p}\) denotes a \(p\)-dimensional vector of pre-treatment covariates for subject \(i\), \(A_{i}\in\mathcal{A}\subseteq\mathbb{R}\) denotes the received value of the treatment, and \(Y_{i}\in\mathbb{R}\) denotes the outcome. Here we assume \(A_{i}\) is a continuous treatment variable (e.g., the dose of a treatment taken by individual \(i\)). We adopt the potential outcome framework in which we assume that for each subject \(i\), there exists a potential outcome \(Y_{i}^{a}\), defined as the outcome that subject \(i\) would have if intervened on to take the treatment at value \(A=a\). An MTP analysis starts with an analyst-specified treatment policy \(q(\mathbf{x},a)\) that inputs patient characteristics and observed treatment values and outputs a modified value of the treatment. The goal of an MTP analysis is to estimate the mean potential outcome \(\mu^{q}\equiv\mathbb{E}[Y^{q(\mathbf{X},A)}]\) under this policy; this can then be contrasted with the average _observed_ outcome \(\mathbb{E}[Y^{A}]=\mathbb{E}[Y]\) to understand whether the policy improves or harms outcomes on average. To ensure \(Y_{i}^{a}\) is well-defined (i.e., \(Y_{i}^{a}\) is unique for a given treatment level \(a\) and subject \(i\)), we assume the following consistency condition: * (A0 Consistency): For any subject \(i\) in the population, if \(A_{i}=a\) then \(Y_{i}=Y_{i}^{a}\). It is not realistic for some patients to have extreme values of treatment, e.g., extremely low mechanical power of ventilation for patients who have serious respiratory symptoms (see Section 8 for the case study). As such, as in the work of Haneuse and Rotnitzky (2013), we define the enforceable set \(\mathcal{A}_{i}\) for subject \(i\) to be the set of treatment values for which the potential outcomes are meaningful for the subject. We further assume that the enforceable set for subject \(i\) is fully determined by the treatment value received \(A_{i}\) and the covariates \(\mathbf{X}_{i}\), i.e., \(\mathcal{A}_{i}=\mathcal{A}(\mathbf{X}_{i},A_{i})\). Using the enforceable set, we can work with a positivity assumption that is reasonable even in the presence of strong confounding. For example, if there is a positive density of observing subject \(i\) with covariates \(\mathbf{x}_{i}\) and treatment \(a_{i}\), then it is reasonable to assume that there is a positive density of observing a subject with the same covariates \(\mathbf{x}_{i}\) and treatment value \(a\in\mathcal{A}(\mathbf{x}_{i},a_{i})\).
In contrast, calculating the causal effect of an arbitrary treatment value \(a\) may not be meaningful since this treatment value may not be feasible or applicable to the entire population. Therefore, we focus on estimating the causal effects of an MTP that modifies or shifts the observed treatment values to be within the enforceable set. Since the enforceable set is determined by \(\mathbf{X}\) and \(A\), the MTP can be designed to be realistic for the entire population, i.e., the treatment policy satisfies \(q(\mathbf{x},a)\in\mathcal{A}(\mathbf{x},a)\) for any \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\), where \((\mathcal{X},\mathcal{A})\) is the joint support of the random variables \(\mathbf{X}\) and \(A\). It is important to note that MTPs are distinct from dynamic treatment regimes, where the current treatment values are determined by the past treatment values. MTPs, on the other hand, define the modified treatment value based on the natural (observed) treatment value without intervention. This means that the modified treatment value and the natural treatment value are supposed to happen at the same time, and since we do not know the actual treatment value without intervention, the MTP cannot be implemented exactly in practice. However, MTPs are still pertinent to clinical practice, as they can provide insight into how modifications to clinical practice could have changed patient outcomes. By considering the effects of a collection of MTPs, researchers can gain insights into the underlying mechanisms that lead to different outcomes and identify potential strategies for improving patient outcomes. For example, to evaluate a care policy that encourages a slight reduction in the intensity of mechanical ventilation, we could estimate the causal effect of the MTP \(q(\mathbf{x},a)=a-2\), which reduces the mechanical power for all subjects by 2 Joules/min compared with current standards. The MTP could be further refined to modify the amount of reduction differently for people with different characteristics or prior ventilation patterns. For any \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\), we can define \(\mathbb{E}(Y^{q(\mathbf{x},a)}|\mathbf{X}=\mathbf{x},A=a)\) to be the expected potential outcome averaging over all subjects who have \(\mathbf{X}=\mathbf{x},A=a\). The mean potential outcome of the MTP \(q(\mathbf{x},a)\) is then the mean of the conditional average potential outcome \(\mathbb{E}(Y^{q(\mathbf{x},a)}|\mathbf{X}=\mathbf{x},A=a)\) over the entire population, i.e., \[\mu^{q}:=\int_{(\mathcal{X},\mathcal{A})}\mathbb{E}[Y^{q(\mathbf{x},a)}|\mathbf{X}=\mathbf{x},A=a]dF_{\mathbf{X},A}(\mathbf{x},a) \tag{1}\] where \(F_{\mathbf{X},A}(\mathbf{x},a)\) is the distribution function of \((\mathbf{X},A)\). In order to make the integral well-defined, we follow Haneuse and Rotnitzky (2013) in assuming the continuity of \(\mathbb{E}[Y^{q(\mathbf{x},a)}|\mathbf{X}=\mathbf{x},A=a]\) over \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\) and the continuity of \(A|\mathbf{X}=\mathbf{x}\) over \(\mathbf{x}\in\mathcal{X}\). In order to estimate the causal effect from observational data, \(\mu^{q}\) needs to be causally identifiable. In other words, we need to express \(\mu^{q}\) as a function of the distribution of \((Y,\mathbf{X},A)\). From Haneuse and Rotnitzky (2013), \(\mu^{q}\) can be identified with the following assumptions: * (A1 Positivity): If \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\) then \((\mathbf{x},q(\mathbf{x},a))\in(\mathcal{X},\mathcal{A})\).
* (A2 Conditional exchangeability of related populations): For each \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\), let \(a^{\prime}=q(\mathbf{x},a)\); then \(Y^{a^{\prime}}|\mathbf{X}=\mathbf{x},A=a\) and \(Y^{a^{\prime}}|\mathbf{X}=\mathbf{x},A=a^{\prime}\) have the same distribution.

The positivity assumption states that for each subject in the population, it is always possible to find some other subjects who have the same covariates and receive the modified treatment. The second assumption states that the potential outcome of the modified treatment will not be affected by the original treatment allocation. In other words, subjects with \(\mathbf{X}=\mathbf{x}\) who received treatment \(a\) could have received treatment \(q(\mathbf{x},a)\). The difference between A2 and the usual no-unmeasured-confounders assumption is that the latter is much stronger, as it requires that subjects with \(\mathbf{X}=\mathbf{x}\) who received treatment \(a\) could have received any possible dose \(a^{\prime}\in\mathcal{A}_{\mathbf{x}}\), where \(\mathcal{A}_{\mathbf{x}}:=\{a:(\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\}\) denotes the support of \(A\) given \(\mathbf{X}=\mathbf{x}\). In order to have a closed-form derivation of our proposed estimator, we further restrict the MTP to have a piecewise differentiable inverse function.

* (A3 Piece-wise smooth invertibility): For each \(\mathbf{x}\in\mathcal{X}\), there exists a partition of \(\mathcal{A}_{\mathbf{x}}\) such that \(q(\mathbf{x},\cdot)\) is smooth and invertible within each part of the partition. Specifically, let \(q_{j}(\mathbf{x},\cdot)\) denote the function \(q(\mathbf{x},\cdot)\) on the \(I_{j,\mathbf{x}}\) part. Then \(q_{j}(\mathbf{x},\cdot)\) has a differentiable inverse function \(h_{j}(\mathbf{x},\cdot)\) on the interior of \(I_{j,\mathbf{x}}\), such that \(h_{j}(\mathbf{x},q_{j}(\mathbf{x},a))=a\).

Under Assumption A3, for a given \(\mathbf{x}\in\mathcal{X}\), the MTP \(q(\mathbf{x},a)\) can be expressed as the sum of the functions \(\{q_{j}(\mathbf{x},a)\}_{j=1}^{J(\mathbf{x})}\) on the intervals \(\{I_{j,\mathbf{x}}\}_{j=1}^{J(\mathbf{x})}\), \(q(\mathbf{x},a)=\sum_{j=1}^{J(\mathbf{x})}I_{j,\mathbf{x}}(a)q_{j}(\mathbf{x},a)\), where \(I_{j,\mathbf{x}}(a)\) is the indicator function such that \(I_{j,\mathbf{x}}(a)=1\) if \(a\in I_{j,\mathbf{x}}\) and \(I_{j,\mathbf{x}}(a)=0\) otherwise. Assumption A3 is important, as it allows us to cleanly express the expected potential outcome under the MTP as a function of the observed data distribution. With the assumption of piece-wise smooth invertibility, we have the following identification result from Haneuse and Rotnitzky (2013),

\[\mu^{q}=\int_{\mathcal{X}}\sum_{j=1}^{J(\mathbf{x})}\int_{h_{j}(\mathbf{x},a)\in I_{j,\mathbf{x}}}\mathbb{E}(Y|\mathbf{X}=\mathbf{x},A=a)dF_{\mathbf{X},A}(\mathbf{x},h_{j}(\mathbf{x},a)). \tag{2}\]

The proof, which follows arguments similar to those of Haneuse and Rotnitzky (2013), is presented in the Supplementary Material.

### Novel error decomposition for weighted MTP estimators

In this section, we present a novel error decomposition that allows a clear inspection of the role of sample weights in finite-sample errors when estimating \(\mu^{q}\) using weighting methods. This decomposition provides insights that enable us to provide a measure of confounding bias as a function of the sample weights.
This measure then allows us to provide tools that compare weights, and further tools to assess when a given MTP is so far from the observed treatments that measured confounding is excessive and/or too difficult to control with weights. Denoting \(\mu(\mathbf{x},a)\equiv\mathbb{E}(Y|\mathbf{X}=\mathbf{x},A=a)\), we have \(Y_{i}=\mu(\mathbf{X}_{i},A_{i})+\epsilon_{i}\), where \(\epsilon_{i}\equiv Y_{i}-\mu(\mathbf{X}_{i},A_{i})\) is independent of \(\mathbf{X}_{i}\) and \(A_{i}\) with mean zero. We can express the estimand (2) as

\[\mu^{q}=\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)dF_{\mathbf{X},A}^{q}(\mathbf{x},a),\text{ where} \tag{3}\]

\[F_{\mathbf{X},A}^{q}(\mathbf{x},a)=\sum_{j=1}^{J(\mathbf{x})}I_{j,\mathbf{x}}(h_{j}(\mathbf{x},a))F_{\mathbf{X},A}(\mathbf{x},h_{j}(\mathbf{x},a)) \tag{4}\]

and \(F_{\mathbf{X},A}(\mathbf{x},a)\) is the CDF of \((\mathbf{X},A)\). In the Supplementary Materials, we prove that \(F_{\mathbf{X},A}^{q}(\mathbf{x},a)\) is the CDF of \((\mathbf{X},q(\mathbf{X},A))\), as long as \(q(\mathbf{x},a)\) has the property that \(\lim_{(\mathbf{x},a)\rightarrow-\infty}q(\mathbf{x},a)=-\infty\) and \(\lim_{(\mathbf{x},a)\rightarrow\infty}q(\mathbf{x},a)=\infty\). In this paper, we focus on weighted estimators of \(\mu^{q}\), which do not require one to use outcome information, allowing the analyst to conduct objective causal inference. A weighting estimator with arbitrary sample weights \(\mathbf{w}=(w_{1},...,w_{n})\) can be expressed as \(\hat{\mu}_{\mathbf{w}}^{q}=\frac{1}{n}\sum_{i=1}^{n}w_{i}Y_{i}=\frac{1}{n}\sum_{i=1}^{n}w_{i}\mu(\mathbf{X}_{i},A_{i})+\frac{1}{n}\sum_{i=1}^{n}w_{i}\epsilon_{i}\), where \(\sum_{i=1}^{n}w_{i}=n\). This weighted estimator can be expressed as

\[\hat{\mu}_{\mathbf{w}}^{q}=\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)dF_{n,\mathbf{w},\mathbf{X},A}+\frac{1}{n}\sum_{i=1}^{n}w_{i}\epsilon_{i},\]

where \(F_{n,\mathbf{w},\mathbf{X},A}=\frac{1}{n}\sum_{i=1}^{n}w_{i}I(\mathbf{X}_{i}\leq\mathbf{x},A_{i}\leq a)\) is the weighted empirical CDF of the sample. We can express the error of the weighted estimator \(\hat{\mu}_{\mathbf{w}}^{q}\) as:

\[\hat{\mu}_{\mathbf{w}}^{q}-\mu^{q} = \underbrace{\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{w},\mathbf{X},A}-F_{n,\mathbf{X},A}^{q})(\mathbf{x},a)}_{\text{error due to confounding bias}} \tag{5}\]

\[+\underbrace{\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{X},A}^{q}-F_{\mathbf{X},A}^{q})}_{\text{sampling error}}+\frac{1}{n}\sum_{i=1}^{n}w_{i}\epsilon_{i}, \tag{6}\]

where \(F_{n,\mathbf{X},A}^{q}(\mathbf{x},a)=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{J(\mathbf{X}_{i})}I_{j,\mathbf{X}_{i}}(A_{i})I(\mathbf{X}_{i}\leq\mathbf{x},q_{j}(\mathbf{X}_{i},A_{i})\leq a)\) is the empirical estimator of the shifted distribution \(F_{\mathbf{X},A}^{q}\). The error decomposition emphasizes the importance of sample weights in mitigating confounding bias. The second term depends on how well the CDF can be estimated using the empirical CDF. The third term is affected by the variance of the weights; however, it always has mean zero. Thus, confounding bias can be mitigated by minimizing the first term \(\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{w},\mathbf{X},A}-F_{n,\mathbf{X},A}^{q})(\mathbf{x},a)\), which measures the distance between the two distribution functions.
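In code, the weighted estimator and the two samples whose distributional mismatch drives the confounding-bias term are straightforward to form. The sketch below is illustrative only and assumes an invertible policy, so that the shifted sample is simply \(\{\mathbf{X}_{i},q(\mathbf{X}_{i},A_{i})\}_{i=1}^{n}\); all function names are hypothetical.

```python
import numpy as np

def weighted_mtp_estimate(w, Y):
    """Weighted estimator  (1/n) * sum_i w_i * Y_i, after normalizing
    the weights so that sum(w) = n as required in the text."""
    w = np.asarray(w, dtype=float)
    w = w * len(w) / w.sum()
    return np.mean(w * Y)

def observed_and_shifted(X, A, q):
    """The two point clouds behind the confounding-bias term: the
    observed sample (X_i, A_i) and the MTP-shifted sample
    (X_i, q(X_i, A_i))."""
    observed = np.column_stack([X, A])
    shifted = np.column_stack([X, q(X, A)])
    return observed, shifted
```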
Note that \(F_{n,\mathbf{X},A}^{q}(\mathbf{x},a)\) is the empirical CDF of the random variable \((\mathbf{X},q(\mathbf{X},A))\) and \(F_{n,\mathbf{w},\mathbf{X},A}(\mathbf{x},a)=\frac{1}{n}\sum_{i=1}^{n}w_{i}I(\mathbf{X}_{i}\leq\mathbf{x},A_{i}\leq a)\) is the weighted empirical CDF of the random variable \((\mathbf{X},A)\). Thus, the term \(\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{w},\mathbf{X},A}-F_{n,\mathbf{X},A}^{q})(\mathbf{x},a)\) depends on both the weights \(\mathbf{w}\) and the pre-specified modified treatment policy \(q(\mathbf{x},a)\). Larger MTP shifts lead to a larger difference between the distributions of \((\mathbf{X},q(\mathbf{X},A))\) and \((\mathbf{X},A)\), which, generally speaking, makes the confounding bias term larger.

## 3 Weighted Energy Distance for MTPs

After identifying the key component of confounding bias in estimating the causal effect of an MTP, a natural question arises: can we measure or bound its magnitude for a given set of weights? Doing so would enable objective evaluation of a set of weights in the specific context of a given MTP for a given dataset. Since the confounding bias component \(\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{w},\mathbf{X},A}-F_{n,\mathbf{X},A}^{q})(\mathbf{x},a)\) is an integral over the difference of the two empirical distributions, a metric that can measure the distance between these two distributions can be used for this purpose. In this paper, we adopt the energy distance (Szekely and Rizzo, 2013; Huling and Mak, 2020), a metric based on powers of the Euclidean distance, for use in estimating weights that minimize the key component of the confounding bias. Although there are other measures of distributional distance, we focus on the energy distance due to its simplicity in form, implementation, and lack of need for tuning parameters, making it a widely applicable tool for a broad range of problems. However, other distances can be used in its place, such as the maximum mean discrepancy (MMD), which is the distance between two distributions embedded in a reproducing kernel Hilbert space (RKHS). The MMD with the distance kernel is equivalent to the energy distance when it is calculated with a semimetric of negative type (Sejdinovic et al., 2013). In Supplementary Material Section 2.2, we provide the formula of the MMD for researchers to use. Following Huling and Mak (2020), we generalize the energy distance (see Supplementary Material Section 2.1 for the definition) to the weighted energy distance, which measures the distance between the weighted empirical CDF \(F_{n,\mathbf{w},\mathbf{X},A}\) and the CDF under the MTP, \(F_{n,\mathbf{X},A}^{q}\):

\[\begin{split}&\mathcal{E}(F_{n,\mathbf{w},\mathbf{X},A},F_{n,\mathbf{X},A}^{q})\\ &=\frac{1}{n^{2}}\bigg\{2\times\sum_{i=1}^{n}\sum_{k=1}^{n}\sum_{j=1}^{J(\mathbf{X}_{k})}I_{j,\mathbf{X}_{k}}(A_{k})w_{i}\|(\mathbf{X}_{i},A_{i})-(\mathbf{X}_{k},q_{j}(\mathbf{X}_{k},A_{k}))\|_{2}\\ &\quad-\sum_{i=1}^{n}\sum_{k=1}^{n}w_{i}w_{k}\|(\mathbf{X}_{i},A_{i})-(\mathbf{X}_{k},A_{k})\|_{2}\\ &\quad-\sum_{i=1}^{n}\sum_{j=1}^{J(\mathbf{X}_{i})}\sum_{k=1}^{n}\sum_{j^{\prime}=1}^{J(\mathbf{X}_{k})}I_{j,\mathbf{X}_{i}}(A_{i})I_{j^{\prime},\mathbf{X}_{k}}(A_{k})\|(\mathbf{X}_{i},q_{j}(\mathbf{X}_{i},A_{i}))-(\mathbf{X}_{k},q_{j^{\prime}}(\mathbf{X}_{k},A_{k}))\|_{2}\bigg\}.\end{split} \tag{7}\]
Note that since the empirical CDF under the MTP, \(F_{n,\mathbf{X},A}^{q}\), can be viewed as the empirical CDF of the sample \(\{\mathbf{X}_{i},q(\mathbf{X}_{i},A_{i})\}_{i=1}^{n}\), the empirical characteristic function of \(\{\mathbf{X}_{i},q(\mathbf{X}_{i},A_{i})\}_{i=1}^{n}\) is

\[\varphi_{n,\mathbf{X},A}^{q}(\mathbf{t})=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{J(\mathbf{X}_{i})}I_{j,\mathbf{X}_{i}}(A_{i})\exp(i\mathbf{t}^{T}(\mathbf{X}_{i},q_{j}(\mathbf{X}_{i},A_{i})))=\frac{1}{n}\sum_{i=1}^{n}\exp(i\mathbf{t}^{T}(\mathbf{X}_{i},Q_{i})), \tag{8}\]

where \(Q_{i}=q(\mathbf{X}_{i},A_{i})\). The following lemma shows that the weighted energy distance between \(F_{n,\mathbf{w},\mathbf{X},A}\) and \(F_{n,\mathbf{X},A}^{q}\) is a distance between the intended distributions.

**Lemma 3.1**.: _Let \(\mathbf{w}\) be a vector of weights such that \(\sum_{i=1}^{n}w_{i}=n\) and \(w_{i}>0\) for all \(i\). Then_

\[\mathcal{E}(F_{n,\mathbf{w},\mathbf{X},A},F_{n,\mathbf{X},A}^{q})=\int_{\mathbb{R}^{d}}|\varphi_{n,\mathbf{w},\mathbf{X},A}(\mathbf{t})-\varphi_{n,\mathbf{X},A}^{q}(\mathbf{t})|^{2}v(\mathbf{t})d\mathbf{t}, \tag{9}\]

_where \(v(\mathbf{t})=1/(C_{d}\|\mathbf{t}\|^{1+d})\), \(C_{d}\) is a constant, \(d=p+1\) is the dimension of the variate \((\mathbf{X},A)\), \(\varphi_{n,\mathbf{X},A}^{q}(\mathbf{t})\) is the characteristic function of \(F_{n,\mathbf{X},A}^{q}\), and \(\varphi_{n,\mathbf{w},\mathbf{X},A}(\mathbf{t})=\frac{1}{n}\sum_{i=1}^{n}w_{i}e^{i\mathbf{t}^{T}(\mathbf{X}_{i},A_{i})}\) is the weighted empirical characteristic function (ECHF) of the sample \(\{\mathbf{X}_{i},A_{i}\}_{i=1}^{n}\)._

This lemma follows the results of Szekely and Rizzo (2013) and Huling and Mak (2020), which state that the energy distance is a weighted \(L_{2}\) distance between the two distributions' characteristic functions. Therefore, it can be used to determine whether the weighted distribution is approaching the target distribution (i.e., the empirical distribution of \((\mathbf{X},q(\mathbf{X},A))\)). The following theorem shows that the weighted energy distance converges to the energy distance between the limiting CDF and the target CDF \(F_{\mathbf{X},A}^{q}\).

**Theorem 3.2**.: _Assume \(\lim_{n\to\infty}\varphi_{n,\mathbf{w},\mathbf{X},A}(\mathbf{t})=\tilde{\varphi}_{\mathbf{X},A}(\mathbf{t})\) almost surely for all \(\mathbf{t}\), where \(\tilde{\varphi}_{\mathbf{X},A}(\mathbf{t})\) is some integrable characteristic function with associated CDF \(\tilde{F}_{\mathbf{X},A}\). Then we have, almost surely,_

\[\lim_{n\rightarrow\infty}\mathcal{E}(F_{n,\mathbf{w},\mathbf{X},A},F_{n,\mathbf{X},A}^{q})=\mathcal{E}(\tilde{F}_{\mathbf{X},A},F_{\mathbf{X},A}^{q}). \tag{10}\]

The following lemma follows the results of Mak and Joseph (2018) and Huling and Mak (2020) and builds a connection between the integral in (5) and the energy distance; since (5) is the key component of the bias, this connects the bias with distributional balance.

**Lemma 3.3**.: _Let \(\mathcal{H}\) be the native space induced by the radial kernel \(\Phi(\cdot)=-\|\cdot\|_{2}\) on \((\mathcal{X},\mathcal{A})\), and suppose \(\mu(\cdot)\in\mathcal{H}\)._
_Then for any weights \(\mathbf{w}\) satisfying \(\sum_{i=1}^{n}w_{i}=n\), \(w_{i}\geq 0\), we have_

\[\bigg[\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{X},A}^{q}-F_{n,\mathbf{w},\mathbf{X},A})(\mathbf{x},a)\bigg]^{2}\leq C_{\alpha}\mathcal{E}(F_{n,\mathbf{X},A}^{q},F_{n,\mathbf{w},\mathbf{X},A}), \tag{11}\]

_where \(C_{\alpha}\geq 0\) is a constant depending only on \(\mu(\mathbf{x},a)\)._

This lemma states that, under the condition that the conditional mean function \(\mu(\mathbf{x},a)\) belongs to \(\mathcal{H}\), the key component of the bias is bounded by the energy distance. Here we emphasize that \(\mathcal{H}\) contains a wide class of functions, and \(C_{\alpha}=\langle\mu,\mu\rangle\), where \(\langle\cdot,\cdot\rangle\) denotes the inner product in the Hilbert space \(\mathcal{H}\). For an extended discussion and details of the specific form of the native space, readers can refer to Mak and Joseph (2018).

## 4 Using a weighted energy distance for MTPs

In this section, we introduce several important tools that facilitate and enhance the application of MTP techniques. In particular, these tools help provide critical and rigorous guidance on important decisions necessary within the MTP framework. Each of the tools is based on the weighted energy distance due to its connection with estimation bias for MTPs. The first tool centers on determining a reasonable range of treatment policies that can be reliably addressed with a given weighting approach. The second involves assessing arbitrary weighting approaches in their ability to control for measured confounding for a specific dataset and MTP, enabling researchers to determine the most effective weighting method for confounding control.

### Choosing a feasible MTP scale for a given weighting method

When exploring the causal effect of an MTP, researchers often aim to understand the effect of interventions that operate by modifying a treatment's value in a specific direction (for example, Haneuse and Rotnitzky (2013) consider an intervention that shortens patients' surgery time); the magnitude of the modification then determines how different the treatment under the MTP is from the treatments assigned in reality. Larger magnitudes or scales of an MTP's intervention/policy may allow researchers to explore a wider variety of manipulations, but more extreme policies differ more starkly from reality, making confounding more severe and thus more difficult to control adequately. Thus, the scale of the intervention or modification must be chosen carefully. To our knowledge, no tools yet exist to aid in this choice. From the error decomposition in Section 2.2, confounding bias operates only through the first term \(\int_{(\mathcal{X},\mathcal{A})}\mu(\mathbf{x},a)d(F_{n,\mathbf{w},\mathbf{X},A}-F_{n,\mathbf{X},A}^{q})(\mathbf{x},a)\), which depends on the difference between the weighted empirical CDF for the observed population and the shifted empirical CDF for the target population. If the MTP scale is too large, a given set of weights or weighting approach may not properly balance these two populations to have the same distribution, leading to a large potential for bias in the estimator. To address this issue, we propose a method based on the weighted energy distance to identify a scale of intervention, for a given class of MTPs, at which a given weighting approach can adequately balance the two populations and thus control for measured confounding.
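To make the central quantity concrete, here is a minimal sketch of the weighted energy distance in Eq. (7) for the common case of a smoothly invertible policy (so that \(J(\mathbf{x})=1\) and the partition indicators drop out); this is also the quantity that the energy balancing weights of Section 5 minimize. The function name is illustrative, and the weights are assumed to be normalized so that they sum to \(n\).

```python
import numpy as np
from scipy.spatial.distance import cdist

def weighted_energy_distance(w, X, A, Q):
    """Weighted energy distance between the weighted observed sample
    (X_i, A_i) and the MTP-shifted sample (X_i, Q_i), Q_i = q(X_i, A_i);
    assumes an invertible policy (J(x) = 1) and sum(w) = n."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    Z = np.column_stack([X, A])     # observed points (X_i, A_i)
    Zq = np.column_stack([X, Q])    # shifted points (X_i, Q_i)
    cross = cdist(Z, Zq)            # ||(X_i, A_i) - (X_k, Q_k)||_2
    within = cdist(Z, Z)
    target = cdist(Zq, Zq)
    return (2.0 * w @ cross.sum(axis=1)
            - w @ within @ w
            - target.sum()) / n**2
```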
By identifying such a feasible scale, researchers can ensure that the MTP is not too large to be estimated well, reducing the potential for bias. Consider the null hypothesis that the weights from a given balancing method perfectly balance the observed population \(\{\mathbf{X}_{i},A_{i}\}_{i=1}^{n}\) and the target population \(\{\mathbf{X}_{i},q(\mathbf{X}_{i},A_{i})\}_{i=1}^{n}\) for an MTP, and thus completely control for measured confounding. Under this null, the energy distance between the weighted empirical CDF \(F_{n,\mathbf{w},\mathbf{X},A}\) and the target empirical CDF \(F_{n,\mathbf{X},A}^{q}\) is nonzero only due to sampling variability. We can estimate the distribution of the energy statistic under this null hypothesis of no measured confounding to determine whether a given weighting method has balanced the weighted distribution to the target. We consider threshold values of this null distribution above which one may declare that the energy distance between a weighted distribution and the target shifted distribution is large enough that confounding is not plausibly controlled for. The threshold can be estimated through the following nonparametric bootstrap: for the \(r=1,...,R\)-th bootstrap iteration, obtain the bootstrap sample \(\{\mathbf{X}_{i}^{r},A_{i}^{r}\}_{i=1}^{n}\) by sampling with replacement from the observed sample \(\{\mathbf{X}_{i},A_{i}\}_{i=1}^{n}\) and calculate the corresponding bootstrap energy distance as \(\mathcal{E}^{r}=\mathcal{E}(F_{n,\mathbf{X},A}^{rq},F_{n,\mathbf{X},A}^{q})\), where \(F_{n,\mathbf{X},A}^{rq}\) is the empirical CDF for the bootstrap sample \(\{\mathbf{X}_{i}^{r},q(\mathbf{X}_{i}^{r},A_{i}^{r})\}_{i=1}^{n}\). The threshold can then be calculated as the one-sided upper \(1-\alpha\) (typically \(0.95\)) quantile of the bootstrap sample \(\{\mathcal{E}^{r}\}_{r=1}^{R}\). If the energy distance is larger than the threshold, we may conclude that the weights do not balance the observed distribution to the target population under the given MTP \(q(\mathbf{X},A)\), indicating that the magnitude of the MTP is too large and thus must be reduced in order to obtain a reliable estimate. Note that the bootstrap threshold and the energy distance \(\mathcal{E}(F_{n,\mathbf{w},\mathbf{X},A},F_{n,\mathbf{X},A}^{q})\) both depend on the choice of the MTP, so the bootstrap procedure must be run anew for each change to the MTP scale (a code sketch of this procedure appears below).

Figure 1: Four general scenarios that demonstrate our method for selecting a feasible MTP scale. The X-axis represents the MTP scale, with larger values of \(\tau\) indicating greater shifts. The blue curve shows the energy distance after balancing, while the three horizontal lines represent the calculated thresholds. The MTPs corresponding to \(\tau\) values for which the weighted energy distance exceeds these thresholds are considered to be MTP magnitudes with concern for uncontrolled measured confounding. The estimation error curve (red curve) validates our method, as its value increases significantly at these flagged MTP magnitudes.

We illustrate this new approach through an example using our novel energy balancing weights (introduced in Sections 5 and 6). Simulated data are generated using the first condition of our simulation study described in Section 7, with \(n=300\) and dimensionality \(p=20\). For simplicity, we consider the monotone shift function \(q(\mathbf{x},a)\) where all interventions are shifted by the same value \(\tau\): \(q(\mathbf{x},a)=a+\tau\), where \(\tau\) ranges over \([0,20]\) in increments of \(0.1\).
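The bootstrap threshold just described might be implemented as follows; this is a minimal sketch with hypothetical names, where `energy_fn` is assumed to return the plain (unweighted) energy distance between two point clouds.

```python
import numpy as np

def bootstrap_energy_threshold(X, A, q, energy_fn, R=1000, alpha=0.05, seed=0):
    """Upper (1 - alpha) quantile of the energy distance between the
    MTP-shifted empirical CDF of a bootstrap resample and that of the
    original sample, under the null of no measured confounding."""
    rng = np.random.default_rng(seed)
    n = len(A)
    shifted = np.column_stack([X, q(X, A)])        # (X_i, q(X_i, A_i))
    stats = np.empty(R)
    for r in range(R):
        idx = rng.integers(0, n, size=n)           # resample with replacement
        Xb, Ab = X[idx], A[idx]
        shifted_b = np.column_stack([Xb, q(Xb, Ab)])
        stats[r] = energy_fn(shifted_b, shifted)   # E(F^{rq}_n, F^q_n)
    return np.quantile(stats, 1.0 - alpha)
```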
Larger values of \(\tau\) represent larger discrepancies from the original treatment. For each \(\tau\), we calculate the weighted energy distance for the estimated energy balancing weights, the error of the energy balancing weights estimator, and the \(97.5\%\), \(95\%\), and \(90\%\) upper quantiles of the bootstrap null distribution (described above). Figure 1 displays four general scenarios with different random seeds. The energy distance for the energy balancing weights increases with a larger MTP scale \(\tau\); along with this increase, the error of the estimator also increases monotonically. The three vertical dashed lines indicate the intersection of the energy distance and the three thresholds based on the upper quantiles of the null distribution. We can see that for \(\tau\) values less than the vertical lines, the estimation error is generally controlled, indicating that this procedure is a reasonable criterion for choosing a feasible MTP scale.

### Evaluation of arbitrary weights for a given MTP and dataset

The previous section evaluates different MTP scales with a fixed weighting method. Similarly, we can also use the weighted energy distance to evaluate the performance of different weights for a fixed MTP. Since smaller weighted energy distances indicate better balance between the weighted sample and the shifted sample, a natural criterion is to select the method that yields the smallest weighted energy distance. We demonstrate this approach to weight selection using the simulation results with sample size \(n=800\) and covariate dimensionality \(p=80\) under the first simulation scenario described in Section 7. Figure 2 displays the distribution of the rank of the absolute estimation error for a given dataset compared with the rank of the weighted energy distance for the four weighting methods. From this, the weighting method with the smallest energy distance (rank 1) tends to perform best in most cases. In Section 3 of the Supplementary Material, we further explore the validity of this method under broader conditions.

Figure 2: The histogram depicts the association between the rank of estimation error and the rank of weighted energy distance using the simulation setting described above. The x-axis of each plot is the rank of each method in terms of performance. The figure shows that, in most cases, the method with the lowest weighted energy distance yields the best performance.

## 5 Energy balancing weights for MTP estimation

As demonstrated in the error decomposition in Section 2.2, weighted distributional imbalance is a critical component of the estimation bias of weighted estimators for MTPs. Further, we showed in Lemma 3.1 and Theorem 3.2 that the weighted energy distance characterizes weighted distributional imbalance. In this section, we propose a new set of weights designed as the optimizer of this measure. By minimizing the distance between the weighted empirical distribution and the target distribution, our proposed weights, which we call energy balancing weights (EBWs), control for confounding in a robust and flexible manner. Specifically, these weights minimize the energy distance between the observed sample (weighted empirical CDF) and the target sample under the MTP (MTP-shifted empirical CDF). Thus, the energy balancing weights balance the weighted empirical distribution to the MTP-shifted empirical distribution. As extensively highlighted in the weighting literature, extreme weights can inflate the variance of the estimator (Li et al., 2018; Chattopadhyay et al., 2020).
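A quick, standard diagnostic for such weight degeneracy (not specific to this paper) is Kish's effective sample size, which equals \(n\) for uniform weights and shrinks as the weights become more variable:

```python
import numpy as np

def effective_sample_size(w):
    """Kish's effective sample size, (sum w)^2 / sum(w^2): a common
    diagnostic for how much variable weights inflate variance."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)
```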
To ensure root-\(n\) consistency of our energy balancing estimator, it is necessary to impose a restriction to avoid extreme weights. One approach is to introduce an additional penalization term on the sum of squared weights, resulting in penalized energy balancing weights. In this section, we introduce the penalized energy balancing weights \(\mathbf{w}_{n}^{ep}\). Under mild conditions, we show these weights result in root-\(n\) consistency in estimating the mean potential outcome under an MTP.

### Penalized Energy Balancing Estimator

The penalized energy balancing weights account for the magnitude of the weights in addition to the weighted energy distance. Specifically, we define the penalized energy balancing weights \(\mathbf{w}_{n}^{ep}\) with a user-specified parameter \(\lambda>0\) as follows:

\[\mathbf{w}_{n}^{ep}\in\operatorname*{arg\,min}_{\mathbf{w}=(w_{1},\dots,w_{n})}\mathcal{E}(F_{n,\mathbf{w},\mathbf{X},A},F_{n,\mathbf{X},A}^{q})+\frac{\lambda}{n^{2}}\sum_{i=1}^{n}w_{i}^{2}\text{ s.t. }\sum_{i=1}^{n}w_{i}=n\,,w_{i}\geq 0. \tag{12}\]

The corresponding penalized energy balancing estimator is then \(\hat{\mu}^{q}_{\mathbf{w}^{ep}_{n}}=n^{-1}\sum_{i=1}^{n}w^{ep}_{i,n}Y_{i}\). Analogously to Huling and Mak (2020), we prove the following properties of the energy balancing weights. Theorem 5.1 shows that the penalized energy balancing weights make the weighted empirical CDF of the observed data converge to the true MTP-shifted CDF.

**Theorem 5.1**.: _Assume the assumptions in the previous theorems hold. Let \(\mathbf{w}^{ep}_{n}\) be the penalized energy balancing weights. Then, we have \(\lim_{n\rightarrow\infty}F_{n,\mathbf{w}^{ep}_{n},\mathbf{X},A}(\mathbf{x},a)=F^{q}_{\mathbf{X},A}(\mathbf{x},a)\) almost surely for every continuity point \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\). Furthermore, the following holds almost surely_

\[\lim_{n\rightarrow\infty}\mathcal{E}(F_{n,\mathbf{w}^{ep}_{n},\mathbf{X},A},F^{q}_{n,\mathbf{X},A})=0. \tag{13}\]

We now show that the penalized energy balancing estimator achieves root-\(n\) consistency.

**Theorem 5.2**.: _Assume the conditions in Theorem 3.2. Let \(\mathcal{H}\) be the native space induced by the radial kernel \(\Phi(\cdot)=-\|\cdot\|_{2}\) on \((\mathcal{X},\mathcal{A})\)._
_Suppose the following mild conditions hold:_

* **CP-1:** \(\mu(\cdot,\cdot)\in\mathcal{H}\)
* **CP-2:** \(\mathrm{Var}[\mu(\mathbf{X},A)]<\infty\)
* **CP-3:** \(\mathrm{Var}[Y|\mathbf{X}=\mathbf{x},A=a]\) is bounded over \((\mathbf{x},a)\in(\mathcal{X},\mathcal{A})\).
* **CP-4:** \(\mathbb{E}[g^{2}(\mathbf{W},\mathbf{W}^{\prime},\mathbf{W}^{\prime\prime},\mathbf{W}^{\prime\prime\prime})]<\infty\), where \(\mathbf{W},\mathbf{W}^{\prime},\mathbf{W}^{\prime\prime},\mathbf{W}^{\prime\prime\prime}\stackrel{{ i.i.d.}}{{\sim}}F^{q}_{\mathbf{X},A}\) are vectors of the form \((\mathbf{X},A)\) and, with \(h(\mathbf{w})=f^{q}_{\mathbf{X},A}(\mathbf{w})/f_{\mathbf{X},A}(\mathbf{w})\), the kernel function \(g(\cdot)\) is defined as:

\[g(\mathbf{w},\mathbf{w}^{\prime},\mathbf{w}^{\prime\prime},\mathbf{w}^{\prime\prime\prime})=h(\mathbf{w})\|\mathbf{w}-\mathbf{w}^{\prime\prime}\|_{2}+h(\mathbf{w}^{\prime})\|\mathbf{w}^{\prime}-\mathbf{w}^{\prime\prime\prime}\|_{2}-h(\mathbf{w})h(\mathbf{w}^{\prime})\|\mathbf{w}-\mathbf{w}^{\prime}\|_{2}-\|\mathbf{w}^{\prime\prime\prime}-\mathbf{w}^{\prime\prime}\|_{2}.\]

_Then, the proposed penalized EBW estimator \(\hat{\mu}^{q}_{\mathbf{w}^{ep}_{n}}\) is root-\(n\) consistent in that_

\[\sqrt{\mathbb{E}_{\mathbf{X},A,Y}[(\hat{\mu}^{q}_{\mathbf{w}^{ep}_{n}}-\mu^{q})^{2}]}=O_{p}(n^{-1/2}). \tag{14}\]

Theorem 5.2 shows that the penalized energy balancing weights yield a root-\(n\) consistent estimate of \(\mu^{q}\) under mild conditions on the data-generating process, even though the energy balancing weights are not shown to be consistent for the true density ratio weights in any sense. We briefly comment on the conditions required for Theorem 5.2. Condition **CP-1** requires that the outcome regression function be contained in the native space \(\mathcal{H}\). As demonstrated in Huling and Mak (2020), this condition is a smoothness assumption on the true outcome regression function \(\mu(\mathbf{x},a)\). **CP-2** requires that the outcome regression function have finite variance, and **CP-3** requires that the conditional variance function be bounded. These two conditions are both mild and fairly weak in practice. **CP-4** is a requirement that certain moments of the covariates and treatments be bounded.

## 6 Augmented Energy Balancing Estimator

Augmented estimators are a widely used approach in the causal inference literature that combine balancing weights and an estimated outcome model to improve the estimation of a causal effect, potentially increasing efficiency and reducing sensitivity to model misspecification. In this context, we propose an augmented version of the energy balancing estimator that incorporates an estimated outcome model. Because the energy balancing weights were shown to yield consistent estimates of \(\mu^{q}\) without the need for an outcome regression model, this allows the analyst to utilize an outcome regression model primarily for the purpose of variance reduction. Let \(\hat{\mu}(\mathbf{x},a)\) be an estimate of the outcome regression function \(\mu(\mathbf{x},a)\). The augmented estimator is constructed by subtracting \(\int_{\mathcal{X}}\int_{\mathcal{A}}\hat{\mu}(\mathbf{x},a)d(F_{n,\mathbf{X},A}^{q}-F_{n,\mathbf{w}_{n}^{ep},\mathbf{X},A})(\mathbf{x},a)\) from the weighted estimator \(\hat{\mu}_{\mathbf{w}_{n}^{ep}}^{q}=n^{-1}\sum_{i=1}^{n}w_{i}^{ep}Y_{i}\).
The resulting augmented energy balancing estimator is

\[\begin{split}\hat{\mu}_{AG}^{q}&=\hat{\mu}_{\mathbf{w}_{n}^{ep}}^{q}-\int_{\mathcal{X}}\int_{\mathcal{A}}\hat{\mu}(\mathbf{x},a)d(F_{n,\mathbf{X},A}^{q}-F_{n,\mathbf{w}_{n}^{ep},\mathbf{X},A})(\mathbf{x},a)\\ &=\frac{1}{n}\sum_{i=1}^{n}w_{i}^{ep}(Y_{i}-\hat{\mu}(\mathbf{X}_{i},A_{i}))+\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{J(\mathbf{X}_{i})}I_{j,\mathbf{X}_{i}}(A_{i})\hat{\mu}(\mathbf{X}_{i},q_{j}(\mathbf{X}_{i},A_{i})).\end{split} \tag{15}\]

In the second form, the weighted residuals can be viewed as a bias-correction term for the outcome regression-based estimate of the MTP effect. We show its asymptotic normality and propose a statistically valid and computationally efficient multiplier bootstrap for inference.

### Asymptotic normality

In this section, we demonstrate the asymptotic distribution of the augmented energy balancing estimator \(\hat{\mu}_{AG}^{q}\). The required conditions for asymptotic normality are:

* **CA-1:** \(\hat{\mu}(\mathbf{x},a)\) is such that \(\hat{\mu}-\mu\in\mathcal{H}\).
* **CA-2:** \(\mathbf{w}_{n}^{ep}\) satisfy \(1\leq w_{i}^{ep}\leq Bn^{1/3}\) for some constant \(B\).
* **CA-3:** \(\max_{i}E(|\epsilon_{i}|^{3})<\infty\) for each \(n\).
* **CA-4:** \(\hat{\mu}(\mathbf{x},a)\) is a consistent estimator of \(\mu(\mathbf{x},a)\).

Condition **CA-1** requires that the outcome regression model minus the true outcome regression model be contained in the native space \(\mathcal{H}\). This condition somewhat limits the complexity of the error function of the estimated outcome regression model. Condition **CA-2** has been used before in the literature; see Athey et al. (2016); Huling et al. (2023). This condition can be imposed directly in the weight optimization procedure without changing the empirical performance of the weights or any of the asymptotic results of the weights, as in Huling et al. (2023). Condition **CA-3** indicates that the error term has finite third moments, and condition **CA-4** requires consistency of the outcome regression model; **CA-4** is only required for studying the asymptotic distribution of \(\hat{\mu}_{AG}^{q}\) and is not needed to show consistency or the convergence rate of \(\hat{\mu}_{AG}^{q}\) for \(\mu^{q}\). With these conditions, we have the following:

**Theorem 6.1**.: _Under Conditions CA-1–CA-4, let \(J_{n}=n^{1/2}(\hat{\mu}_{AG}^{q}-\mu^{q})\) and \(J_{n}^{*}=[\mathrm{Var}(\mu(\mathbf{X},A))]^{1/2}F+\sigma n^{-1/2}\sum_{j=1}^{n}w_{j}^{ep}G_{j}\), where \(F,G_{1},...,G_{n}\) are independently and identically distributed standard normal random variables independent of \(\mathbf{X}_{1},...,\mathbf{X}_{n},A_{1},...,A_{n}\) and \(\epsilon_{1},...,\epsilon_{n}\). Let \(\psi_{n}\) and \(\psi_{n}^{*}\) be the corresponding characteristic functions of \(J_{n}\) and \(J_{n}^{*}\)._
_Then as \(n\rightarrow\infty\), \(|\psi_{n}(t)-\psi_{n}^{*}(t)|\to 0\), \(t\in\mathbb{R}\), where \(\psi_{n}^{*}(t)\) is twice differentiable, and_

\[\limsup_{n}\text{Var}(J_{n})\leq\text{Var}(\mu(\mathbf{X},Q))+\sigma^{2}V, \tag{16}\]

_where \(\mu(\mathbf{X},Q)=\sum_{j=1}^{J(\mathbf{X})}I_{j,\mathbf{X}}(A)\mu(\mathbf{X},q_{j}(\mathbf{X},A))\) is the conditional mean under the MTP and \(V=E\Big(\frac{f_{\mathbf{X},A}^{q}(\mathbf{X},A)}{f_{\mathbf{X},A}(\mathbf{X},A)}\Big)^{2}+D\) is the variance of the Radon-Nikodym weights plus a positive number._

Theorem 6.1 demonstrates that the energy balancing weights, when paired with a consistent outcome regression model, achieve asymptotic normality and may be efficient; however, an exploration of the conditions under which efficiency is guaranteed requires additional work.

### Inference

The asymptotic normality of our augmented estimator \(\hat{\mu}_{AG}^{q}\) allows us to use a standard bootstrap method to estimate its uncertainty and conduct statistical inference for the estimator. However, in practice, the standard nonparametric bootstrap method has some limitations. First, it can be computationally intensive, because both the penalized energy balancing weights and the outcome regression estimator must be re-computed during each bootstrap iteration \(r=1,...,R\). This can be especially time-consuming for large values of \(R\) (e.g., \(R=10000\)). Further, in some extreme resamples, it may exacerbate issues of overlap by chance. We thus propose a multiplier bootstrap method, originally presented by Wu (1986), that allows for quick computation. Similar to Matsouaka et al. (2023) (who name it a wild bootstrap), the method perturbs the influence function of the estimator to estimate its variance. The influence function for our augmented estimator is

\[\varphi_{i}=\frac{f_{\mathbf{X},A}^{q}(\mathbf{X}_{i},A_{i})}{f_{\mathbf{X},A}(\mathbf{X}_{i},A_{i})}(Y_{i}-\mu(\mathbf{X}_{i},A_{i}))+\sum_{j=1}^{J(\mathbf{X}_{i})}I_{j,\mathbf{X}_{i}}(A_{i})\mu(\mathbf{X}_{i},q_{j}(\mathbf{X}_{i},A_{i}))-\mu^{q}.\]

We estimate the influence function by plugging in the estimators \(\hat{\mu}^{q}\), \(\hat{\mu}(\mathbf{X},A)\), and \(w_{i}^{ep}\) for their population counterparts:

\[\hat{\varphi}_{i}=w_{i}^{ep}(Y_{i}-\hat{\mu}(\mathbf{X}_{i},A_{i}))+\sum_{j=1}^{J(\mathbf{X}_{i})}I_{j,\mathbf{X}_{i}}(A_{i})\hat{\mu}(\mathbf{X}_{i},q_{j}(\mathbf{X}_{i},A_{i}))-\hat{\mu}^{q}. \tag{17}\]

Even though \(w_{i}^{ep}\) does not necessarily estimate the true inverse propensity score, we show that the following procedure is still valid. Then, the multiplier bootstrap estimator \(\hat{\Sigma}\) can be constructed as displayed in Algorithm 1. A 95% Wald-type confidence interval for \(\hat{\mu}_{AG}^{q}\) can be constructed as \(\hat{\mu}_{AG}^{q}\pm 1.96\times n^{-1/2}\hat{\Sigma}^{1/2}\). Here, we prove the validity of the proposed multiplier bootstrap in the following theorem.

**Theorem 6.2**.: _We have that \(\hat{q}_{r}^{q}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\xi_{i}\hat{\varphi}_{i}\to n^{1/2}(\hat{\mu}_{AG}^{q}-\mu^{q})\) as \(n\rightarrow\infty\)._

Thus, Theorem 6.2 shows that one can estimate the asymptotic distribution of \(\hat{\mu}_{AG}^{q}\) using the multiplier bootstrap procedure in Algorithm 1. An advantage of the multiplier bootstrap procedure over the standard bootstrap is that the weights and outcome regression model only need to be estimated once, not \(R\) times. This results in a substantial computational speedup for most applications, as demonstrated in the next section.
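Since Algorithm 1 itself is not reproduced here, the following minimal sketch (hypothetical names; assumes a smoothly invertible policy so that the indicator sums in Eq. (17) collapse) shows how the plug-in influence function and the multiplier bootstrap standard error fit together.

```python
import numpy as np

def augmented_estimate(w, Y, mu_obs, mu_shift):
    """Augmented estimator, Eq. (15), for an invertible policy:
    mu_obs[i] = mu_hat(X_i, A_i), mu_shift[i] = mu_hat(X_i, q(X_i, A_i))."""
    return np.mean(w * (Y - mu_obs) + mu_shift)

def multiplier_bootstrap_se(w, Y, mu_obs, mu_shift, R=10000, seed=0):
    """Multiplier bootstrap: perturb the estimated influence function
    (Eq. (17)) with i.i.d. standard normal multipliers instead of
    refitting the weights and outcome model in every iteration."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    mu_aug = augmented_estimate(w, Y, mu_obs, mu_shift)
    phi_hat = w * (Y - mu_obs) + mu_shift - mu_aug        # plug-in phi_i
    draws = (rng.standard_normal((R, n)) @ phi_hat) / np.sqrt(n)
    sigma_hat = draws.std()                               # estimates Sigma^{1/2}
    return mu_aug, sigma_hat / np.sqrt(n)                 # estimate and its SE

# A 95% Wald interval is then  mu_aug +/- 1.96 * se.
```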
## 7 Simulation studies

In this section, we study the empirical properties of the proposed energy balancing weights and assess their performance relative to other commonly used estimators for MTPs. The aim of the simulation study is to evaluate the performance of our proposed estimators under different simulation conditions. Comparator methods include the classification method proposed in Diaz et al. (2021), density ratio (IPW) weighting with an estimated generalized propensity score, and the targeted minimum-loss based estimator (TMLE) of Diaz et al. (2021) implemented in the R package _lmtp_ (Williams and Diaz, 2023), which utilizes both a density ratio estimator and an outcome regression model. These methods are further described in the subsections that follow.

### Data generating mechanisms

We have two main simulation conditions: a moderately complex (simulation study #1) and a highly complex (simulation study #2) data-generating mechanism. For each simulation setting, we vary the sample size (\(n=100,200,400,800\)), the number of covariates (\(p=10,20,40,80\)) for each individual, and the treatment type. This allows us to inspect the joint effects of dimensionality and increasing sample size on the empirical performance of all methods. We replicate all simulation experiments under two different conditions on the treatment variable: one condition where the treatment is continuously distributed, and another where the treatment has discrete, but countably infinite, support. Here, we briefly describe the data-generating mechanism; readers are referred to Section 5.1 of the Supplementary Material for a detailed description. 1) The covariates are generated randomly following either a uniform distribution or a binomial distribution. 2) The treatment mean for a participant follows a cubic function of the covariates. 3) The moderately complex simulation condition assumes the outcome is a function of a quadratic term of the treatment times a cubic term of the covariates; the highly complex simulation condition further assumes interaction terms of the covariates. 4) The MTP function \(q\) shifts the treatment more aggressively when the observed treatment is small. We emphasize that the estimation error (i.e., the observed sample mean effect minus the true MTP effect) is designed to be relatively stable as the number of covariates \(p\) increases, so that the complexity of the data-generating processes does not explode with dimension.

### Estimands and estimators under evaluation

In each simulation replication, the estimand of interest is the mean potential outcome under the MTP. We evaluate three proposed energy balancing weight methods: the penalized energy balancing weights described in Section 5.1, an unpenalized version of the energy balancing weights, and a kernelized energy balancing weights approach using the Gaussian kernel (see Supplementary Material Section 2). Various alternative estimators for balancing weights are considered, such as the naive, unadjusted method that assigns equal weights to all subjects; density ratio weights (IPW) with the generalized propensity score estimated using a Poisson density (for the discrete intervention settings); and the classification method proposed by Diaz et al. (2021, Section 5.4), with classification models estimated by either 1) logistic regression or 2) a random forest.
The corresponding augmented estimators for each weighting method are implemented using the same outcome model, the ensemble method SuperLearner (Van der Laan et al., 2007), which incorporates the eXtreme Gradient Boosting method (Friedman et al., 2000) (SL.xgboost), lasso-regularized generalized linear models (SL.glmnet), and a random forest model (SL.ranger). Also included is a targeted minimum-loss based estimator (TMLE) method (Diaz et al., 2021) with all nuisance parameters estimated using SuperLearner.

### Simulation results

In every simulation scenario, the true causal effect is determined using Monte Carlo simulation with a sample size of 100,000. For each simulation setting, we repeat the experiment independently 1,000 times, applying each comparator method, allowing for the calculation of the mean squared error (MSE), bias, and coverage rate for 95% confidence intervals. The coverage rate is calculated based on Wald-type intervals, with the SE for each estimator estimated by the nonparametric bootstrap with 100 replications. Additionally, we also use the multiplier bootstrap method proposed in Section 6.2 for the augmented energy balancing estimator. The simulation results are displayed in Figures 3 and 4. Since the results of the two simulation studies are similar, we only display the MSE and coverage rate for the 95% confidence interval for the first simulation study; readers can refer to the Supplementary Material for the results for the highly complex setting. In all simulation conditions, our three energy balancing methods (energy balancing, penalized energy balancing, and kernel energy balancing) consistently outperform other methods in terms of both the bias and the coverage rate, indicating their robustness and ability to handle various situations. This pattern holds both for pure weighting (non-augmented) estimators and for augmented estimators; however, the non-augmented energy balancing estimators work best of all in most settings. The penalized energy balancing weights exhibit slightly worse performance in terms of bias compared to the energy balancing weights, which is likely due to the additional penalty term in the estimation. The kernel energy balancing weights (using the MMD described in the Supplementary Material Section 2.2) have the smallest bias in most cases when the sample size is relatively small; however, their performance varies significantly among different conditions, and the bias does not improve as much as sample size increases. This unstable performance may be due to the choice of the bandwidth parameter in the Gaussian kernel function, which is set by the median heuristic, known not to perform well in all scenarios. Therefore, although we believe the kernel energy balancing method has the potential to be very flexible and perform well in practice, its performance may depend critically on the choice of tuning parameters. Consequently, we recommend using the penalized energy balancing method, which is more stable across different conditions. The augmented energy balancing estimator does not perform as well as the three energy balancing estimators, possibly because the outcome model dominates the empirical performance of the method in the particular simulation settings we investigated, given the outcome modeling approach we used throughout.
We note that our outcome regression model, while flexible, struggles to model the outcome regression function well; a more suitable outcome model would likely yield better performance for the augmented estimators. In fact, all the augmented estimators exhibit similar performance in terms of bias, which we believe primarily reflects the performance of the fitted outcome model rather than the weights. Further studies are necessary to gain a better understanding of this phenomenon. The coverage rate results (Figure 4) align with our observations for bias and MSE. All three energy balancing methods exhibit good coverage rate performance. Owing to its small bias, the coverage rate for the kernel energy balancing method approaches 95% most closely. For the moderately complex simulation condition (simulation study #1), most augmented methods achieve reasonable coverage at sample size 800. However, their coverage remains quite low when the data-generating mechanism becomes more complex (see Supplementary Material Figure 5), as the bias induced by the misspecified outcome regression models seems to have an ill effect on their coverage. This further highlights the robustness of our methods in handling various practical conditions. It is worth emphasizing the similar performance of the multiplier bootstrap and the nonparametric bootstrap for the augmented energy balancing method. The convergence of the results validates the use of the multiplier bootstrap as a more efficient way to calculate the uncertainty of the augmented energy balancing estimator.

Figure 3: Simulation results in terms of the logarithm of the absolute value of bias across different sample sizes, type of treatment/intervention variable, and the dimensionality of covariates. The data-generating mechanism is moderately complex (simulation #1). Balancing methods are displayed in different colors and shapes of points. Weighted estimators are displayed in solid lines and the augmented estimators in dashed lines.

## 8 Case Study: Mechanical Power of Ventilation

Critically ill patients often require mechanical ventilation to support their breathing. The ventilator delivers a controlled amount of air (tidal volume) at a specific pressure and rate to ensure adequate gas exchange in the lungs. In order to overcome the resistance of the airway and expand the thorax wall, the ventilator needs to transfer a certain amount of energy to the patient's respiratory system (Neto et al., 2018, 2016; Cressoni et al., 2016). Mechanical power (MP) is a comprehensive parameter that quantifies the amount of energy transferred to the patient's respiratory system during each breath. It takes into account tidal volume, respiratory rate, peak airway pressure, and positive end-expiratory pressure (PEEP) (Gattinoni et al., 2016).

Figure 4: Simulation result for the 95% coverage rate across different sample sizes and dimensionality of covariates. The data-generating mechanism is moderately complex (simulation #1). The red dashed line indicates nominal 95% coverage. Different weighting methods are displayed in different colors and shapes of points. Weighted estimators are displayed in solid lines and the augmented estimators in dashed lines. The multiplier bootstrap method is displayed with larger red points.

Excessive ventilatory settings can result in ventilator-induced lung injury (VILI), which can worsen the patient's clinical condition and increase the risk of mortality (Neto et al., 2016; Cressoni et al., 2016).
Previous research has suggested that factors like high tidal volume, high airway pressures, and high respiratory rates could be associated with an increased risk of lung injury and poor outcomes (Nieman et al., 2016). However, these individual factors do not capture the overall impact of ventilation settings on the lungs. Mechanical power can provide a more comprehensive assessment of the risk associated with different ventilatory settings. A recent study (Neto et al., 2018) examined data from the high-resolution databases of the Medical Information Mart for Intensive Care (MIMIC-III) (Johnson et al., 2016) and the eICU Collaborative Research Database (eICU) (Goldberger et al., 2000). These databases contain information on critically ill patients who required mechanical ventilation in the intensive care unit (ICU). All included patients received invasive ventilation for at least 48 consecutive hours. Time-varying variables were collected at 8-hour intervals for the entire 48-hour period. Neto et al. (2018) defined the exposure as the mean between the highest and lowest value of MP in the second 24 hours and concluded that high MP is independently associated with higher in-hospital mortality. In this case study, we reanalyze the dataset from Neto et al. (2018) to explore the causal relationship between a postulated MTP on MP and in-hospital mortality. We excluded patients with zero MP in Joules/min or extreme MP values (MP greater than 150 Joules/min) from the analysis. The exposure of interest is the mean value between the highest and lowest MP during the second 24 hours. The potential confounding variables we aim to balance are measured during or prior to the first 24 hours. As in Neto et al. (2018), our primary focus is on the in-hospital mortality of the included participants. The dataset contains a total of 5,011 participants with 97 identified covariates; among these covariates are many important factors, such as the PaO2/FiO2 ratio, that are used in guidelines for determining mechanical ventilator settings. We use the following MTP to explore the causal effect of decreasing the MP value based on its original value. This MTP reduces MP for individuals with high MP more aggressively, as earlier studies have indicated potential harms of high MP. The shifted MP value \(q(\mathbf{x},a)\) is defined as \(q(\mathbf{x},a)=a\) if \(0<a<5\); \(q(\mathbf{x},a)=a-5\tau\) if \(5<a<10\); \(q(\mathbf{x},a)=a-10\tau\) if \(10<a<20\); \(q(\mathbf{x},a)=a-15\tau\) if \(20<a<40\); and \(q(\mathbf{x},a)=a-30\tau\) if \(40<a\). Here \(a\) is the original MP value and \(\tau\) is the parameter that controls the magnitude of the MTP (a larger value of \(\tau\) results in a more dramatic decrease of MP); a code sketch of this policy appears below. With \(\tau\) ranging from 0 to 1 for the MTPs, our analysis is displayed in Figure 5. For each value of \(\tau\), the top axis displays the mean value of MP under the corresponding MTP. To identify a range of the shift magnitude \(\tau\) for which confounding control via our energy balancing weights is feasible, we calculate the sampling variation of the energy distance under the corresponding MTP, as described in Section 4.1. The upper 90%, 95%, and 97.5% percentiles of the bootstrap sample are plotted with smoothed lines in the figure. Again, these values represent the intrinsic tails of variation of the energy distance when the two distributions are identical (i.e., the population is perfectly balanced).
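For concreteness, the piecewise MP policy defined above can be written as follows (a minimal sketch; the behavior at the cut points \(a=5,10,20,40\), which the text leaves unspecified, is here assigned to the upper piece):

```python
import numpy as np

def q_mp(a, tau):
    """Piecewise MTP for mechanical power (MP): larger observed MP
    values are reduced more aggressively, scaled by tau in [0, 1]."""
    a = np.asarray(a, dtype=float)
    conds = [a < 5,
             (a >= 5) & (a < 10),
             (a >= 10) & (a < 20),
             (a >= 20) & (a < 40),
             a >= 40]
    shifts = [0.0, 5 * tau, 10 * tau, 15 * tau, 30 * tau]
    return a - np.select(conds, shifts)
```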
If the actual energy distance after balancing is larger than these bootstrap upper bounds, the validity of the hypothesis that the population is balanced after weighting becomes questionable. The energy distance between the weighted sample and the target sample under the MTP is displayed as the blue curve for each value of \(\tau\). From the results, MTPs with \(\tau\) larger than 0.9 have an energy distance that reaches the upper 90% threshold, which indicates moderately poor balance of the covariates under the corresponding MTP shifts after weighting, and thus any analysis results have the potential for confounding bias. Values of \(\tau\) ranging from 0 to 0.75 have smaller energy distances after weighting, indicating that the shifts are within a reasonable range and have low risk of bias due to measured confounding. The estimated potential in-hospital mortality rate under the MTP and the corresponding 95% confidence interval are displayed in Figure 5. Within the range of \(\tau\) that has low risk of confounding bias, the expected potential mortality under the MTP monotonically decreases with larger values of \(\tau\) (i.e., a larger decrease of MP). When \(\tau>0.29\), the estimated upper bound of the 95% confidence interval of the mortality rate is smaller than the average mortality rate of the sample, which indicates that reducing MP via the defined MTP would significantly reduce in-hospital mortality. Note that when \(\tau=0.29\), the energy distance after balancing is very close to 0 and the average MP after shifting decreases from 20 to 16.5 Joules/min. These observations suggest that measured confounding after weighting is not a major issue, which strengthens the validity of the significant result that a lower MP than used in practice would likely decrease in-hospital mortality. Compared to the results from Neto et al. (2018) and Hong et al. (2021), we adopted an MTP analysis, which strengthens the evidence about the potential harms of high MP due to the weaker assumptions that our MTP estimators operate under.

Figure 5: Exploring different magnitudes of MTP shifts to the mechanical power of ventilation. The X-axis represents the \(\tau\) value, which controls the amount of the MTP shift. For each MTP shift, the blue curve displays the energy distance of the penalized energy balancing weights described in Section 5. The three grey lines represent the 90%, 95%, and 97.5% thresholds for the upper limit of the energy distance, which are estimated using the bootstrap method described in Section 4.1. The red curve, along with the shaded area, is the estimated mortality and its 95% CI under the corresponding MTP.

## 9 Discussion

Estimating causal effects of treatments from observational data is particularly challenging due to confounding. This issue is further compounded when the treatment takes continuous values. An MTP analysis is well-suited to this scenario and defines a hypothetical intervention on a treatment variable based on the baseline characteristics and the observed treatment value.
However, the choice of the hypothetical intervention in an MTP is still subjective, and it faces a dilemma: 1) if the definition of the MTP is too conservative (i.e., the shift of the intervention is very small), the causal effect of the MTP may be too small to have material clinical implications; on the other hand, 2) if the definition of the MTP is too aggressive, then the observed population and the population under the MTP will be intrinsically different, which makes confounding a major issue again, negating many of the conceptual and practical benefits of MTPs. There is a lack of methods that can measure the magnitude of the potential for measured confounding and help to define a reasonable MTP that can be reliably estimated from the data at hand. In this paper, we demonstrate the connection between the estimation bias of weighted estimators of the causal effect under an MTP and weighted distributional imbalance. By explicitly defining the target population under the MTP, we propose an error decomposition that highlights the role of covariate balance as a critical component of the estimation bias. We then introduce a distance, based on the weighted energy distance of Huling and Mak (2020), that explicitly characterizes this imbalance. As a result, the performance of any arbitrary set of weights in controlling for confounding for MTPs can be gauged through the weighted energy distance. Two methods are proposed to enhance the estimation of MTP effects: 1) we propose a method for assessing the performance of various balancing weights according to their weighted energy distance; this approach enables researchers to choose balancing techniques suitable for specific datasets; 2) we then propose a method to detect whether the MTP shift is too large for a balancing method. We use a bootstrap to measure the variability of the energy distance under the null hypothesis of no measured confounding. Comparing this with the energy distance after applying the balancing weights enables an assessment of whether the weights can effectively balance the covariates under the current MTP and thereby control for measured confounding. This comparison also sheds light on the extent of the confounding challenges associated with an MTP paired with a given dataset. Our second contribution is a novel covariate balancing weight method based on the energy distance. These energy balancing weights are generated by minimizing the weighted energy distance, thereby minimizing a measure of distributional imbalance and thus the potential for bias. We additionally propose an augmented energy balancing estimator that integrates an outcome model with the balancing weights to help further reduce the variance of the estimates. We establish the validity of our methods by demonstrating the root-\(n\) consistency of these estimators and, further, the asymptotic normality of the augmented estimator. Through simulation studies involving both moderately complex and highly complex scenarios, we showcase the stability and performance of our methods in terms of bias, mean squared error (MSE), and coverage rate across various conditions, reflecting their capacity to handle diverse real-world situations. We further propose a multiplier bootstrap method to estimate the uncertainty of the augmented energy balancing estimator. The multiplier bootstrap approach offers a more computationally efficient means for statistical inference compared to the classical nonparametric bootstrap, as it does not require calculating weights within each bootstrap iteration.
We prove the large-sample properties of this multiplier bootstrap and, through simulation, illustrate its similarity to the nonparametric bootstrap in terms of coverage rates. There remain some statistical properties that warrant further exploration. The first is to show whether the energy balancing weights converge to the true density ratio, as doing so may enable clearer studies of the potential efficiency of our proposed estimators. Secondly, as suggested in the simulation study, the coverage rate for the energy balancing estimator and the penalized energy balancing estimator is approximately 95%, implying that both estimators exhibit asymptotically normal distributions. However, establishing the asymptotic distribution for the estimators using EBWs and penalized EBWs is highly challenging. Moreover, our methods can be extended by incorporating alternative distance measures in the estimation of energy balancing weights and studying their corresponding statistical properties.
2303.16077
From Phase Space to Non-Equilibrium Dynamics: Exploring Liouville's Theorem and its Implications
The Liouville theorem is a fundamental concept in understanding the properties of systems that adhere to Hamilton's equations. However, the traditional notion of the theorem may not always apply. Specifically, when the entropy gradient in phase space fails to reach equilibrium, the phase-space density may not have a zero time derivative, i.e., $\frac{d\rho}{dt}$ may not be zero. This leads to the concept of the set of attainable states of a system forming a compressible "fluid" in phase space. This observation provides additional insights into Hamiltonian dynamics and suggests further examination in the fields of statistical physics and fluid dynamics. In fact, this finding sheds light on the limitations of the Liouville theorem and has practical applications in fields such as beam stacking, stochastic cooling, and Rabi oscillations, among others.
Mario J. Pinheiro
2023-03-24T18:59:59Z
http://arxiv.org/abs/2303.16077v1
# From Phase Space to Non-Equilibrium Dynamics: Exploring Liouville's Theorem and its Implications ###### Abstract The Liouville theorem is a fundamental concept in understanding the properties of systems that adhere to Hamilton's equations. However, the traditional notion of the theorem may not always apply. Specifically, when the entropy gradient in phase space fails to reach equilibrium, the phase-space density may not have a zero time derivative, i.e., \(\frac{d\rho}{dt}\) may not be zero. This leads to the concept of the set of attainable states of a system forming a compressible "fluid" in phase space. This observation provides additional insights into Hamiltonian dynamics and suggests further examination in the fields of statistical physics and fluid dynamics. In fact, this finding sheds light on the limitations of the Liouville theorem and has practical applications in fields such as beam stacking, stochastic cooling, and Rabi oscillations, among others. Nonlinear physics, Plasma physics, Statistical physics, Liouville theorem, Hamilton's equations, Ergontropic dynamics, Entropy gradient, Phase-space density, Fluid dynamics. ## I Introduction A key finding in classical mechanics, Liouville's theorem [1] has significant implications for how classical systems behave. According to the theorem, the volume of phase space covered by a system undergoing Hamiltonian motion is preserved over time. If we consider the six-dimensional phase space spanned by the positions and velocities of every particle in a classical system, the volume of this phase space occupied by the system will not change over time. This has several important implications: (i) Trajectories in phase space cannot intersect, as a consequence of the conservation of phase-space volume. As a result, there are no "random" changes in the state of the system; rather, the evolution of a system in phase space is entirely predictable [2; 3]; (ii) According to Liouville's theorem, a system may evolve from any initial state to any final state over the course of time in phase space [4; 5; 6; 7]. This follows from the fact that phase-space volume is conserved, which protects the system from "losing" information as it evolves; (iii) The behavior of macroscopic systems is significantly affected by the conservation of phase-space volume. For instance, it implies that as a gas of non-interacting particles evolves, the particle distribution in phase space stays constant [8]. The classical Boltzmann distribution, which defines the statistical behavior of classical systems at equilibrium, is based on this principle [9]. Liouville's theorem is an effective tool for understanding how classical systems behave in phase space. It forms the basis for the deterministic development of classical mechanics and contributes to the explanation of the statistical behavior of macroscopic systems. A description of Liouville's theorem close to the original can be found in the well-known book of Paul Ehrenfest [9]. Schrödinger's quest for a quantum mechanical description based on the eikonal form of the Maxwell set of equations dictated the type of solutions based on amplitudes, the superposition principle for those solutions, and the typical interference effects that are the outcome of squaring amplitudes. As a result of this path, the mathematical framework is founded on linear algebra and Hilbert spaces.
The fitting of quantum mechanical theory to classical mechanics was achieved by imposing the relationship between the Poisson brackets and the commutator brackets, \([A,B]=i\hbar\{A,B\}\). If instead the starting point had been classical mechanics, a different outcome would have resulted: namely, no amplitude-to-intensity relationships, and the superposition principle would apply to forces (or potentials). Liouville's theorem provides a crucial link between classical and quantum mechanics through the concept of phase-space density. In classical mechanics, phase space is a six-dimensional space spanned by the position and momentum coordinates of all particles in the system. The phase-space density function, denoted by \(\rho(q,p,t)\), gives the probability density of finding a system at a particular phase-space point \((q,p)\) at time \(t\). In quantum mechanics, the phase space is replaced by the Hilbert space of the system, the concept of the phase-space density is replaced by the wave function, denoted by \(\Psi(q,t)\), and the probability density of finding a quantum system in a particular state is given by \(|\Psi(q,t)|^{2}\). The Wigner function is a mathematical tool that provides a link between the classical and quantum descriptions of the phase-space density, defined as: \[W(q,p,t)=\frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}\Psi^{*}\left(q-\frac{\lambda}{2},t\right)\Psi\left(q+\frac{\lambda}{2},t\right)\\ e^{-i\frac{p\lambda}{\hbar}}d\lambda. \tag{1}\] The Wigner function is a real-valued function that satisfies a version of Liouville's theorem known as the Wigner-Liouville equation, \(\frac{\partial W}{\partial t}+\{H,W\}=0\), where \(H\) is the Hamiltonian of the system and \(\{\cdot,\cdot\}\) denotes the Poisson bracket. The Wigner-Liouville equation describes the time evolution of the Wigner function and provides a bridge between the classical and quantum descriptions of the system. Furthermore, Liouville's theorem provides the foundation for the correspondence principle, which states that the behavior of quantum systems should reduce to classical behavior in the limit of large quantum numbers. This is achieved through the use of coherent states, which are quantum states that exhibit classical-like behavior. The Wigner function for coherent states approaches the classical phase-space density in the limit of large quantum numbers. In quantum mechanics, the non-conservation of phase space implies that the uncertainty principle is fundamental and cannot be overcome. This is because, in quantum mechanics, the phase-space density is related to the probability amplitude of the system, which determines the probability of finding the system in a particular state. The non-conservation of phase space means that the probability density of the system cannot be conserved, which implies that the uncertainty principle must hold. The uncertainty principle states that the more precisely the position of a particle is known, the less precisely its momentum can be known, and vice versa. Therefore, the non-conservation of phase space implies that the uncertainty principle is an essential feature of quantum mechanics and cannot be eliminated. Many experiments have confirmed the uncertainty principle, and it is considered to be a fundamental principle of nature [10; 11].
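For readers who want to experiment with Eq. (1), the following is a minimal numerical sketch that evaluates the Wigner function of a sampled one-dimensional wavefunction by direct quadrature of the defining integral (with \(\hbar=1\)); the grid choices and function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

def wigner(psi, x, p):
    """Wigner function W(x, p) of a 1D wavefunction psi sampled on grid x,
    by direct quadrature of Eq. (1) with hbar = 1."""
    dx = x[1] - x[0]
    y = x - x.mean()  # symmetric grid for the integration variable lambda
    interp = lambda q: np.interp(q, x, psi.real, left=0, right=0) \
        + 1j * np.interp(q, x, psi.imag, left=0, right=0)
    W = np.empty((len(x), len(p)))
    for i, q in enumerate(x):
        corr = np.conj(interp(q - y / 2)) * interp(q + y / 2)
        for j, pj in enumerate(p):
            W[i, j] = np.real(np.sum(corr * np.exp(-1j * pj * y)) * dx / (2 * np.pi))
    return W

# Gaussian ground state: a quick sanity check, since its Wigner
# function is a positive Gaussian in (x, p).
x = np.linspace(-6, 6, 201)
psi = (np.exp(-x**2 / 2) / np.pi**0.25).astype(complex)
W = wigner(psi, x, np.linspace(-4, 4, 101))
```

For genuinely quantum states (e.g., superpositions of two displaced Gaussians), the same routine produces the negative interference fringes that mark the Wigner function as a quasi-probability distribution.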
There have been attempts to find ways around the uncertainty principle, such as using entangled particles or non-local measurements, but these approaches have not been successful in violating the principle [12; 13; 14]. The uncertainty principle is deeply embedded in the foundations of quantum mechanics and is a necessary consequence of the wave-particle duality of quantum objects. Hence, the nonconservation of phase space has significant implications for classical and quantum mechanics. In classical mechanics, it challenges the Liouville theorem and suggests the compressibility of phase space. In quantum mechanics, it implies the uncertainty principle, demonstrating limits to measurement precision, and has important implications for interpretation and experiment design. The aim of this work is to demonstrate that the assumption that Hamiltonian systems obey the Liouville theorem may be challenged by the inclusion of Eqs. (8)-(9). This is due to the fact that if entropy gradients in phase space do not equilibrate, the time derivative of the phase-space density may not be zero. Consequently, the set of states that a system can access forms a volume in phase space denoted by \(\Gamma\), which is analogous to a compressible fluid. ## II The equation of motion for physical quantities The equation of motion for any arbitrary physical quantity is \[\dot{F}=\frac{\partial F}{\partial q}\dot{q}+\frac{\partial F}{\partial p}\dot{p}=[H,F], \tag{2}\] with the Hamiltonian \(H\equiv p\dot{q}-L(\dot{q},q)\), \([,]\) representing the Poisson brackets, and the Hamiltonian equations of motion \[\dot{q}=\frac{\partial H}{\partial p}\ \ ;\ \dot{p}=-\frac{\partial H}{\partial q}. \tag{3}\] The fundamental equation of motion in Hamiltonian mechanics is given by Eq. (2), which provides a mathematical description of the time evolution of any physical quantity. This equation expresses the rate of change of a physical quantity \(F\) as a function of the generalized coordinates \(q\) and momenta \(p\). The Hamiltonian \(H\), which is defined in terms of the Lagrangian \(L\) as \(H\equiv p\dot{q}-L(\dot{q},q)\), plays a central role in the formulation of Hamiltonian mechanics and is related to the total energy of the system. The Poisson brackets \([,]\), defined as \([A,B]\equiv\sum_{i}(\frac{\partial A}{\partial q_{i}}\frac{\partial B}{\partial p_{i}}-\frac{\partial A}{\partial p_{i}}\frac{\partial B}{\partial q_{i}})\), capture the algebraic structure of Hamiltonian mechanics and provide a powerful tool for the calculation of physical quantities. The Hamiltonian equations of motion, given by Eq. (3), are a set of first-order differential equations that describe the time evolution of the generalized coordinates and momenta. These equations are derived from the Hamiltonian \(H\) and express the principle of least action in Hamiltonian form. They relate the time derivatives of the generalized coordinates and momenta to the partial derivatives of the Hamiltonian with respect to the conjugate variables. Hamiltonian mechanics provides a powerful framework for the description of classical and quantum systems and has a wide range of applications in physics and engineering. The formalism is particularly useful for the analysis of conservative systems, where the total energy is conserved. ## III Out-of-equilibrium dynamics A new form of canonical momentum and equation of motion has been derived from a recent variational principle [15].
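Before turning to the out-of-equilibrium formulation, the volume-preserving character of the standard Hamiltonian flow in Eqs. (2)-(3) is easy to verify numerically. Below is a minimal sketch using a symplectic (leapfrog) integrator for a harmonic oscillator; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def leapfrog(q, p, dH_dq, dH_dp, dt, steps):
    """Symplectic (leapfrog) integration of Hamilton's equations
    q' = dH/dp, p' = -dH/dq. Each step is a symplectic map, so the
    phase-space area of an ensemble is preserved (up to round-off)."""
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q)   # half kick
        q += dt * dH_dp(p)         # full drift
        p -= 0.5 * dt * dH_dq(q)   # half kick
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2: trajectories stay on circles,
# and an ensemble of initial conditions keeps its phase-space area.
q, p = leapfrog(np.array([1.0]), np.array([0.0]),
                dH_dq=lambda q: q, dH_dp=lambda p: p, dt=0.01, steps=1000)
```

It is exactly this area-preservation property that the entropy-gradient terms introduced next are argued to break.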
The fundamental equation of motion has the form of a local balance equation with the spatial gradient of entropy as the source term. The gradient of the total entropy in momentum space is also given, which, when maximized, leads to the total canonical momentum. The new set of equations of motion resembles the Hamiltonian formulation of dynamics and complies with the Helmholtz free energy. Liouville's equation has been defined, along with the Liouville operator, and the expectation value of a function has also been given. In that framework, the new form of canonical momentum is: \[\frac{\partial\bar{S}}{\partial\mathbf{p}^{(\alpha)}}\geq 0 \tag{4}\] and the fundamental equation of motion: \[\frac{\partial\bar{S}}{\partial\mathbf{r}^{(\alpha)}}=-\frac{1}{T}\mathbf{\nabla}_{r^{(\alpha)}}U^{(\alpha)}-\frac{1}{T}m^{(\alpha)}\frac{\partial\mathbf{v}^{(\alpha)}}{\partial t}\geq 0. \tag{5}\] Eq. 5 gives the fundamental equation of dynamics and has the form of a general local balance equation having as source term the spatial gradient of entropy, \(\mathbf{\nabla}\overline{S}>0\), whilst Eq. 4 gives the canonical momentum. At thermodynamic equilibrium, the total entropy of the body has a maximum value. In the more general case of a non-equilibrium process, the entropic gradient must be positive in both Eqs. 4-5. The interplay between energy-minimizing tendencies and entropy maximization may introduce new physics through a set of two first-order differential equations. These equations have the potential to reveal novel insights into the underlying dynamics of physical systems. In non-equilibrium processes, the gradient of the total entropy in momentum space multiplied by the factor \(T\) is given by \[\frac{\partial\overline{S}}{\partial\mathbf{p}^{(\alpha)}}=\frac{1}{T}\left\{-\frac{\mathbf{p}^{(\alpha)}}{m^{(\alpha)}}+\frac{q^{(\alpha)}}{m^{(\alpha)}}\mathbf{A}+\mathbf{v}_{e}+[\omega\times\mathbf{r}^{(\alpha)}]\right\}, \tag{6}\] so that maximizing the entropy change in Eq. 4 leads to the well-known total canonical momentum: \[\mathbf{p}^{(\alpha)}=m^{(\alpha)}\mathbf{v}_{e}+m^{(\alpha)}[\omega\times\mathbf{r}^{(\alpha)}]+q^{(\alpha)}\mathbf{A}. \tag{7}\] The above formulation bears some resemblance to the Hamiltonian formulation of dynamics, which expresses first-order constraints of the Hamiltonian \(H\) in a \(2n\)-dimensional phase space, \(\mathbf{\dot{p}}=-\partial H/\partial\mathbf{q}\) and \(\mathbf{\dot{q}}=\partial H/\partial\mathbf{p}\), and can be solved along trajectories as quasistatic processes, revealing the same formal symplectic structure shared by classical mechanics and thermodynamics. The sharing of a formal symplectic structure between classical mechanics and thermodynamics implies a common geometric framework for the equations of motion, which enables the application of Hamiltonian mechanics to the study of thermodynamic systems and suggests the existence of underlying mathematical structures that are common to many different physical systems. In the context of our approach, the new set of equations of motion should read: \[\mathbf{\dot{p}} = -\mathbf{\nabla}H+T\mathbf{\nabla}\overline{S}=-\frac{\partial}{\partial\mathbf{q}}(H-T\overline{S}) \tag{8}\] \[\mathbf{\dot{q}} = -T\mathbf{\nabla}\overline{S}+\mathbf{\nabla}H=\ \ \frac{\partial}{\partial\mathbf{p}}(H-T\overline{S}).
\tag{9}\] We have identified \(U\) as equivalent to \(H\), and it is worth noting that the motion of the system is now governed by the Helmholtz free energy, \(F=H-T\overline{S}\), rather than just the Hamiltonian alone. The equations connect the time derivatives of the system's position and momentum to the gradients of the system's Hamiltonian function and the thermodynamic quantities. The identification of the generator of motion with the Helmholtz free energy, \(F=U-T\overline{S}\), provides a direct way to connect the macroscopic thermodynamic parameters of temperature, entropy, and energy to the microscopic characteristics of the system's particles. The reformulation may be a useful tactic for the study of the behavior of complex systems in statistical mechanics, with broad applicability in physics, chemistry, and materials science, as we will suggest later. Our investigation centers around Liouville's equation \[\frac{d\rho}{dt}=\imath[\rho,H], \tag{10}\] where the function \(\rho(q,p,t)\) is defined in such a way that the product \[\rho(q,p,t)dqdp=\rho(q,p,t)d\Omega \tag{11}\] represents the number of system points in the phase volume \(d\Omega\) around the point \((q,p)\) at the time \(t\). We can write \[\imath\frac{\partial\rho}{\partial t}=L\rho \tag{12}\] where \[L=-\imath\frac{\partial H}{\partial p}\frac{\partial}{\partial q}+\imath\frac{\partial H}{\partial q}\frac{\partial}{\partial p}, \tag{13}\] represents the Liouville operator (and \(\imath=\sqrt{-1}\)). The expectation value of a function \(A\) is \[\langle A\rangle=\int dpdqA(p,q)\rho \tag{14}\] and the standard Liouville equation reads \[\frac{\partial\rho}{\partial t}=-\partial_{p}H\partial_{q}\rho+\partial_{q}H\partial_{p}\rho. \tag{15}\] But from Eqs. (8)-(9) we have \[\frac{\partial\rho}{\partial t}=-\dot{q}\partial_{q}\rho-\dot{p}\partial_{p}\rho-T\partial_{p}\overline{S}\partial_{q}\rho+T\partial_{q}\overline{S}\partial_{p}\rho \tag{16}\] or \[\frac{d\rho}{dt}=-T\left[\partial_{p}\overline{S}\partial_{q}\rho-\partial_{q}\overline{S}\partial_{p}\rho\right]. \tag{17}\] If we now introduce the usual Poisson bracket for two variables \(A\) and \(B\): \[[A,B]=\sum_{i}\left(\frac{\partial A}{\partial q_{i}}\frac{\partial B}{\partial p_{i}}-\frac{\partial A}{\partial p_{i}}\frac{\partial B}{\partial q_{i}}\right), \tag{18}\] we can express the Liouville equation in a more compact form: \[\frac{\partial\rho}{\partial t}+\mathbf{u}\cdot\mathbf{\nabla}\rho=-T[\partial_{p}\overline{S}\partial_{q}-\partial_{q}\overline{S}\partial_{p}]\rho. \tag{19}\] Note that \[\mathbf{u}\cdot\mathbf{\nabla}=\sum_{l}\left(\frac{\partial H}{\partial p_{l}}\frac{\partial}{\partial q_{l}}-\frac{\partial H}{\partial q_{l}}\frac{\partial}{\partial p_{l}}\right). \tag{20}\] Eqs. 19-20 imply a correction to the Liouville equation, which must now be written in the form: \[\frac{\partial\rho}{\partial t}=[H,\rho]-T[\overline{S},\rho], \tag{21}\] or \[\frac{d\rho}{dt}=-T[\overline{S},\rho]\neq 0. \tag{22}\] But, as the free energy can be defined by \(\overline{F}=F_{0}-T\overline{S}\), we may drop the approximation of isothermal states and write instead a more general form of Liouville's theorem for out-of-equilibrium systems: \[\frac{d\rho}{dt}=-[\overline{F},\rho]\neq 0. \tag{23}\] Non-isothermal states involve a delicate balance between energy and entropy, which can make their analysis difficult and may not be necessary for understanding the behavior of out-of-equilibrium systems.
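As a concrete illustration of Eq. (21), the modified Liouville equation can be stepped forward on a phase-space grid. The following minimal sketch uses a toy harmonic Hamiltonian, an assumed toy entropy field \(\overline{S}=0.1\,qp\), and simple forward-Euler time stepping; all of these choices are illustrative assumptions rather than choices made in the text, and a production solver would use a stable advection scheme.

```python
import numpy as np

# Minimal finite-difference sketch of Eq. (21):
# drho/dt = [H, rho] - T [Sbar, rho], with [A,B] = A_q B_p - A_p B_q.
n = 128
q = np.linspace(-4, 4, n)
p = np.linspace(-4, 4, n)
Q, P = np.meshgrid(q, p, indexing="ij")
H = 0.5 * (P**2 + Q**2)                  # toy Hamiltonian: harmonic oscillator
Sbar = 0.1 * Q * P                       # toy entropy field (assumption)
T = 1.0
rho = np.exp(-((Q - 1.0) ** 2 + P**2))   # initial phase-space density

def bracket(A, B, dq, dp):
    """Poisson bracket of two grid functions via central differences."""
    Aq, Ap = np.gradient(A, dq, dp, edge_order=2)
    Bq, Bp = np.gradient(B, dq, dp, edge_order=2)
    return Aq * Bp - Ap * Bq

dq, dp = q[1] - q[0], p[1] - p[0]
dt = 1e-3
for _ in range(1000):                    # forward Euler, for illustration only
    rho += dt * (bracket(H, rho, dq, dp) - T * bracket(Sbar, rho, dq, dp))
```

For \(T=0\) the right-hand side reduces to the standard Liouville bracket and the density is merely advected; the entropy term is what generates \(d\rho/dt\neq 0\) along trajectories, as in Eq. (22).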
By redefining the free energy as \(\overline{F}=F_{0}-T\overline{S}\), we can instead focus on a more general form of Liouville's theorem that applies to a broader range of physical systems. The introduction of Equations (8)-(9) necessitates a reformulation of Liouville's theorem. When the entropy gradients in phase space fail to equilibrate, the time derivative of the phase-space density, \(d\rho/dt\), may not be zero, resulting in a set of possible system states forming a compressible "fluid" volume in phase space denoted as \(\Gamma\). This observation sheds light on why the Liouville theorem does not seem to hold in certain techniques, such as beam stacking, electron cooling, stochastic cooling, synchrotron radiation, and charge exchange [19; 20], or in the Boltzmann equation when the collision operator is irreversible [22]. Ref. [21] reports that Liouville's theorem is insufficient for predicting the behavior of the atmosphere and climate, even in the case of a simple linear oscillator. This failure of Liouville's theorem has significant technological implications, and we provide examples of such cases below. ### Relaxation of an initially out-of-equilibrium system towards thermal equilibrium Given the previously defined operators, the Helmholtz operator for a system can be expressed as: \[\hat{H}-T\hat{S}=\hat{U}-T\hat{S}(\hat{\rho})=(\hat{\mathcal{H}}-\mu\hat{\mathcal{N}})-T\hat{S}(\hat{\rho})\] Here, \(\hat{\mathcal{H}}\) is the Hamiltonian operator, \(\mu\) is the chemical potential, \(\hat{\mathcal{N}}\) is the particle number operator, and \(\hat{S}(\hat{\rho})\) is the operator for the entropy of the system. Note that the entropy operator is a function of the density operator \(\hat{\rho}\), which describes the statistical distribution of the system. The von Neumann entropy of a density operator \(\rho\) can be defined as \(S(\rho)=-\text{Tr}(\rho\ln\rho)\), where Tr denotes the trace. This is a standard expression for entropy in the context of quantum mechanics. To obtain a quantum field equation from the set of classical equations of motion given in Eqs. (8) and (9), we need to promote the variables \(p\) and \(q\) to quantum operators and replace the classical Poisson brackets with quantum commutators. We also need to introduce a time-dependent parameter \(\lambda\) to control the transition from the classical to the quantum regime, such that \(\lambda=0\) corresponds to the classical limit, and \(\lambda=1\) corresponds to the fully quantum regime. This can be done using the so-called Wigner-Weyl transformation, which maps classical variables to quantum operators. Let us define the Wigner function as: \[W(q,p,t)=\int dye^{ipy/\hbar}\psi(q-y/2,t)\psi^{*}(q+y/2,t), \tag{24}\] where \(\psi(q,t)\) is the wave function of the system. The Wigner function is a quasi-probability distribution (it may contain negative values) that encodes both the position and momentum information of the system, and it satisfies the following properties: i) \(W(q,p,t)\) is real; ii) \(W(q,p,t)\) is normalized: \(\int dqdpW(q,p,t)=1\). The marginal distributions of \(W(q,p,t)\) with respect to \(q\) and \(p\) recover the probability density and current of the system, respectively: \[\begin{split}\rho(q,t)&=\int dpW(q,p,t)\\ J(q,t)&=\int dp\frac{p}{m}W(q,p,t)\end{split} \tag{25}\] where \(m\) is the mass of the system. Using the Wigner function, we can rewrite the classical equations of motion in Eqs.
(8) and (9) as: \[\frac{\partial W}{\partial t}=\{H-TS,W\} \tag{26}\] where \(\{A,B\}\) denotes the Poisson bracket of \(A\) and \(B\), and we have replaced the classical variables \(p\) and \(q\) with their corresponding quantum operators, \(\hat{p}\) and \(\hat{q}\). The Poisson bracket can be replaced with a commutator, which becomes exact in the semiclassical limit \(\hbar\to 0\), using the correspondence rule \(\{A,B\}\rightarrow\frac{1}{i\hbar}[\hat{A},\hat{B}]\), where \([\hat{A},\hat{B}]\) denotes the commutator of \(\hat{A}\) and \(\hat{B}\). So, the quantum field equation is: \[\frac{\partial\hat{W}}{\partial t}=[\hat{H}-T\hat{S},\hat{W}], \tag{27}\] where \(\hat{H}\), \(\hat{S}\), and \(\hat{W}\) are the quantum operators corresponding to the classical functions \(H\), \(S\), and \(W\), respectively. Eq. 27 is the quantum field equation in the Wigner representation. It describes the time evolution of the Wigner function of the system and can be used to calculate various properties of the system, such as its energy spectrum (distribution of energy levels), correlation functions (relationships between different properties), and coherence properties (the degree to which the phases of different parts of a wave or system are related to one another). Note that this definition of entropy assumes that \(\rho\) is a positive semi-definite operator with a trace equal to one, which is the case for density operators. To define the entropy for a general operator, we need to consider more general definitions. The von Neumann entropy of a density operator \(\rho\) is defined as \(S(\rho)=-Tr(\rho\ln\rho)\). On the other hand, the Wigner function is a quasi-probability distribution function that represents the quantum state of a system in phase space, defined as: \[W(q,p)=\frac{1}{\pi\hbar}\int\psi^{*}(q+y/2)\psi(q-y/2)e^{(-ipy/\hbar)}dy, \tag{28}\] where \(\psi\) is the wave function of the system, and \(q\) and \(p\) are the position and momentum variables. To apply the von Neumann entropy to the Wigner function, we first need to convert the Wigner function into a density operator (represented by a Hermitian matrix, and containing information about the probabilities of different outcomes of a measurement of the system). This can be done using the Wigner-Weyl transform: \[\rho=\frac{1}{\pi\hbar}\int W(q,p)D(q,p)dqdp \tag{29}\] where \(D(q,p)\) is the Weyl operator, defined as: \[D(q,p)=e^{i(p\hat{q}-q\hat{p})/\hbar}. \tag{30}\] Once we have the density operator \(\rho\), we can calculate its von Neumann entropy using the formula \(S(\rho)=-Tr(\rho\ln\rho)\), where \(\ln\rho\) is the matrix logarithm of \(\rho\), which can be obtained by diagonalizing \(\rho\) and taking the logarithm of its eigenvalues. The trace \(Tr(\rho\ln\rho)\) can then be evaluated by summing the diagonal elements of \(\rho\ln\rho\). Hence, we can apply the von Neumann entropy to the Wigner function by first converting it into a density operator using the Wigner-Weyl transform, and then using the formula \(S(\rho)=-Tr(\rho\ln\rho)\) to calculate its von Neumann entropy. To apply the equation \(d\rho/dt=-[F,\rho]\) to study the evolution of the phase space density during a system's transition, we must first define the free energy function \(F\) as a function of parameters that describe the transition. Suppose the transition is controlled by a parameter \(\lambda\).
In this case, the free energy function can be expressed as: \[F(\lambda)=H-\lambda G, \tag{31}\] where \(H\) and \(G\) are Hermitian operators that represent the Hamiltonian and some other observable, respectively. As \(\lambda\) is varied, the system undergoes a transition from one phase to another, and we want to study the evolution of the phase space density as this happens. We can start by writing the Liouville equation in terms of the free energy function \(F(\lambda)\): \[\frac{d\rho}{dt}=-[F(\lambda),\rho]. \tag{32}\] Expanding the commutator, we get: \[\frac{d\rho}{dt}=-H\rho+\rho H+\lambda G\rho-\lambda\rho G. \tag{33}\] Now, we can write the density operator \(\rho\) as a sum over its eigenstates: \[\rho=\sum_{n}p_{n}|n\rangle\langle n|, \tag{34}\] where \(p_{n}\) is the probability of finding the system in the \(n\)th eigenstate \(|n\rangle\). By writing the density operator as a sum over its eigenstates, we can then determine the probabilities of finding the system in each of its eigenstates at a given time. Substituting this into the Liouville equation and using the orthonormality of the eigenstates, we get: \[\frac{d}{dt}(p_{n}|n\rangle\langle n|)=-H(p_{n}|n\rangle\langle n|)+\\ (p_{n}|n\rangle\langle n|)H+\lambda G(p_{n}|n\rangle\langle n|)-\\ \lambda(p_{n}|n\rangle\langle n|)G, \tag{35}\] which simplifies to: \[\frac{dp_{n}}{dt}=-p_{n}(F_{nn}-F_{ii})+\lambda p_{n}(G_{nn}-G_{ii}), \tag{36}\] where \(F_{nn}\) and \(F_{ii}\) are matrix elements of the Hamiltonian in the \(n\)th and \(i\)th eigenstates, respectively, and similarly for \(G_{nn}\) and \(G_{ii}\). The set of equations presented describes how the probabilities of the different eigenstates of the system, represented by \(p_{n}\), evolve over time as the parameter \(\lambda\) is varied. By solving these equations, we can analyze the behavior of the system as it undergoes the transition. One application of these equations is to calculate the average value of an observable \(A\) in the different phases. This can be done using the formula: \[\langle A\rangle=\sum_{n}p_{n}\langle n|A|n\rangle \tag{37}\] where the sum is over the eigenstates of the system, and \(\langle n|A|n\rangle\) is the expectation value of the observable \(A\) in the \(n\)th eigenstate. The inclusion of temperature and entropy in the Hamiltonian can lead to effects such as decoherence and relaxation, which can significantly influence the Rabi oscillations in a quantum system. When a quantum system interacts with a thermal environment, energy exchange and dephasing result in a reduced coherence of the Rabi oscillations. Fig. 1 illustrates one application of the previously described model, after scaling the energies of the cavity \(\omega_{c}=1\) and the atom frequencies \(\omega_{1},\omega_{2}\) [23]. Figure 1: Visual representation of the interdependence of energy and entropy in vacuum Rabi oscillations. Orange line: \(\omega_{1}=0.90\); green line: \(\omega_{2}=1.20\); blue line: \(\omega_{\rm c}=1.0\) is the cavity frequency. In a variety of quantum computing systems, these oscillations are essential for implementing quantum gates, which are the fundamental operations needed to build quantum circuits and execute quantum algorithms. We now have a theoretical framework, relying on the equation \(d\rho/dt=-[F,\rho]\), which permits us to explore the time-dependent evolution of the phase space density as a system undergoes a transition. This framework is based on the analysis of the probabilities associated with the different eigenstates of the system. ### Nonelastic collisions between particles According to Eq. 22, the invariance of volume for canonically conjugated variables is not verified, which implies that, in the presence of entropy gradients or out-of-equilibrium systems, neither momentum nor kinetic energy is conserved (see also Ref. [18]) in particle collisions.
This lack of conservation is significant because it implies that the usual assumptions made in equilibrium systems, where entropy gradients are absent, do not hold in out-of-equilibrium systems. This result is not new and has been discussed previously in the literature; Reference [5] provides further information on the subject. The lack of conservation of momentum and kinetic energy in particle collisions in out-of-equilibrium systems has important implications for understanding the behavior of systems far from equilibrium, such as those found in many biological, ecological, and social systems. Understanding and modeling such systems require a more complex and nuanced approach than is necessary for equilibrium systems, where the assumption of vanishing entropy gradients is usually valid. ### Brightness of an atomic beam source Subjecting the axial or transverse velocity components of the beam to dissipative cooling dramatically compresses the phase space of the atom flux, resulting in dense, well-collimated atomic beams suitable for the study of atom optics, atom holography, or ultracold collision dynamics. Prodan et al. [24] first demonstrated the importance of this phase-space compression. In fact, atomic beams can now achieve a level of "brightness" (atom beam flux density per unit solid angle) many times greater than the phase-space conservation limit imposed by the Liouville theorem (cf. Pierce [25], Sheehy et al. [26], Kuyatt [27]). The importance of dissipative cooling in compressing the phase space of atomic beams leads to dense, well-collimated atomic beams that are useful for studying various fields of physics, such as atom optics, atom holography, and ultracold collision dynamics. The phase-space compression was first demonstrated by Prodan et al. [24] in 1994, highlighting its importance in the field of atomic physics. Moreover, recent advancements in atomic beam technology have resulted in achieving "brightness" levels (atom beam flux density per unit solid angle) that surpass the phase-space conservation limit imposed by the Liouville theorem, which describes the conservation of phase-space volume in a classical dynamical system. This breakthrough is significant because it opens up new opportunities for studying the behavior of atomic beams in various applications, including materials science, quantum optics, and precision measurements. The references cited in the text provide further information on the research related to this topic. ### The mechanics of magnetic helicity in the plasma The helicity associated with ions and electrons in plasma has opposite signs. This is because helicity is a measure of the handedness of the magnetic field, and the ions and electrons have opposite charges and therefore move in opposite directions in a magnetic field. As a result, the magnetic fields generated by the two populations will have opposite handedness. In the expression for the free energy that includes both ions and electrons, we need to take into account the opposite signs of the helicity densities.
A more general expression for the free energy that accounts for this is: \[F=\frac{B^{2}}{2\mu_{0}}+(\alpha_{i}-\alpha_{e})K, \tag{38}\] where \(\alpha_{i}\) and \(\alpha_{e}\) are the helicity densities associated with the ion and electron populations, respectively, and \(K\) is the kinetic energy density. The term \((\alpha_{i}-\alpha_{e})\) accounts for the opposite signs of the ion and electron helicities. Taking the gradient of \(F\) with respect to position, we obtain: \[\mathbf{\nabla}F=\frac{1}{2\mu_{0}}\mathbf{\nabla}\big{(}B^{2}\big{)}+(\alpha_{i}-\alpha_{e})\mathbf{\nabla}(K). \tag{39}\] Using the same identity as before, \(\mathbf{\nabla}\big{(}B^{2}\big{)}=4\alpha\mathbf{B}\), where \(\alpha\) is the total helicity density, including contributions from both ions and electrons, we can write: \[\mathbf{\nabla}F=\left(\frac{2\alpha}{\mu_{0}}\right)\mathbf{B}+(\alpha_{i}-\alpha_{e})\mathbf{\nabla}(K). \tag{40}\] The total helicity density in plasma physics is a measure of the twistedness or knotting of magnetic field lines, and it is a conserved quantity in ideal magnetohydrodynamics (MHD). The gradient of the free energy is related to the total helicity density, but with an additional term that accounts for the difference between ion and electron helicities. This term reflects how the dynamics of ion and electron populations can impact the overall plasma behavior. Mathematically, the total helicity density is defined as the volume integral of the dot product between the magnetic field \(\mathbf{B}\) and its vector potential \(\mathbf{A}\), i.e., \(\alpha=\int_{V}(\mathbf{A}\cdot\mathbf{B})dV\), where \(V\) is the volume of the plasma. In general, the total helicity density can be both positive and negative, depending on the orientation and topology of the magnetic field lines. In a plasma, the total helicity density is related to the free energy of the system and plays a crucial role in determining the stability and dynamics of the plasma. The gradient of the total helicity density is related to the Lorentz force that acts on the plasma, and it can drive various instabilities and reconnection events in the magnetic field. Therefore, the total helicity density is an important quantity in plasma physics and is often used in theoretical and experimental studies of plasmas. In a plasma, both ions and electrons can contribute to the helicity of the magnetic field. The ion and electron helicities are defined as the volume integrals of the dot products between the magnetic field and the velocity of the respective species, i.e., \(\alpha_{i}=\int_{V}(\mathbf{v}_{i}\cdot\mathbf{B})dV\) and \(\alpha_{e}=\int_{V}(\mathbf{v}_{e}\cdot\mathbf{B})dV\), where \(\mathbf{v}_{i}\) and \(\mathbf{v}_{e}\) are the velocities of the ions and electrons, respectively. The total helicity density is the sum of the ion and electron helicities, i.e., \(\alpha=\alpha_{i}+\alpha_{e}\). As shown in Figure 2, the complexity of magnetic field lines increases with their writhe (a measure of the total amount of coiling or twisting in a knot) and twist (a measure of the local twisting or rotation of a knot). Therefore, the total helicity density includes contributions from both ions and electrons and reflects the overall twistedness or knotting of the magnetic field lines in the plasma. The difference between the ion and electron helicities, i.e., \((\alpha_{i}-\alpha_{e})\), is related to the dynamics of the ion and electron populations in the plasma.
If the ion and electron populations have different velocities or distributions, they can contribute differently to the helicity of the magnetic field and create a net helicity difference. This net helicity difference can in turn affect the stability and dynamics of the plasma and can lead to various instabilities or reconnection events in the magnetic field, as shown in [28] with the effect of energy conversion and dynamics of magnetic reconnection. Therefore, the ion and electron helicities, as well as their difference, are important quantities in plasma physics and can provide valuable insights into the behavior and evolution of plasmas. The equation \(d\rho/dt=-[F,\rho]\) describes the evolution of the helicity density (\(\rho\)) in plasma, where changes in the free energy \(F\) are linked to changes in magnetic field line twist and writhe. Increases in free energy can lead to increases in helicity density and vice versa. The equation provides a fundamental link between free energy and magnetic topology and highlights the important role of free energy in determining magnetic dynamics in plasma. To illustrate the relationship between free energy, helicity, twist, and writhe, let us consider a simple example of a magnetic field in a plasma that has both twist and writhe. We can write the magnetic field in terms of its vector potential, \(\mathbf{A}\), as \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\), and the helicity density of this magnetic field can be written as \(\rho=\mathbf{A}\cdot(\mathbf{\nabla}\times\mathbf{A})\). This equation relates the helicity density to the vector potential and its curl, and it quantifies the amount of twisting and linking of the magnetic field lines in the plasma. Figure 2: The relationship between free energy and helicity in magnetic fields, visualized through the increasing complexity of magnetic field lines as their writhe and twist increase. As the helicity of the field increases, so does its free energy, leading to more complex and tangled field lines. The free energy of the plasma, defined as the energy stored in the magnetic field, can be expressed as \(F=\frac{1}{2\mu_{0}}\int B^{2}dV\). Perturbations in the magnetic field can affect the twist and writhe of the magnetic field lines and cause changes in the helicity density and free energy of the plasma. Adding twist to the magnetic field increases the helicity density and, in turn, the free energy of the plasma. This relationship can be expressed through the time derivative of the helicity density: \[\frac{d\rho}{dt}=\int(\mathbf{\nabla}\times\mathbf{A})\cdot\left(\mathbf{\nabla}\times\frac{d\mathbf{A}}{dt}\right)dV \tag{41}\] Using the equation of motion for the plasma, which is given by: \[\rho\mathbf{v}=[\mathbf{j}\times\mathbf{B}]-\epsilon_{0}\frac{d\mathbf{E}}{dt}, \tag{42}\] where \(\mathbf{v}\) is the plasma velocity, \(\mathbf{j}\) is the current density, \(\mathbf{E}\) is the electric field, and \(\epsilon_{0}\) is the electric permittivity of free space, we can rewrite the time derivative of the helicity density as: \[\frac{d\rho}{dt}=-2\int(\mathbf{j}\cdot\mathbf{B})dV. \tag{43}\] This equation shows that the time rate of change of the helicity density is proportional to the current density and the magnetic field. Thus, if we increase the twist in the magnetic field, we will also increase the current density, which will in turn increase the rate of change of the helicity density, and hence the free energy of the plasma.
Similarly, if we perturb the magnetic field by adding a small amount of writhe to it, the helicity density will again increase, and this will lead to an increase in the free energy of the plasma. This can be seen by considering the writhe of the magnetic field lines, which is given by: \[Wr=\int\left[\mathbf{B}\cdot\mathbf{\nabla}\times\left(\frac{\mathbf{B}}{B^{2}}\right)\right]dV, \tag{44}\] where the integral is taken over the volume of the plasma. This equation quantifies the degree of linking of the magnetic field lines, and it is related to the helicity density through the equation \(\rho=2Wr\): the greater the degree of linking between magnetic field lines, the higher the helicity density in the system. Thus, we can see that changes in the free energy of the plasma can lead to changes in the twist and writhe of the magnetic field lines and that the helicity density provides a fundamental link between these quantities. ## IV Conclusion We conclude that, while the Liouville theorem reflects the properties of systems obeying Hamilton's equations, in our approach, after introducing Eqs. (8)-(9), this is not necessarily so. If the gradients of entropy in phase space do not equilibrate, then \(d\rho/dt\) is not necessarily null, which means that the set of states that a system can possibly attain forms a volume in the phase space \(\Gamma\) representing a "fluid" that may be compressible. This result may be significant because it challenges the conventional understanding of the behavior of systems obeying Hamilton's equations and implies that the dynamics of such systems may be more complex than previously thought. Moreover, the idea that the phase space of a system may be compressible has important implications for understanding the thermodynamics of such systems and may have applications in fields such as statistical physics and fluid dynamics (due to non-conservative forces, such as turbulence or viscous dissipation). ### Competing Financial Interests No targeted financial assistance was provided for this research by any public, private, or non-profit entity.
2307.04184
Intrusion Resilience Systems for Modern Vehicles
Current vehicular Intrusion Detection and Prevention Systems either incur high false-positive rates or do not capture zero-day vulnerabilities, leading to safety-critical risks. In addition, prevention is limited to a few primitive options like dropping network packets or extreme options, e.g., the ECU Bus-off state. To fill this gap, we introduce the concept of vehicular Intrusion Resilience Systems (IRS) that ensures the resilience of critical applications despite assumed faults or zero-day attacks, as long as threat assumptions are met. IRS enables running a vehicular application in a replicated way, i.e., as a Replicated State Machine, over several ECUs, and then requiring the replicated processes to reach a form of Byzantine agreement before changing their local state. Our study rides the mutation of modern vehicular environments, which are closing the gap between simple and resource-constrained "real-time and embedded systems", and complex and powerful "information technology" ones. It shows that current vehicle (e.g., Zonal) architectures and networks are becoming plausible for such modular fault and intrusion tolerance solutions, deemed too heavy in the past. Our evaluation on a simulated Automotive Ethernet network running two state-of-the-art agreement protocols (Damysus and Hotstuff) shows that the achieved latency and throughput are feasible for many Automotive applications.
Ali Shoker, Vincent Rahli, Jeremie Decouchant, Paulo Esteves-Verissimo
2023-07-09T14:18:04Z
http://arxiv.org/abs/2307.04184v1
# Intrusion Resilience Systems for Modern Vehicles ###### Abstract Current vehicular Intrusion Detection and Prevention Systems either incur high false-positive rates or do not capture zero-day vulnerabilities, leading to safety-critical risks. In addition, prevention is limited to a few primitive options like dropping network packets or extreme options, e.g., the ECU Bus-off state. To fill this gap, we introduce the concept of vehicular Intrusion Resilience Systems (IRS) that ensures the resilience of critical applications despite assumed faults or zero-day attacks, as long as threat assumptions are met. IRS enables running a vehicular application in a replicated way, i.e., as a _Replicated State Machine_, over several _ECU_s, and then requiring the replicated processes to reach a form of _Byzantine_ agreement before changing their local state. Our study rides the mutation of modern vehicular environments, which are closing the gap between simple and resource-constrained "real-time and embedded systems", and complex and powerful "information technology" ones. It shows that current vehicle (e.g., Zonal) architectures and networks are becoming plausible for such modular fault and intrusion tolerance solutions--deemed too heavy in the past. Our evaluation on a simulated Automotive Ethernet network running two state-of-the-art agreement protocols (Damysus and Hotstuff) shows that the achieved latency and throughput are feasible for many Automotive applications. Intrusion resilience, fault masking, cybersecurity, Byzantine agreement, automotive ## I Introduction Three trends--Automation, Digitization, and Connectivity--are disrupting the ways modern vehicles are designed and used. While these trends can bring notable features like safety, efficiency, and convenience, they could turn into a curse if security and resilience are left as afterthoughts. Unfortunately, reality shows that safety and security incidents have doubled annually over the past three years, with cyberattacks projected to cost up to half a trillion dollars by 2024 [19] and leading to millions of car recalls [7]. This trend, if not countered, jeopardizes the sought features and puts human safety at risk [16]. We need novel approaches to improve vehicles' resilience: ensuring that an acceptable service prevails, even under uncertain environmental conditions, or in the presence of faults or attacks that might not have been predicted (a.k.a. 0-days). This work is motivated by two main observations in the automotive industry. The first is that the automation and digitization trends increase the complexity of vehicles and the likelihood of software faults and vulnerabilities. Digitization suggests software-defined vehicle systems (compute nodes, networks, and software) as a main enabler of automation, supporting features like x-by-wire, Advanced Driver Assistance Systems (ADAS), and Telematics. This involves a considerable number of distributed software components running on over a hundred embedded compute devices, _Electronic Control Units_ (ECUs), which communicate via in-vehicle networks, e.g., CAN bus, Automotive Ethernet, FlexRay, etc. [20]. This results in a complex system with an enormous number--estimated to exceed 100 million--of Software Lines of Code (SLoC) in mainstream vehicles [4, 7]. Experience shows that human errors are positively correlated with both system complexity and code footprint, and this increases the likelihood of benign faults and intrusions.
The second observation is that connecting the vehicle to the cyberspace is becoming mainstream. Connectivity is established in several networking forms like Vehicle to Everything (V2X), Cellular, 5G, Bluetooth, WIFI, GPS, or even through hardware memory sticks or USB connectivity [8]. This raises substantial security challenges, as it enlarges the attack surface and entry points of the vehicle system, and thus makes it highly prone to intrusions induced by (the well experienced) attackers in the cyberspace, via exploiting existing vulnerabilities [18, 26]. The automotive community has recently been focusing on consolidating the network security layer, leaving the higher software layers insufficiently addressed. Of particular interest is the introduction of new network security controls and tools (e.g., Gateways, Firewalls), and hardening the security of existing networks, e.g., FlexRay, CAN XL, Automotive Ethernet (100BASE-T1, 1000BASE-T1, and 10BASE-T1S), etc. [20]. This is also supported by using endpoint tools like _Intrusion Detection Systems_ (IDS) and _Intrusion Prevention Systems_ (IPS) [15]. Nevertheless, IDS systems of either "school"--signature-based and anomaly-based IDS--have limitations in the context of in-car systems: respectively, blindness to zero-day vulnerabilities, and the difficulty of defining a "normal behavior". Not to mention the problem of real-time reaction/mitigation, which haunts IPS and makes these ad-hoc response techniques currently very limited (e.g., detaching a vulnerable ECU from the network bus using the Bus-off state [15, 21]). In addition, since the network PHY/MAC protocols and tools (IPS/IDS) are application-agnostic, they can neither detect the anomalies and intrusions occurring at the upper layers nor stop their propagation to other ECUs. In this paper, we introduce the concept of _Intrusion Resilience Systems_ (IRS) for modern vehicles. IRS aims at contributing to a timely revolution in current in-vehicle computer and network architectures, by extending the security and safety properties of component-based architectures (e.g., AUTOSAR). We propose SW-implemented fault and intrusion tolerance, leveraging available sets of failure-independent ECUs, e.g., multi-vendor Zonal ECUs with different AutoSAR implementations. The approach is in line with the increasing demand for automotive computing and network channel redundancy, i.e., _ASIL Decomposition_, as part of the ISO 26262v.2 safety standard [13, 14]. IRS is the first system-level automotive component that allows running multiple and possibly diverse replicas of a stateful application process on different ECUs, forming a resilient deterministic _Replicated State Machine_ [25]. Replicas are required to agree on a common state through a variant of _Byzantine Agreement_ [5] protocols (today widely used in _Blockchain_) prior to changing their local state. As long as the process is deterministic, agreement is reached despite the existence of benign or intrusion faults in a minority of replicas. Distributed applications like door locks, window control, and software Over-the-Air (OTA) update verification are a few examples of feasible applications on top of IRS. IRS provides a quantum leap beyond IDS/IPS functions. First, it can work at a higher level of abstraction, targeting application software level anomalies and intrusions. Second, it follows an error masking approach which virtually captures all faults, even unknown ones, unlike IDS systems.
Third, contrary to IPS, whose response often degrades or suspends some system components or functions [15, 21], IRS makes it possible to roughly maintain the application functionality and quality under failures or attack. In this work, we present a preliminary IRS Zonal architecture, and we drive a logical reasoning for its feasibility, given the recent technological advancements in modern vehicles. To demonstrate the concept, we apply it to a multi-vendor AutoSAR-based Zonal system, thus leveraging the diversity thereof, to improve the independence of failures of ECUs--which is a requirement for Byzantine agreement. We conducted an empirical evaluation of two state-of-the-art Byzantine agreement protocols, namely _Damysus_ and _Hotstuff_--introduced in the Distributed Systems area. Our results show that IRS is feasible for modern Automotive Ethernet, since the achieved latency is less than 100ms for thousands of simultaneous operations. We argue that if more lightweight and efficient protocols are built especially for automotive, it is even possible to support time-critical applications. The rest of the paper is organized as follows. Section II presents the concept and the architecture of IRS. Section III analyzes the feasibility conceptually, while Section IV shows the empirical feasibility. The paper concludes in Section V. ## II Intrusion Resilience System ### _Systems and Threat Models_ Consider an in-vehicle system of \(N\) _nodes_. A _node_ is composed of a computing device, i.e., an ECU, a corresponding software stack, and, for simplicity, a (critical) soft real-time vehicular application. (This can be generalized to many applications.) A node can communicate with its counterparts through messaging via a vehicular _network_, either through a direct link, a switch, or via a gateway. A sent _message_ is assumed to eventually reach its destination node despite network failures or attacks (e.g., after re-transmissions). A node has a unique identity in the system to verify message authenticity and integrity using lightweight cryptography primitives, like _Elliptic Curve Cryptography_ (ECC). A node, or the application therein, is assumed to be _deterministic_. However, an application can fail by crashing or behave arbitrarily or maliciously when subject to an intrusion. We assume that at most a fraction \(F\) of the \(N\) nodes can fail at a time, which implicitly assumes some independence of failures between nodes. This can be achieved by employing ECUs from diverse vendors, with different libraries, software stacks, implementations, etc., which is not uncommon in the automotive setting. Finally, we assume the existence of a technique to detect _Denial of Service_ (DoS) jamming attacks in multidrop bus networks like CAN and 10BASE-T1S [15, 20]. ### _Architecture and Concept_ _Concept._ The IRS concept is based on the idea of intrusion error _masking_ rather than detection and prevention as in IPS/IDS. By running multiple (\(N\)) replicas/versions of an application and comparing their outputs on different nodes (ECUs), it is possible to mask any error caused by accidental or malicious faults occurring on \(F\) faulty nodes, by adopting the output state of an uninfected majority (\(N-F\)). This is possible through running a Byzantine agreement protocol across application replicas. In this approach, the state of a critical application can only be modified upon the agreement of at least \(N-F\) counterparts.
This exploits the current replicated vehicle functionalities, often used for coordinated actuation and notification, to improve intrusion resilience. _Architecture._ We present the IRS system view architecture in Fig. 1, A. The System View shows a number \(N\) of IRS nodes (\(N=4\), in this case) replicated over \(N\) ECUs. For clarity, we use _Zonal Control Units (ZCU)_ as ECUs to host different applications (e.g., door locks and window control) on the same ECU. On the other hand, Fig. 1, B presents the Node View at one of the nodes (i.e., node 2), describing its components and relation within the Hardware/Software (HW/SW) stack. In particular, the IRS is a system component, i.e., a module or service, used by those critical applications that require intrusion resilience. \(N\) versions of the application are employed over \(N\) different nodes, making use of the IRS module. The core module of the IRS seeks to ensure _agreement_ on requests issued by the application via an _IRS proxy_. The proxy encapsulates the authentication, peer information, and the function to be made resilient through IRS in an application-agnostic way. The agreement module runs the main Byzantine agreement protocol to ensure (1) _total ordering_ on the application state and (2) output validation (i.e., comparison of results from counterpart nodes on other ECUs). The agreement module benefits from three underlying modules, namely _Discovery_, _Broadcast_, and _Overlay_, to facilitate membership management and networking with the peer nodes as a separate layer. Note that IRS can make use of these modules if made available by other frameworks, e.g., in the AutoSAR architecture. IRS offers _modular and incremental fault and intrusion tolerance_ [22]. Not all node applications--or even functions of an application--are supposed to use the IRS, as they might not be critical, e.g., the case of App4 in the figure. Likewise, applications using IRS may resort to different models of replication (from crash to Byzantine fault tolerance), as well as different sizes of tolerance quorums (\(\#(N)\)). For instance, an application that controls the remote door locks is much more critical than the mirror tilting application. Similarly, an Over-the-Air (OTA) update application is highly critical compared to infotainment social network (e.g., chatting) updates. IRS runs on top of other basic services and abstractions, such as those defined in the AutoSAR standard [1]. This way, it facilitates the integration of resilience in the existing component-based automotive architecture philosophy. At this layer, other tools like IDS and IPS may operate as well. Finally, the bottom layer encapsulates the PHY network protocols (e.g., CAN, FlexRay, Automotive Ethernet), typically managed by the physical controller. ECUs are connected via a network that could be a multidrop, node-to-node, or switch-based network, as long as messages sent by one node are eventually delivered at the destination node. _Byzantine Agreement._ IRS encapsulates a distributed voting logic using an intrusion tolerant protocol category based on the concept of Byzantine Agreement/Consensus. Initial practical protocols [5] would require \(N=3F+1\), had quadratic (\(O(n^{2})\)) messaging complexity, and were computationally demanding due to the heavy use of cryptography. The following generation was architecturally hybrid [3], featuring the use of trusted-trustworthy components [9, 27], dramatically reducing complexity, and requiring a smaller quorum of \(N=2F+1\).
Later, the advent of _Blockchain_ inspired yet another generation of intrusion tolerant protocols, becoming even more efficient and lightweight [10]. The current state of affairs makes them feasible for environments with moderate capacities like modern vehicles (more on this in the next section). Describing a specific protocol is out of the scope of this position paper; however, we provide a brief overview of two recent protocols, namely HotStuff and Damysus [10, 28], that are used in the evaluation of IRS in Section IV. While the above protocols assume a _partial synchrony_ network model, special real-time protocols are needed for bus networks like FlexRay and CAN. A good starting point is validating the two variants of Byzantine Resilient Real-Time protocols like _PISTIS_ [17]. Unlike other intrusion tolerant protocols, which are non-synchronous, these real-time protocols are suited for hard or soft real-time environments as they have _Timeliness_ properties to guarantee delivery/execution within a defined probabilistic time-bound.
## III Feasibility Discussion
While the need for building resilient systems is very well understood, applying redundancy-based solutions like IRS may look infeasible for in-vehicular systems. Nevertheless, we argue that this is no longer the case, as the three trends of automation, digitization, and connectivity have changed modern vehicular systems dramatically. In this section, we try to alleviate these concerns through a conceptual analysis demonstrating the potential feasibility of IRS in modern vehicles.
### _Distributed and Redundant Applications_
The current application landscape in automotive is very rich and complex, spanning ADAS & Safety Systems, Infotainment, Body Electronics, Powertrain, and Telematics. At a fine-grained level, these applications comprise millions of functionalities. For instance, a modern Volvo vehicle "contains 10 million conditional statements as well as 3 million functions, which are invoked some 30 million places in the source code" [7]. Many of these applications are becoming naturally distributed across the vehicle to manage the dependencies between functionalities and to synchronize the similar ones across the vehicle. For instance, a vehicle may have applications running four steering, braking, and tyre-pressure processes; four or five door-lock and window processes; four light-set processes; two mirror processes; several airbag processes; etc. Nevertheless, these processes are currently only synchronized in a passive way, i.e., propagating notifications, where "decisions", e.g., changing an actuator state, are only made locally. Given this, the overhead of enforcing distributed control through agreement protocols prior to changing the application state would be reasonably low, since replicas are already being used. This is sound, in particular, for safety/security-critical applications that are soft real-time, like door lock/unlock, window open/close, and OTA update validation by different processes on different ECUs.
Fig. 1: Intrusion Resilience System (IRS) Architecture.
On the other hand, using redundancy to boost vehicle safety is becoming increasingly required [14]. Indeed, the _ASIL Decomposition_ mechanism drafted in the ISO 26262 v2 automotive safety standard [13, 14] suggests using redundant computing nodes and network channels to improve safety and reduce the costs (e.g., by using redundant cheap nodes).
### _Distributed Architecture_
The vehicular architecture has become heavily distributed as more ECUs are being added over time to cope with the application demands. Considering the evolution of distributed architectures [4], applications are becoming more aggregated in larger ECUs: (1) Domain-based ones aggregate applications with similar functionalities; (2) Zonal-based ones aggregate based on the vehicle zone, e.g., a Door Control Unit hosts many applications (like door locks, motors, windows, theme lights) at the door proximity; and (3) Centralized ones. The former two are considered very convenient environments to run replicated protocols such as the agreement protocol suggested in IRS. In addition, multiple aggregated applications can directly benefit from the IRS being a middleware/service. Indeed, while the replication cost has always been an adoption barrier in the IT world, the costs (surprisingly) look lower in vehicular architectures, which are natively distributed. The latter, centralized, architecture has been getting more traction recently. We do not recommend this architecture from a security perspective, as it constitutes a single point of failure/attack. Nevertheless, transforming the central controller into a distributed cluster could be a trade-off solution to mitigate this risk significantly.
### _Efficient and Secure Networks_
Vehicular networks, especially the CAN bus, have always been considered slow and the weakest spot in a vehicle. In particular, the classical CAN bus baud rate cannot be higher than 1 Mbps, and the payload is only 8 bytes per frame [20]. This prohibits an IRS-like solution, where the agreement metadata size (identifiers, signatures, cryptographic digests, clock) is high. On the other hand, CAN frames lack sender/receiver identifiers, which makes authentication and integrity a non-trivial task. Nevertheless, as shown in Table I, the newer versions of CAN, i.e., CAN FD and CAN XL, have larger frame payload sizes of 64 B and 2 KB, and baud rates of 2 Mbps and 10 Mbps, respectively. These are considered acceptable for soft real-time applications, e.g., door locks and OTA updates, as response time is not critical. Furthermore, novel networks like Time-Triggered Ethernet (SAE AS6802), a.k.a. Automotive Ethernet, and FlexRay have native security support and an order of magnitude higher bit rates. We believe that these advancements mitigate the concerns regarding the feasibility of IRS in such environments.
\begin{table} \begin{tabular}{l l l} **Network** & **Max Baud Rate** & **Max Frame Size** \\ \hline CAN-FD & 8 Mbps & 64 Bytes \\ CAN-XL & 10 Mbps & 2048 Bytes \\ FlexRay & 10 Mbps & 254 Bytes \\ 10BASE-T1 & 10 Mbps & 1500 Bytes \\ 100BASE-T1 & 100 Mbps & 1500 Bytes \\ 1000BASE-T1 & 1000 Mbps & 1500 Bytes \\ \end{tabular} \end{table} TABLE I: Modern automotive networking capabilities.
### _Decent HW/SW Stack_
It can be assumed that running IRS agreement protocols on a constrained device (like a micro-controller-based ECU) and network would be overkill due to the heavy use of cryptography. Despite being challenging, modern automotive ECUs (microprocessor-based and multi-core) are attaining computational and storage capacities comparable to a _Raspberry Pi_ or a mobile phone1. This holds, in particular, for main ECUs like domain and zone controllers, gateways, telecommunication units, etc. On top of this hardware, the software stack [11] is also getting more mature, as we observe more UNIX, POSIX, and Linux-based RTOS/OS, e.g., AGL, RTLinux, QNX, and Android Auto. This also means that a lot of IT/IoT libraries could now be adapted or used in automotive. New architectures are widely adopting virtualization hypervisor technology, which facilitates application deployments on an ECU, and thus replication in our case [11]. Therefore, the HW/SW stack of modern vehicles is decent enough to support a solution like IRS.
Footnote 1: https://www.embobility-engineering.com/focus-ecus/
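As a back-of-the-envelope check of the frame-size argument above, consider the metadata a signed agreement message carries. The sketch below is ours, with rough assumed sizes (a raw ECDSA P-256 signature of 64 B and a SHA-256 digest of 32 B), not measurements from an IRS implementation:

```python
SIGNATURE_B = 64   # assumed raw ECDSA P-256 signature (r, s)
DIGEST_B = 32      # assumed SHA-256 digest
IDS_B = 8          # assumed sender id + sequence/view number

def frames_needed(frame_payload_b, app_payload_b=8):
    """Frames required to carry one signed agreement message."""
    msg = app_payload_b + SIGNATURE_B + DIGEST_B + IDS_B
    return msg, -(-msg // frame_payload_b)  # ceiling division

for net, payload in [("classic CAN", 8), ("CAN FD", 64),
                     ("CAN XL", 2048), ("100BASE-T1", 1500)]:
    msg, n = frames_needed(payload)
    print(f"{net:12s}: {msg} B message -> {n} frame(s)")
# Classic CAN needs 14 frames per message; CAN FD needs two; CAN XL and
# Automotive Ethernet fit a single frame, echoing Table I's trend.
```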
### _Diversity_
Independence of failures between replicas (i.e., ECU HW and SW) is a key challenge for the effectiveness of any Byzantine Agreement based system like IRS [2, 6, 12, 23]. The reason is that without avoiding common-mode vulnerabilities or faults, many replicas can fail at the same time, thus violating the assumption of the correctness of a majority of replicas (\(N-F\)). While _N-version programming_ is deemed an intuitive, but costly, approach to build software with independent implementations that have the same specification, it has been shown that diversifying the components of the application's underlying layers, e.g., the operating systems or virtual machines, is very effective at improving independence of failures [12, 23]. Leveraging this, we argue that diversity in automotive is less challenging than in IT systems for two reasons. First, the automotive SW/HW supply chain is big and multi-vendor, which provides a rich source of off-the-shelf black-box solutions to build diversity. For instance, it is not uncommon to have ECUs or MCUs of the same specifications, diverse software libraries, operating systems, and hypervisors from many vendors. These are often used as underlying layers for applications to simplify their design and reduce the likelihood of leaving bugs or vulnerabilities. For a more conservative approach, the critical functions of an application can be chosen to run over IRS, which may optionally require only these functions to be implemented by different teams, e.g., using _N-version programming_. The second reason is the extensive use of standardized solutions in automotive. By design, this facilitates the integration of these modules as long as the APIs are defined and the specifications respected. We explain this by providing a case study using the AutoSAR standard [1].
**Case Study on AutoSAR.** We provide a case study showing how to leverage the AutoSAR standard [1] to build diversity with intrusion resilience. In Fig. 2, we provide a possible integration of the AutoSAR architecture with IRS (depicted in a generic way in Fig. 1). Particularly, we build a zonal architecture of four Zonal Control Unit (ZCU) replicas, Zone 1-4, that use different implementations of the AutoSAR specification at all layers: from the _Microcontroller Abstraction Layer_ at the bottom through the _Runtime Environment_ (a hypervisor) at the top. We select four different implementations (out of many [24]) that are currently provided by well-known vendors following the AutoSAR standard. Each implementation is typically composed of more than 100 modules. This can generate a high level of diversity, as it is less likely for one module to fail at the same time as its counterpart in the other ZCU replicas. Each Runtime Environment provides platform-agnostic access to the different modules and capabilities in the ECU Microcontroller Abstraction, ECU Abstraction, and Services layers. The Services layer is suggested as a good fit for including the IRS modules.
The same applications can run on top of the four ZCUs, whereas their agreement is ensured by the IRS modules and protocols. Notice that one can build yet more diversity by choosing different microcontrollers from different vendors at the HW layer as well.
## IV Evaluation
The aim of this section is to provide an empirical evaluation assessing the feasibility of IRS with respect to automotive networks and application requirements, i.e., throughput and latency. In particular, we evaluate the Byzantine agreement protocol, which is the most significant component of the IRS. Our goal is to show that the overhead of IRS is acceptable for some automotive applications even with current protocols, which are tailored for the heavyweight IT setting.
### _Brief summary of the protocols_
We consider two Byzantine agreement protocols as a baseline for our performance evaluation: HotStuff and Damysus. The former is chosen as a state-of-the-art fast protocol. The latter represents another class of protocols that can take advantage of hardware _hybrids_, e.g., _Hardware Security Modules (HSM)_, common in modern ECUs, to reduce the number of replicas needed and improve the performance. We concisely describe the main relevant features of these protocols, necessary to understand the evaluation, and refer interested readers to [10, 28] for details. HotStuff [28] is a recent protocol optimized for high throughput. HotStuff's communication complexity is linear in the number of replicas/nodes, including special ones called _leaders_. HotStuff requires \(N\geq 3f+1\) nodes to tolerate \(f\) Byzantine faults. Nodes build a chain of blocks (which can be seen as batches) by voting for extensions, which are proposed by the leaders of _views_ (i.e., successive rounds). Damysus [10] is a hybrid BFT protocol that builds on HotStuff and leverages two trusted components, namely a _checker_ and an _accumulator_. These can easily be implemented on modern trusted execution environments (TEE), because they only assume classical cryptographic functionalities and some memory. Therefore, these can be exploited in modern ECUs that often support HSM. The checker prevents nodes from equivocating, while the accumulator forces a leader to extend the most recent block. Thanks to these trusted components, Damysus uses only \(N\geq 2f+1\) replicas and requires one communication phase less than HotStuff.
### _Experimental Setting_
We evaluate here a version of basic HotStuff implemented in _C++_. Replicas use _ECDSA_ signatures with _prime256v1_ elliptic curves (available in _OpenSSL_), and are connected using the _Salticidae_ library. The protocol is deployed within Docker containers on a single machine equipped with an Intel Core i5-9500 CPU (3.00 GHz) with 6 cores and 32 GB of RAM. The network latency is enforced using _netem_. The number of faults is set to \(1\) in all experiments, so a total of \(4\) replicas are used, all directly connected with each other (i.e., no switched topology is used). The bandwidth varies between \(10\), \(100\), and \(1000\) Mbps, simulating the bandwidth of 10BASE-T1, 100BASE-T1, and 1000BASE-T1 Automotive Ethernet networks [20]. In all experiments, we fix the network latency to \(0.4\) ms, which is typical for Automotive Ethernet [29]. We only consider Automotive Ethernet for three reasons: (1) it has high bandwidth that is suitable for heavyweight agreement protocols; (2) it has a synchrony model similar to that of the protocols we evaluate; and (3) it is believed that Ethernet will replace most in-vehicle networks in the near future.
Fig. 2: A zonal architecture of four Zonal Control Unit replicas with diverse AutoSAR implementations. The figure demonstrates how IRS can be integrated with AutoSAR to ensure intrusion resilience and leverage the AutoSAR standard to improve independence of failures between replicas.
Our measurements focus on the latency and scalability of the protocols. The first measures the time for an ECU operation to complete, whereas the scalability shows the throughput limit of the protocol where latency remains acceptable under higher loads.
### _Latency_
In this experiment, we measure the latency of HotStuff and Damysus while varying the payload size; blocks contain a single transaction with a payload of size \(8\), \(128\), or \(1024\) Bytes (B). In addition to the payload, a transaction contains \(2\times 4\) B for metadata (a client id and a transaction id), as well as the hash value of the previous block of size \(32\) B, thereby adding \(40\) B to each transaction in addition to its payload. Therefore, given the above payloads, each transaction is of size \(48\), \(168\), or \(1064\) B. Each experiment presents the average of \(10\) repetitions with \(30\) views each (so a total of \(300\) instances). Fig. 3 presents HotStuff's and Damysus's latencies depending on the bandwidth of the various Automotive Ethernet variants (\(10\), \(100\), or \(1000\) Mbps) and on the payload size (\(8\), \(128\), or \(1024\) B). In this scenario, we evaluate the protocols' latency under minimal workload to measure the lowest possible latency. Our measurements indicate that a request can always be processed in between \(8\) and \(12\) ms for HotStuff and between \(4\) and \(6\) ms for Damysus, with a bandwidth of \(10\) Mbps. Increasing the Ethernet bandwidth decreases the protocols' latency. Requests are processed in less than \(5.1\) and \(4.51\) ms for HotStuff, with \(100\) Mbps and \(1000\) Mbps, respectively. Damysus's latency is as low as \(3\) ms in both 100BASE-T1 and 1000BASE-T1. We expected Damysus to have lower latencies because the use of HSM abstractions reduces the message exchange round-trips. These results are considered very acceptable latency numbers for many Body, Chassis, and Powertrain applications. Larger requests should logically increase HotStuff's latency, which is the case for our experiments with \(100\) Mbps; however, this effect is difficult to observe under a low workload and high bandwidths.
### _Scalability_
We then study the influence of the system's workload on HotStuff and Damysus' throughput and latency in Fig. 3(a), Fig. 3(b) and Fig. 3(c), corresponding to 10BASE-T1, 100BASE-T1, and 1000BASE-T1, respectively. In these experiments, we increase the rate with which clients submit requests; hence the curve points in the figures correspond to the delays between issuing subsequent operations: \(900,700,500,100,50,10,5,0\) microseconds. These rates can represent the load on the IRS, i.e., where multiple applications are running simultaneously. In addition, running two clients considers the cases of concurrent views--which could incur some race conditions. Blocks/batches are composed of \(400\) transactions/operations, with \(0\) B payloads (again, plus \(40\) B for the above information). In all experiments, the protocol's throughput and latency increase with the request rates until the system saturates.
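The transaction sizes above follow directly from the stated layout; a small sketch (ours) reproducing the arithmetic:

```python
CLIENT_ID_B, TX_ID_B, PREV_HASH_B = 4, 4, 32  # per-transaction metadata

def tx_size(payload_b):
    """Total transaction size: payload plus the 40 B described above."""
    return payload_b + CLIENT_ID_B + TX_ID_B + PREV_HASH_B

for p in (8, 128, 1024):
    print(f"payload {p:4d} B -> transaction {tx_size(p)} B")
# -> 48, 168 and 1064 B, the sizes used in the latency experiments.

# The scalability runs batch 400 zero-payload transactions per block:
print("block body:", 400 * tx_size(0), "B")  # 16000 B before framing
```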
Beyond the saturation point, the protocol latency is expected to increase exponentially while the throughput remains constant. HotStuff's maximum throughput with \(10\), \(100\) and \(1,000\) Mbps networks is roughly \(7\), \(14\) and \(16\) Kops/sec, respectively. More interestingly, it scales up to \(4\), \(6\) and \(8\) Kops/sec while achieving a latency of less than \(100\) ms. In general, under the same settings, Damysus's throughput is higher, at around \(11\), \(17\) and \(17\) Kops/sec at saturation, and \(9\), \(18\), and more Kops/sec (the network is not saturated here) while keeping a latency of less than \(100\) ms. These are very promising results, showing that the network can serve thousands of simultaneous critical operations for applications like door locks, OTA firmware/software updates, etc. Both throughput and latency improvements are expected because Damysus has one communication phase less than HotStuff. These results indicate that using an IRS for vehicles is possible with the recent advancements in vehicle networks and controller capabilities. This encourages more research on devising Byzantine agreement protocol variants that are more automotive-friendly. A promising direction seems to be taking advantage of the HSM hybrid to build more efficient and lightweight protocols for Automotive Ethernet. Other protocols may also be built for multi-hop networks like CAN-XL and FlexRay. This requires more work on network synchronization modeling, which may benefit from real-time Byzantine broadcast protocols [17] that ensure a notion of _timeliness_, useful for safety-critical applications.
Fig. 3: Latency (\(ms\)) of HotStuff (Hot) and Damysus (Dam) with simulated Automotive Ethernet, varying payload size, and link latency \(400\,\mu s\).
## V Conclusion
We introduced the concept of _Intrusion Resilience Systems_ (IRS) for modern vehicles. The aim is to bridge the gap left in security-by-design and intrusion detection and prevention systems at two levels: first, it is tailored for the software/application layer; second, it tolerates faults and 0-day attacks to roughly maintain the same service quality even if intrusions could not be profiled. IRS uses the _State Machine Replication_ approach, in which the replicated application can only change the local state upon _Byzantine agreement_ with its counterpart nodes. The paper proposed a preliminary architecture and an analytic feasibility study highlighting the fact that modern vehicular technologies are closing the gap with IT/IoT technologies, which makes them plausible environments to adopt a replicated solution like IRS. The results of our empirical evaluation using two state-of-the-art protocols, _Damysus_ and _HotStuff_, show that IRS is feasible for modern Automotive Ethernet, since the achieved latency is less than 100ms for thousands of simultaneous operations. We invite researchers and practitioners to investigate this direction by studying the tradeoffs of agreement protocols, architectures, diversity, application space, etc.
2302.01622
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training. For this, we used two datasets: (1) A large dataset (N=193,311) of high quality clinical chest radiographs, and (2) a dataset (N=1,625) of 3D abdominal computed tomography (CT) images, with the task of classifying the presence of pancreatic ductal adenocarcinoma (PDAC). Both were retrospectively collected and manually labeled by experienced radiologists. We then compared non-private deep convolutional neural networks (CNNs) and privacy-preserving (DP) models with respect to privacy-utility trade-offs measured as area under the receiver-operator-characteristic curve (AUROC), and privacy-fairness trade-offs, measured as Pearson's r or Statistical Parity Difference. We found that, while the privacy-preserving trainings yielded lower accuracy, they did largely not amplify discrimination against age, sex or co-morbidity. Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus Makowski, Sven Nebelung, Rickmer Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis
2023-02-03T09:49:13Z
http://arxiv.org/abs/2302.01622v5
# Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
###### Abstract
Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, we evaluated the effect of privacy-preserving training of AI models for chest radiograph diagnosis regarding accuracy and fairness compared to non-private training. For this, we used a large dataset (\(N=193\,311\)) of high quality clinical chest radiographs, which were retrospectively collected and manually labeled by experienced radiologists. We then compared non-private deep convolutional neural networks (CNNs) and privacy-preserving (DP) models with respect to privacy-utility trade-offs measured as area under the receiver-operator-characteristic curve (AUROC), and privacy-fairness trade-offs, measured as Pearson's r or Statistical Parity Difference. We found that the non-private CNNs achieved an average AUROC score of \(0.90\pm 0.04\) over all labels, whereas the DP CNNs with a privacy budget of \(\varepsilon=7.89\) resulted in an AUROC of \(0.87\pm 0.04\), i.e., a mere 2.6% performance decrease compared to non-private training. Furthermore, we found the privacy-preserving training not to amplify discrimination against age, sex or co-morbidity. Our study shows that -under the challenging realistic circumstances of a real-life clinical dataset- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
## 1 Introduction
The development of artificial intelligence (AI) systems for medical applications represents a delicate trade-off: On the one hand, diagnostic models must offer high accuracy and certainty, as well as treat different patient groups equitably and fairly. On the other hand, clinicians and researchers are subject to ethical and legal responsibilities towards the patients whose data is used for model training. In particular, when diagnostic models are published to third parties whose intentions are impossible to verify, care must be undertaken to ascertain that patient privacy is not compromised. Privacy breaches can occur, e.g., through data reconstruction, attribute inference or membership inference attacks against the shared model [1]. Federated learning [2; 3; 4] has been proposed as a tool to address some of these problems. However, it has become evident that training data can be reverse-engineered piecemeal from federated systems, rendering them just as vulnerable to the aforementioned attacks as centralized learning [5]. Thus, it is apparent that formal privacy preservation methods are required to protect the patients whose data is used to train diagnostic AI models. The gold standard in this regard is differential privacy (DP) [6]. DP is a formal framework encompassing a collection of techniques to allow analysts to obtain insights from sensitive datasets while guaranteeing the protection of individual data points within them.
DP thus is a property of a data processing system which states that the results of a computation over a sensitive dataset must be approximately identical whether or not any single individual was included or excluded from the dataset. Formally, a randomised algorithm (mechanism) \(\mathcal{M}:\mathcal{X}\rightarrow\mathcal{Y}\) is said to satisfy \((\varepsilon,\delta)\)-DP if, for all pairs of databases \(D,D^{\prime}\in\mathcal{X}\) which differ in one row and all \(S\subseteq\mathcal{Y}\), the following holds: \[\Pr\left(\mathcal{M}(D)\in S\right)\leq e^{\varepsilon}\Pr\left(\mathcal{M}(D^{\prime})\in S\right)+\delta, \tag{1}\] where the guarantee is given over the randomness of \(\mathcal{M}\) and holds equally when \(D\) and \(D^{\prime}\) are swapped. Applied to neural network training, the randomization required by DP is ensured through the addition of calibrated Gaussian noise to the gradients of the loss function computed for each individual data point after they have been clipped in \(\ell_{2}\)-norm to ensure that their magnitude is bounded [7] (see Figure 1). By specifying the noise variance and the number of training steps, it is possible to summarize the total privacy expenditure (intuitively, the amount of information that has "flowed" from the input data to the model) in the form of the \(\varepsilon\)- and \(\delta\)-values introduced above, which denote the so-called _privacy budget_. Stronger privacy guarantees are denoted by smaller values of \(\varepsilon\) and \(\delta\). The fact that quantitative privacy guarantees can be computed over many iterations (_compositions_) of complex algorithms like the ones used to train neural networks is unique to DP. This process is typically referred to as _privacy accounting_. Although training with DP offers formal (and empirical) protection against both membership inference and reconstruction attacks [8], whose strength is directly proportional to the chosen privacy level, the utilization of DP also creates two fundamental trade-offs. The first is a "privacy-utility trade-off", i.e., a reduction in diagnostic accuracy when stronger privacy guarantees are required [9; 10]. The other trade-off is between privacy and fairness. Intuitively, the fact that AI models learn proportionally less about under-represented patient groups [11] in the training data is amplified by DP (which further limits how much information flows about them), leading to demographic disparity in the model's predictions or diagnoses [12]. Both of these trade-offs are delicate in sensitive applications, such as medical ones, as it is not acceptable to have wrong diagnoses or discriminate against a certain patient group. The aforementioned considerations outline a fundamental tension between accuracy, fairness and privacy which exists in the training of differentially private models for medical applications. So far, these trade-offs have only been evaluated on benchmark datasets, such as CIFAR-10 or ImageNet. We thus contend that the widespread use of privacy-preserving machine learning requires testing under real-life circumstances. In the current study, we perform the first in-depth investigation into this topic. Concretely, we utilize a large clinical database of radiologist-labelled radiographic images which has previously been used to train an expert-level diagnostic AI model, but otherwise not been curated or pre-processed for private training in any way. This mirrors the type of datasets available at clinical institutions.
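As a concrete illustration of the per-sample clipping-and-noising step just described, here is a minimal PyTorch-style sketch (ours, for illustration only: the hyperparameter values are placeholders rather than the study's settings, and the privacy accounting that converts the noise multiplier into \((\varepsilon,\delta)\) guarantees is omitted):

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, optimizer,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each per-sample gradient to clip_norm in
    L2-norm, sum the clipped gradients, add Gaussian noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                      # per-sample gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                     # bounded contribution
    for p, s in zip(params, summed):
        noise = noise_multiplier * clip_norm * torch.randn_like(p)
        p.grad = (s + noise) / len(xs)            # noised average
    optimizer.step()
    optimizer.zero_grad()
```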
In this setting, we then study the extent of privacy-utility and privacy-fairness trade-offs in training advanced computer vision architectures. Our main contributions can be summarized as follows: 1. We study the diagnostic accuracy ramifications of differentially private deep learning in multi-label classification of a large, curated database of intensive care unit chest radiographs. We find the accuracy reductions to be negligible compared to non-private training through the utilization of transfer learning on public datasets and careful choice of architecture. 2. We investigate the fairness implications of differentially private learning with respect to key demographic characteristics such as sex, age and co-morbidity. We find that -while differentially private learning has a mild fairness effect- it does not introduce significant discrimination concerns compared to non-private training.
**Prior work.** The training of deep neural networks on medical data with differential privacy (DP) has so far not been widely investigated. Pati et al. [13] use privacy-preserving techniques, most notably federated learning and differential privacy, to comply with privacy legislation and thus allow training on a dataset of ca. 6 000 multi-parametric magnetic resonance imaging scans. The authors show that -for this use case- privacy preservation incentivises data sharing and thus makes large datasets available. However, they do not investigate privacy-utility or privacy-fairness trade-offs. In our previous work [14] we demonstrated the utilization of a suite of privacy-preserving techniques for pneumonia classification in pediatric chest x-rays. However, the focus of this study was not to elucidate privacy-utility or privacy-fairness trade-offs, but to showcase that federated learning workflows can be used to train diagnostic AI models with good (yet diminished compared to the non-private and centralized setting) accuracy on decentralized data while minimizing data privacy and governance concerns; we demonstrated this using empirical data reconstruction attacks, which are thwarted by the utilization of differential privacy. Moreover, the work did not consider differential diagnosis but only coarse-label classification into normal/bacterial/viral pneumonia. To the best of our knowledge, our study is the first work to investigate the use of differential privacy in the training of complex diagnostic AI models on a real-world dataset of this magnitude (nearly 200 000 samples) and to include an extensive evaluation of privacy-utility and privacy-fairness trade-offs. Our results are of interest to medical practitioners, deep learning experts in the medical field and regulatory bodies such as legislative institutions, institutional review boards and data protection officers, and we undertook specific care to formulate our main lines of investigation across the important axes delineated above, namely the provision of objective metrics of diagnostic accuracy, privacy protection and demographic fairness towards diverse patient subgroups.
Figure 1: Differences between the private and non-private training process of a neural network. (A) Images from a dataset are fed to a neural network and predictions are made. (B) From the predictions and the ground truth labels, the gradient is calculated via backpropagation. (C, upper panel) In normal training, all gradients are averaged and an update step is performed.
(C, lower panel) In private training, each per-sample gradient is clipped to a predetermined \(\ell_{2}\)-norm, the clipped gradients are averaged, and noise proportional to the norm is added. This ensures that the information about each sample is upper-bounded and perturbed with sufficient noise.
## 2 Materials and Methods
### 2.1 Patient Cohort
We employed UKA-CXR [15, 16], a large cohort of chest radiographs. The dataset consists of \(N=193\,311\) frontal CXR images, all manually labeled by radiologists. The available labels include: pleural effusion, pneumonic infiltrates, and atelectasis, each separately for right and left lung, congestion, and cardiomegaly. The labeling system for cardiomegaly included five classes "normal", "uncertain", "borderline", "enlarged", and "massively enlarged". For the rest of the labels, five classes of "negative", "uncertain", "mild", "moderate", and "severe" were used. Data were split into \(N=153\,502\) training and \(N=39\,809\) test images using patient-wise stratification, but otherwise completely random allocation [15, 16]. There was no overlap between the training and test sets. Table 1 shows the statistics of the dataset.
### 2.2 Data Pre-processing
All the images were resized to \((512\times 512)\) pixels. Afterward, a normalization scheme as described previously by Johnson et al. [17] was utilized by subtracting the lowest value in the image, dividing by the highest value in the shifted image, truncating values, and converting the result to an unsigned integer, i.e., the range of \([0,255]\).
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{2}{c}{Training Set} & \multicolumn{2}{c}{Test Set} & \multicolumn{2}{c}{All} \\ & N & percentage & N & percentage & N & percentage \\ \hline Total & 153,502 & & 39,809 & & 193,311 & \\ Female & 52,843 & (34.42\%) & 14,449 & (36.30\%) & 67,292 & (34.81\%) \\ Male & 100,659 & (65.58\%) & 25,360 & (63.70\%) & 126,019 & (65.19\%) \\ Aged [0, 30) & 4,279 & (2.79\%) & 1,165 & (2.93\%) & 5,444 & (2.82\%) \\ Aged [30, 60) & 42,340 & (27.58\%) & 10,291 & (25.85\%) & 52,631 & (27.23\%) \\ Aged [60, 70) & 36,882 & (24.03\%) & 10,025 & (25.18\%) & 46,907 & (24.27\%) \\ Aged [70, 80) & 48,864 & (31.83\%) & 12,958 & (32.55\%) & 61,822 & (31.98\%) \\ Aged [80, 100) & 21,137 & (13.77\%) & 5,370 & (13.49\%) & 26,507 & (13.71\%) \\ Cardiomegaly & 71,732 & (46.72\%) & 18,616 & (46.75\%) & 90,348 & (46.74\%) \\ Congestion & 13,096 & (8.53\%) & 3,275 & (8.22\%) & 16,371 & (8.47\%) \\ Pleural effusion right & 12,334 & (8.03\%) & 3,275 & (8.22\%) & 15,609 & (8.07\%) \\ Pleural effusion left & 9,969 & (6.49\%) & 2,602 & (6.53\%) & 12,571 & (6.50\%) \\ Pneumonic infiltration right & 17,666 & (11.51\%) & 4,847 & (12.17\%) & 22,513 & (11.64\%) \\ Pneumonic infiltration left & 12,431 & (8.10\%) & 3,562 & (8.94\%) & 15,993 & (8.27\%) \\ Atelectasis right & 14,841 & (9.67\%) & 3,920 & (9.84\%) & 18,761 & (9.71\%) \\ Atelectasis left & 11,916 & (7.76\%) & 3,166 & (7.95\%) & 15,082 & (7.80\%) \\ \hline & \multicolumn{2}{c}{Age Training Set} & \multicolumn{2}{c}{Age Test Set} & \multicolumn{2}{c}{Age All} \\ & Mean & StD & Mean & StD & Mean & StD \\ \hline Total & 66 & 15 & 66 & 15 & 66 & 15 \\ Female & 66 & 15 & 66 & 16 & 66 & 15 \\ Male & 65 & 14 & 66 & 14 & 65 & 14 \\ Aged [0, 30) & 21 & 8 & 21 & 8 & 21 & 8 \\ Aged [30, 60) & 50 & 8 & 51 & 8 & 51 & 8 \\ Aged [60, 70) & 65 & 3 & 65 & 3 & 65 & 3 \\ Aged [70, 80) & 75 & 3 & 75 & 3 & 75 & 3 \\ Aged [80, 100) & 84 & 3 & 84 & 3 & 84 & 3 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics over subgroups of
the UKA-CXR dataset used in this study. The upper part of the table shows the number of samples in each group and their relative share in the training and test sets, as well as the complete dataset. The lower part shows the mean and standard deviation of the age in the subgroups, again over the training and test sets as well as the complete dataset.
Finally, we performed histogram equalization by shifting pixel values towards 0 or towards 255 such that all pixel values 0 through 255 have approximately equal frequencies [17]. We selected a binary classification paradigm for each label. The "negative" and "uncertain" classes ("normal" and "uncertain" for cardiomegaly) were treated as negative, while the "mild", "moderate", and "severe" classes ("borderline", "enlarged", and "massively enlarged" for cardiomegaly) were treated as positive.
### 2.3 Deep Learning Process
#### 2.3.1 Network Architecture
We employed the ResNet9 architecture introduced in [18] as our classification architecture. The images were expanded to \((512\times 512\times 3)\) for compatibility with the neural network architecture. The final linear layer reduces the \((512\times 1)\) output feature vectors to the desired number of diseases to be predicted, i.e., 8. The sigmoid function was utilized to convert the output predictions to individual class probabilities. The full network contained a total of 4.9 million trainable parameters. Our ResNet9 network employs the modifications proposed by Klause et al. [18] and by He et al. [19]. Instead of the batch normalization [20] layers, we used group normalization [21] layers with groups of 32 to be compatible with DP processing. We pretrained the network on the MIMIC Chest X-ray JPG dataset v2.0.0 [22], consisting of \(N=210\,652\) frontal images. All training hyperparameters were selected empirically based on their validation accuracy, while no systematic/automated hyperparameter tuning was conducted.
#### 2.3.2 Non-DP Training
The Rectified Linear Unit (ReLU) [23] was chosen as the activation function in all layers. We performed data augmentation during training by applying random rotation in the range of \([-10,10]\) degrees and medio-lateral flipping with a probability of 0.50. The model was optimized using the NAdam [24] optimizer with a learning rate of \(5\cdot 10^{-5}\). The binary weighted cross-entropy with inverted class frequencies of the training data was selected as the loss function. The training batch size was chosen to be 128.
#### 2.3.3 DP Training
Mish [25] was chosen as the activation function in all layers. No data augmentation was performed during DP training, as we found further data augmentation during training to be harmful to accuracy. All models were optimized using the NAdam [24] optimizer with a learning rate of \(5\cdot 10^{-4}\). The binary weighted cross-entropy with inverted class frequencies of the training data was selected as the loss function. The maximum allowed gradient norm was chosen to be 1.5, and the network was trained for 150 epochs for each chosen privacy budget. Each point in the batch was sampled with a probability of \(8\cdot 10^{-4}\) (128 divided by \(N=153\,502\)).
### 2.4 Quantitative Evaluation and Statistical Analysis
The area under the receiver-operator-characteristic curve (AUROC) was utilized as the primary evaluation metric. We report the average AUROC over all the labels for each experiment.
The individual AUROC as well as all other evaluation metrics of individual labels are reported in the supplemental material (Tables 4-10). Bootstrapping was used with \(1\,000\) redraws for each measure to determine the statistical spread [26]. For calculating sensitivity, specificity and accuracy, a threshold was chosen according to Youden's criterion [27], i.e., the threshold that maximized (true positive rate - false positive rate). To evaluate the correlation between results of data subsets and their sample size, Pearson's r coefficient was used. To analyze fairness between subgroups, the statistical parity difference [28] was used, which is defined as \(P(\hat{Y}=1|C=\text{Minority})-P(\hat{Y}=1|C=\text{Majority})\), where \(\hat{Y}=1\) represents correct model predictions and \(C\) is the group in question. Intuitively, it is the difference in classification accuracy between the minority and majority class and thus is optimally zero. Values larger than zero mean that there is a benefit for the minority class, while values smaller than zero mean that the minority class is discriminated against.
## 3 Results
### High classification accuracy is attainable despite stringent privacy guarantees
Table 2 shows the detailed evaluation results for non-private and private (at \(\varepsilon=7.89\)) model training. In the case of non-private training, our model achieves an AUROC of \(0.90\) over all diagnoses. It performs best on pneumonic infiltration on the right (AUROC=0.94) while struggling the most to accurately classify cardiomegaly (AUROC=0.84). Training with DP decreases all results slightly and achieves an overall AUROC of \(0.87\). The per-diagnosis performance ranges from \(0.92\) (pleural effusion right) to \(0.81\) AUROC (congestion). We next consider classification performance at a very strong level of privacy protection (i.e. at \(\varepsilon<1\)). Here, at an \(\varepsilon\)-budget of only \(0.29\), our model achieves an average AUROC of \(0.83\) over all diagnoses. A visual overview is displayed in Figure 2, which shows the average AUROC, accuracy, sensitivity, and specificity values over all labels. Supplementary Tables 4-10 show the per-diagnosis evaluation results for non-DP and DP training for different \(\varepsilon\) values.
### Diagnostic accuracy is correlated with patient age and sample size for both private and non-private models
Table 2 shows the difference in classification performance for each diagnosis between the non-private model evaluation and its private counterpart, compared to the sample size (that is, the number of available samples with a given label) within our dataset. At \(\varepsilon=7.89\), the largest difference of AUROC between the non-private and privacy-preserving model was observed for congestion (\(3.82\%\)) and the smallest difference was observed for pleural effusion right (\(1.55\%\), see Table 2). Of note, there is a visible trend (Pearson's r: \(0.44\)) that classes where the model exhibits good diagnostic performance in the non-private setting also suffer the smallest drop in the private setting. On the other hand, classes that are already difficult to predict in the non-private case deteriorate the most in terms of classification performance with DP (see Figure 3). Both non-private (Pearson's r: \(0.57\)) and private (Pearson's r: \(0.52\)) diagnostic AUROC exhibit a weak correlation with the number of samples available for each class (see Figure 3).
However, the drop in AUROC between private and non-private is not correlated with the sample size (Pearson's r: \(0.06\)). Furthermore, we evaluate our models based on age range and patient sex (Table 3). Additionally, we calculate the statistical parity difference for those groups to obtain a measure of fairness (Table 3 and Figure 4). All models perform the best on patients younger than \(30\) years of age. It appears that the older the patients are, the greater the difficulty for the models to predict the labels accurately. Statistical parity difference scores are slightly negative for the age groups between \(70\) and \(80\) years and older than \(80\) years for all models, indicating that the models discriminate slightly against these groups. In addition, while for the aforementioned age groups the discrimination does not change with privacy levels, younger patients become more privileged as privacy increases.
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline \(\varepsilon\) & 0.29 & 0.54 & 1.06 & 2.04 & 4.71 & 7.89 & Non-private (\(\infty\)) \\ \hline CDM & 0.79 \(\pm\) 0.00 & 0.79 \(\pm\) 0.00 & 0.80 \(\pm\) 0.00 & 0.81 \(\pm\) 0.00 & 0.81 \(\pm\) 0.00 & 0.82 \(\pm\) 0.00 & 0.84 \(\pm\) 0.00 \\ CNG & 0.78 \(\pm\) 0.00 & 0.79 \(\pm\) 0.00 & 0.80 \(\pm\) 0.00 & 0.80 \(\pm\) 0.00 & 0.81 \(\pm\) 0.00 & 0.81 \(\pm\) 0.00 & 0.85 \(\pm\) 0.00 \\ PER & 0.88 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 & 0.90 \(\pm\) 0.00 & 0.90 \(\pm\) 0.00 & 0.92 \(\pm\) 0.00 & 0.92 \(\pm\) 0.00 & 0.94 \(\pm\) 0.00 \\ PEL & 0.84 \(\pm\) 0.00 & 0.84 \(\pm\) 0.00 & 0.86 \(\pm\) 0.00 & 0.87 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 & 0.92 \(\pm\) 0.00 \\ PIR & 0.87 \(\pm\) 0.00 & 0.88 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 & 0.90 \(\pm\) 0.00 & 0.90 \(\pm\) 0.00 & 0.91 \(\pm\) 0.00 & 0.93 \(\pm\) 0.00 \\ PIL & 0.88 \(\pm\) 0.00 & 0.88 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 & 0.90 \(\pm\) 0.00 & 0.91 \(\pm\) 0.00 & 0.91 \(\pm\) 0.00 & 0.94 \(\pm\) 0.00 \\ ALR & 0.82 \(\pm\) 0.00 & 0.83 \(\pm\) 0.00 & 0.84 \(\pm\) 0.00 & 0.85 \(\pm\) 0.00 & 0.86 \(\pm\) 0.00 & 0.87 \(\pm\) 0.00 & 0.89 \(\pm\) 0.00 \\ ALL & 0.80 \(\pm\) 0.00 & 0.81 \(\pm\) 0.00 & 0.82 \(\pm\) 0.00 & 0.83 \(\pm\) 0.00 & 0.85 \(\pm\) 0.00 & 0.85 \(\pm\) 0.00 & 0.87 \(\pm\) 0.00 \\ Average & 0.83 \(\pm\) 0.04 & 0.84 \(\pm\) 0.04 & 0.85 \(\pm\) 0.04 & 0.86 \(\pm\) 0.04 & 0.87 \(\pm\) 0.04 & 0.87 \(\pm\) 0.04 & 0.90 \(\pm\) 0.04 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results of training with DP and without DP with different \(\varepsilon\) values for \(\delta=6\cdot 10^{-6}\). The results show the individual AUROC values for each label, including cardiomegaly (CDM), congestion (CNG), pleural effusion right (PER), pleural effusion left (PEL), pneumonic infiltration right (PIR), pneumonic infiltration left (PIL), atelectasis right (ALR), and atelectasis left (ALL), tested on \(N=39\,809\) test images. The training dataset includes \(N=153\,502\) images.
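For reference, the two evaluation statistics used in this section can be computed with short helpers. The following sketch is ours, assuming numpy and scikit-learn, with variable names of our choosing:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    """Threshold maximizing (true positive rate - false positive rate),
    i.e., Youden's criterion used for sensitivity/specificity above."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

def statistical_parity_difference(correct, minority_mask):
    """P(correct | minority) - P(correct | majority); negative values
    indicate discrimination against the minority group."""
    correct = np.asarray(correct, dtype=float)
    minority_mask = np.asarray(minority_mask, dtype=bool)
    return correct[minority_mask].mean() - correct[~minority_mask].mean()
```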
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\varepsilon\)} & \multicolumn{6}{c}{Age} & \multicolumn{3}{c}{Patient Sex} \\ & & [0, 30) & [30, 60) & [60, 70) & [70, 80) & [80, 100) & Female & Male \\ \hline \multirow{3}{*}{\(\infty\)} & Mean & 0.92 & 0.91 & 0.90 & 0.89 & 0.88 & 0.90 & 0.89 \\ & StD & 0.04 & 0.03 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 \\ & PtD & 0.04 & 0.01 & 0.00 & 0.00 & -0.03 & & 0.00 \\ \hline \multirow{3}{*}{7.89} & Mean & 0.90 & 0.89 & 0.87 & 0.86 & 0.85 & 0.88 & 0.87 \\ & StD & 0.04 & 0.04 & 0.04 & 0.05 & 0.05 & 0.04 & 0.04 \\ & PtD & 0.04 & 0.01 & 0.01 & -0.01 & -0.03 & & 0.01 \\ \hline \multirow{3}{*}{4.71} & Mean & 0.89 & 0.89 & 0.87 & 0.86 & 0.85 & 0.87 & 0.87 \\ & StD & 0.03 & 0.04 & 0.04 & 0.05 & 0.05 & 0.04 & 0.04 \\ & PtD & 0.04 & 0.02 & 0.01 & -0.02 & -0.02 & & 0.02 \\ \hline \multirow{3}{*}{2.04} & Mean & 0.89 & 0.88 & 0.86 & 0.84 & 0.84 & 0.86 & 0.86 \\ & StD & 0.03 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 \\ & PtD & 0.06 & 0.02 & 0.01 & -0.03 & -0.02 & & 0.00 \\ \hline \multirow{3}{*}{1.06} & Mean & 0.88 & 0.87 & 0.85 & 0.84 & 0.83 & 0.85 & 0.85 \\ & StD & 0.03 & 0.04 & 0.04 & 0.05 & 0.04 & 0.04 & 0.04 \\ & PtD & 0.07 & 0.03 & 0.00 & -0.02 & -0.03 & & 0.01 \\ \hline \multirow{3}{*}{0.54} & Mean & 0.86 & 0.86 & 0.84 & 0.83 & 0.82 & 0.85 & 0.84 \\ & StD & 0.03 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 \\ & PtD & 0.07 & 0.01 & 0.02 & -0.03 & -0.01 & & 0.02 \\ \hline \multirow{3}{*}{0.29} & Mean & 0.86 & 0.85 & 0.83 & 0.82 & 0.81 & 0.84 & 0.83 \\ & StD & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 \\ \cline{1-1} & PtD & 0.07 & 0.01 & 0.01 & -0.02 & -0.02 & & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 3: Average evaluation results of training with DP and without DP with different \(\varepsilon\) values, for all age intervals, as well as the patient sex. The results show the average AUROC values over all labels, tested on \(N=39\,809\) images. Mean is given over the AUROC, StD is the standard deviation of the AUROC, and PtD denotes the statistical parity difference of the underrepresented class compared to all other patients. Positive values show a benefit, negative values show discrimination.
This finding indicates that -for models which are most protective of data privacy- young patients benefit the most, despite the group of younger patients being smaller overall. For patient sex, models show slightly better performance for female patients and slightly discriminate against male patients (Table 3). Statistical parity does not appear to correlate (Pearson's r: 0.13) with privacy levels (Table 3).
## 4 Discussion
The main contribution of our paper is to demonstrate that the training of highly accurate diagnostic AI models on a large-scale clinical chest radiography database is possible while furnishing strong objective guarantees of data privacy and without inducing patient discrimination. Across all levels of privacy protection, training with DP only resulted in mild AUROC reductions. The fact that the model maintained an AUROC of 0.83 even at \(\varepsilon=0.29\) is remarkable, and we are unaware of any prior work reporting such a strong level of privacy protection at this level of model accuracy on clinical data.
Our results thus exemplify that, through the use of model pretraining on a related public dataset, specialized architecture designs, and the availability of sufficient data samples, privately trained models require only very small additional amounts of private information from the training dataset to achieve high diagnostic accuracy on the tasks at hand. Our analysis of the per-diagnosis performance of models that are trained with and without privacy guarantees shows that models discriminate against diagnoses that are underrepresented in the training set in both private and non-private training. This finding is not unusual, and several examples can be found in [29]. However, the drop in performance between private and non-private training is uncorrelated to the sample size. Instead, the difficulty of the diagnosis seems to drive the difference in AUROC between the two settings. Concretely, diagnostic performance under privacy constraints suffers the most for those classes which already have the lowest AUROC in the non-private setting. Conversely, diagnoses that are predicted with the highest AUROC suffer the least when DP is introduced. Previous works investigating the effect of DP on fairness show that privacy preservation amplifies discrimination [30]. This effect is very limited in our study. Our models remain fair despite strong privacy guarantees, likely due to our real-life dataset's large size and high quality (whereas previous works limited their scope to toy datasets). Moreover, our use of pre-training helps to boost model performance and reduce the amount of additional information the model needs to learn "from scratch", which seems to benefit under-represented groups in the dataset the most. Our analysis of fairness related to patient age showed that older patients (older than 70 years of age) are discriminated against both in the non-private and the private setting, with discrimination against them remaining approximately constant with stronger privacy guarantees. On the other hand, patients below 30 years of age suffer overall lower model discrimination in the non-private and the private setting. Interestingly, young patients seem to profit more from stronger privacy guarantees, as they enjoy progressively more fairness privilege with increasing privacy protection level. This holds despite the fact that patients under 30 represent the smallest fraction of the dataset. This effect is most likely due to a confounding variable, namely the lower complexity of imaging findings in younger patients due to their improved ability to cooperate during radiograph acquisition, resulting in a better discrimination of the pathological finding on a more homogeneous background (i.e., "cleaner" radiographs), which are easier to diagnose overall [15, 31] (see Figure 5).
Figure 2: Average results of training with DP with different \(\varepsilon\) values for \(\delta=6\cdot 10^{-6}\). The curves show the average (A) area-under-the-receiver-operator-curve (AUROC), (B) accuracy, (C) specificity, and (D) sensitivity values over all labels, including cardiomegaly, congestion, pleural effusion right, pleural effusion left, pneumonic infiltration right, pneumonic infiltration left, atelectasis right, and atelectasis left, tested on \(N=39\,809\) test images. The training dataset includes \(N=153\,502\) images. Note that the AUROC is continuously increasing, while sensitivity, specificity and accuracy exhibit more variation. This is due to the fact that all training processes were optimized for the AUROC.
This hypothesis should be validated in cohorts with a larger proportion of young patients, and we intend to expand on this finding in future work. The analysis of model fairness related to patient sex shows that female patients (which -similar to young patients- are an underrepresented group) enjoy a slightly higher diagnostic accuracy than male patients for almost all privacy levels. However, effect size differences were found to be small, so that this finding can also be explained by variability between models or by the randomness in the training process. Further investigation is thus required to elucidate the aforementioned effects. In conclusion, we analyzed the usage of privacy-preserving neural network training and its implications on utility and fairness for a relevant diagnostic task on a large real-world dataset. We showed that the utilization of specialized architectures and targeted model pre-training allows for high model accuracy despite stringent privacy guarantees. This enables us to train expert-level diagnostic AI models even with privacy budgets as low as \(\varepsilon<1\), which -to our knowledge- has not been shown before, and represents an important step towards the widespread utilization of differentially private models in radiological diagnostic AI applications. Moreover, our finding that the introduction of differential privacy mechanisms to model training does not amplify unfair model bias regarding patient age, sex or co-morbidity signifies that -at least in our use case- the resulting models abide by important non-discrimination principles of ethical AI. We are hopeful that our findings will encourage practitioners and clinicians to introduce advanced privacy-preserving techniques such as differential privacy when training diagnostic AI models.
Figure 3: Relation of sample size to training performance for the private model, and performance loss compared to non-private training. Each dot marks the performance on the test set for one diagnosis of the private model at \(\varepsilon=7.89\) (compare Table 2). Colors indicate the performance loss compared to the non-private model.
2308.03708
Measuring income inequality via percentile relativities
"The rich are getting richer" implies that the population income distributions are getting more right skewed and heavily tailed. For such distributions, the mean is not the best measure of the center, but the classical indices of income inequality, including the celebrated Gini index, are all mean-based. In view of this, Professor Gastwirth sounded an alarm back in 2014 by suggesting to incorporate the median into the definition of the Gini index, although noted a few shortcomings of his proposed index. In the present paper we make a further step in the modification of classical indices and, to acknowledge the possibility of differing viewpoints, arrive at three median-based indices of inequality. They avoid the shortcomings of the previous indices and can be used even when populations are ultra heavily tailed, that is, when their first moments are infinite. The new indices are illustrated both analytically and numerically using parametric families of income distributions, and further illustrated using capital incomes coming from 2001 and 2018 surveys of fifteen European countries. We also discuss the performance of the indices from the perspective of income transfers.
Vytaras Brazauskas, Francesca Greselin, Ricardas Zitikis
2023-08-07T16:25:12Z
http://arxiv.org/abs/2308.03708v1
# Measuring income inequality via percentile relativities
###### Abstract
"The rich are getting richer" implies that the population income distributions are getting more right skewed and heavily tailed. For such distributions, the mean is not the best measure of the center, but the classical indices of income inequality, including the celebrated Gini index, are all mean-based. In view of this, Professor Gastwirth sounded an alarm back in 2014 by suggesting to incorporate the median into the definition of the Gini index, although he noted a few shortcomings of his proposed index. In the present paper we make a further step in the modification of classical indices and, to acknowledge the possibility of differing viewpoints, arrive at three median-based indices of inequality. They avoid the shortcomings of the previous indices and can be used even when populations are ultra heavily tailed, that is, when their first moments are infinite. The new indices are illustrated both analytically and numerically using parametric families of income distributions, and further illustrated using capital incomes coming from 2001 and 2018 surveys of fifteen European countries. We also discuss the performance of the indices from the perspective of income transfers.
_Key words and phrases:_ measures of inequality, heavy-tailed distributions, income transfers.
This research has been supported by the NSERC Alliance-MITACS Accelerate grant (ALLRP 580632-22) entitled "New Order of Risk Management: Theory and Applications in the Era of Systemic Risk" from the Natural Sciences and Engineering Research Council (NSERC) of Canada, and the national research organization Mathematics of Information Technology and Complex Systems (MITACS) of Canada, as well as by the individual NSERC Discovery Grant of R. Zitikis (RGPIN-2022-04426).
## 1 Introduction
Measuring income inequality has been a challenging task, as each of the indices used for the purpose attempts to condense the complexities of populations into just one number. Among the many indices, we have the Atkinson, Bonferroni, Gini, Palma, Pietra, Theil, and Zenga indices, to name just a few associated with the names of their inventors. Many treatises have been written on the topic, such as the handbook by Atkinson and Bourguignon (2000, 2015), which also contains many references to earlier studies, and they are voluminous. The indices are often the areas under certain income-equality curves, which are considerably more difficult to present and explain to the general audience, let alone to easily compare. For example, the Gini index of inequality is 1 minus twice the area under the Lorenz curve. (We shall give mathematical definitions later in this paper.) The curves and thus the indices are based on comparing the mean income of the poor with other means, such as the mean income of the entire population, the mean income of the nonpoor, and the mean income of the rich, whatever the definitions of "poor" and "rich" might be. Hence, to be well defined, the curves and the indices inevitably assume that the mean of the underlying population is finite. With the rising income inequality, and thus with the distribution of incomes becoming more skewed and heavily tailed, researchers have therefore sought other ways for measuring inequality. Gastwirth (2014) proposed to use the median instead of the mean when "normalizing" the absolute Gini mean difference, widely known as the GMD.
The author noted, however, that the proposed index might fall outside the class of normalized indices because it compares the _mean_ income of the poor with the _median_ income of the entire population. There is a natural remedy to this normalization issue: compare the _median_ income of the poor with the _median_ of the population. Even more, we can compare the median income of the poor with the median of the "not poor" or, for example, with the median of the rich, whatever the latter might mean. This is the path that we take in this paper to arrive at the indices to be formally introduced in the next section. In this regard we wish to mention the study of Bennett and Zitikis (2015), where it is shown that a number of classical indices of income inequality arise naturally from a Harsanyi-inspired model of choice under risk, with persons acting as _reference-dependent_ expected-utility maximizers in the face of an income quantile lottery, thus giving rise to a reinterpretation of the classical indices as measures of the desirability of redistribution in society. This relativistic approach to constructing indices of income inequality was further explored by Greselin and Zitikis (2018), although more from the modeller's perspective than from the philosophical one. The present paper further advances this line of research by showing how naturally percentile-based indices arise in this relativistic context, and how they facilitate inequality measurement even in those populations whose distributions are ultra-heavily tailed, that is, do not possess even a finite first moment. The rest of the paper is organized as follows. In Section 2 we define the new inequality indices, alongside the corresponding equality curves, preceded by several known indices for comparison purposes. In Section 3 we illustrate the new indices and their curves numerically, using several popular families of distributions. In Section 4, we use the indices to first analyze capital incomes of European countries using data from a 2001 survey, and then we compare the results with those obtained from a 2018 survey. In Section 5 we look at the new indices from the perspective of income transfers. Section 6 concludes the paper. Proofs and other technicalities are in Appendix A.
## 2 Inequality indices and their curves
We start with technical prerequisites. Let \(F\) be the cumulative distribution function of the population incomes \(X\), a random variable. We assume that \(F\) is non-negatively supported, that is, \(F(x)=0\) for all real \(x<0\). Furthermore, let \(Q\) denote the (generalized) inverse of \(F\), called the quantile function. That is, for each \(p\in(0,1)\), \(Q(p)\) is the smallest number \(x\) such that \(F(x)\geq p\). Hence, the population median income is \[m=Q(1/2)\] and, generally, \(Q(p)\) is the \(p\times 100^{\text{th}}\) percentile. Furthermore, the median income of the poorest \(p\times 100\%\) persons is \(Q(p/2)\). Based on these quantities, we shall later describe three new ways for measuring inequality, but first, we recall the definitions of a few earlier indices that serve as benchmarks for the new ones.
### 2.1 In the classical mean-based world
The index of Gini (1914) is the most widely-used measure of inequality. It can be expressed in a myriad of ways (e.g., Yitzhaki, 1998; Yitzhaki and Schechtman, 2013).
The author noted, however, that the proposed index might fall outside the class of normalized indices because it compares the _mean_ income of the poor with the _median_ income of the entire population. There is a natural remedy to this normalization issue: compare the _median_ income of the poor with the _median_ of the population. Even more, we can compare the median income of the poor with the median of the "not poor" or, for example, with the median of the rich, whatever the latter might mean. This is the path that we take in this paper to arrive at the indices to be formally introduced in the next section. In this regard we wish to mention the study of Bennett and Zitikis (2015) where it is shown that a number of classical indices of income inequality arise naturally from a Harsanyi-inspired model of choice under risk, with persons acting as _reference-dependent_ expected-utility maximizers in the face of an income quantile lottery, thus giving rise to a reinterpretation of the classical indices as measures of the desirability of redistribution in society. This relativistic approach to constructing indices of income inequality was further explored by Greselin and Zitikis (2018), although more from the modeller's perspective than from the philosophical one. The present paper further advances this line of research by showing how naturally percentile-based indices arise in this relativistic context, and how they facilitate inequality measurement even in those populations whose distributions are ultra-heavily tailed, that is, do not possess even a finite first moment.

The rest of the paper is organized as follows. In Section 2 we define the new inequality indices, alongside the corresponding equality curves, preceded by several known indices for comparison purposes. In Section 3 we illustrate the new indices and their curves numerically, using several popular families of distributions. In Section 4, we use the indices to first analyze capital incomes of European countries using data from a 2001 survey, and then we compare the results with those obtained from a 2018 survey. In Section 5 we look at the new indices from the perspective of income transfers. Section 6 concludes the paper. Proofs and other technicalities are in Appendix A.

## 2 Inequality indices and their curves

We start with technical prerequisites. Let \(F\) be the cumulative distribution function of the population incomes \(X\), a random variable. We assume that \(F\) is non-negatively supported, that is, \(F(x)=0\) for all real \(x<0\). Furthermore, let \(Q\) denote the (generalized) inverse of \(F\), called the quantile function. That is, for each \(p\in(0,1)\), \(Q(p)\) is the smallest number \(x\) such that \(F(x)\geq p\). Hence, the population median income is \[m=Q(1/2)\] and, generally, \(Q(p)\) is the \(p\times 100^{\text{th}}\) percentile. Furthermore, the median income of the poorest \(p\times 100\%\) persons is \(Q(p/2)\). Based on these quantities, we shall later describe three new ways for measuring inequality, but first, we recall the definitions of a few earlier indices that serve as benchmarks for the new ones.

### In the classical mean-based world

The index of Gini (1914) is the most widely used measure of inequality. It can be expressed in a myriad of ways (e.g., Yitzhaki, 1998; Yitzhaki and Schechtman, 2013).
For example, the Gini index can be written in terms of the Bonferroni curve \[B(p)=\frac{1}{\mu p}\int_{0}^{p}Q(s)\mathrm{d}s,\quad 0\leq p\leq 1,\] as follows: \[G =2\int_{0}^{1}\bigg{(}1-\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\mu}\bigg{)}p\mathrm{d}p\] \[=1-\int_{0}^{1}\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\mu}2p\mathrm{d}p\] \[=1-\int_{0}^{1}B(p)2p\mathrm{d}p, \tag{2.1}\] where \[\mu=\int_{0}^{1}Q(s)\mathrm{d}s\] is the mean of \(X\). Zenga (2007) argued that the mean income of those below the percentile \(Q(p)\) needs to be compared not with the mean of all the incomes but with the mean income of those above the percentile \(Q(p)\). This point of view led the author to the index \[Z=1-\int_{0}^{1}\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\frac{1}{1-p}\int_{p}^{1}Q(s)\mathrm{d}s}\mathrm{d}p.\] Davydov and Greselin (2019, 2020) suggested to modify Zenga's idea by comparing the mean income of those below the percentile \(Q(p)\) with the mean income of those above the percentile \(Q(1-p)\). This point of view led the authors to the index \[D=1-\int_{0}^{1}\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\frac{1}{p}\int_{1-p}^{1}Q(s)\mathrm{d}s}\mathrm{d}p.\] Of course, \(1/p\) in the numerator and denominator cancel out, but writing \(D\) in this way facilitates an easier comparison with \(Z\).

### A transition into the heavy-tailed modern world

Unlike the above three mean-based indices \(G\), \(Z\) and \(D\), the index of Gastwirth (2014) is a mean-median based index. Namely, given the well-known expression \[G=\frac{\text{GMD}}{2\mu} \tag{2.2}\] of the Gini index \(G\) in terms of the Gini mean difference (GMD), which is often written as the expectation \(\mathbb{E}(|X_{1}-X_{2}|)\), where \(X_{1}\) and \(X_{2}\) are two independent copies of \(X\), Gastwirth (2014) argued that comparing the GMD with twice the median would be better than comparing with twice the mean as in equation (2.2). This viewpoint has given rise to the index \[G_{2} =\frac{\text{GMD}}{2m}\] \[=\int_{0}^{1}\bigg{(}\frac{\mu}{m}-\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{m}\bigg{)}2p\mathrm{d}p\] \[=\frac{\mu}{m}-\int_{0}^{1}\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{m}2p\mathrm{d}p.\] Note that \(\mu/m\), which can be viewed as the benchmark replacing \(1\) in the previous indices, is the mean-median ratio that has been used as an easy to understand - and thus to convey to the general audience - indicator of wealth and income distribution (e.g., Garratt, 2020). In the case of symmetric distributions, \(\mu/m\) is of course equal to \(1\).

### In the skewed and heavy-tailed modern world: new indices

The above discussion naturally leads to three strategies of defining purely median-based indices of income inequality and their corresponding curves of equality, all based on percentiles and thus well defined irrespective of whether the income variable \(X\) has a finite first or any other moment.

Strategy 1: Compare the median income of the poorest \(p\times 100\%\) persons with the median of the entire population (Figure 2.1). This leads to the equality curve \[\psi_{1}(p)=\frac{Q(p/2)}{Q(1/2)},\quad 0<p<1. \tag{2.3}\]

Figure 2.1: The median of the poor (red) and the median of all (green).

Averaging this curve over all \(p\)'s gives rise to the inequality index \[\Psi_{1}=1-\int_{0}^{1}\frac{Q(p/2)}{Q(1/2)}\mathrm{d}p. \tag{2.4}\]
Note the mathematical similarity between the Bonferroni curve \(b\) and the curve \(\psi_{1}\): \[b(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\int_{0}^{1}Q(s)\mathrm{d}s},\quad\psi_{1}(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(p/2)\mathrm{d}s}{\int_{0}^{1}Q(1/2)\mathrm{d}s}.\]

Strategy 2: Compare the median income of the poorest \(p\times 100\%\) persons with the median of the nonpoor (Figure 2.2). This leads to the equality curve \[\psi_{2}(p)=\frac{Q(p/2)}{Q(1/2+p/2)},\quad 0<p<1, \tag{2.5}\] and, after averaging over all \(p\)'s, to the inequality index \[\Psi_{2}=1-\int_{0}^{1}\frac{Q(p/2)}{Q(1/2+p/2)}\mathrm{d}p. \tag{2.6}\] Note the mathematical similarity between the Zenga curve \(z\) and the curve \(\psi_{2}\): \[z(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\frac{1}{1-p}\int_{p}^{1}Q(s)\mathrm{d}s},\quad\psi_{2}(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(p/2)\mathrm{d}s}{\frac{1}{1-p}\int_{p}^{1}Q(p+(1-p)/2)\mathrm{d}s}.\]

Figure 2.2: The median of the poor (red) and the median of the nonpoor (green).

Strategy 3: Compare the median income of the poorest \(p\times 100\%\) persons with the median of the richest \(p\times 100\%\) persons (Figure 2.3). This leads to the equality curve \[\psi_{3}(p)=\frac{Q(p/2)}{Q(1-p/2)},\quad 0<p<1, \tag{2.7}\] and, after averaging over all \(p\)'s, to the inequality index \[\Psi_{3}=1-\int_{0}^{1}\frac{Q(p/2)}{Q(1-p/2)}\mathrm{d}p. \tag{2.8}\] Note the mathematical similarity between the Davydov-Greselin curve \(d\) and the curve \(\psi_{3}\): \[d(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(s)\mathrm{d}s}{\frac{1}{p}\int_{1-p}^{1}Q(s)\mathrm{d}s},\quad\psi_{3}(p)=\frac{\frac{1}{p}\int_{0}^{p}Q(p/2)\mathrm{d}s}{\frac{1}{p}\int_{1-p}^{1}Q(1-p+p/2)\mathrm{d}s}.\]

Figure 2.3: The median of the poor (red) and the median of the rich (green).

Summarizing the above discussion, in view of equations (2.4), (2.6), and (2.8), the three income-equality curves are connected to the corresponding income-inequality indices via the equation \[\Psi_{k}=1-\int_{0}^{1}\psi_{k}(p)\mathrm{d}p. \tag{2.9}\] Note that the three curves \(\psi_{k}\) take values only in the interval \([0,1]\), and so the three indices \(\Psi_{k}\) are always normalized, that is, \(\Psi_{k}\in[0,1]\). In this context it is useful to look at the following unrealistic but illuminating cases:

* If the income-equality curve \(\psi_{k}\) is equal to \(1\) everywhere on \((0,1)\), which means perfect equality, then the income-inequality index \(\Psi_{k}\) is equal to \(0\), which means lowest inequality.
* If the income-equality curve \(\psi_{k}\) is equal to \(0\) everywhere on \((0,1)\), which means extreme inequality, then the income-inequality index \(\Psi_{k}\) is equal to \(1\), which means maximal inequality.

Hence, these two extreme cases serve as benchmark curves: the one that is identically equal to \(1\) is the curve of perfect equality, and the one that is identically equal to \(0\) is the curve of extreme inequality. We can therefore say that the three indices \(\Psi_{k}\) measure the deviation of the actual curves \(\psi_{k}\) from the benchmark egalitarian curve \(\psi_{e}(p)=1\), \(0\leq p\leq 1\), by calculating the areas between them.

## 3 The new indices and curves: a parametric viewpoint

Modelling population incomes using parametric distributions and also fitting such distributions to income data are common approaches in the area (e.g., Kleiber and Kotz, 2003).
From this perspective, the inequality indices \(G\), \(Z\), \(D\) and \(G_{2}\) and their corresponding equality curves have been amply discussed and illustrated by their inventors and subsequent researchers. Hence, we devote this section to illustrating only the three indices \(\Psi_{k}\) and their corresponding curves \(\psi_{k}\). We use nine parametric families of distributions, most of which are common in modeling incomes (e.g., Kleiber and Kotz, 2003). They are right skewed and present a full spectrum of tail heaviness: some are lightly tailed (e.g., exponential), some are heavily tailed (e.g., Pareto distributions), and others have right tails of intermediate heaviness (e.g., lognormal). With specific parametrizations, their quantile functions are as follows:

* _Uniform\((0,\theta)\)_: \(Q(p)=\theta p\).
* _Exponential\((0,\theta)\)_: \(Q(p)=-\theta\log(1-p)\).
* _Gamma\((\theta,\alpha)\)_: \(Q(p)=\theta Q_{0}(p)\), where \(Q_{0}(p)\) is the quantile function of the standard gamma distribution (i.e., \(\theta=1\)) whose cumulative distribution function is \(F_{0}(t)=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}x^{\alpha-1}e^{-x}\;\mathrm{d}x\).
* _Weibull\((\theta,\tau)\)_: \(Q(p)=\theta\big{(}-\log(1-p)\big{)}^{1/\tau}\).
* _Lognormal\((\mu,\sigma)\)_: \(Q(p)=\exp\left\{\mu+\sigma\Phi^{-1}(p)\right\}\), where \(\Phi^{-1}(p)\) is the quantile function of the standard normal distribution (i.e., \(\mu=0\) and \(\sigma=1\)).
* _Log-Cauchy\((\mu,\sigma)\)_: \(Q(p)=\exp\left\{\mu+\sigma\tan(\pi(p-1/2))\right\}\).
* _Pareto-II\((\sigma,\alpha)\)_: \(Q(p)=\sigma\big{(}(1-p)^{-1/\alpha}-1\big{)}\).
* _Pareto-III\((\sigma,\gamma)\)_: \(Q(p)=\sigma\big{(}(1-p)^{-1}-1\big{)}^{\gamma}\).
* _Pareto-IV\((\sigma,\alpha,\gamma)\)_: \(Q(p)=\sigma\big{(}(1-p)^{-1/\alpha}-1\big{)}^{\gamma}\).

We have computed the inequality indices \(\Psi_{k}\) for these distributions under various parameter choices. The results are in Table 3.1, where we also report the rankings of the distributions based on the new indices: rank 1 corresponds to the lowest inequality and rank 16 to the highest inequality. It is encouraging to see that while the magnitudes of the indices differ, the rankings induced by them are fairly similar. In Table 3.1 we have four groups, each consisting of four distributions. The groups reflect the fact that in Figures 3.1-3.3, the distributions are grouped into four rows each containing four panels. The figures depict the three income-equality curves \(\psi_{k}\) for the distributions specified in Table 3.1. Since the curves are ratios of percentiles, the scale parameter of each distribution has no effect on the inequality indices. The same is true for the log-location parameter (\(e^{\mu}\)) of the lognormal and log-Cauchy distributions. However, the shape (\(\alpha\), \(\gamma\)) and the log-scale (\(e^{\sigma}\)) parameters are the primary drivers of the underlying inequality. To explore this effect, we choose a couple of values of each of these parameters for plotting. In the plots of Figures 3.1-3.3, the uniform distribution serves as a benchmark for comparing the curves. In each plot, the dash-dotted line (invisible in the top left panels of the figures) marks the curve \(\psi_{k}\) in the case of the uniform distribution. Numerical evaluations labeled 'area' represent the areas of the corresponding shaded regions above the curves \(\psi_{k}\), which are the values of the inequality indices.
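Since each index is a one-dimensional integral of a ratio of percentiles, the entries of Table 3.1 can be checked with a few lines of code. The following sketch (our own illustrative script, assuming NumPy and SciPy are available) integrates the three equality curves for any quantile function supplied as a callable; for the families shown, the output should agree with Table 3.1 to about two to three decimal places, small discrepancies being attributable to the numerical scheme.

```python
import numpy as np
from scipy.integrate import quad

def inequality_indices(Q):
    """Numerically compute (Psi_1, Psi_2, Psi_3) for a quantile function Q
    by integrating the three equality curves over p in (0, 1)."""
    curves = [
        lambda p: Q(p / 2) / Q(1 / 2),          # psi_1: poor vs whole population
        lambda p: Q(p / 2) / Q(1 / 2 + p / 2),  # psi_2: poor vs nonpoor
        lambda p: Q(p / 2) / Q(1 - p / 2),      # psi_3: poor vs rich
    ]
    return tuple(1 - quad(c, 0, 1)[0] for c in curves)

# Quantile functions from the list above; scale parameters are set to 1
# because the equality curves are percentile ratios, so scale cancels out.
families = {
    "Exponential":          lambda p: -np.log(1 - p),
    "Weibull(tau=0.5)":     lambda p: (-np.log(1 - p)) ** 2.0,   # 1/tau = 2
    "Pareto-II(alpha=1)":   lambda p: (1 - p) ** (-1.0) - 1,
    "Pareto-III(gamma=2)":  lambda p: ((1 - p) ** (-1.0) - 1) ** 2,
}

for name, Q in families.items():
    print(name, ["%.4f" % v for v in inequality_indices(Q)])
```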
\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline Distributions & \multicolumn{3}{c|}{Inequality indices} & \multicolumn{3}{c}{Ranks based on} \\ & \(\Psi_{1}\) & \(\Psi_{2}\) & \(\Psi_{3}\) & \(\Psi_{1}\) & \(\Psi_{2}\) & \(\Psi_{3}\) \\ \hline _Uniform_(\(0,\theta\)) & 0.5010 & 0.6936 & 0.6147 & 6 & 2 & 3-4 \\ _Exponential_(\(0,\theta\)) & 0.5583 & 0.8327 & 0.7026 & 7 & 7 & 7 \\ _Gamma_(\(\theta,\alpha=0.5\)) & 0.6874 & 0.9378 & 0.8020 & 12 & 10 & 11 \\ _Gamma_(\(\theta,\alpha=2\)) & 0.4360 & 0.6974 & 0.5956 & 3 & 3 & 2 \\ \hline _Weibull_(\(\theta,\tau=0.5\)) & 0.7237 & 0.9681 & 0.8358 & 13 & 13 & 13 \\ _Weibull_(\(\theta,\tau=2\)) & 0.3810 & 0.6022 & 0.5239 & 1 & 1 & 1 \\ _Lognormal_(\(\mu,\sigma=1\)) & 0.4779 & 0.7886 & 0.6648 & 4 & 5 & 5 \\ _Lognormal_(\(\mu,\sigma=2\)) & 0.6648 & 0.9527 & 0.8122 & 11 & 12 & 12 \\ \hline _Log-Cauchy_(\(\mu,\sigma=1\)) & 0.6054 & 0.9382 & 0.7470 & 9 & 11 & 9 \\ _Log-Cauchy_(\(\mu,\sigma=2\)) & 0.7470 & 0.9935 & 0.8551 & 14 & 16 & 14 \\ _Pareto-II_(\(\sigma,\alpha=1\)) & 0.6147 & 0.9242 & 0.7736 & 10 & 9 & 10 \\ _Pareto-II_(\(\sigma,\alpha=2\)) & 0.5868 & 0.8863 & 0.7407 & 8 & 8 & 8 \\ \hline _Pareto-III_(\(\sigma,\gamma=0.5\)) & 0.4302 & 0.7344 & 0.6147 & 2 & 4 & 3-4 \\ _Pareto-III_(\(\sigma,\gamma=2\)) & 0.7736 & 0.9932 & 0.8795 & 16 & 15 & 16 \\ _Pareto-IV_(\(\sigma,\alpha=0.5,\gamma=0.5\)) & 0.4803 & 0.8288 & 0.6887 & 5 & 6 & 6 \\ _Pareto-IV_(\(\sigma,\alpha=2,\gamma=2\)) & 0.7495 & 0.9852 & 0.8598 & 15 & 14 & 15 \\ \hline \end{tabular} \end{table} Table 3.1: The inequality indices \(\Psi_{k}\) for various parametric distributions and the rankings of these distributions based on the indices.

Figure 3.1: The income-equality curve \(\psi_{1}\) and the shaded-in area (i.e., \(\Psi_{1}\)) above it for the distributions of Table 3.1, with the dash-dotted line depicting \(\psi_{1}\) of the uniform distribution.

Figure 3.2: The income-equality curve \(\psi_{2}\) and the shaded-in area (i.e., \(\Psi_{2}\)) above it for the distributions of Table 3.1, with the dash-dotted line depicting \(\psi_{2}\) of the uniform distribution.

Figure 3.3: The income-equality curve \(\psi_{3}\) and the shaded-in area (i.e., \(\Psi_{3}\)) above it for the distributions of Table 3.1, with the dash-dotted line depicting \(\psi_{3}\) of the uniform distribution.

From Table 3.1 and Figures 3.1-3.3 we observe several facts, which can also be verified mathematically:

* \(\psi_{1}\) for _Pareto-III_(\(\sigma,\gamma=2\)) and \(\psi_{3}\) for _Pareto-II_(\(\sigma,\alpha=1\)) coincide, thus giving identical inequality indices \(0.7736\).
* \(\psi_{1}\) for _Pareto-II\((\sigma,\alpha=1)\)_ and \(\psi_{3}\) for both _Uniform\((0,\theta)\)_ and _Pareto-III\((\sigma,\gamma=0.5)\)_ coincide, thus giving identical inequality indices \(0.6147\).
* \(\psi_{1}\) for _Lognormal\((\mu,\sigma=2)\)_ and \(\psi_{3}\) for _Lognormal\((\mu,\sigma=1)\)_ coincide, thus giving identical inequality indices \(0.6648\).
* The curves \(\psi_{1}\) for _Log-Cauchy\((\mu,\sigma=2)\)_ and \(\psi_{3}\) for _Log-Cauchy\((\mu,\sigma=1)\)_ coincide, thus giving identical inequality indices \(0.7470\).

We conclude this section with the note that there are, of course, many other parametric distributions for modelling incomes (see, e.g., Kleiber and Kotz, 2003), and one of them is the Dagum distribution. For statistical inference for the ratio of any two quantiles of this distribution - and the three equality curves \(\psi_{k}(p)\) are such ratios - we refer to Jedrzejczak et al. (2023).
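As a quick sanity check, the first coincidence in the list above can be confirmed numerically; the short NumPy sketch below (our own illustration, with the other coincidences checkable the same way) compares the two curves pointwise on a grid.

```python
import numpy as np

# psi_1 for Pareto-III(gamma=2) versus psi_3 for Pareto-II(alpha=1),
# both as functions of p; scale parameters cancel and are set to 1.
Q_p3 = lambda p: ((1 - p) ** -1.0 - 1) ** 2   # Pareto-III quantile, gamma = 2
Q_p2 = lambda p: (1 - p) ** -1.0 - 1          # Pareto-II quantile, alpha = 1

p = np.linspace(0.01, 0.99, 99)
psi1 = Q_p3(p / 2) / Q_p3(1 / 2)              # equality curve (2.3)
psi3 = Q_p2(p / 2) / Q_p2(1 - p / 2)          # equality curve (2.7)
assert np.allclose(psi1, psi3)                # the two curves coincide pointwise
```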
## 4 A nonparametric viewpoint

We now consider nonparametric ways of estimating all the aforementioned indices of inequality and their corresponding equality curves, with an analysis of real data.

### Estimators of the new indices

Let \(X_{1},\ldots,X_{n}\) denote incomes of randomly selected persons, with \(X_{1:n}\leq\cdots\leq X_{n:n}\) denoting the ordered incomes. The empirical counterparts of the three indices \(\Psi_{k}\) are (see their justifications in Appendix A) \[\Psi_{1,n} =1-\frac{1}{\lfloor n/2\rfloor}\sum_{k=1}^{\lfloor n/2\rfloor}\frac{X_{k:n}}{X_{\lceil n/2\rceil:n}}, \tag{4.1}\] \[\Psi_{2,n} =1-\frac{1}{\lfloor n/2\rfloor}\sum_{k=1}^{\lfloor n/2\rfloor}\frac{X_{k:n}}{X_{\lceil n/2\rceil+k:n}},\] (4.2) \[\Psi_{3,n} =1-\frac{1}{\lfloor n/2\rfloor}\sum_{k=1}^{\lfloor n/2\rfloor}\frac{X_{k:n}}{X_{n-k+1:n}}, \tag{4.3}\] where, for every real \(x\geq 0\), \(\lfloor x\rfloor\) is the largest integer that does not exceed \(x\), and \(\lceil x\rceil\) is the smallest integer that is not below \(x\). (These are the classical floor and ceiling functions.) When it is desirable to emphasize the dependence of the indices on incomes, we do so by writing them as \(\Psi_{k,n}(\mathbf{X})\), where \(\mathbf{X}=(X_{1:n},\ldots,X_{n:n})\) is the vector of all the (ordered) incomes in the sample. Next are a few immediate consequences of definitions (4.1)-(4.3).

**Property 4.1**.: _For every real \(c\geq 0\), we have \(\Psi_{k,n}(c\mathbf{X})=\Psi_{k,n}(\mathbf{X})\)._

This property implies, for example, that changing the currency with which the incomes are reported does not affect the values of the three inequality indices.

**Property 4.2**.: _We have the inequality \(\Psi_{k,n}(\mathbf{X})\geq\Psi_{k,n}(\mathbf{X}+c)\) for every real \(c\geq 0\). The inequality is strict under the following two conditions: first, \(c>0\), and second, there is at least one ratio inside the sum of the definition of \(\Psi_{k,n}\) that is not equal to \(1\). (Note that none of the ratios exceeds \(1\).)_

This property implies that adding the same amount of income to everybody does not increase inequality and, under a minor caveat specified in the property, the index even decreases. To see the necessity of the assumption, consider the case when all \(X\)'s are equal, which gives \(\Psi_{k,n}(\mathbf{X})=0\) and also \(\Psi_{k,n}(\mathbf{X}+c)=0\) irrespective of the value of \(c\). For a proof of Property 4.2, as well as proofs of the other properties, see Appendix A.

**Property 4.3**.: _When \(c\to\infty\), we have \(\Psi_{k,n}(\mathbf{X}+c)\to 0\)._

Intuitively, this property says that if we keep adding the same positive amount of income to everyone, all else being equal, then we shall eventually eliminate the inequality.

### Estimators of the earlier indices

Next we report the definitions of the empirical estimators of \(Z\), \(D\), \(G\) and \(G_{2}\) obtained by replacing the population quantile function \(Q\) by the empirical quantile function \(Q_{n}\), which is given by the equation \[Q_{n}(p)=X_{\lceil np\rceil:n} \tag{4.4}\] for every \(p\in(0,1]\). Slightly modifying the obtained expression in an asymptotically equivalent way to make it intuitively and computationally more appealing, we arrive at the estimator \[Z_{n}=1-\frac{1}{n}\sum_{i=1}^{n-1}\frac{\frac{1}{i}\sum_{k=1}^{i}X_{k:n}}{\frac{1}{n-i}\sum_{k=i+1}^{n}X_{k:n}}\] of \(Z\), which appears in Greselin and Pasquazzi (2009).
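All of these estimators are direct transcriptions of sums over order statistics and are straightforward to code. Below is a minimal Python sketch of formulas (4.1)-(4.3) and of \(Z_{n}\) (an illustrative transcription written for this text, not the authors' own software); for the incomes \((1,3,5,7,10,20,24)\) used later in Example 5.1, it returns \(\Psi_{2,n}=0.8472\), in agreement with the values quoted there.

```python
from itertools import accumulate

def psi_hat(x, k):
    """Empirical index Psi_{k,n}, k in {1, 2, 3}, transcribing formulas
    (4.1)-(4.3); xs[j] below is the order statistic X_{j+1:n}."""
    xs, n = sorted(x), len(x)
    half, up = n // 2, -(-n // 2)                    # floor(n/2) and ceil(n/2)
    if k == 1:
        ratios = (xs[j] / xs[up - 1] for j in range(half))     # / X_{ceil(n/2):n}
    elif k == 2:
        ratios = (xs[j] / xs[up + j] for j in range(half))     # / X_{ceil(n/2)+k:n}
    else:
        ratios = (xs[j] / xs[n - 1 - j] for j in range(half))  # / X_{n-k+1:n}
    return 1 - sum(ratios) / half

def z_hat(x):
    """Empirical Zenga index Z_n displayed above."""
    xs, n = sorted(x), len(x)
    cum = list(accumulate(xs))                       # prefix sums of order statistics
    return 1 - sum((cum[i - 1] / i) / ((cum[-1] - cum[i - 1]) / (n - i))
                   for i in range(1, n)) / n

print(psi_hat((1, 3, 5, 7, 10, 20, 24), 2))          # 0.8472..., cf. Example 5.1
```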
Likewise, we arrive at \[D_{n}=1-\frac{1}{n}\sum_{i=1}^{n}\frac{\frac{1}{i}\sum_{k=1}^{i}X_{k:n}}{\frac{1}{i}\sum_{k=n-i+1}^{n}X_{k:n}},\] which is an empirical estimator of \(D\) that appeared in Davydov and Greselin (2020). (Of course, \(1/i\) in the numerator and denominator cancel out.) The same reasoning leads to the empirical Gini index \[G_{n} =1-\frac{2}{n}\sum_{i=1}^{n}\frac{\sum_{k=1}^{i}X_{k:n}}{\sum_{k=1}^{n}X_{k:n}}+\frac{1}{n}\] \[=1-\frac{1}{\bar{X}n^{2}}\sum_{i=1}^{n}\big{(}2(n-i)+1\big{)}X_{i:n},\] where the last equation follows from simple algebra, with \(\bar{X}\) denoting the mean of \(X_{1},\ldots,X_{n}\). Note that the last expression for \(G_{n}\) is the one that places the empirical Gini index into the family of \(S\)-Gini indices introduced by Donaldson and Weymark (1980) and Weymark (1980/81); see also Zitikis and Gastwirth (2002) for further references and statistical inference.

**Note 4.1**.: The asymptotically negligible term \(1/n\) on the right-hand side of the first equation of \(G_{n}\) ensures that \(G_{n}\) makes sense for all sample sizes. Without this term we may get counterintuitive values. For example, when the 'incomes' are \(X_{1}=1\), \(X_{2}=2\) and \(X_{3}=3\), we have \(G_{n}=2/9\), whereas \(G_{n}\) without the added \(1/n=1/3\) would give the negative value \(-1/9\), which is incompatible with the meaning of the index.

Finally, using the same arguments as above but now with the right-most expression for \(G_{2}\) given in Section 2.2 as our starting point, we arrive at \[G_{2,n}=\frac{\bar{X}}{X_{\lceil n/2\rceil:n}}-\frac{2}{n^{2}}\sum_{i=1}^{n}\frac{\sum_{k=1}^{i}X_{k:n}}{X_{\lceil n/2\rceil:n}}\] as an empirical estimator of \(G_{2}\). As before, \(\bar{X}\) stands for the mean of \(X_{1},\ldots,X_{n}\).

### An analysis of capital incomes from the ECHP 2001 survey

Using the formulas for calculating the aforementioned indices from data, we now analyze capital incomes reported in the European Community Household Panel survey (ECHP, 2001) that was conducted by Eurostat in 2001; this was the last of the eight waves of the survey. Specifically, the data come from 59,750 households with 121,122 persons from the fifteen European countries specified in Table 4.1 using the ISO 3166-1 alpha-2 (two-letter) codes. By looking at the means and medians in Table 4.1, we see how skewed to the right the distributions of the countries are. Figure 4.1 (with \(G_{2,n}\) excluded due to its large values) visualizes the index values calculated using formulas (4.1)-(4.3) and reported in Table 4.1. For a more detailed description of the data and relevant references, we refer to Greselin et al. (2014, Section 1). Next are several observations based on Table 4.1 and Figure 4.1. Portugal has the lowest value of \(\Psi_{1,n}\), with the median income of the poorest \(p\times 100\%\) persons equal, after averaging over all \(p\in(0,1)\), to \(84.7\%\) of the median income of the entire population. The opposite happens in France, which provides the highest contrast among the countries when comparing the median income of the poorest \(p\times 100\%\) persons with the overall median income: after averaging such ratios over all \(p\in(0,1)\), we obtain \(21.7\%\). For France, we also observe the largest value of \(\Psi_{3,n}\).
The median income of the poorest \(p\times 100\%\) people is equal, after averaging over all \(p\in(0,1)\), to only \(15.5\%\) of the median income of the richest \(p\times 100\%\) persons in the population. When we are interested in comparing the median income of the poorest \(p\times 100\%\) persons with the median income of the remaining \((1-p)\times 100\%\) part of the population, the index \(\Psi_{2,n}\) tells us that Finland is the country in which such a contrast, after averaging over all \(p\in(0,1)\), is the largest.

Figures 4.2-4.4 depict the three income-equality curves \(\psi_{k,n}\) for the fifteen European countries specified in Table 4.1, with the shaded-in areas above them depicting the values of the indices \(\Psi_{k,n}\). The curves have been obtained via formulas (2.3), (2.5), and (2.7) by replacing \(Q\) by \(Q_{n}\) given by equation (4.4) with \(n=n_{P}\), where \(n_{P}\) is the number of people in the sample who possess capital incomes, and \(n_{T}\) is the total sample size of the given country.

Figure 4.1: The income-inequality indices \(G_{n}\), \(Z_{n}\), \(D_{n}\), and the new indices \(\Psi_{k,n}\) for the fifteen European countries with \(n=n_{P}\) specified in Table 4.1 (based on ECHP, 2001).

Figure 4.2: The income-equality curve \(\psi_{1,n}\) and the shaded-in area (i.e., \(\Psi_{1,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.1 (based on ECHP, 2001).

Figure 4.3: The income-equality curve \(\psi_{2,n}\) and the shaded-in area (i.e., \(\Psi_{2,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.1 (based on ECHP, 2001).

Figure 4.4: The income-equality curve \(\psi_{3,n}\) and the shaded-in area (i.e., \(\Psi_{3,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.1 (based on ECHP, 2001).

Comparing the plots of Figures 4.2-4.4 derived from the actual data with the ones of Figures 3.1-3.3 generated from the parametric distributions, for most of the countries we see that the distributions of capital incomes are similar to Pareto. The only exception is Portugal, where the three equality curves \(\psi_{k,n}\) behave differently: having found no apparent correspondence with any of the parametric models of Section 3, the histogram of Portugal suggests a bimodal distribution. For all the other countries, the histograms are strongly skewed with strictly decreasing bars when viewing from left to right, thus following the familiar \(J\)-shape mimicking the power law of the Pareto density.

### A comparison with capital incomes from the EU-SILC 2018 survey

To get an insight into the more recent European situation, we further analyse data coming from the EU Statistics on Income and Living Conditions survey (EU-SILC, 2018), which replaced the ECHP survey after its eighth wave in 2001. We note at the outset that in the EU-SILC survey, the capital incomes are available only at the level of households, and sample sizes are approximately seven times larger compared with the earlier ECHP survey. Hence, the EU-SILC data give rise to more accurate estimates. In our study we use the following variables:

* HY040G: income from rental of a property or land.
* HY090G: interests, dividends, profit from capital investments in unincorporated business.
* PY080: pensions received from individual private plans.
As the data refer to households, an equivalence scale needs to be employed to make meaningful comparisons of monetary incomes of social units with different numbers of inhabitants, and also to take into account the economies of scale (within each household) with regard to the consumption of certain goods. An equivalence scale acts as a weight, giving rise to an _equivalence income_ that can be used for inequality, poverty and welfare analyses. We opt for the modified Organization for Economic Cooperation and Development (OECD) equivalence scale, which gives weight 1 to the household head, 0.5 to the other adult members of the household, and 0.3 to the members under 14 years of age. We analyse the same fifteen European countries as in Section 4.3, and consider the 340,540 households surveyed by the EU-SILC in 2018. A summary is provided in Table 4.2. For a useful comparison of means and medians, we apply the official average national currency exchange rates (year 2018) for the three countries that have not adopted the Euro: Denmark, Great Britain, and Sweden, whose currencies are the Danish Krone, the British Pound, and the Swedish Krona, respectively. Hence, all the analyzed data are in Euro.

The differences between the means and medians in Table 4.2 facilitate the assessment of skewness of income distributions. The list of countries with lower inequality (having a two-digit rank in at least one of the new indices) comprises Denmark, Benelux, France, Ireland, Spain, Finland and Sweden. Compared with the 2001 data, Ireland has joined the list, while Germany, Luxembourg, Great Britain and Greece have left it. Portugal, which was the country with the highest inequality in 2001, was joined in 2018 by Greece at the top of the inequality rankings produced by the three new indices. Figure 4.5 (with \(G_{2,n}\) excluded due to its large values) visualizes the index values calculated using formulas (4.1)-(4.3) and reported in Table 4.2. Figures 4.6-4.8 depict the three income-equality curves \(\psi_{k,n}\) for the fifteen European countries specified in Table 4.2, with the shaded-in areas above them depicting the values of the indices \(\Psi_{k,n}\). We observe from the simultaneous inspection of the plots that, in general, the Pareto models fit the data well, and that the Gamma distribution \((\theta,\alpha=0.5)\) can be a good model for the capital incomes in Denmark, France, Ireland, Spain and Sweden.

Figure 4.6: The income-equality curve \(\psi_{1,n}\) and the shaded-in area (i.e., \(\Psi_{1,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.2 (based on EU-SILC, 2018).

Figure 4.7: The income-equality curve \(\psi_{2,n}\) and the shaded-in area (i.e., \(\Psi_{2,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.2 (based on EU-SILC, 2018).

Figure 4.8: The income-equality curve \(\psi_{3,n}\) and the shaded-in area (i.e., \(\Psi_{3,n}\)) above it for the fifteen European countries, where \(n=n_{P}\) is specified in Table 4.2 (based on EU-SILC, 2018).

## 5 The effects of income transfers on the new indices

Consider \(n\) persons whose ordered incomes we denote by \(X_{1:n}<\cdots<X_{n:n}\). Choose any pair from these persons and call them \(L\) and \(H\). The person \(L\in\{1,\ldots,n-1\}\) possesses income \(X_{L:n}\) and the person \(H\in\{2,\ldots,n\}\) possesses income \(X_{H:n}\). We assume \(L<H\).
Hence, \(L\) has less income than \(H\), that is, \(X_{L:n}<X_{H:n}\). Denote \(\mathbf{X}=(X_{1:n},\ldots,X_{n:n})\). Assume now that \(H\) transfers a positive amount \(c>0\) to \(L\) without changing the income ordering among the \(n\) persons. The transfer produces \(\mathbf{X}^{\prime}=(X^{\prime}_{1:n},\ldots,X^{\prime}_{n:n})\) with the same ordering \(X^{\prime}_{1:n}<\cdots<X^{\prime}_{n:n}\) of the coordinates as in the case of \(\mathbf{X}\). (See Appendix A for additional technical details.) Succinctly, we denote the transfer by \[L\stackrel{{ c}}{{\longleftarrow}}H \tag{5.1}\] and read it, e.g., "\(L\) receives amount \(c\) from \(H\)" or "\(H\) transfers amount \(c\) to \(L\)." We are interested in how the three indices \(\Psi_{k,n}=\Psi_{k,n}(\mathbf{X})\) react to such transfers, that is, when \(\mathbf{X}\) turns into \(\mathbf{X}^{\prime}\). In addition to \(L\) and \(H\), we also involve the "median" person \[M:=\lceil n/2\rceil\] whose income is \(X_{M:n}=Q_{n}(1/2)\) as per equation (4.4) with \(p=1/2\). Any person \(P\) with income above the median (i.e., when \(P>M\)) is called _well-off_, and any person \(P\) with income below the median (i.e., when \(P<M\)) is called _struggling_ (see Figure 5.1).

Figure 5.1: The median (green) delineates the struggling group from the well-off.

In what follows, we shall be interested in the effects of transfer (5.1) on the new three indices when both \(L\) and \(H\) are well-off, both are struggling, and when one of them (i.e., \(L\)) is struggling and the other one (i.e., \(H\)) is well-off. Before going into details, we note that the classical Pigou-Dalton principle (PDP) - when it holds - says that \(\Psi_{k,n}(\mathbf{X})\geq\Psi_{k,n}(\mathbf{X}^{\prime})\) in its weak form and \(\Psi_{k,n}(\mathbf{X})>\Psi_{k,n}(\mathbf{X}^{\prime})\) in its strong form. As we shall soon see, the three new indices will tell us a richer story. Based on it, we shall be able to choose a preferred index, or at least be prompted to think outside the box, which is necessary as Amiel and Cowell (1999) have convincingly argued.

### Index \(\Psi_{1,n}\)

**Property 5.1**.: _In the case of struggling \(L\) and well-off \(H\) (i.e., \(L<M<H\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) diminishes the value of the index \(\Psi_{1,n}\), that is, we have \(\Psi_{1,n}(\mathbf{X})>\Psi_{1,n}(\mathbf{X}^{\prime})\)._

**Property 5.2**.: _When both \(L\) and \(H\) are well-off (i.e., \(M<L<H\)), or when both are struggling (i.e., \(L<H<M\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) does not change the value of the index \(\Psi_{1,n}\), that is, we have \(\Psi_{1,n}(\mathbf{X})=\Psi_{1,n}(\mathbf{X}^{\prime})\)._

These two properties say that in order to decrease income inequality based on the index \(\Psi_{1,n}\), a well-off person needs to transfer some amount to a struggling person, whereas any transfer between two well-off persons or between two struggling ones does not make any difference.

### Index \(\Psi_{2,n}\)

The index \(\Psi_{2,n}\) is more sensitive to transfers than the previous index. Specifically, we shall see from the following properties that \(\Psi_{2,n}\) decreases when \(L\stackrel{{ c}}{{\longleftarrow}}H\), unless both \(H\) and \(L\) are well-off and \(H\) transfers to \(L\) only a small amount \(c>0\).
**Property 5.3**.: _In the case of struggling \(L\) and well-off \(H\) (i.e., \(L<M<H\)), or when both \(L\) and \(H\) are struggling (i.e., \(L<H<M\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) diminishes the value of the index \(\Psi_{2,n}\), that is, \(\Psi_{2,n}(\mathbf{X})>\Psi_{2,n}(\mathbf{X}^{\prime})\)._

**Property 5.4**.: _When both \(L\) and \(H\) are well-off (i.e., \(M<L<H\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) implies \(\Psi_{2,n}(\mathbf{X})>\Psi_{2,n}(\mathbf{X}^{\prime})\) when_ \[c>c_{2}:=\frac{X_{L-M:n}X_{H:n}^{2}-X_{H-M:n}X_{L:n}^{2}}{X_{L-M:n}X_{H:n}+X_{H-M:n}X_{L:n}}. \tag{5.2}\] _Furthermore, we have \(\Psi_{2,n}(\mathbf{X})=\Psi_{2,n}(\mathbf{X}^{\prime})\) when \(c=c_{2}\), and \(\Psi_{2,n}(\mathbf{X})<\Psi_{2,n}(\mathbf{X}^{\prime})\) when \(c<c_{2}\)._

Hence, the index \(\Psi_{2,n}\) avoids giving the impression of inequality reduction when only a small amount is transferred among well-off persons. In other words, for the index to decrease in the case of two well-off persons, the richer one needs to transfer a sufficiently large amount in order to qualify for inequality reduction. Next is an example illustrating Properties 5.3 and 5.4.

**Example 5.1**.: Consider a group of seven persons, among whom there are three struggling ones (denoted by \(S\)'s) and three well-off persons (denoted by \(W\)'s). The person \(M\) has the median income \(X_{M:7}\) among these seven persons, and thus a "7" in its notation. Let their incomes be \[\mathbf{X} =(X_{1:7},X_{2:7},X_{3:7},X_{4:7},X_{5:7},X_{6:7},X_{7:7})\] \[=(X_{S_{1}:7},X_{S_{2}:7},X_{S_{3}:7},X_{M:7},X_{W_{1}:7},X_{W_{2}:7},X_{W_{3}:7})\] \[=(\underbrace{1,3,5,}_{\text{Incomes of $S$'s}}\overbrace{7,}^{\text{Income of $M$}}\underbrace{10,20,24}_{\text{Incomes of $W$'s}}). \tag{5.3}\] The index of inequality for this vector is \(\Psi_{2,n}=0.8472\). Here, \(n=7\) and thus \(M=\lceil 3.5\rceil=4\), which gives the median income \(X_{4:7}=7\). There are three struggling persons \(S_{1}=1\), \(S_{2}=2\), and \(S_{3}=3\) with incomes 1, 3, and 5, respectively, and three well-off persons \(W_{1}=5\), \(W_{2}=6\), and \(W_{3}=7\) with incomes 10, 20, and 24, respectively (see the top-left panel in Figure 5.2 for a visualization). The horizontal dashed line in each panel of Figure 5.2, noted as "egalitarian income" and plotted at the height 10, refers to the egalitarian redistribution of the above specified incomes (whose sum is equal to 70) among the seven participating persons. Choose now two well-off persons, say \(W_{1}(=L)=5\) and \(W_{2}(=H)=6\). We have \(L-M=1\) and \(H-M=2\). Condition (5.2) is equivalent to \[c>c_{2}=\frac{1\times 20^{2}-3\times 10^{2}}{1\times 20+3\times 10}=2.\] For the ordering of incomes to remain the same after the transfer \(L\xleftarrow{c}H\), we need the restriction \[c<\frac{X_{H:7}-X_{L:7}}{2}=5.\] Hence, to decrease income inequality according to the index \(\Psi_{2,n}\), the person \(H\) needs to transfer to \(L\) more than 2, but less than 5 to avoid swapping the position with \(L\). The top-right panel of Figure 5.2 depicts the transfer from \(W_{2}=6\) to \(W_{1}=5\) of the amount \(c=2\), which is insufficient for an inequality decrease; in this case we have the distribution \[(1,3,5,7,12,18,24) \tag{5.4}\] with the value of the index remaining the same, that is, \(\Psi_{2,n}=0.8472\).
The bottom-left panel of Figure 5.2 depicts the transfer from \(W_{2}=6\) to \(W_{1}=5\) of the amount \(c=4\), which is sufficient for an inequality decrease; in this case we have \[(1,3,5,7,14,16,24) \tag{5.5}\] with the value of the index \(\Psi_{2,n}=0.8442\). We now consider a more complex situation, depicted in the bottom-right panel of Figure 5.2, when every well-off person commits to improving the incomes of the three struggling persons, with the final distribution of incomes becoming \((4,5,6,7,9,18,21)\). We achieve this distribution in several steps, each reducing income inequality and maintaining the original ordering of the seven persons. Recall that we start from the vector \((1,3,5,7,10,20,24)\), whose inequality index is \(\Psi_{2,n}=0.8472\), and the steps could be these: The transfer \(S_{3}\xleftarrow{1}W_{1}\) results in the distribution \[(1,3,6,7,9,20,24) \tag{5.6}\] with the index \(\Psi_{2,n}=0.8296\). The transfer \(S_{2}\xleftarrow{2}W_{2}\) results in \[(1,5,6,7,9,18,24) \tag{5.7}\] with the index \(\Psi_{2,n}=0.7870\). Finally, the transfer \(S_{1}\xleftarrow{3}W_{3}\) results in the distribution \[(4,5,6,7,9,18,21) \tag{5.8}\] depicted in the bottom right panel of Figure 5.2 and having the index \(\Psi_{2,n}=0.6640\). All these are inequality-reducing transfers from well-off persons to struggling ones.

Figure 5.2: Distributions of incomes with dots representing units, or amounts, of income: the blue dots correspond to the original distribution of incomes, the red ones correspond to reduced incomes due to transfers, and the green dots correspond to increased incomes.

Alternatively, without delving into the psychology of people and thus the plausibility of transfers, we can have the following steps, some of which involve two well-off persons and some involve both well-off and struggling persons, leading to the same end-result \((4,5,6,7,9,18,21)\) as above:

1. \(W_{1}\xleftarrow{3}W_{2}\) results in \((1,3,5,7,13,17,24)\) with \(\Psi_{2,n}=0.8461\)
2. \(W_{2}\xleftarrow{3}W_{3}\) results in \((1,3,5,7,13,20,21)\) with \(\Psi_{2,n}=0.8450\)
3. \(S_{3}\xleftarrow{1}W_{2}\) results in \((1,3,6,7,13,19,21)\) with \(\Psi_{2,n}=0.8265\)
4. \(S_{2}\xleftarrow{1}W_{2}\) results in \((1,4,6,7,13,18,21)\) with \(\Psi_{2,n}=0.8050\)
5. \(S_{2}\xleftarrow{1}W_{1}\) results in \((1,5,6,7,12,18,21)\) with \(\Psi_{2,n}=0.7844\)
6. \(S_{1}\xleftarrow{3}W_{1}\) results in \((4,5,6,7,9,18,21)\) with \(\Psi_{2,n}=0.6640\)

Step 1) is justified by our earlier argument at the beginning of this example saying that any transfer higher than 2 but less than 5 from \(W_{2}\) to \(W_{1}\) is legitimate, and we transfer \(c=3\). To justify Step 2), we note that we can only transfer less than \((24-17)/2=3.5\) but more than \((3\times 24^{2}-5\times 17^{2})/(3\times 24+5\times 17)=1.8025\), and so we transfer \(c=3\). All Steps 3)-6) are from well-off persons to struggling ones, and so the only requirement on the transfers is that they should maintain the original ordering of incomes. This concludes Example 5.1.
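The transfers in Example 5.1 are easy to check numerically. The following self-contained snippet (the function below is just formula (4.2) restated; the labels are ours) reproduces the \(\Psi_{2,n}\) values quoted in the example.

```python
def psi2(x):
    # Psi_{2,n} of formula (4.2): poorest half versus the nonpoor, via order statistics
    xs, n = sorted(x), len(x)
    half, up = n // 2, -(-n // 2)   # floor(n/2) and ceil(n/2)
    return 1 - sum(xs[j] / xs[up + j] for j in range(half)) / half

steps = [
    ("(5.3) starting incomes   ", (1, 3, 5, 7, 10, 20, 24), 0.8472),
    ("(5.4) W1 <-2- W2 (c = c2)", (1, 3, 5, 7, 12, 18, 24), 0.8472),
    ("(5.5) W1 <-4- W2 (c > c2)", (1, 3, 5, 7, 14, 16, 24), 0.8442),
    ("(5.6) S3 <-1- W1         ", (1, 3, 6, 7, 9, 20, 24), 0.8296),
    ("(5.7) S2 <-2- W2         ", (1, 5, 6, 7, 9, 18, 24), 0.7870),
    ("(5.8) S1 <-3- W3         ", (4, 5, 6, 7, 9, 18, 21), 0.6640),
]
for label, x, expected in steps:
    value = psi2(x)
    assert abs(value - expected) < 1e-4, (label, value)
    print(f"{label} Psi_2 = {value:.4f}")
```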
### Index \(\Psi_{3,n}\)

**Property 5.5**.: _In the case of struggling \(L\) and well-off \(H\) (i.e., \(L<M<H\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) diminishes the value of the index \(\Psi_{3,n}\), that is, \(\Psi_{3,n}(\mathbf{X})>\Psi_{3,n}(\mathbf{X}^{\prime})\)._

**Property 5.6**.: _When both \(L\) and \(H\) are well-off (i.e., \(M<L<H\)), or when both are struggling (i.e., \(L<H<M\)), the transfer \(L\stackrel{{ c}}{{\longleftarrow}}H\) increases the value of the index \(\Psi_{3,n}\), that is, we have \(\Psi_{3,n}(\mathbf{X})<\Psi_{3,n}(\mathbf{X}^{\prime})\)._

Hence, when the goal is to decrease income inequality, these two properties say that well-off persons must transfer to struggling persons, and the index discourages transfers between two well-off persons, or between two struggling ones, as the index views such transfers as manipulations with no real consequences. Whether we agree with this or not determines whether or not we shall adopt the index \(\Psi_{3,n}\) for measuring income inequality. Having by now discussed the three indices and their properties, we next have a numerical example that illustrates the performance of the three indices side-by-side.

**Example 5.2**.: Consider the six distributions of incomes specified in (5.3)-(5.8) and visualized in Figure 5.2. Table 5.1 contains the numerical values of the three indices for the six income distributions. We see from the values that the index \(\Psi_{1,n}\) remains the same when transfers are only among well-off persons (distributions (5.4) and (5.5)) and diminishes in the case of transfers from well-off persons to struggling ones (distributions (5.6)-(5.8)). The performance of the index \(\Psi_{2,n}\) has already been amply discussed and so we move on to \(\Psi_{3,n}\). Unlike \(\Psi_{1,n}\), the index \(\Psi_{3,n}\) increases when transfers are only among well-off persons (distributions (5.4) and (5.5)) but diminishes in the case of transfers from well-off persons to struggling ones (distributions (5.6)-(5.8)). This concludes Example 5.2.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Indices & (5.3) & (5.4) & (5.5) & (5.6) & (5.7) & (5.8) \\ \hline \(\Psi_{1,n}\) & 0.5714 & 0.5714 & 0.5714 & 0.5238 & 0.4286 & 0.2857 \\ \(\Psi_{2,n}\) & 0.8472 & 0.8472 & 0.8442 & 0.8296 & 0.7870 & 0.6640 \\ \(\Psi_{3,n}\) & 0.7694 & 0.7917 & 0.8046 & 0.7139 & 0.6713 & 0.6217 \\ \hline \end{tabular} \end{table} Table 5.1: The three indices for income distributions (5.3)–(5.8).

## 6 Conclusion

In this paper we have introduced and explored three inequality indices that reflect three views of measuring income inequality:

1. The median income of the poor is compared with the median income of the entire population.
2. The median income of the poor is compared with the median income of those who are not poor.
3. The median income of the poor is compared with the median of the same proportion of the richest.

We have presented these inequality indices and their equality curves in two ways: one that is suitable for modeling populations parametrically, and the other one that is suitable for direct data-focused computations. Several properties of the indices have been derived and discussed, most notably their behaviour with respect to income transfers. The indices and their curves have been illustrated using popular parametric models of income distributions, and also calculated and interpreted using real data.
The new indices do not require any finite moment and, therefore, are suitable (mathematically) for analyzing all populations, including those that are ultra heavily tailed, that is, do not even have a finite first moment.
2301.10519
Lecture Notes on Monadic First- and Second-Order Logic on Strings
These notes present the essentials of first- and second-order monadic logics on strings with introductory purposes. We discuss Monadic First-Order logic and show that it is strictly less expressive than Finite-State Automata, in that it only captures a strict subset of Regular Languages -- the non-counting ones. We then introduce Monadic Second-Order logic; such a logic is, syntactically, a superset of Monadic First-Order logic and captures Regular Languages exactly. We also show how to transform an automaton into a corresponding formula and vice versa. Finally, we discuss the use of logical characterizations of classes of languages as the basis for automatic verification techniques.
Dino Mandrioli, Davide Martinenghi, Angelo Morzenti, Matteo Pradella, Matteo Rossi
2023-01-25T11:01:31Z
http://arxiv.org/abs/2301.10519v1
# Lecture Notes on Monadic First- and Second-Order Logic on Strings

Dino Mandrioli Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy Davide Martinenghi Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy Angelo Morzenti Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy {firstname.lastname}@polimi.it Matteo Rossi Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy {firstname.lastname}@polimi.it

## 1 Introduction

From the very beginning of formal language and automata theory, the investigation of the relations between defining a language through some kind of abstract machine and through a logic formalism has produced challenging theoretical problems and important applications in system design and verification. A well-known example of such an application is Hoare's classical method to prove the correctness of a Pascal-like program w.r.t. a specification stated as a pair of pre- and post-conditions expressed through a first-order theory [6]. Such a verification problem is undecidable if the involved formalisms have the computational power of Turing machines but may become decidable for less powerful formalisms, as in the important case of Regular Languages. Originally, Büchi, Elgot, and Trakhtenbrot [1, 2, 9] independently developed a _Monadic Second-Order logic_ defining exactly the Regular Language family, so that the decidability properties of this class of languages could be exploited to achieve automatic verification. Intuitively, monadic logics have some syntactic restrictions on the predicates used. In particular, only predicates that have one argument (i.e., whose arity is 1) are allowed (with the exception of the ordering relation). As usual, in the first-order case only variables can be quantified. In the second-order case, instead, monadic predicates--i.e., predicates with arity 1, as mentioned above--can also be quantified, thus resulting in so-called second-order variables. Interestingly, to capture the full class of Regular Languages by means of a monadic logic it is necessary to exploit a second-order version of the logic, which is more powerful than the simpler first-order one; it has been shown by McNaughton and Papert [7], however, that restricting the logic to first-order allows users to define precisely the _non-counting_ subclass of Regular Languages--i.e., the languages which cannot "count" the number of repeated occurrences of a given subword.1 Footnote 1: For instance, the language \(\{(ab)^{2n}\mid n>0\}\) is counting. Non-counting languages, in turn, are equivalent to other interesting subclasses of regular ones, such as, e.g., the _star-free_ ones, i.e. those languages that can be defined by means of regular expressions not making use of the Kleene-* operation. Such logic characterizations, however, have not been exploited in practice to achieve automatic verification due to the intractable complexity of the necessary algorithms. Later on, a major breakthrough in this field has been obtained thanks to the advent of _model checking_, which exploits language characterization in terms of _temporal logic_[3].
Temporal logic has the same expressive power as first-order logic but, being less succinct, allows for more efficient (though still exponential) verification algorithms. These notes present the essentials of first- and second-order monadic logics with introductory purposes and are organized as follows. In Section 2, we discuss Monadic First-Order logic and show that it is strictly less expressive than Finite-State Automata, in that it only captures a strict subset of Regular Languages--the non-counting ones. We then introduce Monadic Second-Order logic in Section 3; such a logic is, syntactically, a superset of Monadic First-Order logic and captures Regular Languages exactly. We also show how to transform an automaton into a corresponding formula and vice versa. Finally, in Section 4 we discuss the use of logical characterizations of classes of languages, such as those described in Sections 2 and 3, as the basis for automatic verification techniques.

## 2 Monadic First-order Logic of Order on Strings

Given an input alphabet \(\Sigma\), formulae of the _monadic first-order logic_ (MFO) are built out of the following elements:

* First-order variables, denoted as lowercase letters (written in boldface to avoid confusion with strings), \(\vec{x}\), \(\vec{y}\),..., which are interpreted over the natural numbers \(\mathbb{N}\).
* Monadic predicates \(a(\cdot)\), \(b(\cdot)\),..., one for each symbol of \(\Sigma\); intuitively, \(a(\vec{x})\) evaluates to true in a string \(w\) if, and only if, the character of \(w\) at position \(\vec{x}\) is \(a\).
* The order relation \(<\) between natural numbers.
* The usual propositional connectives and first-order quantifiers.

More precisely, let \(\mathcal{V}\) be a finite set of first-order variables, and let \(\Sigma\) be an alphabet. Well-formed formulae of the MFO logic are defined according to the following syntax: \[\varphi:=a(\vec{x})\ \mid\ \vec{x}<\vec{y}\ \mid\ \neg\varphi\ \mid\ \varphi\lor\varphi\ \mid\ \exists\vec{x}(\varphi)\] where \(a\in\Sigma\) and \(\vec{x},\vec{y}\in\mathcal{V}\). The usual predefined abbreviations are introduced to denote the remaining propositional connectives, the universal quantifier, the arithmetic relations \(\geq,\leq,=,\neq,>\), and sums and subtractions between first-order variables and numeric constants.
We have the following definitions of propositional connectives and first-order quantifiers: \[\varphi_{1}\land\varphi_{2}\ \stackrel{{\mathrm{def}}}{{=}}\ \neg(\neg\varphi_{1}\lor\neg\varphi_{2})\] \[\varphi_{1}\Rightarrow\varphi_{2}\ \stackrel{{\mathrm{def}}}{{=}}\ \neg\varphi_{1}\lor\varphi_{2}\] \[\varphi_{1}\Leftrightarrow\varphi_{2}\ \stackrel{{\mathrm{def}}}{{=}}\ (\varphi_{1}\Rightarrow\varphi_{2})\land(\varphi_{2}\Rightarrow\varphi_{1})\] \[\forall\vec{x}(\varphi)\ \stackrel{{\mathrm{def}}}{{=}}\ \neg\exists\vec{x}(\neg\varphi)\] the following definitions of relations: \[\mathbf{x}\geq\mathbf{y} \stackrel{{\mathrm{def}}}{{=}} \neg(\mathbf{x}<\mathbf{y})\] \[\mathbf{x}\leq\mathbf{y} \stackrel{{\mathrm{def}}}{{=}} \mathbf{y}\geq\mathbf{x}\] \[\mathbf{x}=\mathbf{y} \stackrel{{\mathrm{def}}}{{=}} \mathbf{x}\leq\mathbf{y}\wedge\mathbf{y}\leq\mathbf{x}\] \[\mathbf{x}\neq\mathbf{y} \stackrel{{\mathrm{def}}}{{=}} \neg(\mathbf{x}=\mathbf{y})\] \[\mathbf{x}>\mathbf{y} \stackrel{{\mathrm{def}}}{{=}} \mathbf{y}<\mathbf{x}\] and the following definitions of constants, of the successor of a natural number, and of addition and subtraction of constant values: \[\mathbf{x}=0 \stackrel{{\mathrm{def}}}{{=}} \forall\mathbf{y}\neg(\mathbf{y}<\mathbf{x})\] \[\mathrm{succ}(\mathbf{x},\mathbf{y}) \stackrel{{\mathrm{def}}}{{=}} \mathbf{x}<\mathbf{y}\wedge\neg\exists\mathbf{z}(\mathbf{x}<\mathbf{z}\wedge\mathbf{z}<\mathbf{y})\] \[\mathbf{y}=\mathbf{x}+k \stackrel{{\mathrm{def}}}{{=}} \exists z_{0},\ldots,z_{k}(z_{0}=\mathbf{x}\wedge\mathrm{succ}(z_{0},z_{1})\wedge\mathrm{succ}(z_{1},z_{2})\wedge\ldots\wedge\mathrm{succ}(z_{k-1},z_{k})\wedge\mathbf{y}=z_{k})\] \[\mathbf{y}=\mathbf{x}-k \stackrel{{\mathrm{def}}}{{=}} \mathbf{x}=\mathbf{y}+k\] where \(k\) is a constant in \(\mathbb{N}\). Further useful abbreviations are the following ones:

* first(\(\mathbf{x}\)) and last(\(\mathbf{x}\)) identify, respectively, the first and last positions in the string: first(\(\mathbf{x}\)) \(\stackrel{{\mathrm{def}}}{{=}}\neg\exists\mathbf{y}(\mathbf{y}<\mathbf{x})\), obviously equivalent to \(\mathbf{x}=0\); last(\(\mathbf{x}\)) \(\stackrel{{\mathrm{def}}}{{=}}\neg\exists\mathbf{y}(\mathbf{y}>\mathbf{x})\)

An MFO formula is interpreted over a string \(w\in\Sigma^{*}\), with respect to an assignment \(\nu:\mathcal{V}\to U\), where \(U=\{0,\ldots,|w|-1\}\), which maps each variable in \(\mathcal{V}\) to a position in string \(w\). Notice that, if \(w=\varepsilon\), then function \(\nu(\mathbf{x})\) is undefined for every variable \(\mathbf{x}\in\mathcal{V}\). The satisfaction relation (indicated, as usual, as \(\models\)) for MFO formulae is defined in the following way:

* \(w,\nu\models a(\mathbf{x})\) if, and only if, \(\nu(\mathbf{x})\) is defined and \(w[\nu(\mathbf{x})]=a\)
* \(w,\nu\models\mathbf{x}<\mathbf{y}\) if, and only if, \(\nu(\mathbf{x})<\nu(\mathbf{y})\) holds
* \(w,\nu\models\neg\varphi\) if, and only if, \(w,\nu\not\models\varphi\) holds
* \(w,\nu\models\varphi_{1}\vee\varphi_{2}\) if, and only if, at least one of \(w,\nu\models\varphi_{1}\) and \(w,\nu\models\varphi_{2}\) holds
* \(w,\nu\models\exists\mathbf{x}(\varphi)\) if, and only if, \(|w|>0\) and \(w,\nu[i/\mathbf{x}]\models\varphi\) hold for some \(i\in\{0,\ldots,\,|w|-1\}\)

where \(w[k]\) denotes the letter of \(w\) at position \(k\), and \(\nu[i/\mathbf{x}]\) is the mapping that assigns \(i\) to \(\mathbf{x}\) and otherwise coincides with \(\nu\).
Notice that, in case \(w=\varepsilon\) (and therefore \(U=\emptyset\)), \(w,\nu\not\models a(\mathbf{x})\) for any \(a\) and \(\mathbf{x}\) (i.e., it is not the case that \(w,\nu\models a(\mathbf{x})\)), and \(w,\nu\not\models\exists\mathbf{x}(\varphi)\) for any \(\mathbf{x}\) and \(\varphi\) (i.e., an existential quantification is false, and conversely a universal quantification is true, if the quantified variable ranges over the empty set). To improve readability, we will drop \(\nu\) from the notation whenever there is no risk of ambiguity--i.e., we will write \(w\models\varphi\) to indicate that string \(w\) satisfies formula \(\varphi\). An MFO _sentence_ is a closed MFO formula. Given a sentence \(\varphi\), the language \(L(\varphi)\) is defined as \[L(\varphi)=\{w\in\Sigma^{*}\mid w\models\varphi\}.\] We say that a language \(L\) is _expressible_ in MFO (or _definable_ in MFO or _MFO-definable_ for short) iff there exists an MFO sentence \(\varphi\) such that \(L=L(\varphi)\).

### Examples

The following MFO formula \(\varphi_{L_{1}}\) defines the language \(L_{1}\) made of all strings that start with symbol \(a\): \[\varphi_{L_{1}}:\exists\mathbf{x}(\boldsymbol{x}=0\wedge a(\boldsymbol{x}))\] The following formula \(\varphi_{L_{2}}\) defines the language \(L_{2}\) made of all strings in which every symbol \(a\) is necessarily immediately followed by a \(b\) (notice that these strings cannot end with a symbol \(a\)). \[\varphi_{L_{2}}:\forall\boldsymbol{x}(a(\boldsymbol{x})\Rightarrow\exists\mathbf{y}(\text{succ}(\boldsymbol{x},\boldsymbol{y})\wedge b(\boldsymbol{y})))\] The following formula \(\varphi_{L_{3}}\) defines the language \(L_{3}\) made of all (nonempty) strings in which the last symbol is an \(a\). \[\varphi_{L_{3}}:\exists\mathbf{x}(\text{last}(\boldsymbol{x})\wedge a(\boldsymbol{x}))\] The following formula \(\varphi_{L_{4}}\) defines the language \(L_{4}\) made of all strings (containing at least 3 symbols) in which the symbol three positions from the right is an \(a\). \[\varphi_{L_{4}}:\exists\mathbf{x}(a(\boldsymbol{x})\wedge\exists\mathbf{y}(\mathbf{y}=\boldsymbol{x}+2\wedge\text{last}(\mathbf{y})))\] Alternatively, language \(L_{4}\) is also defined by the following formula \(\varphi^{\prime}_{L_{4}}\): \[\varphi^{\prime}_{L_{4}}:\exists\mathbf{x}(\text{last}(\boldsymbol{x})\wedge\exists\mathbf{y}(\mathbf{y}=\boldsymbol{x}-2\wedge a(\mathbf{y})))\] The following formula \(\varphi_{L_{\epsilon}}\) (or, alternatively, \(\varphi^{\prime}_{L_{\epsilon}}\)) defines the language \(L_{\epsilon}\) made of only the empty string (assuming that the input alphabet includes at least symbol \(a\)): \[\varphi_{L_{\epsilon}}:\neg\exists\mathbf{x}(a(\boldsymbol{x})\vee\neg a(\boldsymbol{x}))\] \[\varphi^{\prime}_{L_{\epsilon}}:\forall\boldsymbol{x}(a(\boldsymbol{x})\wedge\neg a(\boldsymbol{x}))\] Formula \(\varphi_{L_{\epsilon}}\) (resp., formula \(\varphi^{\prime}_{L_{\epsilon}}\)) states that, for a word \(w\) of language \(L_{\epsilon}\), there does not exist a position in which _true_ holds (resp., _false_ holds in all positions). This can only occur if the set of positions is empty: by definition an existential quantification is false, and a universal quantification is true, if it ranges over an empty set, that is, if \(w\) is the empty string.
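Before turning to the remaining examples, we note that the satisfaction relation can be implemented directly, which makes it easy to experiment with the formulas above. The following sketch is purely illustrative (it is not part of the original notes, not part of any library, and makes no attempt at efficiency): formulas are nested tuples over the primitive constructs, and the derived connectives and predicates are expanded into the primitives exactly as in the abbreviations introduced earlier.

```python
def sat(w, phi, nu=None):
    """Decide w, nu |= phi. Formulas are tuples: ('sym', a, x) for a(x),
    ('lt', x, y) for x < y, ('not', f), ('or', f, g), ('exists', x, f).
    Positions range over 0 .. len(w)-1, so quantified formulas follow the
    empty-string semantics above (exists is false, forall is true, on '')."""
    nu = nu or {}
    op = phi[0]
    if op == 'sym':
        _, a, x = phi
        return x in nu and w[nu[x]] == a
    if op == 'lt':
        _, x, y = phi
        return nu[x] < nu[y]
    if op == 'not':
        return not sat(w, phi[1], nu)
    if op == 'or':
        return sat(w, phi[1], nu) or sat(w, phi[2], nu)
    if op == 'exists':
        _, x, f = phi
        return any(sat(w, f, {**nu, x: i}) for i in range(len(w)))
    raise ValueError(f"unknown operator {op!r}")

# Derived connectives and predicates, expanded as in the abbreviations above.
def And(f, g):     return ('not', ('or', ('not', f), ('not', g)))
def implies(f, g): return ('or', ('not', f), g)
def forall(x, f):  return ('not', ('exists', x, ('not', f)))
def is_zero(x):    return ('not', ('exists', '_y', ('lt', '_y', x)))
def last(x):       return ('not', ('exists', '_y', ('lt', x, '_y')))
def succ(x, y):    return And(('lt', x, y),
                              ('not', ('exists', '_z',
                                       And(('lt', x, '_z'), ('lt', '_z', y)))))

phi_L1 = ('exists', 'x', And(is_zero('x'), ('sym', 'a', 'x')))      # starts with a
phi_L2 = forall('x', implies(('sym', 'a', 'x'),
                             ('exists', 'y', And(succ('x', 'y'), ('sym', 'b', 'y')))))
phi_L3 = ('exists', 'x', And(last('x'), ('sym', 'a', 'x')))         # ends with a

assert sat('abb', phi_L1) and not sat('bab', phi_L1) and not sat('', phi_L1)
assert sat('abab', phi_L2) and sat('', phi_L2) and not sat('ba', phi_L2)
assert sat('ba', phi_L3) and not sat('ab', phi_L3)
```

In accordance with the semantics, a quantifier ranging over the empty set of positions makes an existential formula false and a universal one true; the assertions above exercise exactly these corner cases.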
Finally, the following formula \(\varphi_{L_{\emptyset}}\) defines the empty language (assuming that the input alphabet \(\Sigma\) includes at least symbol \(a\)): \[\varphi_{L_{\emptyset}}:\exists\mathbf{x}(a(\mathbf{x})\wedge\neg a(\mathbf{x}))\] Formula \(\varphi_{L_{\emptyset}}\) is contradictory, as it states that a position exists in which symbol \(a\) both appears and does not appear. No string \(w\) (not even the empty one) is such that \(w\models\varphi_{L_{\emptyset}}\) holds, hence the formula defines the empty language. Every singleton language--i.e., every language consisting of one finite-length string--is trivially expressible in MFO. Consider, for instance, language \(L_{abc}=\{abc\}\) that includes only string \(abc\). It is easily defined by the following MFO formula: \[\varphi_{L_{abc}}:\exists\mathbf{x}\exists\mathbf{y}\exists\mathbf{z}(\mathbf{x}=0\wedge\mathrm{succ}(\mathbf{x},\mathbf{y})\wedge\mathrm{succ}(\mathbf{y},\mathbf{z})\wedge\mathrm{last}(\mathbf{z})\wedge a(\mathbf{x})\wedge b(\mathbf{y})\wedge c(\mathbf{z}))\] ### Expressiveness of MFO The following statements trivially hold. Proposition 1: _Let \(L\), \(L_{1}\), and \(L_{2}\) be any languages defined by MFO formulae \(\varphi\), \(\varphi_{1}\) and \(\varphi_{2}\), respectively:_ * _Language_ \(L_{1}\cap L_{2}\) _is defined by formula_ \(\varphi_{1}\wedge\varphi_{2}\)_--i.e.,_ \(L(\varphi_{1}\wedge\varphi_{2})=L_{1}\cap L_{2}\)_._ * _Language_ \(L_{1}\cup L_{2}\) _is defined by formula_ \(\varphi_{1}\vee\varphi_{2}\)_--i.e.,_ \(L(\varphi_{1}\vee\varphi_{2})=L_{1}\cup L_{2}\)_._ * _Language_ \(\overline{L}\) _(the complement of_ \(L\)_) is defined by formula_ \(\neg\varphi\)_--i.e.,_ \(L(\neg\varphi)=\overline{L}\)_._ The next theorem follows from Proposition 1. Theorem 2.2: _The family of MFO-definable languages is closed under union, intersection, and complementation._ To further investigate the expressive power of MFO,2 we consider the MFO-definable languages over a one-letter alphabet \(\Sigma=\{a\}\). In this simple case the MFO predicate \(a(\mathbf{x})\) is always true at any position \(\mathbf{x}\) in any interpretation, therefore it is redundant and every formula is equivalent to a formula that does not include any occurrence of predicate \(a(\cdot)\)--e.g., \(\exists\mathbf{x}\,(a(\mathbf{x})\wedge\mathbf{y}<\mathbf{x})\) is equivalent to \(\exists\mathbf{x}\,(\mathbf{y}<\mathbf{x})\). Footnote 2: In the present section we follow the line of discussion adopted in the (unedited, to the best of our knowledge) lecture notes _Automata theory - An algorithmic approach_ by Javier Esparza, February 13, 2019. We next show that, for the simple family of the languages over a one-letter alphabet, every language is MFO-definable if, and only if, it is finite3 or co-finite (where a co-finite language is one whose complement is finite). As a consequence, for instance, the simple _regular_ language \(L_{even}=\{\,a^{2n}\mid n\geq 0\,\}\) is _not_ MFO-definable, which proves that the MFO logic is strictly less expressive than finite state automata and regular grammars and expressions. Footnote 3: Recall that a finite language is one whose cardinality is finite. Our proof that every language over a one-letter alphabet is expressible in MFO if, and only if, it is finite or co-finite is organized as follows.
First, we observe that if a language is finite or co-finite, then it is expressible in MFO, as a consequence of the fact that--as exemplified by language \(L_{abc}\) in Section 2.1--singleton languages are expressible in MFO and that the family of MFO-expressible languages is closed under union and complementation. Next, we prove that if a language over a one-letter alphabet is MFO-definable, then it is finite or co-finite. This is in turn proved in three steps: 1. we introduce a new logic called QF-MFO, a quantifier-free fragment of MFO; 2. we show that every language over a one-letter alphabet \(\Sigma=\{a\}\) is QF-MFO-definable if, and only if, it is finite or co-finite; 3. we show that the two logics, MFO and QF-MFO, are equally expressive, as every MFO formula \(\varphi\) has an equivalent QF-MFO formula. To define the QF-MFO logic, we first introduce a few additional abbreviations (where \(k\) is a constant in \(\mathbb{N}\)): \[\begin{array}{rcl}\mathbf{x}<\mathbf{y}+k&\stackrel{{\mathrm{def}}}{{=}}&\exists\mathbf{z}\left(\mathbf{z}=\mathbf{y}+k\wedge\mathbf{x}<\mathbf{z}\right)\\ \mathbf{x}>\mathbf{y}+k&\stackrel{{\mathrm{def}}}{{=}}&\exists\mathbf{z}\left(\mathbf{z}=\mathbf{y}+k\wedge\mathbf{z}<\mathbf{x}\right)\\ \mathbf{x}<k&\stackrel{{\mathrm{def}}}{{=}}&\exists\mathbf{z}\left(\mathbf{z}=0\wedge\mathbf{x}<\mathbf{z}+k\right)\\ \mathbf{x}>k&\stackrel{{\mathrm{def}}}{{=}}&\exists\mathbf{z}\left(\mathbf{z}=0\wedge\mathbf{x}>\mathbf{z}+k\right)\\ k<last&\stackrel{{\mathrm{def}}}{{=}}&\forall\mathbf{x}\left(last(\mathbf{x})\Rightarrow\mathbf{x}>k\right)\\ k>last&\stackrel{{\mathrm{def}}}{{=}}&\forall\mathbf{x}\left(last(\mathbf{x})\Rightarrow\mathbf{x}<k\right)\end{array}\] Definition 3 (QF-MFO): The formulae of QF-MFO are defined by the following syntax: \[\varphi:=\mathbf{x}<k\ \ |\ \ \mathbf{x}>k\ \ |\ \ \mathbf{x}<\mathbf{y}+k\ \ |\ \ \mathbf{x}>\mathbf{y}+k\ \ |\ \ k<last\ \ |\ \ k>last\ \ |\ \ \varphi\wedge\varphi\ \ |\ \ \varphi\vee\varphi\] where \(\mathbf{x},\mathbf{y}\in\mathcal{V}\) and \(k\in\mathbb{N}\). In the remainder, with some (innocuous) overloading, a constant \(k\) will denote both the numerical value \(k\in\mathbb{N}\) and the string \(a^{k}\). Proposition 4: _Every language \(L\) over a one-letter alphabet is QF-MFO-definable if, and only if, it is finite or co-finite._ Proof: _Only if part_: Every QF-MFO sentence defines a finite or a co-finite language. Let \(\varphi\) be a sentence of QF-MFO. Since QF-MFO is quantifier-free and a sentence has no free variables, \(\varphi\) is an and-or combination of formulae of type \(k<last\) and \(k>last\), the only atomic formulae of QF-MFO in which no variable occurs. Then, the following cases arise. * \(L(k<last)=\{k+1,k+2,\ \dots\ \}\) is a co-finite language (remember that numbers identify words, and vice versa, so that \(\{k+1,k+2,\ \dots\ \}\) is the same as \(\{a^{k+1},a^{k+2},\ \dots\ \}\)). * \(L(k>last)=\{0,1,\ \dots\ k\}\) is a finite language. * \(L(\varphi_{1}\vee\varphi_{2})=L(\varphi_{1})\cup L(\varphi_{2})\).
If \(L_{1}=L(\varphi_{1})\) and \(L_{2}=L(\varphi_{2})\) are both finite, then \(L(\varphi_{1}\vee\varphi_{2})\) is also finite; if \(L_{1}\) and \(L_{2}\) are both co-finite, then the language \(L(\varphi_{1}\vee\varphi_{2})=L_{1}\cup L_{2}=\overline{\overline{L_{1}\cup L_{2}}}=\overline{\overline{L_{1}}\cap\overline{L_{2}}}\) is the complement of the intersection of two finite languages, hence it is co-finite; if one of the two languages \(L_{1}\) and \(L_{2}\) is finite and the other is co-finite, then \(L(\varphi_{1}\vee\varphi_{2})\) is the complement of the intersection of a finite and a co-finite language, therefore it is co-finite. * \(L(\varphi_{1}\wedge\varphi_{2})=L(\varphi_{1})\cap L(\varphi_{2})\). If \(L_{1}=L(\varphi_{1})\) and \(L_{2}=L(\varphi_{2})\) are both finite, then their intersection is finite; if \(L_{1}\) and \(L_{2}\) are both co-finite, then \(L(\varphi_{1}\wedge\varphi_{2})=L_{1}\cap L_{2}=\overline{\overline{L_{1}\cap L_{2}}}=\overline{\overline{L_{1}}\cup\overline{L_{2}}}\) is the complement of the union of two finite languages, hence it is co-finite; if one of the two languages \(L_{1}\) and \(L_{2}\) is finite and the other is co-finite, then \(L(\varphi_{1}\wedge\varphi_{2})\) is the complement of the union of a finite language and a co-finite language, hence it is finite. _If part_: Every finite or co-finite language is definable by a QF-MFO sentence. If \(L\) is finite then \(L=\{k_{1},\ \ldots\ k_{n}\}\) and \[\varphi_{L}=\varphi_{[k_{1}]}\vee\ \cdots\ \vee\ \varphi_{[k_{n}]}=(k_{1}-1<last\wedge last<k_{1}+1)\ \vee\ \cdots\ \vee\ (k_{n}-1<last\wedge last<k_{n}+1)\] If \(L\) is co-finite, then its complement \(\overline{L}\) is finite, therefore it is defined by some QF-MFO sentence. Then, \(L\) itself is QF-MFO-definable provided that, for every QF-MFO sentence \(\varphi\), there exists a QF-MFO sentence, call it \(\overline{\varphi}\), that defines the complement of \(L(\varphi)\). Such a sentence \(\overline{\varphi}\) is equal to \(neg(\varphi)\), where function \(neg(\cdot)\) is defined inductively by the following clauses. * \(neg(k<last)=last<k\vee\underbrace{(k-1<last\wedge last<k+1)}_{last=k}\) * \(neg(k>last)=k<last\vee\underbrace{(k-1<last\wedge last<k+1)}_{k=last}\) * \(neg(\varphi_{1}\vee\varphi_{2})=neg(\varphi_{1})\wedge neg(\varphi_{2})\) * \(neg(\varphi_{1}\wedge\varphi_{2})=neg(\varphi_{1})\lor neg(\varphi_{2})\) Proposition 5: _Every MFO formula \(\varphi\) over a one-letter alphabet is equivalent to some QF-MFO formula \(f\)--i.e., \(\varphi\equiv f\)._ Proof: The proof is by induction on the structure of \(\varphi\). If \(\varphi=\mathbf{x}<\mathbf{y}\), then \(\varphi\equiv\mathbf{x}<\mathbf{y}+0\). If \(\varphi=\neg\psi\), then the inductive hypothesis can be applied and then the negation can be removed using De Morgan's laws and equivalences such as, e.g., \(\neg(\mathbf{x}<\mathbf{y}+k)\equiv\mathbf{x}\geq\mathbf{y}+k\) (where \(\mathbf{x}\geq\mathbf{y}+k\) is a natural abbreviation for \(\mathbf{x}>\mathbf{y}+k-1\)). If \(\varphi=\varphi_{1}\vee\varphi_{2}\), then the induction hypothesis is directly applied. If \(\varphi=\exists\mathbf{x}\,\psi\) then, by the induction hypothesis, \(\psi\equiv f\) for some QF-MFO formula \(f\), and \(f\) can be assumed to be in disjunctive normal form--i.e., \(f=D_{1}\vee\cdots\lor D_{n}\), and \(\varphi\equiv\exists\mathbf{x}D_{1}\vee\cdots\vee\exists\mathbf{x}D_{n}\); then, we define a set of QF-MFO formulae \(f_{i}\) such that, for each \(1\leq i\leq n\), \(f_{i}\equiv\exists\mathbf{x}D_{i}\) holds.
Notice that, since \(f\) is a QF-MFO formula, each \(f_{i}\) is such that it does not include any quantification of variable \(\mathbf{x}\) nor, if \(\varphi\) is a sentence, any occurrence of the same variable. Each \(f_{i}\) is built as follows. Formula \(f_{i}\) is a conjunction of formulae that contains all the conjuncts of \(D_{i}\) that do not include any occurrence of variable \(\mathbf{x}\), plus the conjuncts defined next. Consider every pair of conjuncts of \(D_{i}\), one conjunct being of type \(t_{1}<\mathbf{x}\), where \(t_{1}=h\) or \(t_{1}=\mathbf{y}+h\), which is a _lower bound_ on \(\mathbf{x}\), and the other conjunct being of type \(\mathbf{x}<t_{2}\), where \(t_{2}=h\) or \(t_{2}=\mathbf{y}+h\), which is an _upper bound_ on \(\mathbf{x}\); for every such pair we add to \(f_{i}\) a conjunct equivalent to \(t_{1}+1<t_{2}\) (over the naturals, \(\exists\mathbf{x}(t_{1}<\mathbf{x}\wedge\mathbf{x}<t_{2})\) holds exactly when \(t_{1}+1<t_{2}\)); for instance, if the two above-described conjuncts are \(\mathbf{z}-4<\mathbf{x}\) and \(\mathbf{x}<\mathbf{y}+3\), then the added conjunct is \(\mathbf{z}<\mathbf{y}+6\equiv\mathbf{z}-3<\mathbf{y}+3\). Notice that, if only the conjunct of type \(t_{1}<\mathbf{x}\) is present and the conjunct of type \(\mathbf{x}<t_{2}\) is missing, then the (trivially true) conjunct \(\mathbf{x}<last+1\) must be considered--in place of the latter--as the other element of the pair; similarly, if only the conjunct of type \(\mathbf{x}<t_{2}\) is present and the conjunct of type \(t_{1}<\mathbf{x}\) is missing, then the (trivially true) conjunct \(-1<\mathbf{x}\) must be considered in place of the latter. Then, \(f_{i}\equiv\exists\mathbf{x}D_{i}\); notice that \(f_{i}\) does not include any occurrence of variable \(\mathbf{x}\) nor any quantification of that variable (an executable sketch of this combination step is given below). Example 6: In the MFO formula \[\exists\mathbf{x}(\mathbf{x}<\mathbf{y}+3\wedge\mathbf{z}<\mathbf{x}+4\wedge\mathbf{z}<\mathbf{y}+2\wedge\mathbf{y}<\mathbf{x}+1)\] we identify the pair of constraints \(\mathbf{z}-4<\mathbf{x}\) and \(\mathbf{x}<\mathbf{y}+3\), from which we get the additional conjunct \(\mathbf{z}-3<\mathbf{y}+3\equiv\mathbf{z}<\mathbf{y}+6\); we also identify the pair of constraints \(\mathbf{y}-1<\mathbf{x}\) and \(\mathbf{x}<\mathbf{y}+3\), from which we get the additional conjunct \(\mathbf{y}<\mathbf{y}+3\). Therefore, the MFO formula \(\exists\mathbf{x}(\mathbf{x}<\mathbf{y}+3\wedge\mathbf{z}<\mathbf{x}+4\wedge\mathbf{z}<\mathbf{y}+2\wedge\mathbf{y}<\mathbf{x}+1)\) is equivalent to the QF-MFO formula \[\mathbf{z}<\mathbf{y}+6\wedge\mathbf{y}<\mathbf{y}+3\wedge\mathbf{z}<\mathbf{y}+2\] Example 7: We provide two examples of QF-MFO formulae equivalent to given MFO sentences. * The MFO formula \(\exists\mathbf{x}\,\exists\mathbf{y}\,\exists\mathbf{z}\,(\,\mathbf{x}<\mathbf{y}\wedge\mathbf{y}<\mathbf{z}\,)\) defines the language \(\{\,a^{k}\mid k\geq 3\,\}\).
By repeated application of the inductive step, moving inside-out, we obtain \(f_{1}\equiv\underbrace{\exists\mathbf{z}\,(\,\mathbf{x}<\mathbf{y}\wedge\mathbf{y}<\mathbf{z}\,)}_{\exists\mathbf{z}\,D_{1}}\) and the pair of constraints on the quantified variable \(\mathbf{z}\), \(\mathbf{y}<\mathbf{z}\) and \(\mathbf{z}<\mathbf{last}+1\), from which we derive constraint \(\mathbf{y}<\mathbf{last}\), so that \(f_{1}\equiv\mathbf{x}<\mathbf{y}\wedge\mathbf{y}<\mathbf{last}\) holds; in the next inductive step we have \(f_{1}\equiv\underbrace{\exists\mathbf{y}(\mathbf{x}<\mathbf{y}\wedge\mathbf{y}<\mathbf{last})}_{\exists\mathbf{y}\,D_{1}}\) and the pair of constraints on the quantified variable \(\mathbf{y}\), \(\mathbf{x}<\mathbf{y}\) and \(\mathbf{y}<\mathbf{last}\), from which we derive \(\mathbf{x}+1<\mathbf{last}\) and \(f_{1}\equiv\mathbf{x}+1<\mathbf{last}\); in the final inductive step we have \(f_{1}\equiv\underbrace{\exists\mathbf{x}\,(\mathbf{x}+1<\mathbf{last})}_{\exists\mathbf{x}\,D_{1}}\) and the pair of constraints on the quantified variable \(\mathbf{x}\), \(-1<\mathbf{x}\) and \(\mathbf{x}<\mathbf{last}-1\), from which we obtain \(0<\mathbf{last}-1\) and \(f_{1}\equiv\mathbf{last}>1\), hence \(\mathbf{last}>1\) is the QF-MFO formula equivalent to the original MFO formula \(\exists\mathbf{x}\,\exists\mathbf{y}\,\exists\mathbf{z}\,(\,\mathbf{x}<\mathbf{y}\wedge\mathbf{y}<\mathbf{z}\,)\). * The MFO formula \(\exists\mathbf{x}\,(\,\neg\exists\mathbf{y}\,(\mathbf{x}<\mathbf{y})\wedge\mathbf{x}<4\,)\) defines the language \(\{\,a^{k}\mid k\leq 4\,\}\). Again moving inside-out, we have \(f_{1}\equiv\underbrace{\exists\mathbf{y}\,(\,\mathbf{x}<\mathbf{y}\,)}_{\exists\mathbf{y}\,D_{1}}\) and the pair of constraints on the quantified variable \(\mathbf{y}\), \(\mathbf{x}<\mathbf{y}\) and \(\mathbf{y}<\mathbf{last}+1\), from which we derive \(\mathbf{x}+1<\mathbf{last}+1\equiv\mathbf{x}<\mathbf{last}\) and \(f_{1}\equiv\mathbf{x}<\mathbf{last}\); at the next inductive step we apply negation and obtain \(f_{1}\equiv\underbrace{\exists\mathbf{x}\,(\,\mathbf{x}\geq\mathbf{last}\wedge\mathbf{x}<4\,)}_{\exists\mathbf{x}\,D_{1}}\) and the pair of constraints on the quantified variable \(\mathbf{x}\), \(\mathbf{last}-1<\mathbf{x}\) and \(\mathbf{x}<4\), from which we obtain \(f_{1}\equiv\mathbf{last}<4\). Hence, \(\mathbf{last}<4\) is the QF-MFO formula equivalent to the original MFO formula \(\exists\mathbf{x}\,(\,\neg\exists\mathbf{y}\,(\mathbf{x}<\mathbf{y})\wedge\mathbf{x}<4\,)\). The following theorem easily follows from Proposition 4 and Proposition 5. Theorem 4.1: _A language over a one-letter alphabet is expressible in MFO if, and only if, it is finite or co-finite._ Every MFO formula defines a regular language. In fact, MFO is a restriction of the Monadic Second-Order (MSO) logic introduced in Section 3 and, as shown there, for every MSO sentence \(\varphi\) there is a Finite-State Automaton (FSA) that accepts exactly the language defined by \(\varphi\)--hence, _a fortiori_ this also holds for every MFO formula. We have therefore the following result (whose proof will be given in Section 3.1). **Statement 1**: _For every MFO sentence \(\varphi\) there is an FSA \(\mathcal{A}\) such that \(L(\varphi)=L(\mathcal{A})\)._ However, the MFO logic is strictly less expressive than FSAs, as not all regular languages are expressible in MFO.
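The bound-combination step used in the proof of Proposition 5 (and carried out by hand in Examples 6 and 7) is mechanical enough to prototype. The sketch below is ours, under a simplifying encoding of conjuncts; unlike the text, it does not prune redundant conjuncts, so its output may contain trivially true extras.

```python
# Executable sketch of the elimination of an existential variable x from a
# conjunction of strict difference constraints. A conjunct "u + i < v + j" is
# encoded as (u, i, v, j), where u, v are variable names or the pseudo-variables
# '0' (the constant zero) and 'last'. Encoding and names are ours.

def eliminate(conjuncts, x):
    """Return conjuncts equivalent to "exists x. (/\\ conjuncts)"."""
    lowers, uppers, rest = [], [], []
    for (u, i, v, j) in conjuncts:
        if u == x and v == x:
            raise ValueError("constraints x < x + k are handled separately")
        elif v == x:              # u + i < x + j, i.e. (u + i - j) < x
            lowers.append((u, i - j))
        elif u == x:              # x + i < v + j, i.e. x < (v + j - i)
            uppers.append((v, j - i))
        else:
            rest.append((u, i, v, j))
    lowers.append(('0', -1))      # default lower bound: -1 < x
    uppers.append(('last', 1))    # default upper bound: x < last + 1
    # Over the integers, exists x (l < x and x < u) holds iff l + 1 < u.
    return rest + [(u, i + 1, v, j) for (u, i) in lowers for (v, j) in uppers]

# Example 6: exists x (x < y+3 /\ z < x+4 /\ z < y+2 /\ y < x+1)
phi = [('x', 0, 'y', 3), ('z', 0, 'x', 4), ('z', 0, 'y', 2), ('y', 0, 'x', 1)]
result = eliminate(phi, 'x')
assert ('z', -3, 'y', 3) in result    # z - 3 < y + 3, i.e. z < y + 6
assert ('y', 0, 'y', 3) in result     # y < y + 3
assert ('z', 0, 'y', 2) in result     # untouched conjunct z < y + 2
# The remaining combinations involve the default bounds and are trivially true
# over string positions; the text prunes them implicitly.
```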
As a consequence of Theorem 4.1, the regular language \(L_{\text{even}}\) defined above, which includes exactly the strings over alphabet \(\Sigma=\{a\}\) having even length and therefore is neither finite nor co-finite, is _not_ expressible in MFO, as stated by the next corollary. Corollary 4.2: _There is no MFO sentence \(\varphi\) that defines language \(L_{\text{even}}\) (i.e., such that \(L(\varphi)=L_{\text{even}}\))._ From Statement 1 and Corollary 4.2 the following result is immediate, by observing that it is easy to define an FSA \(\mathcal{A}\) such that \(L(\mathcal{A})=L_{\text{even}}\). Theorem 4.3: _MFO is strictly less expressive than FSA._ On the other hand, the set of languages that can be defined through MFO formulae is not closed under the so-called "Kleene star" operation, as stated by the following theorem. Theorem 4.4: _The set of languages that can be defined by MFO sentences is not closed under the \({}^{*}\) operation._ Proof: To prove the claim it is enough to remark that the following MFO formula \(\varphi_{L_{aa}}\) defines language \(L_{aa}=\{aa\}\) (i.e., the language containing only string \(aa\)): \[\varphi_{L_{aa}}:\exists\mathbf{x}\exists\mathbf{y}(\mathbf{x}=0\wedge\mathbf{y}=\mathbf{x}+1\wedge a(\mathbf{x})\wedge a(\mathbf{y})\wedge\text{last}(\mathbf{y}))\] and that \(L_{\text{even}}=L_{aa}^{*}\). More generally, it can be shown [7] that MFO can express exactly the so-called "star-free" languages--that is, those that can be obtained through union, intersection, complementation and concatenation of finite languages. For example, language \(L_{3}\) of Section 2.1 can be obtained from finite languages \(L_{3}^{\prime}=\emptyset\) and \(L_{3}^{\prime\prime}=\{a\}\)--containing, respectively, no string (hence, whose cardinality is \(0\)) and only string \(a\) (hence, whose cardinality is \(1\))--in the following way: \[L_{3}=(\neg L_{3}^{\prime})\cdot L_{3}^{\prime\prime}\] As a further example, the language \(L_{3a}\), which is made of all strings that contain at least an \(a\), can be defined in the following way: \[L_{3a}=(\neg L_{3}^{\prime})\cdot L_{3}^{\prime\prime}\cdot(\neg L_{3}^{\prime})\] and the language \(L_{1a}\) made of all strings that contain _exactly_ one \(a\) can be defined as follows: \[L_{1a}=(\neg L_{3a})\cdot L_{3}^{\prime\prime}\cdot(\neg L_{3a})\] ## 3 Monadic Second-order Logic of Order on Strings Formulae of the _monadic second-order logic of order_ (MSO), as defined by Büchi and others [8], are built out of the elements of the MFO logic defined in Section 2 plus, in addition: * Second-order variables, denoted as uppercase boldface letters, \(\mathbf{X}\), \(\mathbf{Y}\),..., which are interpreted over _sets_ of natural numbers. More precisely, let \(\Sigma\) be an input alphabet, \(\mathcal{V}_{1}\) be a set of first-order variables, and \(\mathcal{V}_{2}\) be a set of second-order (or set) variables. Well-formed formulae of MSO logic are defined according to the following syntax: \[\varphi:=a(\mathbf{x})\ \mid\ \mathbf{X}(\mathbf{x})\ \mid\ \mathbf{x}<\mathbf{y}\ \mid\ \neg\varphi\ \mid\ \varphi\lor\varphi\ \mid\ \exists\mathbf{x}(\varphi)\ \mid\ \exists\mathbf{X}(\varphi)\] where \(a\in\Sigma,\mathbf{x},\mathbf{y}\in\mathcal{V}_{1}\), and \(\mathbf{X}\in\mathcal{V}_{2}\). Naturally, all abbreviations introduced in Section 2 are still valid.
We also introduce the following additional abbreviations: \[\mathbf{x}\in\mathbf{X}\ \stackrel{{\text{def}}}{{=}}\ \mathbf{X}(\mathbf{x})\] \[\mathbf{X}\subseteq\mathbf{Y}\ \stackrel{{\text{def}}}{{=}}\ \forall\mathbf{x}(\mathbf{x}\in\mathbf{X}\Rightarrow\mathbf{x}\in\mathbf{Y})\] \[\mathbf{X}=\mathbf{Y}\ \stackrel{{\text{def}}}{{=}}\ (\mathbf{X}\subseteq\mathbf{Y})\land(\mathbf{Y}\subseteq\mathbf{X})\] \[\mathbf{X}\neq\mathbf{Y}\ \stackrel{{\text{def}}}{{=}}\ \neg(\mathbf{X}=\mathbf{Y})\] where \(\mathbf{x},\mathbf{y},\mathbf{X}\) are as before, and \(\mathbf{Y}\in\mathcal{V}_{2}\). An MSO formula is interpreted over a string \(w\in\Sigma^{*}\), with respect to assignments \(\nu_{1}:\mathcal{V}_{1}\to\{0,\ldots,|w|-1\}\) and \(\nu_{2}:\mathcal{V}_{2}\to\mathcal{P}(\{0,\ldots,|w|-1\})\). Notice that, like assignment \(\nu\) for MFO formulae, \(\nu_{1}\) maps each first-order variable of \(\mathcal{V}_{1}\) to a position in string \(w\). Assignment \(\nu_{2}\), instead, maps each second-order variable of \(\mathcal{V}_{2}\) to a _set_ of positions in string \(w\). Then, the satisfaction relation \(\models\) for MSO formulae is defined in the following way: * \(w,\nu_{1},\nu_{2}\models a(\mathbf{x})\) if, and only if, \(\nu_{1}(\mathbf{x})\) is defined and \(w[\nu_{1}(\mathbf{x})]=a\) * \(w,\nu_{1},\nu_{2}\models\mathbf{X}(\mathbf{x})\) if, and only if, \(\nu_{1}(\mathbf{x})\in\nu_{2}(\mathbf{X})\) holds * \(w,\nu_{1},\nu_{2}\models\mathbf{x}<\mathbf{y}\) if, and only if, \(\nu_{1}(\mathbf{x})<\nu_{1}(\mathbf{y})\) holds * \(w,\nu_{1},\nu_{2}\models\neg\varphi\) if, and only if, \(w,\nu_{1},\nu_{2}\not\models\varphi\) holds * \(w,\nu_{1},\nu_{2}\models\varphi_{1}\lor\varphi_{2}\) if, and only if, at least one of \(w,\nu_{1},\nu_{2}\models\varphi_{1}\) and \(w,\nu_{1},\nu_{2}\models\varphi_{2}\) holds * \(w,\nu_{1},\nu_{2}\models\exists\mathbf{x}(\varphi)\) if, and only if, \(|w|>0\) and some \(i\in\{0,\,\ldots,\,|w|-1\}\) satisfies \(w,\nu_{1}[i/\mathbf{x}],\nu_{2}\models\varphi\) * \(w,\nu_{1},\nu_{2}\models\exists\mathbf{X}(\varphi)\) if, and only if, \(|w|>0\) and some \(S\subseteq\{0\,\ldots\,|w|-1\,\}\) satisfies \(w,\nu_{1},\nu_{2}[S/\mathbf{X}]\models\varphi\) where \(w[k]\) and \(\nu[i/\mathbf{x}]\) are as in Section 2, and \(\nu_{2}[S/\mathbf{X}]\) assigns \(S\) to \(\mathbf{X}\) and otherwise coincides with \(\nu_{2}\). To improve readability, we will drop \(\nu_{1}\), \(\nu_{2}\) from the notation whenever there is no risk of ambiguity, and write \(w\models\varphi\) to indicate that string \(w\) satisfies MSO formula \(\varphi\). The definitions of MSO _sentence_ and of language \(L(\varphi)\) defined by sentence \(\varphi\) are as for the MFO logic. Example 12: The following MSO formula \(\varphi_{\text{even}}\) defines language \(L_{\text{even}}\) introduced in Section 2.2. \[\varphi_{\text{even}}:\exists\mathbf{P}\,\forall\mathbf{x}\left(\begin{array}{l}(\mathbf{x}=0\Rightarrow\neg\mathbf{P}(\mathbf{x}))\\ \wedge\\ \forall\mathbf{y}(\mathbf{y}=\mathbf{x}+1\Rightarrow(\neg\mathbf{P}(\mathbf{x})\Leftrightarrow\mathbf{P}(\mathbf{y})))\\ \wedge\\ a(\mathbf{x})\\ \wedge\\ (\text{last}(\mathbf{x})\Rightarrow\mathbf{P}(\mathbf{x}))\end{array}\right)\] Formula \(\varphi_{\text{even}}\) introduces a second-order variable \(\mathbf{P}\) that identifies exactly all even positions in string \(w\).
More precisely, the first position of \(w\) (which is conventionally \(0\)) is not even, and indeed the first conjunct in formula \(\varphi_{\text{even}}\) states that \(\mathbf{P}(\mathbf{x})\) does not hold when \(\mathbf{x}\) is \(0\). In addition, the second conjunct in \(\varphi_{\text{even}}\) states that the next position after \(\mathbf{x}\) (i.e., position \(\mathbf{y}\) such that \(\mathbf{y}=\mathbf{x}+1\) holds), if it exists, is even (i.e., \(\mathbf{P}(\mathbf{y})\) holds there) if, and only if, position \(\mathbf{x}\) is odd; hence, since the first position is odd, the second position is even, the third is odd, the fourth is even, and so on. The third conjunct states that, in every position \(\mathbf{x}\), \(a(\mathbf{x})\) holds (i.e., \(a\) appears in every position). Finally the last conjunct states that the last position in the string must be even. ### Expressiveness of MSO Since every MFO formula is also an MSO formula, from the fact that MSO formula \(\varphi_{\text{even}}\) introduced in Example 12 defines language \(L_{\text{even}}\), which, by Corollary 4.2, cannot be defined by an MFO formula, we have the following straightforward result. Theorem 3.1: _The MSO logic is strictly more expressive than the MFO logic._ Indeed, the original seminal result by Büchi and others is that, unlike the MFO logic, MSO has the same expressive power as FSAs, as captured by the following theorem. Theorem 3.2: _A language \(L\) is regular if, and only if, there exists a sentence \(\varphi\) in the MSO logic such that \(L=L(\varphi)\)._ Before proving Theorem 3.2, we remark that, since for every FSA there is an equivalent MSO formula--and vice versa--MSO enjoys all closure properties of FSAs, as captured by the following corollary. Corollary 3.3: _The set of languages that can be defined by MSO formulae is closed under union, intersection, complementation, and Kleene star._ The proof of Theorem 3.2 is constructive, i.e., it provides an algorithmic procedure that, for a given FSA \(\mathcal{A}\), builds an equivalent MSO sentence \(\varphi_{\mathcal{A}}\), and vice versa. Next, we offer an intuitive explanation of the construction, referring the reader to, e.g., [8] for a complete and detailed proof. ### From FSA to MSO logic The key idea of the construction consists in using, for each state \(q\) of FSA \(\mathcal{A}\), a second-order variable \(\mathbf{X}_{q}\), whose value is the set of positions of all the characters that \(\mathcal{A}\) may read in a transition starting from state \(q\). Without loss of generality, we assume that \(\mathcal{A}\)'s set of states \(Q\) is \(\{0,1,\ldots,m\}\), for some \(m\), where \(0\) denotes the initial state. Then, we encode the definition of the FSA \(\mathcal{A}\) recognizing \(L\) (i.e., such that \(L=L(\mathcal{A})\)) as the conjunction of several clauses, each one capturing a part of the definition of \(\mathcal{A}\): * We introduce a formula capturing the transition relation \(\delta\) of \(\mathcal{A}\), which includes a disjunct for each transition \(\delta(q_{i},a)=q_{j}\) of the automaton: \(\forall\mathbf{x},\mathbf{y}\left(\mathbf{y}=\mathbf{x}+1\Rightarrow\bigvee_{\delta(q_{i},a)=q_{j}}\left(\mathbf{x}\in\mathbf{X}_{i}\wedge a(\mathbf{x})\wedge\mathbf{y}\in\mathbf{X}_{j}\right)\right)\). * The fact that the machine starts in state \(0\) is captured by formula \(\forall\mathbf{x}(\mathbf{x}=0\Rightarrow\mathbf{x}\in\mathbf{X}_{0})\).
* Since the automaton cannot be in two different states \(i,j\) at the same time, for each pair of distinct second-order variables \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\) we introduce formula \(\neg\exists\mathbf{y}(\mathbf{y}\in\mathbf{X}_{i}\wedge\mathbf{y}\in\mathbf{X}_{j})\). * Acceptance by the automaton--i.e. \(\delta(q_{i},a)\in F\)--is formalized by formula \(\forall\mathbf{x}\left(\operatorname{last}(\mathbf{x})\Rightarrow\bigvee_{\delta(q_{i},a)\in F}\left(\mathbf{x}\in\mathbf{X}_{i}\wedge a(\mathbf{x})\right)\right)\). Finally, MSO formula \(\varphi_{\mathcal{A}}\) corresponding to automaton \(\mathcal{A}\) is the following sentence \[\varphi_{\mathcal{A}}:\exists\mathbf{X}_{0},\mathbf{X}_{1},\ldots\mathbf{X}_{m}(\varphi)\] where \(\varphi\) is the conjunction of all the above clauses. It is not difficult to show that the set of strings satisfying formula \(\varphi_{\mathcal{A}}\) is exactly \(L\). Example 16: Consider the FSA \(\mathcal{A}_{\text{ex}}\) shown in Figure 1. The corresponding MSO formula \(\varphi_{\mathcal{A}_{\text{ex}}}\) built according to the rules described above is the following: \[\varphi_{\mathcal{A}_{\text{ex}}}:\exists\mathbf{X}_{0},\mathbf{X}_{1},\mathbf{X}_{2}\] \[\wedge\] \[\forall\mathbf{x}(\mathbf{x}=0\Rightarrow\mathbf{x}\in\mathbf{X}_{0})\] \[\wedge\] \[\neg\exists\mathbf{y}(\mathbf{y}\in\mathbf{X}_{0}\wedge\mathbf{y}\in\mathbf{X}_{1})\wedge\] \[\neg\exists\mathbf{y}(\mathbf{y}\in\mathbf{X}_{0}\wedge\mathbf{y}\in\mathbf{X}_{2})\wedge\] \[\neg\exists\mathbf{y}(\mathbf{y}\in\mathbf{X}_{1}\wedge\mathbf{y}\in\mathbf{X}_{2})\] \[\wedge\] \[\forall\mathbf{x}\left(\text{last}(\mathbf{x})\Rightarrow\begin{pmatrix}\left(\mathbf{X}_{0}(\mathbf{x})\wedge a(\mathbf{x})\right)\\ \vee\\ \left(\mathbf{X}_{1}(\mathbf{x})\wedge a(\mathbf{x})\right)\\ \vee\\ \left(\mathbf{X}_{2}(\mathbf{x})\wedge a(\mathbf{x})\right)\end{pmatrix}\right)\] where the first clause--whose disjuncts, one per transition of \(\mathcal{A}_{\text{ex}}\) as shown in Figure 1, are not reproduced here--captures the transition relation of the automaton; the second clause formalizes its initial state; the next three conjuncts state the mutual exclusion of states; and the last clause captures the acceptance condition. Remark 17: The formalization of FSA \(\mathcal{A}\) through MSO formula \(\varphi_{\mathcal{A}}\), and in particular the clause capturing the acceptance condition, assumes that accepted strings contain at least one symbol (i.e., they are not empty). To formalize in MSO an FSA \(\mathcal{A}_{\epsilon}\) that also accepts the empty string, it is enough to include in the corresponding formula \(\varphi_{\mathcal{A}_{\epsilon}}\) also a disjunct covering the case of the empty string, in the following way: \[\varphi_{\mathcal{A}_{\epsilon}}:\exists\mathbf{X}_{0},\mathbf{X}_{1},\ldots\mathbf{X}_{m}(\varphi)\vee\varphi_{L_{\epsilon}}\] where \(\varphi_{L_{\epsilon}}\) is the formula introduced in Section 2.1 to capture the empty string. ### From MSO logic to FSA The construction in the opposite direction has been proposed in various versions in the literature. Here we summarize its main steps along the lines of [8]. First, the MSO sentence is translated into a standard form that uses only second-order variables (no first-order variables are allowed), the \(\subseteq\) predicate, and variables \(\mathbf{W}_{a}\), for each \(a\in\Sigma\), denoting the set of all the positions of the word containing the character \(a\). Moreover, we use Succ, which has the same meaning as succ but takes as arguments second-order variables that are singletons.
This simpler (yet equivalent) logic is defined by the following syntax: \[\varphi:=\mathbf{X}\subseteq\mathbf{W}_{a}\mid\mathbf{X}\subseteq\mathbf{Y}\mid\text{Succ}(\mathbf{X},\mathbf{Y})\mid\neg\varphi\mid\varphi\vee\varphi\mid\exists\mathbf{X}(\varphi).\] As before, we also use the standard abbreviations for, e.g., \(\wedge\), \(\forall\), \(=\). To translate first-order variables to second-order variables we need to state that a (second-order) variable is a singleton. Hence we introduce the abbreviation: \[\text{Sing}(\mathbf{X})\ \stackrel{{\text{def}}}{{=}}\ \exists\mathbf{Y}(\mathbf{Y}\subseteq\mathbf{X}\wedge\mathbf{Y}\neq\mathbf{X}\wedge\neg\exists\mathbf{Z}(\mathbf{Z}\subseteq\mathbf{X}\wedge\mathbf{Z}\neq\mathbf{Y}\wedge\mathbf{Z}\neq\mathbf{X}))\] Then, in the transformation below, \(\text{Succ}(\mathbf{X},\mathbf{Y})\) is always conjoined with \(\text{Sing}(\mathbf{X})\wedge\text{Sing}(\mathbf{Y})\) and the resulting formula is therefore false whenever \(\mathbf{X}\) or \(\mathbf{Y}\) are not singletons. The following step entails the inductive construction of the equivalent automaton. This is built by associating a single automaton to each elementary subformula and by composing them according to the structure of the global formula. This inductive approach requires the use of open formulas, i.e., formulas in which free variables occur. For technical reasons, with such formulas we are going to consider words on the alphabet \(\Sigma\times\{0,1\}^{k}\), where \(k\) is the number of free variables; in the subsequent steps of the transformation from MSO logic to FSA, the alphabet will revert to \(\Sigma\). Hence, if \(\mathbf{X}_{1}\), \(\mathbf{X}_{2}\), \(\ldots\mathbf{X}_{k}\) are the free variables used in the formula, a value of \(1\) in the, say, \(j\)-th component means that the considered position belongs to \(\mathbf{X}_{j}\) (that is, the second-order variable \(\mathbf{X}_{j}\) represents a first-order variable whose value is the considered position); \(0\) means the opposite. For instance, if \(w=(b,1,0)(a,0,0)(a,0,1)\), then \(w\models\mathbf{X}_{2}\subseteq\mathbf{W}_{a}\), \(w\models\mathbf{X}_{1}\subseteq\mathbf{W}_{b}\), with \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\) singletons representing (first-order variables and hence) positions in string \(w\) respectively equal to \(0\) and \(2\). #### Formula transformation 1. First-order variables are translated in the following way: \(\exists\mathbf{x}(\varphi(\mathbf{x}))\) becomes \(\exists\mathbf{X}(\text{Sing}(\mathbf{X})\wedge\varphi^{\prime}(\mathbf{X}))\), where \(\varphi^{\prime}\) is the translation of \(\varphi\), and \(\mathbf{X}\) is a fresh variable not occurring elsewhere. 2. Subformulas having the form \(a(\mathbf{x})\), \(\text{succ}(\mathbf{x},\mathbf{y})\) are translated into \(\mathbf{X}\subseteq\mathbf{W}_{a}\), \(\text{Succ}(\mathbf{X},\mathbf{Y})\), respectively. 3. The other parts are unchanged. #### Inductive construction of the automaton We assume for simplicity that \(\Sigma=\{a,b\}\), and that \(k=2\), i.e. two variables are used in the formula. Moreover, in the transition labels of the automata we use the shortcut symbol \(\circ\) to mean all possible values. Figure 2: Automata for the construction from MSO logic to FSA. * The formula \(\mathbf{X}_{1}\subseteq\mathbf{X}_{2}\) is translated into an automaton that checks that there are 1's for the \(\mathbf{X}_{1}\) component only in positions where there are also 1's for the \(\mathbf{X}_{2}\) component (Figure 2 (a)).
* The formula \(\mathbf{X}_{1}\subseteq\mathbf{W}_{a}\) is analogous: the automaton checks that positions marked by 1 in the \(\mathbf{X}_{1}\) component must have symbol \(a\) (Figure 2 (b)). * The formula \(\operatorname{Succ}(\mathbf{X}_{1},\mathbf{X}_{2})\) considers two singletons, and checks that the 1 for component \(\mathbf{X}_{1}\) is immediately followed by a 1 for component \(\mathbf{X}_{2}\) (Figure 2 (c)). * Formulas inductively built with \(\neg\) and \(\vee\) are covered by the closure of regular languages w.r.t. complement and union, respectively. * For a formula of type \(\exists\mathbf{X}(\varphi)\), we use the closure under alphabet projection; for instance, we may start with an automaton with input alphabet \(\Sigma\times\{0,1\}^{2}\), for the formula \(\varphi(\mathbf{X}_{1},\mathbf{X}_{2})\) and we may need to define an automaton for the formula \(\exists\mathbf{X}_{1}(\varphi(\mathbf{X}_{1},\mathbf{X}_{2}))\). But in this case the alphabet is \(\Sigma\times\{0,1\}\), where the last component represents the only free remaining variable, i.e. \(\mathbf{X}_{2}\). The automaton \(\mathcal{A}_{3}\) is built by starting from the one for \(\varphi(\mathbf{X}_{1},\mathbf{X}_{2})\), and changing the transition labels from \((a,0,0)\) and \((a,1,0)\) to \((a,0)\); \((a,0,1)\) and \((a,1,1)\) to \((a,1)\), and analogously for those with \(b\). The idea is that this last automaton nondeterministically "guesses" the quantified component (i.e. \(\mathbf{X}_{1}\)) when reading its input, and the resulting word \(w\in(\Sigma\times\{0,1\}^{2})^{*}\) is such that \(w\models\varphi(\mathbf{X}_{1},\mathbf{X}_{2})\). Thus, \(\mathcal{A}_{3}\) recognizes \(\exists\mathbf{X}_{1}(\varphi(\mathbf{X}_{1},\mathbf{X}_{2}))\). We refer the reader to the available literature for a full proof of equivalence between the logic formula and the constructed automaton. Here we illustrate the rationale of the above construction through the following example. Example 18: Consider the language \(L=\{a,b\}^{*}aa\{a,b\}^{*}\): it consists of the strings satisfying the formula: \(\varphi_{L}=\exists\mathbf{x}\,\exists\mathbf{y}(\operatorname{succ}(\mathbf{x},\mathbf{y})\wedge a(\mathbf{x})\wedge a(\mathbf{y}))\). As seen before, first we translate this formula into a version using only second-order variables: \(\varphi_{L}^{\prime}=\exists\mathbf{X},\mathbf{Y}(\operatorname{Sing}(\mathbf{X})\wedge\operatorname{Sing}(\mathbf{Y})\wedge\operatorname{Succ}(\mathbf{X},\mathbf{Y})\wedge\mathbf{X}\subseteq\mathbf{W}_{a}\wedge\mathbf{Y}\subseteq\mathbf{W}_{a})\). The automata for \(\operatorname{Sing}(\mathbf{X})\) and \(\operatorname{Sing}(\mathbf{Y})\) are depicted in Figure 3; they could also be obtained by expanding the definition of \(\operatorname{Sing}\) and then projecting the quantified variables. Figure 3: Automata for \(\operatorname{Sing}(\mathbf{X})\) and \(\operatorname{Sing}(\mathbf{Y})\). By intersecting the automata for \(\operatorname{Sing}(\mathbf{X})\), \(\operatorname{Sing}(\mathbf{Y})\), and \(\operatorname{Succ}(\mathbf{X},\mathbf{Y})\), by means of the customary construction of the cartesian product automaton (the details of the construction are not shown), we obtain an automaton that is identical to the one we defined for translating formula \(\text{Succ}(\mathbf{X}_{1},\mathbf{X}_{2})\), where here \(\mathbf{X}\) takes the role of \(\mathbf{X}_{1}\) and \(\mathbf{Y}\) of \(\mathbf{X}_{2}\).
Intersecting it with those for \(\mathbf{X}\subseteq\mathbf{W}_{a}\) and \(\mathbf{Y}\subseteq\mathbf{W}_{a}\) produces the automaton of Figure 4. Finally, by projecting on the quantified variables \(\mathbf{X}\) and \(\mathbf{Y}\) we obtain the automaton for \(L\), given in Figure 5. ## 4 Discussion The logical characterization of a class of languages, together with the decidability of the associated _containment_ problem (i.e., checking whether a language is a subset of another language in that class), is the main door towards automatic verification techniques. Suppose that a logic formalism \(\mathfrak{L}\) is recursively equivalent to an automaton family \(\mathfrak{H}\); then, one can use a formula \(\varphi_{\mathfrak{L}}\) of \(\mathfrak{L}\) to specify the requirements of a given system and an abstract machine \(\mathcal{A}\) in \(\mathfrak{H}\) to implement the desired system: the correctness of the design defined by \(\mathcal{A}\) w.r.t. the requirements stated by \(\varphi_{\mathfrak{L}}\) is therefore formalized as \(L(\mathcal{A})\subseteq L(\varphi_{\mathfrak{L}})\), i.e., all behaviors realized by the machine also satisfy the requirements. This is precisely the case with FSAs and MSO logic for regular languages. Unfortunately, known theoretical lower bounds state that the decision of the above containment problem is PSPACE-complete and therefore intractable in general. The recent striking success of model-checking [3], however, has produced many refined results that explain how and when practical tools can produce results of "acceptable complexity"--although the term "acceptable" is context-dependent, since in some cases even running times of the order of hours or weeks can be considered acceptable. In a nutshell, normally--and roughly--we trade a lower expressive power of the adopted logic, typically linear temporal logic, for a complexity that is "only exponential" in the size of the logic formulas, whereas the worst case complexity for MSO logic can be even a non-elementary function [4].4 In any case, our interest in these notes is not on the complexity issues, but it is focused on the equivalence between automata recognizers and MSO logics, which leads to the decidability of the above fundamental containment problem. Footnote 4: There are, however, a few noticeable cases of tools that run satisfactorily at least in some particular cases of properties expressed in MSO logic [5].
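As a concrete illustration of the containment check \(L(\mathcal{A})\subseteq L(\mathcal{B})\) in the regular case, here is a minimal Python sketch (ours, not from the notes): for complete deterministic FSAs the check reduces to the emptiness of the product of \(\mathcal{A}\) with the complement of \(\mathcal{B}\), and runs in polynomial time; the PSPACE-hardness mentioned above arises for nondeterministic automata, whose complementation can blow up exponentially.

```python
# Containment L(A) <= L(B) for complete DFAs over a shared alphabet.
# A DFA is an illustrative tuple (states, alphabet, delta, q0, accepting),
# with delta a dict mapping (state, symbol) to a state; the encoding is ours.

def contained(A, B):
    (QA, S, dA, qA0, FA) = A
    (QB, _, dB, qB0, FB) = B
    # Explore the product of A with the complement of B: a reachable pair (p, q)
    # with p accepting in A and q not accepting in B witnesses a word in L(A)\L(B).
    seen, stack = {(qA0, qB0)}, [(qA0, qB0)]
    while stack:
        p, q = stack.pop()
        if p in FA and q not in FB:
            return False
        for a in S:
            nxt = (dA[p, a], dB[q, a])
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

# Toy check over {a}: "even number of a's" is contained in "all words",
# but not vice versa.
even = ({0, 1}, {'a'}, {(0, 'a'): 1, (1, 'a'): 0}, 0, {0})
allw = ({0}, {'a'}, {(0, 'a'): 0}, 0, {0})
assert contained(even, allw) and not contained(allw, even)
```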
2304.13816
Verifying linear temporal specifications of constant-rate multi-mode systems
Constant-rate multi-mode systems (MMS) are hybrid systems with finitely many modes and real-valued variables that evolve over continuous time according to mode-specific constant rates. We introduce a variant of linear temporal logic (LTL) for MMS, and we investigate the complexity of the model-checking problem for syntactic fragments of LTL. We obtain a complexity landscape where each fragment is either P-complete, NP-complete or undecidable. These results generalize and unify several results on MMS and continuous counter systems.
Michael Blondin, Philip Offtermatt, Alex Sansfaçon-Buchanan
2023-04-26T20:36:00Z
http://arxiv.org/abs/2304.13816v1
# Verifying linear temporal specifications of constant-rate multi-mode systems ###### Abstract Constant-rate multi-mode systems (MMS) are hybrid systems with finitely many modes and real-valued variables that evolve over continuous time according to mode-specific constant rates. We introduce a variant of linear temporal logic (LTL) for MMS, and we investigate the complexity of the model-checking problem for syntactic fragments of LTL. We obtain a complexity landscape where each fragment is either P-complete, NP-complete or undecidable. These results generalize and unify several results on MMS and continuous counter systems. + Footnote †: M. Blondin was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), and by the Fonds de recherche du Québec – Nature et technologies (FRQNT). P. Offtermatt is now at Informal Systems, Munich, Germany. ## I Introduction Constant-rate multi-mode systems (MMS) are hybrid systems with finitely many modes and a finite number of real-valued variables that evolve over continuous time according to mode-specific constant rates. MMS were originally introduced by Alur _et al._ to model, _e.g._, problems related to green scheduling and reducing energy peak consumption of systems [1]. There, they consider the problems of safe schedulability and safe reachability with respect to zones defined as bounded convex polytopes. _Safe schedulability_ asks whether a given MMS admits a non-Zeno1 infinite execution that remains within a given safety zone. _Safe reachability_ asks whether a given MMS has a finite execution that reaches a target point, while staying within a given safety zone along the way. Both problems were shown to be solvable in polynomial time [1]. Footnote 1: Informally, this means that there cannot be infinitely many mode switches within a finite amount of time. A similar problem was studied by Krishna _et al._ in the context of motion planning [2]. There, the authors are interested in the _reach-avoid problem_. In the latter, the goal is to reach a given target point without ever entering any of the given obstacles. The authors of [2] consider obstacles specified as convex polytopes. They show that the reach-avoid problem is decidable if the obstacles are closed, and is undecidable in general. They further provide an implementation of their procedure which is benchmarked positively against the _Open Motion Planning Library_. _Contribution:_ The aforementioned problems were solved with ad hoc approaches. Moreover, many natural problems cannot be expressed in these existing frameworks. One such problem is _safe repeated reachability_, where the goal is to find a non-Zeno infinite execution that remains within a safety zone and visits a finite set of zones infinitely often. We propose a framework that encompasses all of the above. More precisely, we introduce a linear temporal logic (LTL) for MMS. Our variant uses bounded convex polytopes as atomic propositions. We omit the next operator \(\mathsf{X}\), which is ill-suited for the continuous behavior of MMS. Moreover, we use a strict-future interpretation of the until temporal operator \(\mathsf{U}\), inspired by metric temporal logic [3] (more precisely by \(\mathsf{MITL}_{0,\infty}\)).
In particular, our logic can express * Safe schedulability: \(\mathsf{G}\,Z_{\text{safe}}\); * Safe reachability: \(Z_{\text{safe}}\,\mathsf{U}\,\{\boldsymbol{x}_{\text{target}}\}\); * Reach-avoid: \((\neg O_{1}\wedge\cdots\wedge\neg O_{n})\,\mathsf{U}\,\{\boldsymbol{x}_{\text{target}}\}\); and * Safe repeated reachability: \((\mathsf{G}\,Z_{\text{safe}})\wedge\bigwedge_{i=1}^{n}(\mathsf{GF}\,Z_{i})\). We investigate the computational complexity of LTL model checking, which asks, given an MMS \(M\), a starting point \(\boldsymbol{x}\) and an LTL formula \(\varphi\), whether there is a non-Zeno infinite execution of \(M\) that satisfies \(\varphi\) from \(\boldsymbol{x}\), denoted \(\boldsymbol{x}\models_{M}\varphi\). We consider the syntactic fragments obtained by (dis)allowing operators from \(\{\mathsf{U},\mathsf{F},\mathsf{G},\wedge,\vee,\neg\}\) and allowing at least one temporal operator2. We establish the computational complexity of _all_ of the \(2^{6}-2^{3}=56\) fragments: Each one is either P-complete, NP-complete or undecidable. Footnote 2: Without any temporal operator, the logic has nothing to do with MMS; it becomes quantifier-free linear arithmetic. Our work is also closely related to the study of counter systems like vector addition systems (VAS) and Petri nets. These models have countless applications ranging from program verification and synthesis, to the formal analysis of chemical, biological and business processes (_e.g._, see [4, 5, 6, 7]). Moreover, the continuous relaxation of counter systems has been successfully employed in practice to alleviate their tremendous computational complexity (_e.g._, see [8, 9]). The behavior of an MMS amounts to continuous pseudo-reachability of VAS and Petri nets, _i.e._ where the effect of transitions can be scaled by positive real values, and without the requirement that counters must remain non-negative. The latter requirement can be regained in our logic. While we do not investigate unbounded zones in their full generality, we consider semi-bounded linear formulas, which include formulas of the form \((\mathsf{G}\,Z)\wedge\cdots\) or \(Z\)\(\mathsf{U}\,\cdots\), where \(Z\) is unbounded, and so can be set to \(Z:=\mathbb{R}_{\geq 0}^{d}\). In particular, our results imply the known fact that continuous reachability, _i.e._ checking \(\mathbb{R}_{\geq 0}^{d}\)\(\mathsf{U}\)\(\{\mathbf{x}_{\text{target}}\}\), can be done in polynomial time [10]. Moreover, we establish the decidability of richer properties. Thus, our work can be seen as a unifying and more general framework for MMS and continuous VAS/Petri nets. #### Results Let us write \(\text{LTL}_{\text{B}}(X)\) to denote the set of LTL formulas using only operators from \(X\), and LTL(\(X\)) for the same fragment but with zones possibly unbounded. We obtain the full complexity landscape depicted in Figure 1. Our contribution is summarized by the following three points. #### 1) We show that \(\text{LTL}_{\text{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\) is in NP, and hence that \(\text{LTL}_{\text{B}}(\{\mathsf{F},\wedge,\vee\})\) is as well. More precisely, we prove that: 1. Formulas from this fragment can be put in a normal form, coined as _flat formulas_, where the nesting of temporal operators is restricted; 2. Flat formulas can be translated into generalized Büchi automata with transition-based acceptance, no cycles except for self-loops ("almost acyclic") and linear width; 3.
Testing whether an MMS \(M\) satisfies a specification given by such an automaton \(\mathcal{A}\) can be done in NP by guessing a so-called linear path scheme \(S\) of \(\mathcal{A}\); constructing a so-called linear formula \(\psi\) equivalent to \(S\), and testing whether \(\mathbf{x}\models_{M}\psi\) in polynomial time. Step 2 is inspired by the work of Křetínský and Esparza [11] on deterministic Muller automata for classical LTL restricted to \(\{\mathsf{F},\mathsf{G},\wedge,\vee,\neg\}\). Our construction also deals with classical LTL, restricted to \(\{\mathsf{F},\mathsf{G},\wedge\}\), and is thus an indirect contribution to logic and automata independent of MMS. In particular, in Step 3 we establish a polynomial-time LTL fragment for MMS, namely _semi-bounded linear LTL formulas_. We do so by using a polynomial-time fragment of existential linear arithmetic, introduced in [12] for the purpose of characterizing reachability sets of continuous Petri nets. In particular, we show how to translate LTL formulas of the form \(\psi=(\mathsf{G}Z_{0})\wedge\bigwedge_{i=1}^{n}\mathsf{GF}Z_{i}\), with \(Z_{0}\) unbounded, into the logic of [12]. This is challenging, in contrast to simply handling \(\mathsf{G}Z_{0}\), with \(Z_{0}\) bounded, as done in [1]. It involves a technical characterization of MMS and points that satisfy \(\psi\), which, in particular, goes through a careful use of Farkas' lemma. As a corollary of Step 3, we show that \(\text{LTL}_{\text{B}}(\{\mathsf{F},\mathsf{G},\neg\})\), \(\text{LTL}(\{\mathsf{F},\vee\})\) and \(\text{LTL}(\{\mathsf{G},\wedge\})\) are solvable in polynomial time. These fragments include safe schedulability and safe reachability, which generalizes their membership in P [1]. #### 2) We show the NP-hardness of \(\text{LTL}_{\text{B}}(\{\mathsf{F},\wedge\})\) by reducing from SUBSET-SUM. With the previous results, this shows that \(\text{LTL}_{\text{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\) and \(\text{LTL}_{\text{B}}(\{\mathsf{F},\wedge,\vee\})\) are NP-complete. #### 3) We show that \(\text{LTL}_{\text{B}}(\{\mathsf{U}\})\) and \(\text{LTL}_{\text{B}}(\{\mathsf{G},\vee\})\) are both undecidable, by reducing from the reachability problem for Petri nets with inhibitor arcs. This "generalizes" the undecidability of the reach-avoid problem established in [2]. Their proof indirectly shows that the model checking problem is undecidable for formulas of the form \((Z_{1}\vee\cdots\lor Z_{n})\)\(\mathsf{U}\)\(\{\mathbf{x}_{\text{target}}\}\), where each \(Z_{i}\) is a possibly unbounded zone. We strengthen this result by using bounded zones only. #### Further related work MMS are related to hybrid automata [13]. Contrary to MMS, the latter allow for a finite control structure and modes with non-constant rates. Their immense modelling power leads to the undecidability of most problems, including reachability, _i.e._ formulas of the form \(\mathsf{F}\,\{\mathbf{x}_{\text{target}}\}\). Yet, some researchers have investigated decision procedures for temporal specification languages such as signal temporal logic (_e.g._, see [14]). Timed automata [15] form another related type of hybrid system. In this model, all variables (known as clocks) increase at the _same_ constant rate, as opposed to the case of MMS. On the other hand, timed automata are equipped with a finite control structure, which is not the case of MMS. Bounded-rate multi-mode systems generalize MMS [16, 17]. In this model, the mode-dependent rates are given as bounded convex polytopes.
The setting can be seen as a two-player game. Player 1 chooses a mode and a duration, and Player 2 chooses the rates from the set for that mode. The system evolves according to the rates chosen by Player 2. In this context, "schedulability" asks for a strategy of Player 1 that never leaves the safety zone, no matter the choices of Player 2. Small fragments of classical LTL have been investigated in the literature, _e.g._ see [18, Table 1]. In particular, \(\text{LTL}(\{\mathsf{F},\mathsf{G},\wedge,\vee,\neg\})\) has been studied in [11, 19] under the names \(\mathrm{L}(\mathsf{F})\) and \((\mathsf{F},\mathsf{G})\), and \(\text{LTL}(\{\mathsf{F},\wedge\})\) has been studied in [20] under the name \(\text{LTL}_{+}(\lozenge,\wedge)\). The authors of [20] show that a fragment, called \(\text{LTL}^{\mathrm{POB}}\), and which is incomparable to \(\text{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\), admits partially-ordered deterministic Büchi automata of exponential size and linear width. To the best of our knowledge, there is no work dedicated to \(\text{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\), and in particular to its translation into automata of linear width. Fig. 1: Complexity landscape of LTL model checking for MMS. An edge from \(X\) to \(Y\) indicates that any formula from \(\text{LTL}_{\text{B}}(X)\) is equivalent to some formula from \(\text{LTL}_{\text{B}}(Y)\). Each expression "\(X\equiv Y\)" stands for \(X\leftrightarrow Y\), _i.e._, an edge from \(X\) to \(Y\) and an edge from \(Y\) to \(X\). Node \(\{\mathsf{U},\ldots\}\) stands for any LTL fragment that contains \(\mathsf{U}\). _Organization:_ In Section II, we introduce basic definitions, MMS and LTL. We further relate LTL over MMS with classical LTL over infinite words. In Section III, we show that any formula from classical \(\text{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\) translates into a specific type of \(\omega\)-automaton, which amounts to a disjunction of so-called linear LTL formulas. In Section IV, we show that linear LTL formulas over MMS can be model-checked in polynomial time. From this, we establish the P-completeness of some syntactic fragments. In Section V and Section VI, we respectively prove the NP-completeness and undecidability of the other fragments. We conclude in Section VII. Due to space limitation, many proofs are deferred to the full version which is freely available on arXiv. ## II Preliminaries We write \(\mathbb{N}\) to denote \(\{0,1,\ldots\}\), \(\mathbb{Z}\) to denote the integers, and \(\mathbb{R}\) to denote the reals. We use subscripts to restrict these sets, _e.g._\(\mathbb{R}_{>0}\coloneqq\{x\in\mathbb{R}:x>0\}\). We write \([\alpha,\beta]\coloneqq\{x\in\mathbb{R}:\alpha\leq x\leq\beta\}\) and \([a..b]\coloneqq\{i\in\mathbb{N}:a\leq i\leq b\}\). We also use (semi-)open intervals, _e.g._\((1,2]=[1,2]\setminus\{1\}\). Let \(I\) be a set of indices and let \(X\subseteq\mathbb{R}^{I}\). We write \(\boldsymbol{e}_{i}\in\mathbb{R}^{I}\) for the vector with \(\boldsymbol{e}_{i}(i)=1\) and \(\boldsymbol{e}_{i}(j)=0\) for all \(j\neq i\), and \(\boldsymbol{0}\) to denote the vector such that \(\boldsymbol{0}(i)=0\) for all \(i\in I\). Let \(\|\boldsymbol{x}\|\coloneqq\max\{|\boldsymbol{x}(i)|:i\in I\}\) and \(\|X\|\coloneqq\sup\{\|\boldsymbol{x}\|:\boldsymbol{x}\in X\}\).
We say that \(X\) is _convex_ if \(\lambda\boldsymbol{x}+(1-\lambda)\boldsymbol{y}\in X\) for all \(\lambda\in[0,1]\) and \(\boldsymbol{x},\boldsymbol{y}\in X\), and _bounded_ if \(\|X\|\leq b\) for some \(b\in\mathbb{R}_{\geq 0}\). We write \(2^{\Sigma}\) for the powerset of \(\Sigma\). Given a nonempty finite sequence \(w\), let \(w^{\omega}\coloneqq ww\cdots\). Let \(\Sigma^{\omega}\coloneqq\{w_{0}w_{1}\cdots:w_{i}\in\Sigma\}\) be the set of infinite sequences with elements from \(\Sigma\). ### _Constant-rate multi-mode systems_ A \(d\)_-dimensional constant-rate multi-mode system (MMS)_, with \(d\in\mathbb{N}_{\geq 1}\), is a finite set \(M\subseteq\mathbb{R}^{d}\) whose elements are called _modes_. A _schedule_ is a (finite or infinite) sequence \(\pi=(\alpha_{1},\boldsymbol{m}_{1})(\alpha_{2},\boldsymbol{m}_{2})\cdots\), where each \((\alpha_{i},\boldsymbol{m}_{i})\in\mathbb{R}_{>0}\times M\). To ease the notation, we often write, _e.g._, \(\boldsymbol{m}\,\frac{1}{2}\boldsymbol{m}^{\prime}\) rather than \((1,\boldsymbol{m})(1/2,\boldsymbol{m}^{\prime})\). Given \(\lambda\in\mathbb{R}_{>0}\), we define the schedule \(\lambda\pi\) as \(\pi\) with each \(\alpha_{i}\) replaced by \(\lambda\alpha_{i}\). The _size_ of \(\pi\), denoted \(|\pi|\), is its number of pairs. The effect of \(\pi\) is \(\Delta_{\pi}\coloneqq\sum_{i}\alpha_{i}\boldsymbol{m}_{i}\). The _support_ of \(\pi\) is \(\mathrm{supp}(\pi)\coloneqq\{\boldsymbol{m}_{1},\boldsymbol{m}_{2},\ldots\}\). Let \(\mathrm{time}_{\boldsymbol{m}}(\pi)\coloneqq\sum_{i:\boldsymbol{m}_{i}=\boldsymbol{m}}\alpha_{i}\) and \(\mathrm{time}(\pi)\coloneqq\sum_{\boldsymbol{m}\in M}\mathrm{time}_{\boldsymbol{m}}(\pi)\). We say that an infinite schedule \(\pi\) is _non-Zeno_ if \(\mathrm{time}(\pi)=\infty\). The _Parikh image_ of a finite schedule \(\pi\) is denoted \(\boldsymbol{\pi}\in\mathbb{R}_{\geq 0}^{M}\), _i.e._\(\boldsymbol{\pi}(\boldsymbol{m})\coloneqq\mathrm{time}_{\boldsymbol{m}}(\pi)\). We say that two finite schedules are _equivalent_, denoted with \(\equiv\), if they are equal after merging consecutive equal modes, _i.e._ using the rule \(\pi(\alpha,\boldsymbol{m})(\beta,\boldsymbol{m})\pi^{\prime}\equiv\pi(\alpha+\beta,\boldsymbol{m})\pi^{\prime}\). Let \(\pi[\tau..\tau^{\prime}]\) be the schedule obtained from \(\pi\) starting where time \(\tau\) has elapsed, and ending where time \(\tau^{\prime}\) has elapsed; _e.g._, for \(\pi=(2,\boldsymbol{m}_{1})(0.5,\boldsymbol{m}_{2})(1,\boldsymbol{m}_{3})^{\omega}\), we have \(\pi[0..1]=(1,\boldsymbol{m}_{1})\), \(\pi[0.5..2.25]=(1.5,\boldsymbol{m}_{1})(0.25,\boldsymbol{m}_{2})\) and \(\pi[3..]=(0.5,\boldsymbol{m}_{3})(1,\boldsymbol{m}_{3})^{\omega}\). An _execution_ is a (finite or infinite) sequence \(\sigma=\boldsymbol{x}_{0}I_{0}\boldsymbol{x}_{1}I_{1}\boldsymbol{x}_{2}\cdots\) where \(\boldsymbol{x}_{0},\boldsymbol{x}_{1},\ldots\in\mathbb{R}^{d}\), \(I_{0},I_{1},\ldots\subseteq\mathbb{R}_{\geq 0}\) are closed intervals with distinct endpoints, \(\min I_{0}=0\), and \(\min I_{j}=\max I_{j-1}\) for all \(j\in\mathbb{N}_{>0}\). Let \(\mathrm{dom}\,\sigma\coloneqq I_{0}\cup I_{1}\cup\cdots\).
For every \(\tau\in\mathrm{dom}\,\sigma\), with \(\tau\in I_{j}\), let \[\sigma(\tau)\coloneqq\boldsymbol{x}_{j}+\frac{\tau-\min I_{j}}{\max I_{j}-\min I_{j}}\cdot(\boldsymbol{x}_{j+1}-\boldsymbol{x}_{j}).\] We define \(\sigma[\tau..\tau^{\prime}]\), where \([\tau,\tau^{\prime}]\subseteq\mathrm{dom}\,\sigma\), as the execution \(\sigma^{\prime}\) that satisfies \(\sigma^{\prime}(\alpha)=\sigma(\tau+\alpha)\) for every \(\alpha\in[0,\tau^{\prime}-\tau]\). A schedule \(\pi=(\alpha_{1},\boldsymbol{m}_{1})(\alpha_{2},\boldsymbol{m}_{2})\cdots\), together with a point \(\boldsymbol{x}_{0}\), gives rise to an execution \(\mathrm{exec}(\pi,\boldsymbol{x}_{0})\coloneqq\boldsymbol{x}_{0}I_{0}\boldsymbol{x}_{1}\cdots\) where \(I_{0}\coloneqq[0,\alpha_{1}]\), \(I_{j}\coloneqq[\max I_{j-1},\max I_{j-1}+\alpha_{j+1}]\) and \(\boldsymbol{x}_{j}\coloneqq\boldsymbol{x}_{j-1}+\alpha_{j}\boldsymbol{m}_{j}\). We use the notation \(\boldsymbol{x}\xrightarrow{\pi}\boldsymbol{y}\) to denote the fact that \(\pi\) is a schedule that, from \(\boldsymbol{x}\), gives rise to an execution ending in \(\boldsymbol{y}\). If we only care about the existence of such a schedule, we may write \(\boldsymbol{x}\xrightarrow{*}\boldsymbol{y}\), or write \(\boldsymbol{x}\xrightarrow{+}\boldsymbol{y}\) to denote that there is such a nonempty schedule. We sometimes omit either of the two endpoints if its value is irrelevant, _e.g._ \(\boldsymbol{x}\xrightarrow{\pi}\) stands for \(\boldsymbol{x}\xrightarrow{\pi}\boldsymbol{x}+\Delta_{\pi}\). Given a set \(Z\subseteq\mathbb{R}^{d}\), we write \(\boldsymbol{x}\xrightarrow{\pi}_{Z}\) to denote that the execution never leaves \(Z\), _i.e._ \(\sigma(\tau)\in Z\) for all \(\tau\in\mathrm{dom}\,\sigma\), where \(\sigma\coloneqq\mathrm{exec}(\pi,\boldsymbol{x})\). We extend this notation to any set of sets \(\mathcal{X}\), requiring that, for all \(\tau\in\mathrm{dom}\,\sigma\), there exists \(Z\in\mathcal{X}\) such that \(\sigma(\tau)\in Z\).

**Example 1**: _Let \(M\coloneqq\{(0,1),(1,0),(1,1),(-1,1)\}\). Let \(\pi\coloneqq\frac{1}{2}(1,1)\ \frac{1}{2}(1,0)\ \frac{1}{2}(1,1)\ \frac{1}{2}(1,0)\ \frac{1}{2}(1,0)\ (-1,1)\ (0,1)\cdots\) be a schedule, and let \(\sigma\coloneqq\mathrm{exec}(\pi,(1,1))\)._

_Linear temporal logic (LTL)_, over a finite set of zones \(AP\), has the following syntax: \[\varphi::=\mathit{true}\mid Z\mid\neg\varphi\mid\varphi\land\varphi\mid\varphi\lor\varphi\mid\mathsf{F}\varphi\mid\mathsf{G}\varphi\mid\varphi\,\mathsf{U}\,\varphi,\] where \(Z\in AP\). Let \(\text{LTL}(X)\) denote the set of LTL formulas (syntactically) using only the operators from \(X\). For example, \(\text{LTL}(\{\mathsf{F},\mathsf{G},\land\})\) describes the LTL formulas using only operators \(\mathsf{F}\), \(\mathsf{G}\) and \(\land\). We write \(\text{LTL}_{\text{B}}\) to indicate that all zones from \(AP\) must be bounded. We say that an LTL formula is _negation-free_ if it contains no occurrence of \(\neg\).
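The schedule and execution machinery above is easy to make concrete. The following is a minimal Python sketch, not part of the paper: schedules are lists of pairs \((\alpha,\boldsymbol{m})\), `effect` computes \(\Delta_{\pi}\), and `at` evaluates \(\sigma(\tau)\) by the linear interpolation defined above.

```python
# A minimal sketch (not from the paper) of schedules and executions.

def effect(schedule):
    """The effect Delta_pi = sum_i alpha_i * m_i of a finite schedule."""
    d = len(schedule[0][1])
    delta = [0.0] * d
    for alpha, m in schedule:
        for i in range(d):
            delta[i] += alpha * m[i]
    return tuple(delta)

def breakpoints(schedule, x0):
    """The points (max I_j, x_{j+1}) of exec(pi, x0), starting at (0, x0)."""
    t, x = 0.0, list(x0)
    pts = [(0.0, tuple(x))]
    for alpha, m in schedule:
        t += alpha
        x = [xi + alpha * mi for xi, mi in zip(x, m)]
        pts.append((t, tuple(x)))
    return pts

def at(pts, tau):
    """sigma(tau), by linear interpolation between breakpoints."""
    for (t0, x0), (t1, x1) in zip(pts, pts[1:]):
        if t0 <= tau <= t1:
            lam = (tau - t0) / (t1 - t0)
            return tuple(a + lam * (b - a) for a, b in zip(x0, x1))
    raise ValueError("tau outside dom sigma")

# A finite prefix of the schedule pi from Example 1, from x0 = (1, 1):
pi = [(0.5, (1, 1)), (0.5, (1, 0)), (0.5, (1, 1)), (0.5, (1, 0))]
pts = breakpoints(pi, (1, 1))
print(effect(pi), at(pts, 1.25))   # (2.0, 1.0) and sigma(1.25) = (2.25, 1.75)
```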
We define the semantics over infinite executions: \[\sigma,\tau\models\textit{true} \Longleftrightarrow\textit{true},\] \[\sigma,\tau\models Z \Longleftrightarrow\sigma(\tau)\in Z,\] \[\sigma,\tau\models\neg\varphi \Longleftrightarrow\neg(\sigma,\tau\models\varphi),\] \[\sigma,\tau\models\varphi\land\varphi^{\prime} \Longleftrightarrow(\sigma,\tau\models\varphi)\land(\sigma, \tau\models\varphi^{\prime}),\] \[\sigma,\tau\models\varphi\lor\varphi^{\prime} \Longleftrightarrow(\sigma,\tau\models\varphi)\lor(\sigma,\tau \models\varphi^{\prime}),\] \[\sigma,\tau\models\mathsf{F}\varphi \Longleftrightarrow\exists\tau^{\prime}\geq\tau:\sigma,\tau^{ \prime}\models\varphi,\] \[\sigma,\tau\models\mathsf{G}\varphi \Longleftrightarrow\forall\tau^{\prime}\geq\tau:\sigma,\tau^{ \prime}\models\varphi,\] \[\sigma,\tau\models\varphi\mathsf{U}\ \varphi^{\prime} \Longleftrightarrow\exists\tau^{\prime}\geq\tau:(\sigma,\tau^{ \prime}\models\varphi^{\prime})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\land(\forall\tau^{\prime \prime}\in[\tau,\tau^{\prime}):\sigma,\tau^{\prime\prime}\models\varphi).\] We write \(\sigma\models\varphi\) iff \(\sigma,0\models\varphi\). We say that two formulas are _equivalent_, denoted \(\varphi\equiv\varphi^{\prime}\), if they are satisfied by the same executions. In particular, \(\mathsf{F}\psi\equiv\textit{true}\ \mathsf{U}\ \psi\) and \(\mathsf{G}\psi\equiv\neg\mathsf{F}\neg\psi\). Let \(M\) be an MMS and let \(\mathbf{x}\in\mathbb{R}^{d}\). We say that \(\mathbf{x}\models_{M}\varphi\) iff \(M\) has a non-Zeno infinite schedule \(\pi\) such that \(\operatorname{exec}(\pi,\mathbf{x})\models\varphi\). The _model-checking problem_ of a fragment LTL(\(X\)) asks, given \(M\), \(\mathbf{x}\) and \(\varphi\in\text{LTL($X$)}\), whether \(\mathbf{x}\models_{M}\varphi\). **Example 2**: _Recall the MMS \(M\) and the schedule \(\pi\) from Example 1. Let \(X\), \(Y\) and \(Z\) be the bounded zones colored in Figure 2, e.g. \(X=\{(x,y)\in\mathbb{R}^{2}:0.5\leq x\leq 1.5,0.5\leq y\leq 1.5\}\). We have \((1,1)\models_{M}X\land\mathsf{F}((Y\land\neg Z)\land\mathsf{F}Z)\). _ ### _Connection with classical LTL_ _Classical linear temporal logic (LTL)_ (without temporal operator \(\mathsf{X}\)) has the same syntax as the logic from Section II-B, but is interpreted over infinite words \(w\in(2^{AP})^{\omega}\): \[w,i \models\textit{true}, \Longleftrightarrow\textit{true},\] \[w,i \models a \Longleftrightarrow a\in w(i),\] \[w,i \models\neg\varphi \Longleftrightarrow\neg(w,i\models\varphi),\] \[w,i \models\varphi\land\varphi^{\prime} \Longleftrightarrow(w,i\models\varphi)\land(w,i\models\varphi^{ \prime}),\] \[w,i \models\varphi\lor\varphi^{\prime} \Longleftrightarrow(w,i\models\varphi)\lor(w,i\models\varphi^{ \prime}),\] \[w,i \models\mathsf{F}\varphi \Longleftrightarrow\exists j\geq i:w,j\models\varphi,\] \[w,i \models\mathsf{G}\varphi \Longleftrightarrow\forall j\geq i:w,j\models\varphi,\] \[w,i \models\varphi\mathsf{U}\ \varphi^{\prime} \Longleftrightarrow\exists j\geq i:(w,j\models\varphi^{ \prime})\] \[\qquad\qquad\qquad\qquad\land(\forall k\in[i..j-1]:w,k\models \varphi).\] We write \(w\models\varphi\) iff \(w,0\models\varphi\). Observe that \(w,i\models\varphi\) holds iff \(w[i..]\models\varphi\), where \(w[i..]\coloneqq w(i)w(i+1)\cdots\). We write \(\varphi\equiv\varphi^{\prime}\) if \(\varphi\) and \(\varphi^{\prime}\) are satisfied by the same infinite words. In order to relate LTL over executions with LTL over infinite words, we introduce the notion of traces. 
Informally, a trace captures the zone changes within an execution. Let \(\chi_{AP}\colon\mathbb{R}^{d}\to 2^{AP}\) be the function that yields the set of zones a given point lies in: \(\chi_{AP}(\mathbf{x})\coloneqq\{Z\in AP:\mathbf{x}\in Z\}\). Let \(\sigma\) be an execution. We say that word \(w\) is a _trace_ of \(\sigma\) if there exist \(\tau_{0}<\tau_{1}<\cdots\in\mathbb{R}_{\geq 0}\) such that

* \(\operatorname{dom}\sigma=[\tau_{0},\tau_{1}]\cup[\tau_{1},\tau_{2}]\cup\cdots\),
* \(w(i)=\chi_{AP}(\sigma(\tau_{i}))\) for every \(i\in\mathbb{N}\), and
* for every \(i\in\mathbb{N}\), there exists \(j\in\{i,i+1\}\) such that: \(\chi_{AP}(\sigma(\tau^{\prime}))=\chi_{AP}(\sigma(\tau_{j}))\) for all \(\tau^{\prime}\in(\tau_{i},\tau_{i+1})\).

**Example 3**: _Recall execution \(\sigma\) from Example 1. The word \(w\coloneqq\{X\}\{X\}\emptyset\{Y\}\{Y\}\{Y,Z\}\{Y,Z\}\{Z\}\emptyset\emptyset\cdots\) is a trace of \(\sigma\). As depicted with circular marks in Figure 2, it is obtained from \(\tau_{0}\coloneqq 0,\tau_{1}\coloneqq 0.5,\tau_{2}\coloneqq 1,\tau_{3}\coloneqq 1.5,\tau_{4}\coloneqq 2,\tau_{5}\coloneqq 2.5,\tau_{6}\coloneqq 2.75,\tau_{7}\coloneqq 3,\tau_{8}\coloneqq 3.5\) and so on. _

With a bit of care, it is possible to prove that any execution admits a trace. Moreover, in the absence of negations, model-checking an execution amounts to model-checking any of its traces under the classical LTL semantics. Thus, equivalences of negation-free classical LTL also hold under the semantics over executions.

**Proposition 1**: _Any execution \(\sigma\) has a trace._

**Proposition 2**: _Let \(\sigma\) be an execution with \(\operatorname{dom}\sigma=\mathbb{R}_{\geq 0}\), let \(w\) be a trace of \(\sigma\), and let \(\varphi\) be a negation-free LTL formula. It is the case that \(\sigma\models\varphi\) iff \(w\models\varphi\)._
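Proposition 2 reduces model checking an execution to model checking one of its traces. For ultimately periodic traces \(w=uv^{\omega}\), such a check is directly implementable, since \(w\) has only \(|u|+|v|\) distinct suffixes. Below is a minimal sketch, not from the paper; the lasso shape of the trace and the tuple encoding of formulas are assumptions of the sketch.

```python
from functools import lru_cache

def check(u, v, phi):
    """Decide u v^omega, 0 |= phi for negation-free phi over {F, G, and, or}.
    Letters are frozensets of zone names; formulas are nested tuples:
    ('ap', Z), ('and', f, g), ('or', f, g), ('F', f), ('G', f)."""
    w = list(u) + list(v)
    n, loop = len(w), len(u)

    def suffixes(i):
        # Start positions of the (finitely many) distinct suffixes of w[i..].
        return range(i, n) if i < loop else range(loop, n)

    @lru_cache(maxsize=None)
    def sat(i, f):
        if f[0] == 'ap':
            return f[1] in w[i]
        if f[0] == 'and':
            return sat(i, f[1]) and sat(i, f[2])
        if f[0] == 'or':
            return sat(i, f[1]) or sat(i, f[2])
        if f[0] == 'F':
            return any(sat(j, f[1]) for j in suffixes(i))
        if f[0] == 'G':
            return all(sat(j, f[1]) for j in suffixes(i))
        raise ValueError(f[0])

    return sat(0, phi)

# The trace of Example 3, folded into the lasso u (emptyset)^omega:
u = [frozenset(s) for s in ({'X'}, {'X'}, set(), {'Y'}, {'Y'},
                            {'Y', 'Z'}, {'Y', 'Z'}, {'Z'})]
print(check(u, [frozenset()], ('and', ('ap', 'X'), ('F', ('ap', 'Z')))))  # True
```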
Given a formula \(\varphi\in\text{LTL}(\{\mathsf{F},\mathsf{G},\land\})\) of this form: \[\varphi=\psi\land\bigwedge_{i\in I}\mathsf{G}\varphi_{i}\land\bigwedge_{j\in J}\mathsf{F}\varphi_{j},\] we define these mappings: \[\text{flat}_{\mathsf{G}}(\varphi) \coloneqq\mathsf{G}\psi\land\bigwedge_{i\in I}\text{flat}_{\mathsf{G}}(\varphi_{i})\land\bigwedge_{j\in J}\text{flat}_{\mathsf{GF}}(\varphi_{j}),\] \[\text{flat}_{\mathsf{GF}}(\varphi) \coloneqq\mathsf{GF}\psi\land\bigwedge_{i\in I}\text{flat}_{\mathsf{FG}}(\varphi_{i})\land\bigwedge_{j\in J}\text{flat}_{\mathsf{GF}}(\varphi_{j}),\] \[\text{flat}_{\mathsf{FG}}(\varphi) \coloneqq\mathsf{FG}\psi\land\bigwedge_{i\in I}\text{flat}_{\mathsf{FG}}(\varphi_{i})\land\bigwedge_{j\in J}\text{flat}_{\mathsf{GF}}(\varphi_{j}),\] \[\text{flat}(\varphi) \coloneqq\psi\land\bigwedge_{i\in I}\text{flat}_{\mathsf{G}}(\varphi_{i})\land\bigwedge_{j\in J}\mathsf{F}\,\text{flat}(\varphi_{j}).\] As its name suggests, it follows by induction that formula \(\operatorname{flat}(\varphi)\) is flat. Moreover, the following holds.

**Proposition 3**: _It is the case that \(\operatorname{flat}(\varphi)\equiv\varphi\)._

### _From flat formulas to almost acyclic automata_

We say that an automaton is _almost acyclic_ if, for every pair of states \(q\neq r\), it is the case that \(q\to^{*}r\) implies \(r\not\to^{*}q\), _i.e._ cycles must be self-loops. The _width_ of an almost acyclic automaton is the maximal length among its simple paths. We will prove that any formula \(\varphi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\) can be translated into an almost acyclic automaton \(\mathcal{A}_{\varphi}\) of linear width in the size of \(\varphi\), and such that \(\mathcal{A}_{\varphi}\) accepts \(w\) iff \(w\models\varphi\). We will formally define the acceptance condition later on, but for readers familiar with \(\omega\)-automata: \(\mathcal{A}_{\varphi}\) will be a generalized Büchi automaton with accepting transitions. In order to define \(\mathcal{A}_{\varphi}\), we first provide intermediate definitions. Let \(\mathfrak{U}\colon\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\to 2^{\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})}\) be defined by \(\mathfrak{U}(\mathit{true})\coloneqq\{\mathit{true}\}\), \(\mathfrak{U}(a)\coloneqq\{a\}\), \(\mathfrak{U}(\mathsf{G}\varphi)\coloneqq\{\mathsf{G}\varphi\}\), \[\mathfrak{U}(\varphi_{1}\wedge\varphi_{2}) \coloneqq\{\psi_{1}\wedge\psi_{2}:\psi_{1}\in\mathfrak{U}(\varphi_{1}),\psi_{2}\in\mathfrak{U}(\varphi_{2})\},\] \[\mathfrak{U}(\mathsf{F}\varphi) \coloneqq\{\mathsf{F}\varphi\}\cup\mathfrak{U}(\varphi).\]

**Example 4**: _The set \(\mathfrak{U}(\mathsf{G}a\wedge\mathsf{F}(b\wedge\mathsf{FG}c))\) is equal to_ \[\{\mathsf{G}a\wedge\mathsf{F}(b\wedge\mathsf{FG}c),\mathsf{G}a\wedge b\wedge\mathsf{FG}c,\mathsf{G}a\wedge b\wedge\mathsf{G}c\}.\qed\]

Given \(A\subseteq AP\), let \(\operatorname{prop}(\bigwedge_{a\in A}a)\coloneqq A\).
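Both the flattening mappings and the unfolding \(\mathfrak{U}\) translate directly into code. Here is a minimal sketch, not from the paper, using nested tuples for formulas; the encoding of \(\mathit{true}\) as the empty conjunction is an assumption of the sketch.

```python
from itertools import product

# Formulas from LTL({F, G, and}) as nested tuples:
#   ('ap', a) | ('and', [f1, ..., fk]) | ('F', f) | ('G', f),
# with true encoded as the empty conjunction ('and', []).

def split(phi):
    """Split a conjunction into (atoms, arguments of G, arguments of F)."""
    atoms, gs, fs = [], [], []
    todo = [phi]
    while todo:
        f = todo.pop()
        if f[0] == 'ap':
            atoms.append(f)
        elif f[0] == 'and':
            todo.extend(f[1])
        elif f[0] == 'G':
            gs.append(f[1])
        else:               # f[0] == 'F'
            fs.append(f[1])
    return atoms, gs, fs

def conj(parts):
    return parts[0] if len(parts) == 1 else ('and', parts)

def flat(phi):
    atoms, gs, fs = split(phi)
    return conj(atoms + [flat_g(g) for g in gs]
                      + [('F', flat(f)) for f in fs])

def flat_g(phi):
    atoms, gs, fs = split(phi)
    return conj([('G', conj(atoms))] + [flat_g(g) for g in gs]
                + [flat_gf(f) for f in fs])

def flat_gf(phi):
    atoms, gs, fs = split(phi)
    return conj([('G', ('F', conj(atoms)))] + [flat_fg(g) for g in gs]
                + [flat_gf(f) for f in fs])

def flat_fg(phi):
    atoms, gs, fs = split(phi)
    return conj([('F', ('G', conj(atoms)))] + [flat_fg(g) for g in gs]
                + [flat_gf(f) for f in fs])

def unfold(phi):
    """The unfolding U(phi), as a list of formulas."""
    if phi[0] == 'and':
        return [('and', list(c))
                for c in product(*(unfold(f) for f in phi[1]))]
    if phi[0] == 'F':
        return [phi] + unfold(phi[1])
    return [phi]            # atomic propositions and G-formulas

# Mirrors Example 4: U(Ga and F(b and FGc)) has three elements.
phi = ('and', [('G', ('ap', 'a')),
               ('F', ('and', [('ap', 'b'), ('F', ('G', ('ap', 'c')))]))])
print(len(unfold(phi)))    # 3
```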
Given \(A\subseteq AP\) and a flat formula \(\varphi=\psi\wedge\mathsf{G}\psi^{\prime}\wedge\bigwedge_{i\in I}\mathsf{GF}\psi^{\prime\prime}_{i}\wedge\bigwedge_{j\in J}\mathsf{F}\varphi_{j}\), let \[\varphi[A]\coloneqq\begin{cases}\mathsf{G}\psi^{\prime}\wedge\bigwedge_{i\in I}\mathsf{GF}\psi^{\prime\prime}_{i}\wedge\bigwedge_{j\in J}\mathsf{F}\varphi_{j}&\text{if }\operatorname{prop}(\psi\wedge\psi^{\prime})\subseteq A,\\ \mathit{false}&\text{otherwise}.\end{cases}\] Given \(\varphi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\), the automaton \(\mathcal{A}_{\varphi}\coloneqq(Q,\Sigma,\to,q_{0})\) is defined respectively by the following states, alphabet, transitions and initial state: \[Q \coloneqq\{\psi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\}):\psi\text{ is flat}\},\] \[\Sigma \coloneqq 2^{AP},\] \[\to \coloneqq\{(\psi,A,\psi^{\prime}):\exists\psi^{\prime\prime}\in\mathfrak{U}(\psi)\text{ s.t. }\psi^{\prime}=\psi^{\prime\prime}[A]\neq\mathit{false}\},\] \[q_{0} \coloneqq\operatorname{flat}(\varphi).\]

**Example 5**: _Let \(\varphi\coloneqq a\wedge\mathsf{F}b\), which is flat. We have \(\mathfrak{U}(\varphi)=\{a\wedge\mathsf{F}b,a\wedge b\}\). The automaton \(\mathcal{A}_{\varphi}\) is depicted at the top of Figure 3. Note that \(w\models\varphi\) iff there is an infinite path from the initial state, labeled with \(w\), that visits \(\mathit{true}\)._

Let \(\varphi^{\prime}\coloneqq\mathsf{GF}(a\wedge\mathsf{G}c)\wedge\mathsf{F}b\). We have \(\operatorname{flat}(\varphi^{\prime})=\mathsf{GF}a\wedge\mathsf{FG}c\wedge\mathsf{F}b\). Hence, \(\mathfrak{U}(\operatorname{flat}(\varphi^{\prime}))\) is equal to \[\{\mathsf{GF}a\wedge\mathsf{FG}c\wedge\mathsf{F}b,\mathsf{GF}a\wedge\mathsf{FG}c\wedge b,\mathsf{GF}a\wedge\mathsf{G}c\wedge\mathsf{F}b,\mathsf{GF}a\wedge\mathsf{G}c\wedge b\}.\] The automaton \(\mathcal{A}_{\varphi^{\prime}}\) is depicted at the bottom of Figure 3. Let \(q\coloneqq\mathsf{GF}a\wedge\mathsf{G}c\). Note that \(w\models\varphi^{\prime}\) iff there is an infinite path from the initial state, labeled with \(w\), that visits the set of transitions \(\{(q,A,q):A\supseteq\{a,c\}\}\) infinitely often.

#### III-B1 Shape of automaton \(\mathcal{A}_{\varphi}\)

We first seek to prove that \(\mathcal{A}_{\varphi}\) is almost acyclic. For every \(\varphi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\), let \(|\mathit{true}|=|a|\coloneqq 1\), \(|\varphi_{1}\wedge\varphi_{2}|\coloneqq|\varphi_{1}|+1+|\varphi_{2}|\) and \(|\mathsf{G}\varphi|=|\mathsf{F}\varphi|\coloneqq 1+|\varphi|\). Moreover, let \(|\mathit{true}|_{\mathsf{F}}=|a|_{\mathsf{F}}=|\mathsf{G}\varphi|_{\mathsf{F}}\coloneqq 0\), \(|\varphi_{1}\wedge\varphi_{2}|_{\mathsf{F}}\coloneqq|\varphi_{1}|_{\mathsf{F}}+|\varphi_{2}|_{\mathsf{F}}\) and \(|\mathsf{F}\varphi|_{\mathsf{F}}\coloneqq 1+|\varphi|_{\mathsf{F}}\). The properties below follow by induction.

**Proposition 4**: _Let \(\varphi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\). This holds:_

1. _if_ \(\varphi\) _is flat, then_ \(|\varphi|_{\mathsf{F}}>|\varphi^{\prime}|_{\mathsf{F}}\) _for all_ \(\varphi^{\prime}\in\mathfrak{U}(\varphi)\setminus\{\varphi\}\)_,_
2. _if_ \(\varphi\) _is flat, then_ \(|\varphi|_{\mathsf{F}}\geq|\varphi[A]|_{\mathsf{F}}\) _for all_ \(A\subseteq AP\)_,_
3. \(|\varphi|\geq|\operatorname{flat}(\varphi)|_{\mathsf{F}}\)_._

This proposition follows from Proposition 4:

**Proposition 5**: _Let \(r_{0}\to^{A_{1}}r_{1}\to^{A_{2}}\cdots\to^{A_{n}}r_{n}\) be a simple path of \(\mathcal{A}_{\varphi}\).
It is the case that \(|r_{1}|_{\mathsf{F}}>\cdots>|r_{n}|_{\mathsf{F}}\)._

**Proposition 6**: _Let \(\varphi\in\operatorname{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\). Automaton \(\mathcal{A}_{\varphi}\) is almost acyclic and its width belongs to \(\mathcal{O}(|\varphi|)\)._

Let us first prove almost acyclicity. For the sake of contradiction, suppose that \(\mathcal{A}_{\varphi}\) has a simple cycle \(q\to^{u}r\to^{v}q\) where \(q\neq r\). Since \(q\to^{*}r\), it follows from Items 1 and 2 of Proposition 4 that \(|q|_{\mathsf{F}}\geq|r|_{\mathsf{F}}\). Since \(q\neq r\), we have \(|u|>1\) and \(|v|>1\). Thus, Proposition 5 yields \(|r|_{\mathsf{F}}>|q|_{\mathsf{F}}\), which is a contradiction. Let us now bound the width \(n\) of \(\mathcal{A}_{\varphi}\). Let \(q_{0}\to^{A_{1}}q_{1}\to^{A_{2}}\cdots\to^{A_{n}}q_{n}\) be a simple path of \(\mathcal{A}_{\varphi}\). We have \[n \leq|q_{1}|_{\mathsf{F}}+1\] (by Proposition 5) \[\leq|q_{0}|_{\mathsf{F}}+1\] (by Items 1 and 2 of Proposition 4) \[=|\operatorname{flat}(\varphi)|_{\mathsf{F}}+1\] (by def. of \[q_{0}\]) \[\leq|\varphi|+1\] (by Item 3 of Proposition 4). \[\qed\]

#### III-B2 Language of \(\mathcal{A}_{\varphi}\)

Let us define the acceptance condition of automaton \(\mathcal{A}_{\varphi}\). Let \(F\coloneqq\{q\in Q:q_{0}\to^{+}q\wedge|q|_{\mathsf{F}}=0\}\). By definition, each state \(q\in F\) is of the form \(\mathsf{G}\psi\wedge\bigwedge_{j\in J}\mathsf{GF}\psi^{\prime}_{j}\). Given such a state \(q\), we define \[T_{q,j}\coloneqq\{(q,A,q)\in\to:\operatorname{prop}(\psi^{\prime}_{j})\subseteq A\}.\] We say that word \(w\in(2^{AP})^{\omega}\) is _accepted_ by \(\mathcal{A}_{\varphi}\), denoted \(w\in L(\mathcal{A}_{\varphi})\), iff there exist \(q\in F\) and an infinite path from \(q_{0}\) that visits \(q\) and, for each \(j\in J\), the set \(T_{q,j}\) infinitely often. In the remainder, we prove that \(w\in L(\mathcal{A}_{\varphi})\) iff \(w\models\varphi\).

**Lemma 1**: _Let \(\varphi\in\mathrm{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\) be a flat formula. These two properties are equivalent to \(w\models\varphi\):_

1. _there exists_ \(\varphi^{\prime}\) _such that_ \(\varphi\to^{w(0)}\varphi^{\prime}\) _and_ \(w[1..]\models\varphi^{\prime}\)_;_
2. _there exist_ \(i\in\mathbb{N}\) _and_ \(\varphi^{\prime}\) _such that_ \(\varphi\to^{w(0)\cdots w(i-1)}\varphi^{\prime}\)_,_ \(|\varphi^{\prime}|_{\mathsf{F}}=0\) _and_ \(w[i..]\models\varphi^{\prime}\)_._

**Proposition 7**: _Let \(\varphi\in\mathrm{LTL}(\{\mathsf{F},\mathsf{G},\wedge\})\). It is the case that \(w\models\varphi\) iff \(w\in L(\mathcal{A}_{\varphi})\)._

Proof::\(\Rightarrow\)) By Lemma 1(2), there are \(k\in\mathbb{N}\) and \(\varphi^{\prime}\) with \[\mathrm{flat}(\varphi)\to^{w(0)\cdots w(k-1)}\varphi^{\prime},|\varphi^{\prime}|_{\mathsf{F}}=0\text{ and }w[k..]\models\varphi^{\prime}.\] As \(|\varphi^{\prime}|_{\mathsf{F}}=0\), we have \(\mathfrak{U}(\varphi^{\prime})=\{\varphi^{\prime}\}\). So, Lemma 1(1) yields \[\varphi^{\prime}\to^{w(k)}\varphi^{\prime}\to^{w(k+1)}\varphi^{\prime}\to^{w(k+2)}\cdots.\] As \(\varphi^{\prime}\in F\), it has the form \(\mathsf{G}\psi\wedge\bigwedge_{j\in J}\mathsf{GF}\psi^{\prime}_{j}\). In particular, this means that \(w[k..]\models\bigwedge_{j\in J}\mathsf{GF}\psi^{\prime}_{j}\). Recall that \(T_{\varphi^{\prime},j}=\{(\varphi^{\prime},A,\varphi^{\prime})\in\to:\mathrm{prop}(\psi^{\prime}_{j})\subseteq A\}\). So, for each \(j\in J\), the set \(T_{\varphi^{\prime},j}\) is visited infinitely often.
\(\Leftarrow\)) By \(w\in L(\mathcal{A}_{\varphi})\), there exist \(q\in F\) and \(k\in\mathbb{N}\) such that

* \(q\) is of the form \(\mathsf{G}\psi\wedge\bigwedge_{j\in J}\mathsf{GF}\psi^{\prime}_{j}\),
* \(q_{0}\to^{w(0)\cdots w(k-1)}q\), and
* some infinite path \(q\to^{w[k..]}\) visits, for each \(j\in J\), the set \(T_{q,j}\) infinitely often.

Recall that \(T_{q,j}=\{(q,A,q)\in\to:\mathrm{prop}(\psi^{\prime}_{j})\subseteq A\}\). Since \(q\to^{w[k..]}\) visits each \(T_{q,j}\) infinitely often, we have \(w[k..]\models\bigwedge_{j\in J}\mathsf{GF}\psi^{\prime}_{j}\). By \(\mathfrak{U}(q)=\{q\}\) and by definition of \(\to\), we have \(w[k..]\models\mathsf{G}\psi\). So, \(w[k..]\models q\). By repeated applications of Lemma 1(1), this implies \(w\models q_{0}=\mathrm{flat}(\varphi)\equiv\varphi\).

### _From almost acyclic automata to linear LTL_

In this subsection, we show that almost acyclic automata are equivalent to finite sets of so-called linear LTL formulas, with the goal of showing that \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\) belongs to NP in the forthcoming Section V-A. For every \(A\subseteq AP\), let \(\mathord{\uparrow}A\coloneqq\{A^{\prime}\subseteq AP:A^{\prime}\supseteq A\}\). We say that \(X\subseteq 2^{AP}\) is _simple_ if \(X=\mathord{\uparrow}A\) for some \(A\subseteq AP\).

**Example 6**: _Consider the bottom automaton of Figure 3. Its infinite paths are captured by these three expressions:_

* _\(\mathord{\uparrow}\emptyset^{*}\ \mathord{\uparrow}\{b\}\ \mathord{\uparrow}\emptyset^{*}\ (\mathord{\uparrow}\{c\}^{*}\ \mathord{\uparrow}\{a,c\})^{\omega}\),_
* _\(\mathord{\uparrow}\emptyset^{*}\ \mathord{\uparrow}\{b,c\}\ (\mathord{\uparrow}\{c\}^{*}\ \mathord{\uparrow}\{a,c\})^{\omega}\),_
* _\(\mathord{\uparrow}\emptyset^{*}\ \mathord{\uparrow}\{c\}\ \mathord{\uparrow}\{c\}^{*}\ \mathord{\uparrow}\{b,c\}\ (\mathord{\uparrow}\{c\}^{*}\ \mathord{\uparrow}\{a,c\})^{\omega}\). \qed_
Each block \(\mathord{\uparrow}B^{*}\,\mathord{\uparrow}B^{\prime}\) of such an expression translates into an until formula \(B\ \mathsf{U}\ (B^{\prime}\wedge\cdots)\), and the final loop translates into a conjunction of \(\mathsf{G}\) and \(\mathsf{GF}\) formulas. Accordingly, we say that an LTL formula is _linear_ if it is generated by the grammar \[\psi::=A\wedge\psi\mid B\ \mathsf{U}\ (B^{\prime}\wedge\psi)\mid(\mathsf{G}C_{0})\wedge\bigwedge_{i=1}^{n}\mathsf{GF}C_{i},\] where \(A,B,B^{\prime},C_{0},\ldots,C_{n}\subseteq AP\) are identified with pseudo-atomic formulas and where \(\mathord{\uparrow}B\supseteq\mathord{\uparrow}B^{\prime}\). We say that \(\psi\) is _semi-bounded_ if all zones occurring in \(C_{1},\ldots,C_{n}\) are bounded. As \(\mathcal{A}_{\varphi}\) is almost acyclic and of linear width, every formula \(\varphi\in\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\) amounts to a finite disjunction of semi-bounded linear LTL formulas, one per linear path scheme (LPS) of \(\mathcal{A}_{\varphi}\).

## IV P-complete fragments

**Theorem 1**: _The model-checking problem for semi-bounded linear LTL formulas is decidable in polynomial time._

The proof of Theorem 1 is deferred to Section IV-E. We first use it to derive polynomial-time upper bounds.

**Theorem 2**: _The model-checking problem is in P for \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\neg\})\), \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\vee\})\) and \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{G},\wedge\})\)._

Proof:: Consider first a formula from \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\neg\})\). Push all negations inward and collapse stacks of temporal operators using the equivalences \(\mathsf{FF}\varphi\equiv\mathsf{F}\varphi\), \(\mathsf{GG}\varphi\equiv\mathsf{G}\varphi\), \(\mathsf{FGF}\varphi\equiv\)
\(\mathsf{GF}\varphi\) and \(\mathsf{GFG}\varphi\equiv\mathsf{FG}\varphi\). If the resulting formula is negation-free, then it is of the form \(\mathsf{F}Z\), \(\mathsf{G}Z\), \(\mathsf{GF}Z\) or \(\mathsf{FG}Z\). These are all linear as \(\mathsf{F}Z\equiv\mathbb{R}^{d}\ \mathsf{U}\ Z\) and \(\mathsf{FG}Z\equiv\mathbb{R}^{d}\ \mathsf{U}\ \mathsf{G}Z\). Thus, we are done by Theorem 1. If the resulting formula has a negation, then there are four forms to consider: (1) \(\mathsf{F}\neg Z\), (2) \(\mathsf{G}\neg Z\), (3) \(\mathsf{GF}\neg Z\) and (4) \(\mathsf{FG}\neg Z\). These are easy to handle:

* We have \(\mathbf{x}\models_{M}\mathsf{F}\neg Z\) iff \(\mathbf{x}\models_{M}\mathsf{GF}\neg Z\) iff \(\mathbf{x}\models_{M}\mathsf{FG}\neg Z\) iff \(\mathbf{x}\not\in Z\) or there is a mode \(\mathbf{m}\in M\) such that \(\mathbf{m}\neq\mathbf{0}\).
* We have \(\mathbf{x}\models_{M}\mathsf{G}\neg Z\) iff \(\mathbf{x}\not\in Z\) and there exists a mode \(\mathbf{m}\in M\) such that for all \(\alpha\in\mathbb{R}_{>0}\): \(\mathbf{x}+\alpha\mathbf{m}\not\in Z\).

Since \(\mathsf{FF}\varphi\equiv\mathsf{F}\varphi\) and \(\mathsf{F}(\varphi\lor\psi)\equiv(\mathsf{F}\varphi)\vee(\mathsf{F}\psi)\), any formula from LTL(\(\{\mathsf{F},\vee\}\)) can be turned into a disjunction of atomic propositions and formulas from LTL(\(\{\mathsf{F}\}\)). So, it suffices to check each disjunct in polynomial time. As \(\mathsf{GG}\varphi\equiv\mathsf{G}\varphi\) and \((\mathsf{G}\varphi)\wedge(\mathsf{G}\psi)\equiv\mathsf{G}(\varphi\wedge\psi)\), we can transform formulas from LTL(\(\{\mathsf{G},\wedge\}\)) into the form \(\psi\wedge\mathsf{G}\psi^{\prime}\), where \(\psi,\psi^{\prime}\) are pseudo-atomic. The latter is linear, and hence can be model-checked in polynomial time.

**Theorem 3**: _The model-checking problem is P-hard for both LTL\({}_{\mathsf{B}}\)(\(\{\mathsf{F}\}\)) and LTL\({}_{\mathsf{B}}\)(\(\{\mathsf{G}\}\))._

Proof:: It follows by simple reductions from feasibility of linear programs and the monotone circuit-value problem.

### _A polynomial-time first-order logic_

We recall a first-order logic over the reals introduced in [12]. It allows for conjunctions of _convex semi-linear Horn formulas_, _i.e._ formulas of this form: \[\sum_{i=1}^{d}\boldsymbol{a}(i)\cdot\boldsymbol{x}(i)\sim c\ \vee\ \bigvee_{i\in I}\bigwedge_{j\in J_{i}}\boldsymbol{x}(j)>0,\] where \(\boldsymbol{a}\in\mathbb{Z}^{d}\), \(c\in\mathbb{Z}\), \(\sim\in\{<,\leq,=,\geq,>\}\), and \(I\) and each \(J_{i}\) are finite sets of indices.
The problem of determining, given a formula \(\varphi\) from this logic, whether there exists \(\mathbf{x}\in\mathbb{R}_{\geq 0}^{d}\) such that \(\varphi(\mathbf{x})\) holds, can be solved in polynomial time [12]. This result extends easily to solutions where \(\mathbf{x}(j)\in\mathbb{R}\) is allowed, provided that \(\mathbf{x}(j)\) is never used in disjuncts. Indeed, it suffices to introduce two variables \(y,z\in\mathbb{R}_{\geq 0}\) and replace each occurrence of \(\mathbf{x}(j)\) with \(y-z\). Given \(\mathbf{x}(i),\mathbf{x}(j)\in\mathbb{R}_{\geq 0}\), we will use \(\mathbf{x}(i)>0\rightarrow\mathbf{x}(j)>0\) as short for \(\mathbf{x}(i)=0\vee\mathbf{x}(j)>0\).

### _Expressing \(\rightarrow^{*}_{Z}\) in first-order logic_

We first seek to build a formula \(\varphi_{Z}\) from the aforementioned logic such that \(\varphi_{Z}(\mathbf{x},\mathbf{\lambda},\mathbf{y})\) holds iff there is a finite schedule \(\pi\) with \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\) and \(\mathbf{\pi}=\mathbf{\lambda}\). Let us fix a zone \(Z\) and modes \(M=\{\mathbf{m}_{1},\dots,\mathbf{m}_{n}\}\). We take inspiration from the characterization of continuous Petri net reachability of [10, Thm. 20], which is equivalent to finding a Parikh image that (1) admits the right effect, (2) is forward fireable, and (3) is backward fireable. The forthcoming Proposition 13 similarly characterizes \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\).

**Proposition 10**: _Let \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\). There exist \(\pi_{x}\) and \(\pi_{y}\) such that \(\mathbf{x}\rightarrow^{\pi_{x}}_{Z}\), \(\rightarrow^{\pi_{y}}_{Z}\mathbf{y}\), \(\mathrm{supp}(\pi_{x})=\mathrm{supp}(\pi_{y})=\mathrm{supp}(\pi)\) and \(|\pi_{x}|=|\pi_{y}|=|\mathrm{supp}(\pi)|\)._

**Lemma 2**: _Let \(\rho\) and \(\rho^{\prime}\) be schedules and let \((\alpha,\mathbf{m})\) be a pair. This holds:_

* _If_ \(\mathbf{x}\rightarrow^{\rho(\alpha,\mathbf{m})\rho^{\prime}}_{Z}\)_, then_ \(\mathbf{x}\rightarrow^{\rho(\frac{\alpha}{2},\mathbf{m})\frac{1}{2}\rho^{\prime}(\frac{\alpha}{2},\mathbf{m})}_{Z}\)_,_
* _If_ \(\rightarrow^{\rho^{\prime}(\alpha,\mathbf{m})\rho}_{Z}\mathbf{y}\)_, then_ \(\rightarrow^{(\frac{\alpha}{2},\mathbf{m})\frac{1}{2}\rho^{\prime}(\frac{\alpha}{2},\mathbf{m})\rho}_{Z}\mathbf{y}\)_._

**Proposition 11**: _Let \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\). There exist \(\beta\in\mathbb{N}_{\geq 1}\), \(\mathbf{x}\rightarrow^{\pi^{\prime}}_{Z}\mathbf{x}_{Z}\) and \(\mathbf{y}_{Z}\rightarrow^{\pi^{\prime\prime}}_{Z}\mathbf{y}\) such that \(|\pi|=|\pi^{\prime}|=|\pi^{\prime\prime}|\), \(\mathrm{supp}(\pi)=\mathrm{supp}(\pi^{\prime})=\mathrm{supp}(\pi^{\prime\prime})\), and, for every \(\mathbf{m}\in\mathrm{supp}(\pi)\), it is the case that \(\mathbf{x}_{Z}\rightarrow^{(1/\beta,\mathbf{m})}_{Z}\) and \(\rightarrow^{(1/\beta,\mathbf{m})}_{Z}\mathbf{y}_{Z}\)._

**Proposition 12**: _Let \(\mathbf{x}\rightarrow^{\pi}\mathbf{y}\), \(k\coloneqq|\pi|\) and \(\beta\in\mathbb{N}_{\geq 1}\) be such that \(\mathbf{x}\rightarrow^{(1/\beta)\pi(i)}_{Z}\) and \(\rightarrow^{(1/\beta)\pi(i)}_{Z}\mathbf{y}\) hold for all \(i\in[1..k]\).
It is the case that \(\mathbf{x}\rightarrow^{\pi^{\prime}}_{Z}\mathbf{y}\), where \(\pi^{\prime}\coloneqq((1/\beta)\pi)^{\beta k}\)._

**Proposition 13**: _It is the case that \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\) iff there exist \(\pi^{\prime},\pi_{\text{fwd}},\pi_{\text{bwd}}\) with_

* \(\text{supp}(\pi^{\prime})=\text{supp}(\pi_{\text{fwd}})=\text{supp}(\pi_{\text{bwd}})=\text{supp}(\pi)\)_,_
* \(\mathbf{x}\rightarrow^{\pi^{\prime}}\mathbf{y}\)_,_ \(\mathbf{x}\rightarrow^{\pi_{\text{fwd}}}_{Z}\text{ and }\rightarrow^{\pi_{\text{bwd}}}_{Z}\mathbf{y}\)_._

Proof:: \(\Rightarrow\)) It suffices to take \(\pi^{\prime}=\pi_{\text{fwd}}=\pi_{\text{bwd}}\coloneqq\pi\).

\(\Leftarrow\)) Let \(\beta\) and the following be given by Proposition 11: \[\mathbf{x}\rightarrow^{\pi^{\prime}_{\text{fwd}}}_{Z}\mathbf{x}_{Z}\text{ and }\mathbf{y}_{Z}\rightarrow^{\pi^{\prime}_{\text{bwd}}}_{Z}\mathbf{y}.\] Let \(\gamma\in\mathbb{N}_{\geq 1}\) be sufficiently large so that \(\mathbf{\pi^{\prime}}\geq\frac{1}{\gamma}(\mathbf{\pi^{\prime}_{\text{fwd}}}+\mathbf{\pi^{\prime}_{\text{bwd}}})\). Such a \(\gamma\) exists as \(\text{supp}(\pi^{\prime}_{\text{fwd}})=\text{supp}(\pi^{\prime}_{\text{bwd}})=\text{supp}(\pi^{\prime})\). Let \(\pi^{\prime\prime}\) be any schedule with \(\mathbf{\pi^{\prime\prime}}=\mathbf{\pi^{\prime}}-\frac{1}{\gamma}(\mathbf{\pi^{\prime}_{\text{fwd}}}+\mathbf{\pi^{\prime}_{\text{bwd}}})\). We have \[\mathbf{x}\rightarrow^{\frac{1}{\gamma}\pi^{\prime}_{\text{fwd}}}_{Z}\mathbf{x}^{\prime}\rightarrow^{\pi^{\prime\prime}}\mathbf{y}^{\prime}\rightarrow^{\frac{1}{\gamma}\pi^{\prime}_{\text{bwd}}}_{Z}\mathbf{y},\] and we conclude by applying Proposition 12.

**Proposition 14**: _Formula \(\varphi_{Z}(\mathbf{x},\mathbf{\lambda},\mathbf{y})\coloneqq\psi(\mathbf{x},\mathbf{\lambda},\mathbf{y})\wedge\psi_{\text{fwd}}(\mathbf{x},\mathbf{\lambda})\wedge\psi_{\text{bwd}}(\mathbf{\lambda},\mathbf{y})\) holds iff \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\) for some finite schedule \(\pi\) with \(\mathbf{\pi}=\mathbf{\lambda}\)._

\(\Rightarrow\)) Suppose that \(\psi(\mathbf{x},\mathbf{\lambda},\mathbf{y})\wedge\psi_{\text{fwd}}(\mathbf{x},\mathbf{\lambda})\wedge\psi_{\text{bwd}}(\mathbf{\lambda},\mathbf{y})\) holds. Let \(\pi^{\prime}\coloneqq\prod_{i=1}^{n}\mathbf{\lambda}(i)\mathbf{m}_{i}\), \[\pi_{\text{fwd}}\coloneqq\prod_{i=1}^{n}\prod_{j=1}^{n}\lambda_{i,j}^{\text{fwd}}\mathbf{m}_{j}\text{ and }\pi_{\text{bwd}}\coloneqq\prod_{i=1}^{n}\prod_{j=1}^{n}\lambda_{i,j}^{\text{bwd}}\mathbf{m}_{j},\] with the convention that \(0\cdot\mathbf{m}_{j}\) stands for the empty schedule. We clearly have \(\mathbf{x}\rightarrow^{\pi^{\prime}}\mathbf{y}\). By the above observation on \(\theta_{Z}\), we further have \(\mathbf{x}\rightarrow^{\pi_{\text{fwd}}}_{Z}\) and \(\rightarrow^{\pi_{\text{bwd}}}_{Z}\mathbf{y}\). Moreover, \(\text{supp}(\pi^{\prime})=\text{supp}(\pi_{\text{fwd}})=\text{supp}(\pi_{\text{bwd}})=\text{supp}(\mathbf{\lambda})\). By Proposition 13, we obtain \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\) for some \(\pi\) with \(\mathbf{\pi}=\mathbf{\lambda}\).

\(\Leftarrow\)) Let \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{y}\) and \(\mathbf{\lambda}\coloneqq\mathbf{\pi}\). By Proposition 10, there are \(\pi_{x}\) and \(\pi_{y}\) such that \(\mathbf{x}\rightarrow^{\pi_{x}}_{Z}\), \(\rightarrow^{\pi_{y}}_{Z}\mathbf{y}\), \(\text{supp}(\pi_{x})=\text{supp}(\pi_{y})=\text{supp}(\pi)\) and \(|\pi_{x}|=|\pi_{y}|=|\text{supp}(\pi)|\). Thus, we can use \(\pi\), \(\pi_{x}\) and \(\pi_{y}\) to satisfy \(\psi(\mathbf{x},\mathbf{\lambda},\mathbf{y})\), \(\psi_{\text{fwd}}(\mathbf{x},\mathbf{\lambda})\) and \(\psi_{\text{bwd}}(\mathbf{\lambda},\mathbf{y})\).
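Condition \(\mathbf{x}\rightarrow^{\pi^{\prime}}\mathbf{y}\) of Proposition 13 alone is a linear feasibility question over the Parikh image \(\mathbf{\lambda}\). Below is a minimal sketch of just this ingredient (the support constraints and the fireability formulas \(\psi_{\text{fwd}}\), \(\psi_{\text{bwd}}\) are omitted); the use of `scipy` is an assumption of the sketch, not of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def effect_reachable(M, x, y):
    """Is there lambda >= 0 with x + M @ lambda = y, i.e. a finite
    schedule with effect y - x?  M has one mode per column."""
    n = M.shape[1]
    res = linprog(c=np.zeros(n), A_eq=M,
                  b_eq=np.asarray(y, float) - np.asarray(x, float))
    return res.success, (res.x if res.success else None)

# The modes of Example 1, as columns.
M = np.array([[0.0, 1.0, 1.0, -1.0],
              [1.0, 0.0, 1.0,  1.0]])
ok, lam = effect_reachable(M, (1, 1), (3, 2))
print(ok, lam)   # True, with some lam satisfying M @ lam = (2, 1)
```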
### _Expressing \(\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\) in first-order logic_

In this subsection, we build a formula \(\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}\) from the logic of Section IV-A such that \(\mathbf{x}\models_{M}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\) iff \(\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}(\mathbf{x})\) holds. Let us fix an MMS \(M\).

**Proposition 15**: _Let \(Z\) be a zone, let \(\pi\) be a schedule, let \(\mathbf{x},\mathbf{x}^{\prime},\mathbf{y}\in Z\) and let \(\beta\in(0,1]\). Let \(\mathbf{z}\coloneqq\beta\mathbf{x}+(1-\beta)\mathbf{y}\) and \(\mathbf{z}^{\prime}\coloneqq\beta\mathbf{x}^{\prime}+(1-\beta)\mathbf{y}\). If \(\mathbf{x}\rightarrow^{\pi}_{Z}\mathbf{x}^{\prime}\) holds, then \(\mathbf{z}\rightarrow^{\beta\pi}_{Z}\mathbf{z}^{\prime}\)._

**Proposition 16**: _Let \(X,Y,Z\) be zones where at least one of the three zones is bounded. Let \(\mathbf{z}\models_{M}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\). There exist \(\mathbf{x}_{f}\in X\cap Z\), \(\mathbf{y}_{f}\in Y\cap Z\) and finite schedules \(\pi,\pi^{\prime}\) such that \(\mathbf{z}\rightarrow^{*}\mathbf{x}_{f}\rightarrow^{\pi}\mathbf{y}_{f}\rightarrow^{\pi^{\prime}}\mathbf{x}_{f}\) and \(\|\mathbf{\pi}+\mathbf{\pi}^{\prime}\|\geq 1\)._

Proof:: Let \(X^{\prime}\coloneqq X\cap Z\) and \(Y^{\prime}\coloneqq Y\cap Z\). By assumption, there exist \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots\in X^{\prime}\) and \(\mathbf{y}_{0},\mathbf{y}_{1},\ldots\in Y^{\prime}\) such that \[\mathbf{z}\rightarrow^{*}_{Z}\mathbf{x}_{0}\rightarrow^{\pi_{0}}_{Z}\mathbf{y}_{0}\rightarrow^{\pi^{\prime}_{0}}_{Z}\mathbf{x}_{1}\rightarrow^{\pi_{1}}_{Z}\mathbf{y}_{1}\rightarrow^{\pi_{1}^{\prime}}_{Z}\cdots,\] and \(\|\mathbf{\pi}_{i}+\mathbf{\pi}_{i}^{\prime}\|\geq 1\) for all \(i\in\mathbb{N}\). Note that the latter follows from non-Zenoness. Let \(\mathbf{A}_{1}\mathbf{\ell}\leq\mathbf{b}_{1}\) and \(\mathbf{A}_{2}\mathbf{\ell}\leq\mathbf{b}_{2}\) be the systems of inequalities that respectively represent zones \(X^{\prime}\) and \(Y^{\prime}\). We define \(\mathbf{M}\) as the matrix such that each column is a mode from \(M\). Let \(\mathcal{S}\) denote the following system: \[\exists\mathbf{u}_{1},\mathbf{u}_{2},\mathbf{u}_{3}\geq\mathbf{0}:\] \[\begin{bmatrix}\mathbf{\mathrm{A}}_{1}\mathbf{\mathrm{M}}&\mathbf{0}&\mathbf{0}\\ \mathbf{\mathrm{A}}_{2}\mathbf{\mathrm{M}}&\mathbf{\mathrm{A}}_{2}\mathbf{\mathrm{M}}&\mathbf{0}\\ \mathbf{0}&\mathbf{\mathrm{M}}&\mathbf{\mathrm{M}}\\ \mathbf{0}&-\mathbf{\mathrm{M}}&-\mathbf{\mathrm{M}}\\ \mathbf{0}^{T}&-\mathbf{\mathrm{1}}^{T}&-\mathbf{\mathrm{1}}^{T}\end{bmatrix}\begin{bmatrix}\mathbf{u}_{1}\\ \mathbf{u}_{2}\\ \mathbf{u}_{3}\end{bmatrix}\leq\begin{bmatrix}\mathbf{b}_{1}-\mathbf{\mathrm{A}}_{1}\mathbf{z}\\ \mathbf{b}_{2}-\mathbf{\mathrm{A}}_{2}\mathbf{z}\\ \mathbf{0}\\ \mathbf{0}\\ -1\end{bmatrix}.\] Observe that \(\mathcal{S}\) is equivalent to the existence of \(\mathbf{x}_{f}\in X^{\prime},\mathbf{y}_{f}\in Y^{\prime}\) and \(\pi,\pi^{\prime}\) such that \[\mathbf{z}\rightarrow^{*}\mathbf{x}_{f}\rightarrow^{\pi}\mathbf{y}_{f}\rightarrow^{\pi^{\prime}}\mathbf{x}_{f}\text{ and }\|\mathbf{\pi}+\mathbf{\pi}^{\prime}\|\geq 1.\] For the sake of contradiction, suppose that \(\mathcal{S}\) has no solution.
By Farkas' lemma, the following system \(\mathcal{S}^{\prime}\) has a solution: \[\exists\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{R}^{d}_{\geq 0},\mathbf{v}_{3}\in\mathbb{R}^{d},v_{4}\in\mathbb{R}_{\geq 0}:\] \[\begin{bmatrix}\mathbf{M}^{T}\mathbf{A}_{1}^{T}&\mathbf{M}^{T}\mathbf{A}_{2}^{T}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{M}^{T}\mathbf{A}_{2}^{T}&\mathbf{M}^{T}&-\mathbf{\mathrm{1}}\\ \mathbf{0}&\mathbf{0}&\mathbf{M}^{T}&-\mathbf{\mathrm{1}}\end{bmatrix}\begin{bmatrix}\mathbf{v}_{1}\\ \mathbf{v}_{2}\\ \mathbf{v}_{3}\\ v_{4}\end{bmatrix}\geq\mathbf{0},\] \[\begin{bmatrix}(\mathbf{b}_{1}-\mathbf{\mathrm{A}}_{1}\mathbf{z})^{T}&(\mathbf{b}_{2}-\mathbf{\mathrm{A}}_{2}\mathbf{z})^{T}&-1\end{bmatrix}\begin{bmatrix}\mathbf{v}_{1}\\ \mathbf{v}_{2}\\ v_{4}\end{bmatrix}<0.\] Using the above, we will construct linear functions \(g\) and \(h\) such that \(\lim_{i\rightarrow\infty}g(\mathbf{x}_{i})=\lim_{i\rightarrow\infty}h(\mathbf{y}_{i})=\infty\). Since either \(X^{\prime}\) or \(Y^{\prime}\) is bounded, this yields a contradiction. We make a case distinction on the value of \(v_{4}\).

_Case \(v_{4}>0\)._ We have \(\mathbf{v}_{3}^{T}\mathbf{M}\geq\mathbf{1}^{T}v_{4}>0\). Let \(g(\mathbf{x}_{i})\coloneqq\mathbf{v}_{3}^{T}(\mathbf{x}_{i}-\mathbf{x}_{0})\) and \(h(\mathbf{y}_{i})\coloneqq\mathbf{v}_{3}^{T}(\mathbf{y}_{i}-\mathbf{y}_{0})\). For every \(i\geq 1\), \[g(\mathbf{x}_{i}) =\mathbf{v}_{3}^{T}(\mathbf{x}_{i}-\mathbf{x}_{0})=\mathbf{v}_{3}^{T}(\mathbf{x}_{i-1}-\mathbf{x}_{0})+\mathbf{v}_{3}^{T}(\mathbf{x}_{i}-\mathbf{x}_{i-1})=g(\mathbf{x}_{i-1})+\mathbf{v}_{3}^{T}(\mathbf{x}_{i}-\mathbf{x}_{i-1})\geq g(\mathbf{x}_{i-1})+v_{4},\] since \(\mathbf{x}_{i}-\mathbf{x}_{i-1}=\mathbf{M}(\mathbf{\pi}_{i-1}+\mathbf{\pi}_{i-1}^{\prime})\) and \(\|\mathbf{\pi}_{i-1}+\mathbf{\pi}_{i-1}^{\prime}\|\geq 1\).

Similarly, for every \(i\geq 1\), we have \[f(\mathbf{x}_{0},\mathbf{x}_{i})=f(\mathbf{x}_{0},\mathbf{y}_{i-1})+\mathbf{v}_{3}^{T}(\mathbf{x}_{i}-\mathbf{y}_{i-1})\geq f(\mathbf{x}_{0},\mathbf{y}_{i-1}),\] which implies \(g(\mathbf{x}_{i})\geq h(\mathbf{y}_{i-1})\). So, \(\lim_{i\to\infty}g(\mathbf{x}_{i})=\infty\).

We now seek to show the following proposition.

**Proposition 17**: _Let \(X\), \(Y\) and \(Z\) be zones. Let \(\mathbf{z},\mathbf{z}^{\prime}\in Z\), \(\mathbf{x}_{0},\mathbf{x}^{\prime},\mathbf{x}_{f}\in X\cap Z\) and \(\mathbf{y}_{0},\mathbf{y}_{f}\in Y\cap Z\) be such that_

* \(\mathbf{z}\to_{Z}^{*}\mathbf{z}^{\prime}\to_{Z}^{\pi^{\prime\prime}}\mathbf{x}_{0}\to_{Z}^{\pi}\mathbf{y}_{0}\to_{Z}^{\pi^{\prime}}\mathbf{x}^{\prime}\)_,_
* \(\mathbf{x}^{\prime}\to^{\rho}\mathbf{x}_{f}\to^{\rho^{\prime}}\mathbf{y}_{f}\to^{\rho^{\prime\prime}}\mathbf{x}_{f}\)_,_
* \(\text{supp}(\rho)=\text{supp}(\pi)=\text{supp}(\pi^{\prime})=\text{supp}(\pi^{\prime\prime})\)_,_
* \(\text{supp}(\rho^{\prime})\cup\text{supp}(\rho^{\prime\prime})\subseteq\text{supp}(\rho)\) _and_ \(\|\mathbf{\rho}^{\prime}+\mathbf{\rho}^{\prime\prime}\|\geq 1\)_._

It is the case that \(\mathbf{z}\models_{M}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\).

To prove the above proposition, we build a schedule from the given schedules. By assumption, there exists a sufficiently small \(\epsilon\in\mathbb{R}_{>0}\) such that \(\mathbf{\rho}\geq\epsilon\cdot(\mathbf{\pi}+\mathbf{\pi}^{\prime})\). Let \(\lambda\coloneqq 1-(1/(1+\epsilon))\).
For every \(n\in\mathbb{N}_{\geq 1}\), let \[\mathbf{x}_{n}\coloneqq\mathbf{y}_{n-1}+\lambda^{n-1}\left(\mathbf{\Delta}_{ \pi^{\prime}}+\frac{\mathbf{\Delta}_{\rho}-\epsilon\mathbf{\Delta}_{\pi\pi^{\prime}}} {1+\epsilon}\right)\\ +(1-\lambda^{n-1})\mathbf{\Delta}_{\rho^{\prime\prime}},\] \[\mathbf{y}_{n}\coloneqq\mathbf{x}_{n}+\lambda^{n}\mathbf{\Delta}_{\pi}+(1- \lambda^{n})\mathbf{\Delta}_{\rho^{\prime}}.\] **Proposition 18**: _For every \(n\in\mathbb{N}\), it is the case that \(\mathbf{x}_{n}=\lambda^{n}\mathbf{x}_{0}+(1-\lambda^{n})\mathbf{x}_{f}\) and \(\mathbf{y}_{n}=\lambda^{n}\mathbf{y}_{0}+(1-\lambda^{n})\mathbf{y}_{f}\)._ The following proposition proves Proposition 17. **Proposition 19**: _It is the case that (1) \(\mathbf{x}_{n}\to_{Z}^{*}\mathbf{y}_{n}\) and (2) \(\mathbf{y}_{n}\to_{Z}^{*}\mathbf{x}_{n+1}\) for all \(n\in\mathbb{N}\)._ (1) Recall that \(\mathbf{x}_{0}\to_{Z}^{\pi}\mathbf{y}_{0}\) and \(\mathbf{x}_{f}\in Z\). Therefore, by Propositions 15 and 18, we have \[\mathbf{x}_{n}=(\lambda^{n}\mathbf{x}_{0}+(1-\lambda^{n})\mathbf{x}_{f})\\ \to_{Z}^{\lambda^{n}\pi}(\lambda^{n}\mathbf{y}_{0}+(1-\lambda^{n})\mathbf{ x}_{f}). \tag{5}\] Similarly, by Propositions 15 and 18, we have \[(\lambda^{n}\mathbf{x}_{0}+(1-\lambda^{n})\mathbf{y}_{f})\\ \to_{Z}^{\lambda^{n}\pi}(\lambda^{n}\mathbf{y}_{0}+(1-\lambda^{n}) \mathbf{y}_{f})=\mathbf{y}_{n}. \tag{6}\] By definition, we have \[\mathbf{y}_{n}=\mathbf{x}_{n}+\lambda^{n}\mathbf{\Delta}_{\pi}+(1-\lambda^{n})\mathbf{\Delta}_ {\rho^{\prime}}. \tag{7}\] Altogether, (5)-(7) yield \[\mathbf{x}_{n}\to^{\lambda^{n}\pi\,(1-\lambda^{n})\rho^{\prime}}\mathbf{y}_{n},\,\mathbf{ x}_{n}\to_{Z}^{\lambda^{n}\pi}\text{ and }\to_{Z}^{\lambda^{n}\pi}\mathbf{y}_{n}.\] As \(\text{supp}(\rho^{\prime})\subseteq\text{supp}(\pi)\), we have \(\text{supp}(\lambda^{n}\pi)=\text{supp}(\lambda^{n}\pi(1-\lambda^{n})\rho^{ \prime})\). So, by Proposition 13, we conclude that \(\mathbf{x}_{n}\to_{Z}^{*}\mathbf{y}_{n}\). (2) By Propositions 15 and 18, we have \[\mathbf{y}_{n}=\lambda^{n}\mathbf{y}_{0}+(1-\lambda^{n})\mathbf{y}_{f}\to_{Z}^{\lambda^{n} \pi^{\prime}}(\lambda^{n}\mathbf{x}^{\prime}+(1-\lambda^{n})\mathbf{y}_{f}). \tag{8}\] Similarly, by Propositions 15 and 18, we have \[(\lambda^{n+1}\mathbf{z}^{\prime}+(1-\lambda^{n+1})\mathbf{x}_{f})\to_{Z}^{\lambda^{n+ 1}\pi^{\prime\prime}}\mathbf{x}_{n+1}. \tag{9}\] Let \(\pi_{n}\) be any finite schedule such that \(\mathbf{\pi}_{n}=\mathbf{\rho}-\epsilon(\mathbf{\pi}+\mathbf{\pi}^{\prime})\). We have \[\mathbf{y}_{n}\to^{\lambda^{n}\pi^{\prime}\,(\lambda^{n}/(1+\epsilon))\pi_{n}\,(1- \lambda^{n})\rho^{\prime\prime}}\mathbf{x}_{n+1}. \tag{10}\] By (8)-(10), \(\text{supp}(\rho^{\prime\prime})\subseteq\text{supp}(\rho)=\text{supp}(\pi)= \text{supp}(\pi^{\prime})=\text{supp}(\pi^{\prime\prime})\) and Proposition 13, we obtain \(\mathbf{y}_{n}\to_{Z}^{*}\mathbf{x}_{n+1}\). We may now conclude this subsection by building a suitable first-order formula. Let \(\mathbf{x}\to_{Z}^{\mathbf{\lambda}}\mathbf{y}\) be a shorthand for formula \(\varphi_{Z}(\mathbf{x},\mathbf{\lambda},\mathbf{y})\) from Section IV-B, and let \(\mathbf{x}\to_{Z}^{\mathbf{\ast}}\mathbf{y}\) stand for \(\exists\mathbf{\lambda}\geq\mathbf{0}:\mathbf{x}\to_{Z}^{\mathbf{\lambda}}\mathbf{y}\). Let \(M=\{\mathbf{m}_{1},\dots,\mathbf{m}_{n}\}\). 
We define \(\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}(\mathbf{z})\) by \[\exists\mathbf{z}^{\prime}\in Z;\mathbf{x}_{0},\mathbf{x}^{\prime},\mathbf{x}_{f}\in X\cap Z;\mathbf{y}_{0},\mathbf{y}_{f}\in Y\cap Z;\] \[\mathbf{\pi},\mathbf{\pi}^{\prime},\mathbf{\pi}^{\prime\prime},\mathbf{\rho},\mathbf{\rho}^{\prime},\mathbf{\rho}^{\prime\prime}\succeq\mathbf{0}:\] \[\mathbf{z}\to_{Z}^{*}\mathbf{z}^{\prime}\to_{Z}^{\mathbf{\pi}^{\prime\prime}}\mathbf{x}_{0}\to_{Z}^{\mathbf{\pi}}\mathbf{y}_{0}\to_{Z}^{\mathbf{\pi}^{\prime}}\mathbf{x}^{\prime}\wedge \tag{11}\] \[\mathbf{x}^{\prime}\to^{\mathbf{\rho}}\mathbf{x}_{f}\to^{\mathbf{\rho}^{\prime}}\mathbf{y}_{f}\to^{\mathbf{\rho}^{\prime\prime}}\mathbf{x}_{f}\wedge \tag{12}\] \[\bigwedge_{j\in[1..n]}\theta_{j}\wedge\sum_{j\in[1..n]}(\mathbf{\rho}^{\prime}(j)+\mathbf{\rho}^{\prime\prime}(j))\geq 1,\] where \[\theta_{j} =(\mathbf{\pi}(j)>0\leftrightarrow\mathbf{\pi}^{\prime}(j)>0\leftrightarrow\mathbf{\pi}^{\prime\prime}(j)>0\leftrightarrow\mathbf{\rho}(j)>0)\] \[\wedge(\mathbf{\rho}^{\prime}(j)>0\to\mathbf{\rho}(j)>0)\wedge(\mathbf{\rho}^{\prime\prime}(j)>0\to\mathbf{\rho}(j)>0).\]

**Proposition 20**: _It is the case that \(\mathbf{z}\models_{M}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\) iff \(\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}(\mathbf{z})\) holds._

\(\Leftarrow\)) It follows directly from Proposition 17, since \(\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}\) transcribes the conditions of Proposition 17 in the logic, with \(\varphi_{Z}\) implementing the arrows \(\rightarrow_{Z}\).

\(\Rightarrow\)) Let \(\pi\) be a non-Zeno infinite schedule such that \(\sigma\coloneqq\operatorname{exec}(\pi,\mathbf{z})\models\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\).

### _Expressing \(\mathsf{G}Z\) in first-order logic_

Let \(\rho^{\prime}\coloneqq(1/\beta\cdot\mathrm{time}(\pi^{\prime}))\pi^{\prime}\). For all \(i\in\mathbb{N}\), let \(\mathbf{z}_{i+1}\coloneqq\mathbf{z}_{i}+\mathbf{\Delta}_{\rho^{\prime}}\). Since \(\mathrm{supp}(\pi^{\prime})\subseteq\mathrm{supp}(\pi)\), we have \(\mathbf{z}_{0}\rightarrow^{\rho^{\prime}}\mathbf{z}_{1}\). Moreover, as \(\mathbf{A}\mathbf{\Delta}_{\pi^{\prime}}=\mathbf{A}(\mathbf{z}^{\prime}-\mathbf{z})\leq\mathbf{0}\), we have \(\mathbf{A}\mathbf{\Delta}_{\rho^{\prime}}\leq\mathbf{0}\) and hence \(\mathbf{A}\mathbf{z}_{1}\leq\mathbf{A}\mathbf{z}_{0}\). By Proposition 22, we obtain \(\mathbf{z}_{0}\rightarrow^{\rho^{\prime}}_{Z}\mathbf{z}_{1}\). By the same reasoning, we conclude that \[\mathbf{z}\rightarrow^{\rho}_{Z}\mathbf{z}_{0}\rightarrow^{\rho^{\prime}}_{Z}\mathbf{z}_{1}\rightarrow^{\rho^{\prime}}_{Z}\mathbf{z}_{2}\rightarrow^{\rho^{\prime}}_{Z}\cdots.\]

\(\Rightarrow\)) Let \(\pi^{\prime\prime}\) be a non-Zeno infinite schedule such that \(\sigma\coloneqq\mathrm{exec}(\pi^{\prime\prime},\mathbf{z})\models\mathsf{G}Z\). Let \(M^{\prime}\) be the set of modes used in \(\pi^{\prime\prime}\). From \(\mathbf{z}\), we move along \(\pi^{\prime\prime}\) to some point where all modes from \(M^{\prime}\) have been used. We take \(\pi\) as such a prefix. The other constraints hold by Proposition 21.
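One ingredient of the \(\mathsf{G}Z\) case above is the search for a combination \(\rho^{\prime}\) of modes whose effect never increases any constraint of \(Z=\{\mathbf{x}:\mathbf{A}\mathbf{x}\leq\mathbf{b}\}\), so that it can be repeated forever. Below is a minimal sketch of that certificate check alone, assuming all modes are available (the paper's formula additionally constrains the prefix \(\rho\) and the supports); `scipy` is an assumption of the sketch.

```python
import numpy as np
from scipy.optimize import linprog

def recurrent_direction(A, M):
    """A unit-time combination lambda >= 0, sum(lambda) = 1, of modes
    (columns of M) with A @ (M @ lambda) <= 0, i.e. repeating it forever
    never increases any constraint of Z = {x : A x <= b}."""
    n = M.shape[1]
    res = linprog(c=np.zeros(n),
                  A_ub=A @ M, b_ub=np.zeros(A.shape[0]),
                  A_eq=np.ones((1, n)), b_eq=[1.0])
    return res.x if res.success else None

# Z = {x in R^2 : x(1) <= 1} and the modes of Example 1.
A = np.array([[1.0, 0.0]])
M = np.array([[0.0, 1.0, 1.0, -1.0],
              [1.0, 0.0, 1.0,  1.0]])
print(recurrent_direction(A, M))   # e.g. mode (0, 1) alone
```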
### _From \(\mathsf{G}Z_{0}\wedge\bigwedge_{i=1}^{n}\mathsf{GF}Z_{i}\) to \(\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\)_

**Lemma 3**: _Given a \(d\)-dimensional MMS \(M\), point \(\mathbf{x}\in\mathbb{R}^{d}\) and zones \(Z_{0},\ldots,Z_{n}\subseteq\mathbb{R}^{d}\), it is possible to construct, in polynomial time, an \(nd\)-dimensional MMS \(M^{\prime}\) and zones \(X,Y,Z\subseteq\mathbb{R}^{nd}\) such that \(\mathbf{x}\models_{M}\mathsf{G}Z_{0}\wedge\mathsf{GF}Z_{1}\wedge\cdots\wedge\mathsf{GF}Z_{n}\) iff \((\mathbf{x},\ldots,\mathbf{x})\models_{M^{\prime}}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\). Furthermore, zone \(Z\) is bounded iff zone \(Z_{0}\) is bounded, and zones \(\{X,Y\}\) are all bounded iff zones \(\{Z_{1},\ldots,Z_{n}\}\) are all bounded._

We consider each \(\mathbf{s}\in\mathbb{R}^{nd}\) as a sequence of \(n\) points from \(\mathbb{R}^{d}\), _i.e._ \(\mathbf{s}=(\mathbf{s}[1],\ldots,\mathbf{s}[n])\). Formally, for all \(\mathbf{s}\in\mathbb{R}^{nd}\) and \(i\in[1..n]\), let \(\mathbf{s}[i]\coloneqq(\mathbf{s}((i-1)\cdot d+1),\ldots,\mathbf{s}(i\cdot d))\). For each \(\mathbf{m}\in M\), let \(\mathbf{m}_{i}\in\mathbb{R}^{nd}\) be such that \(\mathbf{m}_{i}[i]=\mathbf{m}\) and \(\mathbf{m}_{i}[j]=\mathbf{0}\) for all \(j\neq i\). Let \[M^{\prime} \coloneqq\{\mathbf{m}_{i}:\mathbf{m}\in M,i\in[1..n]\},\] \[Z \coloneqq Z_{0}\times Z_{0}\times\cdots\times Z_{0},\] \[X \coloneqq Z_{1}\times Z_{2}\times\cdots\times Z_{n},\text{ and}\] \[Y \coloneqq\{\mathbf{y}\in\mathbb{R}^{nd}:\mathbf{y}[1]=\cdots=\mathbf{y}[n]\in Z_{1}\}.\] Let \(\varphi\coloneqq\mathsf{G}Z_{0}\wedge\mathsf{GF}Z_{1}\wedge\cdots\wedge\mathsf{GF}Z_{n}\) and \(\varphi^{\prime}\coloneqq\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\). It is the case that \(\mathbf{x}\models_{M}\varphi\) iff \((\mathbf{x},\ldots,\mathbf{x})\models_{M^{\prime}}\varphi^{\prime}\).

### _Model checking linear formulas_

We may now prove Theorem 1, _i.e._ show that the model-checking problem for linear LTL formulas is in P.

Proof:: Let \(M\) be a \(d\)-dimensional MMS, let \(\mathbf{x}\in\mathbb{R}^{d}\), and let \(\psi\) be a semi-bounded linear LTL formula. We recursively build a formula \(\varphi_{\psi}\) from the polynomial-time first-order logic of Section IV-A such that \(\mathbf{x}\models_{M}\psi\) iff \(\varphi_{\psi}(\mathbf{x})\). For every \(A\subseteq AP\), let \(\mathrm{zone}(A)\) denote the zone obtained by taking the intersection of the zones from \(A\).

_Case \(\psi=A\wedge\psi^{\prime}\)._ We take \(\varphi_{\psi}(\mathbf{x})\coloneqq\mathbf{x}\in\mathrm{zone}(A)\wedge\varphi_{\psi^{\prime}}(\mathbf{x})\), which can be expressed in the logic since \(\mathrm{zone}(A)\) is represented by a system of linear inequalities.

_Case \(\psi=B\)\(\mathsf{U}\)\((B^{\prime}\wedge\psi^{\prime})\)._ Note that \(\mathbf{x}\models_{M}B\)\(\mathsf{U}\)\(B^{\prime}\) almost amounts to \(\mathbf{x}\rightarrow^{*}_{\mathrm{zone}(B)}\mathbf{y}\in\mathrm{zone}(B^{\prime})\), except that, contrary to the former, the latter requires \(\mathbf{y}\) to be part of \(\mathrm{zone}(B)\). In our case, we show that \(\mathrm{zone}(B^{\prime})\subseteq\mathrm{zone}(B)\). Recall that \(\uparrow\!B\supseteq\uparrow\!B^{\prime}\) by definition of linear LTL formulas. Let \(\mathbf{z}\in\mathrm{zone}(B^{\prime})\). We have \(\chi_{AP}(\mathbf{z})\supseteq B^{\prime}\) and \(\chi_{AP}(\mathbf{z})\in\uparrow\!B^{\prime}\subseteq\uparrow\!B\).
Thus, \(\chi_{AP}(\mathbf{z})\supseteq B\) and so \(\mathbf{z}\in\mathrm{zone}(B)\). We therefore take \[\varphi_{\psi}(\mathbf{x})\coloneqq\exists\mathbf{\lambda}\geq\mathbf{0},\mathbf{y}\in\mathrm{zone}(B^{\prime}):\varphi_{\mathrm{zone}(B)}(\mathbf{x},\mathbf{\lambda},\mathbf{y})\wedge\varphi_{\psi^{\prime}}(\mathbf{y}),\] where \(\varphi_{Z}\) is the formula of Section IV-B with \(Z\coloneqq\mathrm{zone}(B)\).

_Case \(\psi=(\mathsf{G}C_{0})\wedge\bigwedge_{i=1}^{n}\mathsf{GF}C_{i}\)._ Let \(Z_{i}\coloneqq\mathrm{zone}(C_{i})\) for all \(i\in[0..n]\). If \(n=0\), then we use formula \(\varphi_{\mathsf{G}Z_{0}}\) from Section IV-D. If \(n=1\), then we artificially define \(Z_{2}\coloneqq Z_{1}\). So, assume that \(n\geq 2\). By Lemma 3, we can construct, in polynomial time, an \(nd\)-dimensional MMS \(M^{\prime}\) and zones \(X,Y,Z\subseteq\mathbb{R}^{nd}\) such that \(\mathbf{x}\models_{M}\mathsf{G}Z_{0}\wedge\mathsf{GF}Z_{1}\wedge\cdots\wedge\mathsf{GF}Z_{n}\) iff \((\mathbf{x},\ldots,\mathbf{x})\models_{M^{\prime}}\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y\). Furthermore, zones \(X\) and \(Y\) are bounded since \(\{Z_{1},\ldots,Z_{n}\}\) are all bounded. Thus, we take \(\varphi_{\psi}(\mathbf{x})\coloneqq\varphi_{\mathsf{G}Z\wedge\mathsf{GF}X\wedge\mathsf{GF}Y}((\mathbf{x},\ldots,\mathbf{x}))\) from Section IV-C for \(M^{\prime}\).

## V NP-complete fragments

In this section, we establish the NP-completeness of fragments \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\), \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\wedge,\vee\})\) and \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\wedge\})\).

### _Membership_

**Theorem 4**: _The fragment \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\) belongs to NP._

Proof:: Let \(M\) be a \(d\)-dimensional MMS, \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\varphi\in\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\). Let \(\mathrm{sch}(M)\) denote the set of non-Zeno infinite schedules of \(M\). Let \(\mathrm{lps}(\mathcal{A}_{\varphi})\) denote the (finite) set of LPS of \(\mathcal{A}_{\varphi}\) starting from \(q_{0}\).

### _NP-hardness_

**Theorem 6**: _Fragment \(\mathrm{LTL}_{\text{B}}(\{\mathsf{F},\wedge\})\) is strongly NP-hard._

We reduce from the rational variant of SUBSETSUM [21], which asks, given \(S\subseteq\mathbb{Q}\) and \(t\in\mathbb{Q}\), whether some subset \(V\subseteq S\) satisfies \(\sum_{v\in V}v=t\). Given an instance where \(S=\{s_{1},\ldots,s_{n}\}\), we give a \((4n+1)\)-dimensional MMS \(M\) and a formula \(\varphi\in\mathrm{LTL}_{\text{B}}(\{\mathsf{F},\wedge\})\) such that \(\mathbf{0}\models_{M}\varphi\) holds iff there is a solution for \((S,t)\). A simple faulty approach is as follows. For each \(s_{i}\in S\), we could associate the modes \(\mathbf{y}_{i}=(0,\ldots,0,1,0,\ldots,0,s_{i})\) and \(\mathbf{n}_{i}=(0,\ldots,0,1,0,\ldots,0,0)\), where "1" appears in dimension \(i\). The goal would be to sum the modes in order to obtain \((1,\ldots,1,t)\). However, this is too naive. For example, consider \(S=\{8,9\}\) and \(t=4\). By taking \((0.5,\mathbf{y}_{1})\left(0.5,\mathbf{n}_{1}\right)\left(1,\mathbf{n}_{2}\right)\), we obtain \((1,1,4)\) even though \(4\) cannot be obtained from \(S\). We need a mechanism to ensure that, for each \(i\), either \(\mathbf{y}_{i}\) or \(\mathbf{n}_{i}\) is used by exactly one unit.
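The failure of the naive encoding on \(S=\{8,9\}\) and \(t=4\) can be replayed numerically; here is a small sketch, not from the paper.

```python
import numpy as np

# Naive modes for S = {8, 9}: y_i adds one unit in dimension i and s_i
# in the last dimension; n_i adds the unit only.
y1, n1 = np.array([1.0, 0.0, 8.0]), np.array([1.0, 0.0, 0.0])
y2, n2 = np.array([0.0, 1.0, 9.0]), np.array([0.0, 1.0, 0.0])

# The schedule (0.5, y1)(0.5, n1)(1, n2) reaches (1, 1, 4),
# although no subset of {8, 9} sums to 4.
print(0.5 * y1 + 0.5 * n1 + 1.0 * n2)   # [1. 1. 4.]
```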
For this reason, we will introduce the additional modes \(\overline{\mathbf{y}}_{i}\) and \(\overline{\mathbf{n}}_{i}\), and zones \(Y_{i}\), \(N_{i}\) and \(C_{i}\). We will require that \(Y_{i}\), \(N_{i}\) and \(C_{i}\) are all reached. As (partially) depicted in Figure 4, the only way to do so will be to either use schedule \((1,\mathbf{y}_{i})\left(1,\overline{\mathbf{y}}_{i}\right)\) or schedule \((1,\mathbf{n}_{i})\left(1,\overline{\mathbf{n}}_{i}\right)\). Moreover, the first zone reached, among \(Y_{i}\) and \(N_{i}\), will determine whether \(s_{i}\in S\) has been used.

_Definition of \(M\) and \(\varphi\):_ Let us now proceed. We will refer to the first \(4n\) dimensions as \(c_{i,1},c_{i,2},c_{i,3},c_{i,4}\) for every \(i\in[1..n]\), and to the remaining dimension as \(c^{*}\). Intuitively, at the end of a satisfying execution, \(c^{*}\) will store the number \(t\), which was derived by summing up elements from \(S\). The other dimensions ensure that each element of \(S\) is added to \(c^{*}\) by a factor of \(1\) or \(0\), _i.e._ neither partially nor more than once. For all \(i\in[1..n]\), modes \(\{\mathbf{y}_{i},\mathbf{n}_{i},\overline{\mathbf{y}}_{i},\overline{\mathbf{n}}_{i}\}\) are defined by: \[\begin{array}{lcccc}\hline j&\mathbf{y}_{i}(j)&\mathbf{n}_{i}(j)&\overline{\mathbf{y}}_{i}(j)&\overline{\mathbf{n}}_{i}(j)\\ \hline c_{i,1}&0.5&-0.5&-1&1\\ c_{i,2}&1&1&1&1\\ c_{i,3}&1&1&0&0\\ c_{i,4}&0&0&1&1\\ c^{*}&s_{i}&0&0&0\\ \text{else}&0&0&0&0\\ \hline\end{array}\] Let \(\gamma\coloneqq\max(2,|t|,n\cdot\max(|s_{1}|,\ldots,|s_{n}|))\). For every \(i\in[1..n]\), we define zones \(Y_{i}\), \(N_{i}\) and \(C_{i}\) by the constraints: \[\begin{array}{lcccc}\hline\hline&Y_{i}&N_{i}&C_{i}\\ \hline c_{i,1}&=0.5&=-0.5&\in[-0.5,0.5]\\ c_{i,2}&\in[1,2]&\in[1,2]&=2\\ c_{i,3}&=1&=1&=1\\ c_{i,4}&\in[0,1]&\in[0,1]&=1\\ \text{else}&\in[-\gamma,\gamma]&\in[-\gamma,\gamma]&\in[-\gamma,\gamma]\\ \hline\end{array}\] Let \(T\) be the zone \(T\coloneqq\{\mathbf{x}\in C_{1}\cap\cdots\cap C_{n}:\mathbf{x}(c^{*})=t\}\). We define \(\varphi\coloneqq\mathsf{F}T\wedge\bigwedge_{i=1}^{n}(\mathsf{F}Y_{i}\wedge\mathsf{F}N_{i})\). Intuitively, the first zone that is reached among \(Y_{i}\) and \(N_{i}\) indicates whether number \(s_{i}\) is used in the solution to the SUBSET-SUM instance. Mode \(\mathbf{y}_{i}\) can be used to reach \(Y_{i}\) first, and likewise with \(\mathbf{n}_{i}\) for \(N_{i}\). Mode \(\overline{\mathbf{y}}_{i}\) can be used to go from \(Y_{i}\) to \(N_{i}\); and likewise for \(\overline{\mathbf{n}}_{i}\) for \(N_{i}\) to \(Y_{i}\). See Figure 4.

_Correctness:_ The proof appears in the full version.

## VI Undecidable fragments

In this section, we show that \(\mathrm{LTL}_{\text{B}}(\{\mathsf{U}\})\) and \(\mathrm{LTL}_{\text{B}}(\{\mathsf{G},\vee\})\) are undecidable, by reducing from the reachability problem for Petri nets with inhibitor arcs (_i.e._ zero-tests). A _Petri net with inhibitor arcs_ is a tuple \(\mathcal{N}=(P,T,\mathord{\sim},\mathbf{\Delta})\) where

* \(P\) is a finite set of elements called _places_,
* \(T\) is a disjoint finite set of elements called _transitions_,
* \(\mathord{\sim}\colon T\to\{\geq,=\}^{P}\), and
* \(\mathbf{\Delta}\colon T\to\mathbb{Z}^{P}\).

A transition \(t\) is _enabled_ in \(\mathbf{x}\in\mathbb{N}^{P}\) if \(\mathbf{x}\sim_{t}\mathbf{0}\) and \(\mathbf{x}+\mathbf{\Delta}_{t}\geq\mathbf{0}\).
If it is enabled, then its _firing_ leads to \(\mathbf{x}^{\prime}\coloneqq\mathbf{x}+\mathbf{\Delta}_{t}\), denoted \(\mathbf{x}\to^{t}\mathbf{x}^{\prime}\). We write \(\mathbf{x}\to\mathbf{x}^{\prime}\) if \(\mathbf{x}\to^{t}\mathbf{x}^{\prime}\) for some \(t\). We define \(\to^{+}\) as the transitive closure of \(\to\), and \(\to^{*}\) as the reflexive closure of \(\to^{+}\). The _reachability problem_ asks, given a Petri net with inhibitor arcs \(\mathcal{N}\), and \(\mathbf{x}_{\text{src}},\mathbf{x}_{\text{tgt}}\), whether \(\mathbf{x}_{\text{src}}\to^{+}\mathbf{x}_{\text{tgt}}\) in \(\mathcal{N}\). This problem is undecidable, _e.g._ see [22].

### _From Petri nets with inhibitor arcs to MMS_

In this subsection, we will prove the following proposition through a series of intermediate propositions:

**Proposition 24**: _Given a Petri net with inhibitor arcs \(\mathcal{N}\) and \(\mathbf{x}_{\text{src}},\mathbf{x}_{\text{tgt}}\), it is possible to compute an MMS \(M\), two points \(\mathbf{x},\mathbf{x}^{\prime}\), and a finite set of bounded zones \(AP\) such that_

1. \(\mathbf{x}_{\text{src}}\to^{+}\mathbf{x}_{\text{tgt}}\) _in_ \(\mathcal{N}\) _iff_ \(\mathbf{x}\to^{*}_{AP}\mathbf{x}^{\prime}\) _in_ \(M\)_, and_
2. _no infinite non-Zeno schedule_ \(\pi\) _satisfies_ \(\mathbf{x}\to^{\pi}_{AP}\) _in_ \(M\)_._

Let \(\mathcal{N}=(P,T,{\sim},\mathbf{\Delta})\) be a Petri net with inhibitor arcs. We define a \((|P|+3|T|)\)-dimensional MMS \(M\) together with zones \(AP\coloneqq\bigcup_{t\in T}\{A_{t},A^{\prime}_{t},B_{t},B^{\prime}_{t},C^{\prime}_{t}\}\). We associate \(|P|\) dimensions to \(P\), which we collectively denote \(\mathbf{p}\). Each transition \(t\in T\) is associated with dimensions \(\{t_{A},t_{B},t_{C}\}\). Each transition \(t\in T\) gives rise to modes \(\{\mathbf{a}_{t},\mathbf{b}_{t},\mathbf{c}_{t}\}\). Informally, these three modes are respectively used to "request the firing of \(t\)", "fire \(t\)" and "release the control on \(t\)". For every \(s\neq t\) and \(I\in\{A,B,C\}\), we have \(\mathbf{a}_{t}(s_{I})=\mathbf{b}_{t}(s_{I})=\mathbf{c}_{t}(s_{I})\coloneqq 0\).
The rest of the values are defined as follows:

[Table of the remaining entries of modes \(\mathbf{a}_{t}\), \(\mathbf{b}_{t}\) and \(\mathbf{c}_{t}\) on dimensions \(\mathbf{p}\), \(t_{A}\), \(t_{B}\), \(t_{C}\), together with the definitions of the zones in \(AP\) and the intermediate propositions, omitted.]

[...] there exist finite schedules \(\rho_{\top}\) and \(\rho_{\bot}\), respectively using only modes \(\{\mathbf{a}_{\top},\overline{\mathbf{a}_{\top}}\}\) and \(\{\mathbf{a}_{\bot},\overline{\mathbf{a}_{\bot}}\}\), such that
\[\begin{pmatrix}1\\ 0\\ 0\\ 0\\ \mathbf{0}\end{pmatrix}\rightarrow_{A_{\top}}^{\rho_{\top}}\begin{pmatrix}0\\ 0\\ 0\\ \lambda\\ \lambda\mathbf{x}\end{pmatrix}\rightarrow_{AP\setminus\{A_{\top},A_{\bot}\}}^{\pi^{\prime}}\begin{pmatrix}0\\ 1\\ 0\\ \lambda\\ \lambda\mathbf{x}^{\prime}\end{pmatrix}\rightarrow_{A_{\bot}}^{\rho_{\bot}}\begin{pmatrix}0\\ 1\\ 1\\ 0\\ \mathbf{0}\end{pmatrix}\]
in \(M^{\prime}\). The above holds iff \((1,0,0,0,\mathbf{0})\rightarrow_{AP}^{*}(0,1,1,0,\mathbf{0})\) in \(M^{\prime}\) since zones enforce this ordering. It remains to show Item 2. For the sake of contradiction, suppose there exists an infinite non-Zeno schedule \(\pi\) such that \(\mathbf{x}\rightarrow_{AP}^{\pi}\). All zones of \(AP\) enforce \(\top,\bot,\vdash,\star\in[0,1]\). Thus, we obtain a contradiction since:

* If \(\mathrm{time}_{\mathbf{a}_{\top}}(\pi)+\mathrm{time}_{\overline{\mathbf{a}_{\top}}}(\pi)=\infty\), then \(\top\) drops below \(0\);
* If \(\sum_{\mathbf{m}\in M}\mathrm{time}_{\mathbf{m}_{\vdash}}(\pi)=\infty\), then \(\vdash\) exceeds \(1\);
* If \(\mathrm{time}_{\mathbf{a}_{\bot}}(\pi)+\mathrm{time}_{\overline{\mathbf{a}_{\bot}}}(\pi)=\infty\), then \(\bot\) exceeds \(1\).

### _Undecidability_

We prove the undecidability of the fragments \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{U}\})\) and \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{G},\vee\})\) using Proposition 24.
**Lemma 5**: _Given \(\psi_{1},\dots,\psi_{n},\varphi\in\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{U}\})\), it is possible to compute a formula from \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{U}\})\) that is equivalent to formula \((\psi_{1}\vee\dots\vee\psi_{n})\)\(\mathsf{U}\)\(\varphi\)._

**Theorem 7**: \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{U}\})\) _and_ \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{G},\vee\})\) _are undecidable._

Let \(\mathcal{N}\) be a Petri net with inhibitor arcs and let \(\mathbf{x}_{\text{src}},\mathbf{x}_{\text{tgt}}\in\mathbb{N}^{P}\). Let \(M\), \(\mathbf{x}\), \(\mathbf{x}^{\prime}\) and \(AP\) be given by Proposition 24. Let \(X^{\prime}\coloneqq\{\mathbf{x}^{\prime}\}\) and \(\psi\coloneqq(\bigvee_{Z\in AP}Z)\)\(\mathsf{U}\)\(X^{\prime}\). By Lemma 5, we can compute a formula \(\varphi\in\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{U}\})\) with \(\varphi\equiv\psi\). By Proposition 24, we have \(\mathbf{x}_{\text{src}}\rightarrow^{+}\mathbf{x}_{\text{tgt}}\) in \(\mathcal{N}\) iff \(\mathbf{x}\rightarrow_{AP}^{*}\mathbf{x}^{\prime}\) in \(M\) iff \(\mathbf{x}\models_{M}\psi\) iff \(\mathbf{x}\models_{M}\varphi\). The proof for \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{G},\vee\})\) is essentially the same, but requires an extra "dummy dimension" that can be increased and decreased once (and only once) \(\mathbf{x}^{\prime}\) is reached.

## VII Conclusion

We have introduced a linear temporal logic for MMS and established the complexity of model checking for each syntactic fragment: each one is either P-complete, NP-complete or undecidable. This generalizes and unifies existing work on MMS and continuous vector addition systems/Petri nets. Future work includes fully dealing with unbounded zones; allowing for time constraints on temporal operators; and algorithmically optimizing objective functions on schedules satisfying a given LTL specification. It would also be interesting to go from theory to practice by providing a solver for linear LTL formulas, and more generally \(\mathrm{LTL}_{\mathsf{B}}(\{\mathsf{F},\mathsf{G},\wedge\})\).
2305.15196
Feature-aligned N-BEATS with Sinkhorn divergence
We propose Feature-aligned N-BEATS as a domain-generalized time series forecasting model. It is a nontrivial extension of N-BEATS with the doubly residual stacking principle (Oreshkin et al. [45]) into a representation learning framework. In particular, it revolves around marginal feature probability measures induced by the intricate composition of residual and feature-extracting operators of N-BEATS in each stack and aligns them stack-wise via an approximation of an optimal transport distance referred to as the Sinkhorn divergence. The training loss consists of an empirical risk minimization term over multiple source domains, i.e., a forecasting loss, and an alignment loss calculated with the Sinkhorn divergence, which allows the model to learn invariant features stack-wise across multiple source data sequences while retaining N-BEATS's interpretable design and forecasting power. Comprehensive experimental evaluations with ablation studies are provided and the corresponding results demonstrate the proposed model's forecasting and generalization capabilities.
Joonhun Lee, Myeongho Jeon, Myungjoo Kang, Kyunghyun Park
2023-05-24T14:32:23Z
http://arxiv.org/abs/2305.15196v3
# Feature-aligned N-BEATS with Sinkhorn divergence

###### Abstract

In this study, we propose Feature-aligned N-BEATS as a domain generalization model for univariate time series forecasting problems. The proposed model is an extension of the doubly residual stacking architecture of N-BEATS (Oreshkin et al. [34]) into a representation learning framework. The model is a new structure that involves marginal feature probability measures (i.e., pushforward measures of multiple source domains) induced by the intricate composition of residual operators of N-BEATS in each stack and aligns them stack-wise via an entropic regularized Wasserstein distance referred to as the Sinkhorn divergence (Genevay et al. [14]). The loss function consists of a typical forecasting loss for multiple source domains and an alignment loss calculated with the Sinkhorn divergence, which allows the model to learn invariant features stack-wise across multiple source data sequences while retaining N-BEATS's interpretable design. We conduct a comprehensive experimental evaluation of the proposed approach and the results demonstrate the model's forecasting and generalization capabilities in comparison with methods based on the original N-BEATS.

## 1 Introduction

Machine learning models are typically proposed under the premise that the minimization of loss on a training distribution results in enhanced performance on a testing distribution, i.e., empirical risk minimization [42]. However, when deploying those models to real-world scenarios, an unknown target distribution often deviates substantially from the training distribution, posing challenges for effective model adaptation. This predicament is closely linked to the concept of domain shift in the machine learning literature [38]. In response to this, a multitude of studies have expanded domain generalization (or adaptation) research [45; 46; 44]. In particular, classification tasks have been the predominant focus within these research areas [12; 22; 24]. Although several studies have discussed domain shift in the context of time series forecasting [19; 35; 20], this direction has received substantially less emphasis than classification tasks. On the other hand, a recent deep neural network model employing the doubly residual stacking principle (N-BEATS) [34] demonstrates remarkable forecasting capabilities and notable generalizability for time series data [35]. From these observations, we aim to propose Feature-aligned N-BEATS, a domain generalization model extending the doubly residual stacking architecture for univariate time series forecasting. To achieve this, we incorporate a specialized toolkit for learning invariant features within the doubly residual stacking architecture. In representation learning theory [5], the alignment (or regularization) of marginal feature measures (or distributions) is commonly employed for learning feature invariance. Many studies [31; 22; 28; 52] demonstrated that feature alignment is an effective means of achieving domain generalization in classification tasks. On the other hand, the presence of multiple feature extractors within the doubly residual stacking architecture makes it challenging to define feature alignment in the backbone structure. Moreover, by the nature of time series forecasting, scale and dimensionality issues arise when dealing with both extremely distinct source domains and long-term forecasting tasks.
To resolve these challenges, we devise a stack-wise alignment approach that aligns feature measures induced by the composition of residual operators and a normalization operator. This alignment process involves minimizing divergences between the feature measures on a stack-wise basis. This enables the model to learn invariant features for each stack while preserving the interpretability of the original N-BEATS (Section 4.1). In our approach, we utilize the Sinkhorn divergence [14] as the distance metric for stack-wise alignment. The inspiration for this choice comes from the adversarial framework [16], where an optimal transport distance is employed to calculate the divergence between a pushforward measure (induced by a generator) and a target measure. The Sinkhorn divergence, being an entropic regularized OT distance, has been recently employed in an adversarial framework for generating time series data [49]. Given that our feature measures are defined as pushforward measures and our objective is time series forecasting, we adopt the Sinkhorn divergence as the distance metric (Section 4.2). The training objective in our model is essentially the minimization of forecasting loss across multiple source domains, combined with the stack-wise Sinkhorn divergence between marginal feature measures (Section 4.3). Furthermore, we provide detailed descriptions of the N-BEATS architecture along with its associated properties (Lemma 4.1). From this, we derive a representation learning bound (Theorem 4.1) based on the entropic regularized Wasserstein distance, which supports the feasibility of the stack-wise alignment. In Section 5, we provide a comprehensive evaluation analysis with real-world data, which demonstrates the generalization and forecasting capabilities of Feature-aligned N-BEATS.

## 2 Related Work

Recurrent neural networks have emerged as a prominent architectural choice for sequence prediction tasks, and their utilization in forecasting is widespread [6; 39; 40; 4; 18]. Convolutional neural networks are utilized in time series forecasting for their capacity to extract local features and capture patterns invariant to their position within the sequence [8; 26]. Transformer-based methodologies employ self-attention mechanisms to capture temporal dependencies [25; 53; 48; 54; 47; 27; 50]. However, [50] argued that the permutation-invariant self-attention mechanism inherent in transformers leads to the loss of temporal information and demonstrated through experiments that a simple linear model outperforms sophisticated transformers. In contrast, multilayer perceptron-based approaches are devoid of these constraints, surpassing state-of-the-art transformer models in their empirical analyses [34; 7]. Recent advancements have introduced approaches to address robustness to domain shift for time series data. [19] proposed sampling source domains resembling the target domain and employing regularization to encourage domain-invariant representations. [20] introduced a shared attention module with a domain discriminator to achieve domain-invariant features, while private modules capture domain-specific features. Additionally, [35] integrated domain generalization into time series forecasting, addressing situations with unknown target distributions during training within the framework of meta-learning. Nonetheless, an explicit toolkit for domain generalization is not considered in [35].
## 3 Background

Notation.We define the input and output spaces as \(\mathcal{X}:=\mathbb{R}^{\alpha}\) and \(\mathcal{Y}:=\mathbb{R}^{\beta}\), where \(\alpha\) and \(\beta\) represent the lookback and forecast horizons, respectively. The latent space is denoted as \(\mathcal{Z}:=\mathbb{R}^{\gamma}\), with \(\gamma\) representing the feature dimension. We also consider a subspace of the latent space as \(\widetilde{\mathcal{Z}}\subset\mathcal{Z}\). All the aforementioned spaces are equipped with the Euclidean norm \(\|\cdot\|\). We define \(\mathcal{P}:=\mathcal{P}(\mathcal{X}\times\mathcal{Y})\) as the set of all Borel joint probability measures on \(\mathcal{X}\times\mathcal{Y}\). For any \(\mathbb{P}\in\mathcal{P}\), \(\mathbb{P}_{\mathcal{X}}\) and \(\mathbb{P}_{\mathcal{Y}}\) represent its marginal probability measures on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. Furthermore, \(\mathcal{P}(\mathcal{X})\) and \(\mathcal{P}(\widetilde{\mathcal{Z}})\) denote the sets of all Borel probability measures on \(\mathcal{X}\) and \(\widetilde{\mathcal{Z}}\), respectively.

### Domain generalization in time series forecasting

We consider a domain generalization task for univariate time series forecasting, in which there exist multiple source (or observable) domains \(\{\mathcal{D}^{k}\}_{k=1}^{K}\) with \(K\geq 2\) and a target (or unseen) domain \(\mathcal{D}^{T}\). Each source domain \(\mathcal{D}^{k}\) is associated with a probability measure \(\mathbb{P}^{k}\in\mathcal{P}\) and the same holds for the target domain \(\mathcal{D}^{T}\) with \(\mathbb{P}^{T}\in\mathcal{P}\). The time series sequences within each domain are sampled from their respective joint distributions. Then, the objective is to derive a prediction model \(\mathfrak{F}:\mathcal{X}\rightarrow\mathcal{Y}\) such that \(\mathfrak{F}(\mathbf{s}_{t-\alpha+1},\cdots,\mathbf{s}_{t})\approx(\mathbf{s}_{t+1},\cdots,\mathbf{s}_{t+\beta})\) for any \(\mathbf{s}=(\mathbf{s}_{t-\alpha+1},\cdots,\mathbf{s}_{t})\times(\mathbf{s}_{t+1},\cdots,\mathbf{s}_{t+\beta})\sim\mathbb{P}^{T}\), by leveraging \(\{\mathbb{P}^{k}\}_{k=1}^{K}\), i.e., \[\inf_{\mathfrak{F}}\ \ \mathcal{L}(\mathfrak{F}),\quad\text{with}\quad \mathcal{L}(\mathfrak{F}):=\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{(x,y)\sim \mathbb{P}^{k}}\Big{[}l\big{(}\mathfrak{F}(x),y\big{)}\Big{]}, \tag{1}\] with \(l:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\) denoting a loss function. This task involves a fundamental challenge of learning common features across diverse domains' sequences in addition to achieving accurate forecasting performance for an unknown target domain.

### Doubly residual stacking architecture

The essential structure of doubly residual architectures in [34; 7] is summarized as follows: Given \(M,L\in\mathbb{N}\), the model comprises \(M\) stacks, with each stack consisting of \(L\) blocks. These blocks share the same model weights within their respective stack and are applied recurrently following the doubly residual stacking principle.
More precisely, the \(m\)-th stack implements the doubly residual principle as follows: for any \(m=1,\ldots,M\) and \(x_{m,1}\in\mathcal{X}\), \[\hat{y}_{m}:=\sum_{l=1}^{L}(\xi_{\downarrow}^{m}\circ\psi^{m})(x_{m,l}),\ \ x_{m,l}:=x_{m,l-1}-(\xi_{\uparrow}^{m}\circ\psi^{m})(x_{m,l-1}),\quad l=2, \ldots,L, \tag{2}\] where \(\psi^{m}:\mathcal{X}\rightarrow\mathcal{Z}\) extracts features \(\psi^{m}(x_{m,l})\in\mathcal{Z}\) from the inputs \(x_{m,l}\in\mathcal{X}\) for each layer \(l\), and then \((\xi_{\downarrow}^{m},\xi_{\uparrow}^{m}):\mathcal{Z}\rightarrow\mathcal{Y} \times\mathcal{X}\) generates both the forecast \((\xi_{\downarrow}^{m}\circ\psi^{m})(x_{m,l})\in\mathcal{Y}\) and backcast \((\xi_{\uparrow}^{m}\circ\psi^{m})(x_{m,l})\in\mathcal{X}\) branches of sequences. Note that \(\hat{y}_{m}\in\mathcal{Y}\) represents a stack forecast obtained through the hierarchical aggregation of each block's forecast. Additionally, the last backcast \(x_{m,L}\in\mathcal{X}\), derived from the blocks' residual sequence, serves as an input for the next stack, except when \(m=M\). Once the hierarchical aggregation of all stacks and the residual operations are completed, a prediction model \(\mathfrak{F}\) for the doubly residual stacking architecture is given by the following equations: for any \((x,y)\sim\mathbb{P}^{T}\) with \(x_{1,1}:=x\), \[y\approx\mathfrak{F}(x;\Psi,\Xi_{\downarrow},\Xi_{\uparrow}):=\sum_{m=1}^{M} \hat{y}_{m},\qquad x_{m,1}:=x_{m-1,L},\quad m=2,\ldots,M, \tag{3}\] subject to \(\hat{y}_{m}\) and \(x_{m-1,L}\) in (2), where \(\Psi:=\{\psi^{m}\}_{m=1}^{M}\), \(\Xi_{\downarrow}:=\{\xi_{\downarrow}^{m}\}_{m=1}^{M}\), and \(\Xi_{\uparrow}:=\{\xi_{\uparrow}^{m}\}_{m=1}^{M}\) are implemented by a pure deep learning architecture consisting of fully-connected layers (for detailed configuration, please refer to Appendix A of the Supplementary Material).

### Domain-invariant feature representation

Learning domain-invariant feature representations has been an effective approach to enhance domain generalization capabilities in classification tasks [31; 22; 28; 52]. In particular, [52] introduced a regularization term based on the adversarial learning framework [16], which enables their image classification model to learn invariant features across diverse source domains. Building upon the error analysis of domain adaptation models in [51], [1] proposed a new analysis of domain generalization models for classification tasks. This provides us with valuable insight for developing a domain generalization toolkit within the context of doubly residual stacking models. The following proposition restates the analysis in [1, Theorem 1]. Before that, we introduce some notation for their classification tasks. Denote by \(\mathcal{H}\) a class of hypothesis functions \(h:\mathcal{X}\rightarrow[0,1]\) and by \(d_{\mathcal{H}}(\mathbb{P}_{\mathcal{X}}^{\prime},\mathbb{P}_{\mathcal{X}}^{ \prime\prime}):=2\sup_{h\in\mathcal{H}}|\mathbb{E}_{x\sim\mathbb{P}_{\mathcal{X }}^{\prime}}[\mathbf{1}_{\{h(x)=1\}}]-\mathbb{E}_{x\sim\mathbb{P}_{\mathcal{X }}^{\prime\prime}}[\mathbf{1}_{\{h(x)=1\}}]|\) an \(\mathcal{H}\)-divergence between any measures \(\mathbb{P}^{\prime}_{\mathcal{X}},\mathbb{P}^{\prime\prime}_{\mathcal{X}}\in\mathcal{P}( \mathcal{X})\).
For the class \(\mathcal{H}\), further denote by \(\widetilde{\mathcal{H}}:=\{\mathrm{sgn}(|h(\cdot)-h^{\prime}(\cdot)|-t):\forall h,h^{\prime}\in\mathcal{H},\ t\in[0,1]\}\), which induces marginal feature measures (see [51, Section 4.2], [1, Section 3.3]), and by \(d_{\widetilde{\mathcal{H}}}(\cdot,\cdot)\) the corresponding \(\widetilde{\mathcal{H}}\)-divergence on \(\mathcal{P}(\mathcal{X})\).

**Proposition 3.1** ([1, Theorem 1]).: _Consider a convex hull of \(\{\mathbb{P}^{k}_{\mathcal{X}}\}_{k=1}^{K}\), i.e., \(\Lambda:=\{\sum_{k=1}^{K}\pi_{k}\mathbb{P}^{k}_{\mathcal{X}}|\pi\in\Delta_{K}\}\), where \(\Delta_{K}\) is a \((K-1)\)-dimensional simplex such that each \(\pi\) represents a convex weight. Denote by \(\mathbb{P}^{*}_{\mathcal{X}}:=\sum_{k=1}^{K}\pi_{k}^{*}\mathbb{P}^{k}_{\mathcal{X}}\in\Lambda\) a minimizer such that \(d_{\mathcal{H}}(\mathbb{P}^{T}_{\mathcal{X}},\mathbb{P}^{*}_{\mathcal{X}})= \min_{\mathbb{P}^{\prime}_{\mathcal{X}}\in\Lambda}d_{\mathcal{H}}(\mathbb{P}^{T}_{ \mathcal{X}},\mathbb{P}^{\prime}_{\mathcal{X}})\). Then an expected risk \(\epsilon^{T}\) under target measure \(\mathbb{P}^{T}_{\mathcal{X}}\) has an upper bound given as_ \[\epsilon^{T}(h)\leq\sum_{k=1}^{K}\pi_{k}^{*}\epsilon^{k}(h)+d_{\mathcal{H}}( \mathbb{P}^{T}_{\mathcal{X}},\mathbb{P}^{*}_{\mathcal{X}})+\max_{i,j\in\{1, \ldots,K\},\ i\neq j}d_{\widetilde{\mathcal{H}}}(\mathbb{P}^{i}_{\mathcal{X}},\mathbb{P}^{j}_{\mathcal{X}})+\lambda_{(\mathbb{P}^{T}_{\mathcal{X}},\mathbb{ P}^{*}_{\mathcal{X}})},\quad\forall h\in\mathcal{H}, \tag{4}\] _where \(\lambda_{(\mathbb{P}^{T}_{\mathcal{X}},\mathbb{P}^{*}_{\mathcal{X}})}:=\min\{ \mathbb{E}_{x\sim\mathbb{P}^{T}_{\mathcal{X}}}[|f^{*}(x)-f^{T}(x)|],\mathbb{E} _{x\sim\mathbb{P}^{*}_{\mathcal{X}}}[|f^{*}(x)-f^{T}(x)|]\}\) with \(f^{*}:=\sum_{k=1}^{K}\pi_{k}^{*}f^{k}\), and each \(f^{k}\) denotes a true labeling function under \(\mathbb{P}^{k}\) (i.e., \(y=f^{k}(x)\ \forall(x,y)\sim\mathbb{P}^{k}\)), and similarly \(f^{T}\) denotes a true labeling function under \(\mathbb{P}^{T}\)._

While the upper bound in (4) comprises four terms depending on the distributional relation between the target domain and the multiple source domains, the first and third terms (representing the source risks, i.e., \(\{\epsilon^{k}\}_{k=1}^{K}\), and the pairwise divergences across all marginal feature measures, i.e., \(\{d_{\widetilde{\mathcal{H}}}(\mathbb{P}^{i}_{\mathcal{X}},\mathbb{P}^{j}_{ \mathcal{X}})\}_{i\neq j}\), respectively) are what models can learn within source domains. Inspired by this theoretical evidence and the representation learning framework, we propose a domain generalization model for time series forecasting problems by building on the doubly residual stacking architecture with a feature-alignment toolkit embracing the divergence of marginal feature measures.

## 4 Method

### Stack-wise marginal feature measures.

Aligning marginal feature measures is the predominant approach in domain-invariant representation learning [12, 41]. These measures, referred to as pushforward measures \(\{g_{\#}\mathbb{P}^{k}_{\mathcal{X}}\}_{k=1}^{K}\), are induced by a given feature map \(g:\mathcal{X}\rightarrow\mathcal{Z}\) applied to the measures of source domains \(\{\mathbb{P}^{k}_{\mathcal{X}}\}_{k=1}^{K}\). Specifically, \(g_{\#}\mathbb{P}^{k}_{\mathcal{X}}(E)=\mathbb{P}^{k}_{\mathcal{X}}\circ g^{-1} (E)\) for any Borel set \(E\) on \(\mathcal{Z}\). However, defining feature measures for doubly residual architectures presents subtle challenges, as discussed in Section 3.2.
These architectures include multiple feature extractors in \(\Psi=\{\psi^{m}\}_{m=1}^{M}\) as defined in (2), where each extractor \(\psi^{m}\) takes a sampled input passing through the residual operations of previous stacks. Within each stack, the input is repeatedly processed. Moreover, unlike classification tasks with a finite set of labels, residual architectures operate in a high-dimensional continuous output space \(\mathcal{Y}\) due to the sequential nature of the data. This can lead to scaling issues when training with highly distinct source domains. Additionally, the scaling factor may represent instance-wise characteristics, as observed in the numerical magnitudes of \(\mathbf{s}\) values in Section 3.1, which exhibit noticeable variations in resolution. Given these complexities, defining feature measures in a single shot is intricate for such architectures. To resolve these issues, we propose a stack-wise alignment of feature measures on the subspace \(\widetilde{\mathcal{Z}}\subseteq\mathcal{Z}\). This alignment involves calculating measures for each stack using the multiple compositions of feature extractions in \(\Psi=\{\psi^{m}\}_{m=1}^{M}\), backcasting operators in \(\Xi_{\uparrow}=\{\xi^{m}_{\uparrow}\}_{m=1}^{M}\) based on the residual principle described in (2), and a normalizing function \(\sigma:\mathcal{Z}\rightarrow\widetilde{\mathcal{Z}}\).

**Definition 4.1**.: _The set of marginal feature measures in the \(m\)-th stack is defined by_ \[\{(\sigma\circ g^{m})_{\#}\mathbb{P}^{k}_{\mathcal{X}}\}_{k=1}^{K},\qquad m=1, \ldots,M,\] _where each \((\sigma\circ g^{m})_{\#}\mathbb{P}^{k}_{\mathcal{X}}\) is a pushforward of \(\mathbb{P}^{k}_{\mathcal{X}}\in\{\mathbb{P}^{k}_{\mathcal{X}}\}_{k=1}^{K}\) induced by \(\sigma\circ g^{m}:\mathcal{X}\rightarrow\widetilde{\mathcal{Z}}\), and \(\sigma:\mathcal{Z}\rightarrow\widetilde{\mathcal{Z}}\) is a normalizing function having \(C_{\sigma}\)-Lipschitz continuity (i.e., \(\|\sigma(z)-\sigma(z^{\prime})\|\leq C_{\sigma}\|z-z^{\prime}\|\)\(\forall z,z^{\prime}\in\mathcal{Z}\)), and \(g^{m}:\mathcal{X}\rightarrow\mathcal{Z}\) is defined by_ \[g^{m}(x):=(\psi^{m}\circ(r^{m})^{(L-1)}\circ(r^{m-1})^{(L)}\circ\cdots\circ(r^{1} )^{(L)})(x), \tag{5}\] (with \(g^{m}=(\psi^{m}\circ(r^{m})^{(L-1)})\) for \(m=1\)), and \(\psi^{m}:\mathcal{X}\to\mathcal{Z}\) is defined in (2), and \(r^{m}:\mathcal{X}\to\mathcal{X}\) is defined by_ \[r^{m}(x):=x-(\xi_{\uparrow}^{m}\circ\psi^{m})(x), \tag{6}\] _and \((r^{m})^{(L)}\) denotes the \(L\)-fold composition of \(r^{m}\) (with \((r^{m})^{(L-1)}(x):=x\) for \(L-1=0\))._

**Remark 4.1**.: \(\sigma\) _serves the purpose of enabling the model to learn invariant features by mitigating the influence of instance-wise characteristics associated with the scale information of each domain. Additionally, the Lipschitz condition imposed on \(\sigma\) prevents gradient explosion during model updates. We present two examples of such functions: (1) \(\mathrm{softmax}:\mathcal{Z}\to\widetilde{\mathcal{Z}}=(0,1)^{\gamma}\) with \(\mathrm{softmax}(z)_{j}=e^{z_{j}}/\sum_{i=1}^{\gamma}e^{z_{i}}\), \(j=1,\ldots,\gamma\), and (2) \(\tanh\) (hyperbolic tangent): \(\mathcal{Z}\to\widetilde{\mathcal{Z}}=(-1,1)^{\gamma}\) with \(\tanh(z)_{j}=(e^{2z_{j}}-1)/(e^{2z_{j}}+1)\), \(j=1,\ldots,\gamma\). Both functions are \(1\)-Lipschitz continuous, i.e., \(C_{\sigma}=1\).
In Appendix E.3, we provide an ablation study under these functions, in addition to the case without the normalization._

If the feature alignment were embedded block-wise for every stack, the recurrent operation of the same block within each stack would induce redundant gradient flow, which leads to exploding or vanishing gradients for long-term forecasting problems [36]. The proposed stack-wise feature alignment can mitigate such issues by sparsely propagating the loss. Moreover, it retains the interpretability of the original N-BEATS [34] by keeping the original stacking structure. Some heuristic demonstration of this argument is provided in Appendix E.1. The operator \(g^{m}\) acts as an accumulated feature map up to the \(m\)-th stack, incorporating the previous \(m-1\) backcasting residual operations for \(x\sim\mathbb{P}_{\mathcal{X}}^{k}\in\mathcal{P}(\mathcal{X})\). Despite involving the intricate composition of \(\Psi\) and \(\Xi_{\uparrow}\), the fully-connected layers within \(\Psi\) and \(\Xi_{\uparrow}\) exhibit Lipschitz continuity, ensuring the Lipschitz continuity of \(g^{m}\). The Lipschitz constants of fully-connected layers are explicitly provided in [43, Section 6]. From this and Remark 4.1, we state the following lemma. The proof is provided in Appendix A.

**Lemma 4.1**.: _For each \(m=1,\ldots,M\), let \(C_{m}>0\) and \(C_{m,\uparrow}>0\) be Lipschitz constants of \(\psi^{m}\) and \(\xi_{\uparrow}^{m}\), respectively. Then \((\sigma\circ g^{m})\) is \(C_{\sigma\circ g^{m}}\)-Lipschitz continuous with \(C_{\sigma\circ g^{m}}:=C_{\sigma}C_{m}(1+C_{m}C_{m,\uparrow})^{L-1}\prod_{n=1}^{m-1}(1+C_{n}C_{n,\uparrow})^{L}>0\) for \(m=2,\ldots,M\), \(C_{\sigma\circ g^{m}}:=C_{\sigma}C_{m}(1+C_{m}C_{m,\uparrow})^{L-1}\) for \(m=1\) (and \(C_{\sigma}>0\) as in Definition 4.1)._

Due to the residual principle (see Section 3.2), the accumulated feature maps \(\{g^{m}\}_{m=1}^{M}\) are non-separable in \(\Psi\) and \(\Xi_{\uparrow}\). Hence, stack-wise alignment of marginal feature measures via regularizing \(\{g^{m}\}_{m=1}^{M}\) could deteriorate the backcasting power of \(\Xi_{\uparrow}\), which would ultimately degrade the predictions of the model. Instead, we conduct the alignment by regularizing exclusively the feature extractors \(\Psi\). More precisely, we propose a stack-wise alignment of marginal feature measures (defined in Definition 4.1) as the following optimization: for any given \(\Xi_{\uparrow}=\{\xi_{\uparrow}^{m}\}_{m=1}^{M}\), \[\inf_{\Psi}\left\{\sum_{m=1}^{M}\max_{i,j\in\{1,\ldots,K\},\ i\neq j}d\big{(}( \sigma\circ g^{m})_{\#}\mathbb{P}_{\mathcal{X}}^{i},(\sigma\circ g^{m})_{\#} \mathbb{P}_{\mathcal{X}}^{j}\big{)}\right\}, \tag{7}\] where \(d(\cdot,\cdot):\mathcal{P}(\widetilde{\mathcal{Z}})\times\mathcal{P}( \widetilde{\mathcal{Z}})\to\mathbb{R}_{+}\) denotes a divergence (or metric), which will be defined as an entropic regularized optimal transport distance in the next section. The schematic illustration of the stack-wise alignment can be found in Figure 1 of Appendix A.

### Sinkhorn divergence on input and latent spaces

In the generative adversarial framework [16; 3], optimal transport distances are common metrics used to train generators to induce pushforward measures close to a given target measure. Recently, [49] utilized an entropic regularized optimal transport distance [13; 14] to generate time series data.
From these studies, we speculate that the entropic regularized distance offers both the learning of invariant factors from the pushforward measures (in Definition 4.1) and computational efficiency for high-dimensional sequential data. Specifically, we consider the regularized quadratic Wasserstein-2 distance on \(\widetilde{\mathcal{Z}}\): for any \(\epsilon\geq 0\), \[\mathcal{W}_{\epsilon,\widetilde{\mathcal{Z}}}(\mu,\nu):=\inf_{\pi\in\Pi(\mu, \nu;\widetilde{\mathcal{Z}})}\left\{\int_{\widetilde{\mathcal{Z}}\times \widetilde{\mathcal{Z}}}\left(\left\|x-y\right\|^{2}+\epsilon\log\left(\frac{d \pi(x,y)}{d\mu(x)d\nu(y)}\right)\right)d\pi(x,y)\right\}, \tag{8}\] where \(\epsilon\) denotes the regularization degree and \(\Pi(\mu,\nu;\widetilde{\mathcal{Z}})\) is the space of all couplings (or transportation plans) whose marginals are respectively \(\mu,\nu\in\mathcal{P}(\widetilde{\mathcal{Z}})\). By replacing \(\widetilde{\mathcal{Z}}\) with \(\mathcal{X}\) in (8), we further denote by \(\mathcal{W}_{\epsilon,\mathcal{X}}(\cdot,\cdot)\) the regularized quadratic Wasserstein-2 distance on \(\mathcal{X}\). Even if the residual architecture induces the intricate representation learning procedure (see Section 4.1), the proposed stack-wise alignment of the pushforward measures via (8) is well-defined and feasible provided that the pair-wise divergence of source domains' measures under the same distance is empirically manageable. By using the duality of the regularized optimal transport distance in [37, Remark 4.18 in Section 4.4] and the Lipschitz continuity of \(\{\sigma\circ g^{m}\}_{m=1}^{M}\) (see Lemma 4.1), we provide the following theorem, which supports this claim. The proof is provided in Appendix B.

**Theorem 4.1**.: _For any \(\epsilon\geq 0\), the following holds_ \[\sum_{m=1}^{M}\max_{i,j\in\{1,\ldots,K\},\ i\neq j}\mathcal{W}_{ \epsilon,\widetilde{\mathcal{Z}}}\big{(}(\sigma\circ g^{m})_{\#}\mathbb{P}^{ i}_{\mathcal{X}},(\sigma\circ g^{m})_{\#}\mathbb{P}^{j}_{\mathcal{X}}\big{)} \leq C\max_{i,j\in\{1,\ldots,K\},\ i\neq j}\mathcal{W}_{\epsilon,\mathcal{X}} \big{(}\mathbb{P}^{i}_{\mathcal{X}},\mathbb{P}^{j}_{\mathcal{X}}\big{)},\] _with \(C:=\sum_{m=1}^{M}\max\{(C_{\sigma\circ g^{m}})^{2},1\}>0\) and \(C_{\sigma\circ g^{m}}>0\) defined in Lemma 4.1._

In [33, Lemma 3, Proposition 6], learning bounds for the maximum mean discrepancy (MMD) and the regularized distance (8) are investigated under a single-layered fully connected network. While the upper bound in Theorem 4.1 provides a similar learning bound for the distance (8), it demonstrates that the stack-wise alignment (7) via the regularized Wasserstein distance can be embedded into the intricate residual stacking architecture with a deep layered network. While the Lipschitz continuity of \(\{\sigma\circ g^{m}\}_{m=1}^{M}\) (see Lemma 4.1) allows for a clean bound, there is room for a tighter bound by deriving the smallest Lipschitz constant [43] and applying spectral normalization [30], which is left for future extension. The entropic term weighted by \(\epsilon\) in (8) improves the computational stability of the classical optimal transport distance, but it introduces a bias in the estimator of the (empirical) regularized Wasserstein distance.
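For intuition, the distance (8) between two empirical measures can be estimated with log-domain Sinkhorn iterations. The following NumPy sketch is ours (function name, defaults, and the uniform-weight assumption are illustrative, not from the paper); note that the quantity it returns is the biased entropic cost that the next paragraph corrects.

```python
import numpy as np
from scipy.special import logsumexp

def entropic_w2(x, y, eps=0.1, n_iters=200):
    """Approximate the entropic cost (8) between the uniform empirical
    measures supported on the rows of x and y, via log-domain Sinkhorn."""
    n, m = x.shape[0], y.shape[0]
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
    log_a, log_b = np.full(n, -np.log(n)), np.full(m, -np.log(m))
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iters):                             # alternating c-transforms
        f = -eps * logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
    # At convergence, the dual objective <f, a> + <g, b> equals the
    # regularized cost; with uniform weights this is just the means.
    return f.mean() + g.mean()
```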
To mitigate this issue, according to [9], we adopt the following debiased version of the regularized distance (8), which is referred to as the Sinkhorn divergence: for any given \(\epsilon\geq 0\), \[\widehat{\mathcal{W}}_{\epsilon,\widetilde{\mathcal{Z}}}(\mu, \nu):=\mathcal{W}_{\epsilon,\widetilde{\mathcal{Z}}}(\mu,\nu)-\frac{1}{2} \big{(}\mathcal{W}_{\epsilon,\widetilde{\mathcal{Z}}}(\nu,\nu)+\mathcal{W}_{ \epsilon,\widetilde{\mathcal{Z}}}(\mu,\mu)\big{)}\quad\forall\mu,\nu\in \mathcal{P}(\widetilde{\mathcal{Z}}). \tag{9}\] In our algorithm, the feature alignment is realized by minimizing Sinkhorn divergences (taking \(d(\cdot,\cdot)=\widehat{\mathcal{W}}_{\epsilon,\widetilde{\mathcal{Z}}}( \cdot,\cdot)\) in (7)) between (empirical) marginal feature measures.

### Training objective and algorithm

We now combine the results in Sections 4.1 and 4.2 to formulate the Feature-aligned N-BEATS algorithm. To that end, we first represent all the residual operators in \(\Psi\), \(\Xi_{\downarrow}\), and \(\Xi_{\uparrow}\) (in Section 3.2) as parameterized forms \(\Psi(\Phi)=\{\psi^{m}(\cdot;\phi_{m})\}_{m=1}^{M}\), \(\Xi_{\downarrow}(\Theta_{\downarrow})=\{\xi_{\downarrow}^{m}(\cdot;\theta_{m, \downarrow})\}_{m=1}^{M}\), \(\Xi_{\uparrow}(\Theta_{\uparrow})=\{\xi_{\uparrow}^{m}(\cdot;\theta_{m,\uparrow} )\}_{m=1}^{M}\) respectively, with parameter sets \(\Phi:=\{\phi_{m}\}_{m=1}^{M}\), \(\Theta_{\downarrow}:=\{\theta_{m,\downarrow}\}_{m=1}^{M}\), and \(\Theta_{\uparrow}:=\{\theta_{m,\uparrow}\}_{m=1}^{M}\) denoting weights of the fully connected neural networks. In terms of the parameter sets, we provide the following functional \[\mathbf{L}_{\lambda}(\Phi,\Theta_{\downarrow},\Theta_{\uparrow}):=\mathcal{L} (\mathfrak{F}(\Phi,\Theta_{\downarrow},\Theta_{\uparrow}))+\lambda\mathcal{L} _{\mathrm{align}}(\Phi,\Theta_{\uparrow}), \tag{10}\] where \(\mathcal{L}(\mathfrak{F}(\cdot,\cdot,\cdot))\) denotes the forecasting loss functional from (1) and \(\mathcal{L}_{\mathrm{align}}(\cdot,\cdot)\), representing the feature alignment (7) with the Sinkhorn divergence (9), is given by \[\mathcal{L}_{\mathrm{align}}(\Phi,\Theta_{\uparrow}):=\sum_{m=1}^{M}\max_{i,j \in\{1,\ldots,K\},\ i\neq j}\widehat{\mathcal{W}}_{\epsilon,\widetilde{ \mathcal{Z}}}\big{(}(\sigma\circ g^{m}_{\Phi,\Theta_{\uparrow}})_{\#}\mathbb{ P}^{i}_{\mathcal{X}},(\sigma\circ g^{m}_{\Phi,\Theta_{\uparrow}})_{\#} \mathbb{P}^{j}_{\mathcal{X}}\big{)}, \tag{11}\] and \(g^{m}_{\Phi,\Theta_{\uparrow}}:=g^{m}(\cdot;\{\phi_{n}\}_{n=1}^{m},\{\theta_{n,\uparrow}\}_{n=1}^{m})\), \(m=1,\ldots,M\), denotes the parameterized version of \(g^{m}\).
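Schematically, the debiased divergence (9) and the alignment loss (11) can be written on top of the `entropic_w2` sketch above; here `features[m][k]` is assumed to hold the normalized mini-batch features \(\sigma(g^{m}(x))\) of domain \(k\) at stack \(m\) (names are ours, and in practice an autodiff-compatible implementation would be used to backpropagate through the divergence):

```python
def sinkhorn_divergence(x, y, eps=0.1):
    # Debiased Sinkhorn divergence, as in eq. (9).
    return entropic_w2(x, y, eps) - 0.5 * (entropic_w2(x, x, eps)
                                           + entropic_w2(y, y, eps))

def alignment_loss(features, eps=0.1):
    # Stack-wise alignment loss, as in eq. (11): for each stack, take the
    # maximal pairwise divergence across source domains, then sum over stacks.
    total = 0.0
    for per_domain in features:          # one entry per stack m = 1, ..., M
        K = len(per_domain)
        # The divergence is symmetric, so pairs i < j suffice for the max.
        total += max(sinkhorn_divergence(per_domain[i], per_domain[j], eps)
                     for i in range(K) for j in range(K) if i < j)
    return total
```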
To update \((\Phi,\Theta_{\downarrow},\Theta_{\uparrow})\) with the corresponding gradients using a mini-batch, we approximate each \(\widehat{\mathcal{W}}_{\epsilon,\widetilde{\mathcal{Z}}}((\sigma\circ g^{m}_{ \Phi,\Theta_{\uparrow}})_{\#}\mathbb{P}^{i}_{\mathcal{X}},(\sigma\circ g^{m}_{ \Phi,\Theta_{\uparrow}})_{\#}\mathbb{P}^{j}_{\mathcal{X}})\) by its empirical counterpart \(\widehat{\mathcal{W}}_{\epsilon,\widetilde{\mathcal{Z}}}(\mu^{m,(i)}_{\Phi, \Theta_{\uparrow}},\mu^{m,(j)}_{\Phi,\Theta_{\uparrow}})\), where the empirical measure is given by \(\mu^{m,(k)}_{\Phi,\Theta_{\uparrow}}:=\frac{1}{B}\sum_{b=1}^{B}\delta_{\widetilde{z}^{m,(k)}_{b}}\), with \(\widetilde{z}^{m,(k)}_{b}:=(\sigma\circ g^{m}_{\Phi,\Theta_{\uparrow}})(x^{(k)}_{b})\) and \(\{(x_{b}^{(k)},y_{b}^{(k)})\}_{b=1}^{B}\) from \(\mathcal{D}^{k}\), and \(B\) and \(\delta_{z}\) denoting the mini-batch size and the Dirac measure centered on \(z\in\widetilde{\mathcal{Z}}\), respectively. As we mentioned in Section 4.1, the feature alignment of marginal feature distributions is realized by optimizing \(\Phi\), though \(\{g_{\Phi,\Theta_{\uparrow}}^{m}\}_{m=1}^{M}\) are non-separable in \(\Phi\) and \(\Theta_{\uparrow}\). At the same time, the forecasting objective (1) should be minimized by optimizing \((\Phi,\Theta_{\downarrow},\Theta_{\uparrow})\). To bring these two components together, we adopt the following alternate optimization inspired by [11, Section 3.1]: \[\Theta_{\downarrow}^{*},\Theta_{\uparrow}^{*}:=\operatorname*{arg\,min}_{ \Theta_{\downarrow},\Theta_{\uparrow}}\mathcal{L}(\mathfrak{F}(\Phi^{*}, \Theta_{\downarrow},\Theta_{\uparrow}))\quad\text{and}\quad\Phi^{*}:= \operatorname*{arg\,min}_{\Phi}\mathbf{L}_{\lambda}(\Phi,\Theta_{\downarrow} ^{*},\Theta_{\uparrow}^{*}). \tag{12}\] The training procedure with this optimization is summarized in Algorithm 1.
```
Input: η (learning rate), B (mini-batch size)
 1: Initialize Φ, Θ_↓, Θ_↑
 2: while not converged do
 3:   Sample {(x_1^(k), y_1^(k)), ..., (x_B^(k), y_B^(k))} from D^k for each k = 1, ..., K
 4:   Initialize {ŷ_1^(k), ..., ŷ_B^(k)} ← {0, ..., 0} for each k = 1, ..., K
 5:   for m = 1 to M do
 6:     for k = 1 to K do
 7:       Compute {g^m_{Φ,Θ_↑}(x_1^(k)), ..., g^m_{Φ,Θ_↑}(x_B^(k))}
 8:       ŷ_b^(k) ← ŷ_b^(k) + ξ^m_↓(g^m_{Φ,Θ_↑}(x_b^(k)); θ_{m,↓})  for all b = 1, ..., B
 9:     end for
10:   end for
11:   Compute μ^{m,(k)}_{Φ,Θ_↑} for each m = 1, ..., M and k = 1, ..., K
12:   Update Φ based on λ·L_align(·, Θ_↑), i.e., for each m = 1, ..., M,
        φ_m ← φ_m − η ∇_{φ_m} ( λ Σ_{n=1..M} max_{i≠j} Ŵ_{ε,Z̃}(μ^{n,(i)}_{Φ,Θ_↑}, μ^{n,(j)}_{Φ,Θ_↑}) )
13:   Update (Φ, Θ_↓, Θ_↑) based on L(F(·,·,·)), i.e., for each m = 1, ..., M,
        (φ_m, θ_{m,↓}, θ_{m,↑}) ← (φ_m, θ_{m,↓}, θ_{m,↑}) − η (1/(K·B)) Σ_{k=1..K} Σ_{b=1..B} ∇_{(φ_m, θ_{m,↓}, θ_{m,↑})} l(ŷ_b^(k), y_b^(k))
14: end while
```
**Algorithm 1** Training Feature-aligned N-BEATS

## 5 Experiments

### Evaluation Protocol

To comprehensively evaluate the proposed approach and capture semantic diversity among domains, we define a set of semantically similar domains called the _superdomain_ \(\mathcal{A}\). The evaluation scenarios are categorized into _in-domain generalization_ (IDG), _cross-domain generalization_ (CDG), and _out-domain generalization_ (ODG) as follows: (1) \(\text{IDG}:\mathcal{D}\subset\mathcal{A}_{i},\mathcal{D}^{T}\in\mathcal{A}_{i}\); (2) CDG: \(\{\mathcal{D}^{k}\}_{k=1}^{\kappa-1}\subset\mathcal{A}_{i},\{\mathcal{D}^{k} \}_{k=\kappa}^{K}\subset\mathcal{A}_{j},\mathcal{D}^{T}\in\mathcal{A}_{i}\) where \(i\neq j,1<\kappa\leq K\); (3) ODG: \(\mathcal{D}\subset\mathcal{A}_{i},\mathcal{D}^{T}\in\mathcal{A}_{j}\) where \(i\neq j\). We construct a dataset consisting of two superdomains: finance and weather. Each superdomain comprises four subordinate domains: finance includes commodity, income, interest rate, and exchange rate, while weather includes pressure, rain, temperature, and wind. We collect financial data from the Federal Reserve Bank of St. Louis (FRED), and weather data from the National Oceanic and Atmospheric Administration (NOAA). It is worth noting that our dataset consists entirely of real-world data and covers a wide range of frequencies, including daily, weekly, monthly, quarterly, and yearly observations. We randomly split each domain into three sets: 70% for training, 10% for validation, and 20% for testing. To isolate the effects solely attributed to domain shift, we ensure that the number of data points for each domain is fixed at 7,500. For detailed data configuration, refer to Appendix D.1.
To evaluate IDG, we designate one target domain and consider the remaining domains within the same superdomain as source domains, e.g., \(\mathcal{D}=\{\texttt{pressure},\texttt{rain},\texttt{temperature}\}\), and \(\mathcal{D}^{T}=\texttt{wind}\). For fair comparison, we standardize the number of source domains to three across all CDG and ODG scenarios. In CDG, we select one domain from the superdomain of the target domain and two domains from the other superdomain as the source, e.g., \(\mathcal{D}=\{\texttt{commodity},\texttt{income},\texttt{pressure}\}\), and \(\mathcal{D}^{T}=\texttt{rain}\). ODG scenarios involve no overlap between the source and target domains in terms of \(\mathcal{A}\), e.g., \(\mathcal{D}=\{\texttt{pressure},\texttt{rain},\texttt{temperature}\}\), and \(\mathcal{D}^{T}=\texttt{commodity}\). We compare the performance of our approach against two models derived from N-BEATS [34]: N-BEATS-G and N-BEATS-I. Additionally, we evaluate the performance of N-HiTS [7], an improved version of N-BEATS that has exhibited notable advancements in recent studies. Our experimental setup is motivated by two avenues of research on time series forecasting: (1) the limitations of methods based on transformer architectures [50], and (2) the effectiveness of doubly residual stacking networks for generalization [35]. To ensure a fair and thorough comparison, we carefully fine-tune all models and their hyperparameters. We conduct a grid search over a wide range of hyperparameter values and select the optimal settings based on their performance on the validation set.

### Experimental Details

We adopt the symmetric mean absolute percentage error (smape) metric as the prediction loss function \(\mathcal{L}\) (10). The softmax function is employed as the normalizing function \(\sigma\). The hyperparameters are set as follows: \(M=3\), \(L=4\), \(\alpha=50\), \(\beta=10\), and \(\gamma=512\). We utilize the Adam optimizer [21] for optimization and determine the learning rate (including its initial value and scheduling strategy), the stopping criterion, and the batch size through grid search. Additional implementation details are provided in Appendix D.2.

### Experimental Results and Analysis

We evaluate the performance using the smape and mase metrics, which are defined in Appendix D.3. Table 1 shows that incorporating feature alignment into the deep residual stacking architecture models improves their generalizability, resulting in better performance across most of the experiments. However, our framework demonstrates the least impact on N-BEATS-I. This may be due to limitations in the seasonality and trend-capturing modules of N-BEATS-I in effectively capturing diverse frequencies. Furthermore, the alignment of misrepresented feature distributions does not seem to provide significant benefits. For further qualitative analysis, please refer to Appendix F.1. To visualize the representations, i.e., samples of marginal feature measures, observed from N-BEATS-G with and without alignment, we use the uniform manifold approximation and projection (UMAP) technique [29]. In order to mitigate the influence of unaligned scale information, we use the softmax function to remove the scale information and analyze the semantic relationship between domains. Figure 1 highlights two noticeable trends: (1) instances are positioned closer to each other, and (2) the entropy of domains exhibits a significant increase. For comprehensive visual analyses on N-BEATS-I and N-HiTS, please refer to Appendix F.2.
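For reference, one common implementation of the two evaluation metrics is sketched below; the paper's exact conventions are those of Appendix D.3 and may differ, e.g., in percentage scaling (the lag-`m` scaling for mase is a standard choice, not taken from the paper):

```python
import numpy as np

def smape(y_true, y_pred, eps=1e-8):
    # Symmetric mean absolute percentage error (one common convention).
    return np.mean(2.0 * np.abs(y_pred - y_true)
                   / (np.abs(y_true) + np.abs(y_pred) + eps))

def mase(y_true, y_pred, y_insample, m=1, eps=1e-8):
    # Mean absolute scaled error: forecast MAE scaled by the in-sample
    # MAE of the (seasonal-)naive forecast with lag m.
    scale = np.mean(np.abs(y_insample[m:] - y_insample[:-m]))
    return np.mean(np.abs(y_pred - y_true)) / (scale + eps)
```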
Further Analysis. We conduct a series of supplementary experiments to delve deeper into the effectiveness of our approach. These experiments encompass the following investigations: (1) Comparison between block-wise and stack-wise feature alignment. (2) Exploration of additional distance metrics for the feature alignment. (3) Comparison of several normalizing functions. (4) Analysis of the impact of our approach in the presence of subtle domain shifts. Comprehensive details and information on these experiments can be found in Appendix E.

## 6 Conclusion

In this work, we propose Feature-aligned N-BEATS, a domain generalization model that combines the doubly residual stacking architecture of N-BEATS [34] with feature alignment techniques from representation learning theory. Our model leverages stack-wise feature alignment, which accounts for both domain-invariant learning and the interpretability of N-BEATS. Extensive experimental analysis indicates that Feature-aligned N-BEATS achieves remarkable generalization performance across different domains. Moving forward, potential future directions are to enhance the stability of the model through spectral normalization [30] on the residual operators and to extend to conditional feature alignment [23; 52] within the residual architecture.

Figure 1: **Visual comparison between with alignment (w) and without alignment (w/o).** In the first scenario, alignment is visible as green and red instances are connected (especially when \(\lambda=3\)). In the other scenario, instances are dispersed, particularly with increased dispersion of blue instances in the first and second stacks at \(\lambda=3\), resulting in heightened entropy between domains.

Acknowledgement. M. Kang was supported by the NRF grant [2012R1A2C3010887] and the MSIT/IITP ([17111117093], [NO.2021-0-00077], [No. 2021-0-01343, Artificial Intelligence Graduate School Program(SNU)]). K. Park acknowledges the support of the Presidential Postdoctoral Fellowship of Nanyang Technological University.
2302.08606
Intrinsic and extrinsic deep learning on manifolds
We propose extrinsic and intrinsic deep neural network architectures as general frameworks for deep learning on manifolds. Specifically, extrinsic deep neural networks (eDNNs) preserve geometric features on manifolds by utilizing an equivariant embedding from the manifold to its image in the Euclidean space. Moreover, intrinsic deep neural networks (iDNNs) incorporate the underlying intrinsic geometry of manifolds via exponential and log maps with respect to a Riemannian structure. Consequently, we prove that the empirical risk of the empirical risk minimizers (ERM) of eDNNs and iDNNs converges at optimal rates. Overall, the eDNNs framework is simple and easy to compute, while the iDNNs framework is accurate and fast-converging. To demonstrate the utilities of our framework, various simulation studies and real data analyses are presented with eDNNs and iDNNs.
Yihao Fang, Ilsang Ohn, Vijay Gupta, Lizhen Lin
2023-02-16T22:10:38Z
http://arxiv.org/abs/2302.08606v1
# Intrinsic and extrinsic deep learning on manifolds

###### Abstract

We propose extrinsic and intrinsic deep neural network architectures as general frameworks for deep learning on manifolds. Specifically, extrinsic deep neural networks (eDNNs) preserve geometric features on manifolds by utilizing an equivariant embedding from the manifold to its image in the Euclidean space. Moreover, intrinsic deep neural networks (iDNNs) incorporate the underlying intrinsic geometry of manifolds via exponential and log maps with respect to a Riemannian structure. Consequently, we prove that the excess risk of the empirical risk minimizers (ERM) of eDNNs and iDNNs converges at optimal rates. Overall, the eDNNs framework is simple and easy to compute, while the iDNNs framework is accurate and fast converging. To demonstrate the utilities of our framework, various simulation studies and real data analyses are presented with eDNNs and iDNNs.

Keywords: Manifolds, deep learning, eDNNs and iDNNs

## 1 Introduction

The last two decades have witnessed an explosive development in deep learning approaches. These approaches have achieved breakthrough performance in a broad range of learning problems from a variety of application fields such as image recognition [29], speech recognition [15], natural language processing [2] and other areas of computer vision [41]. Deep learning has also served as the main impetus for the advancement of recent artificial intelligence (AI) technologies. This unprecedented success has been made possible due to the increasing computational prowess, availability of large data sets, and the development of efficient computational algorithms for training deep neural networks. There have been increasing efforts to understand the theoretical foundations of deep neural networks, including in the statistics community [37, 34, 25, 3, 38, 27, 10]. Most of these efforts from model and algorithmic development to theoretical understanding, however, have been largely focused on the Euclidean domains. In a wide range of problems arising in computer and machine vision, medical imaging, network science, recommender systems, computer graphics, and so on, one often encounters learning problems concerned with non-Euclidean data, particularly manifold-valued data. For example, in neuroscience, data collected in diffusion tensor imaging (DTI), now a powerful tool in neuroimaging for clinical trials, are represented by the diffusion matrices, which are \(3\times 3\)_positive definite matrices_[1]. In engineering and machine learning, pictures or images are often preprocessed or reduced to a collection of _subspaces_ with each data point (an image) in the sample data represented by a subspace [16; 39]. In machine vision, a digital image can also be represented by a set of \(k\)-landmarks, the collection of which form _landmark-based shape spaces_[24]. One may also encounter data that are stored as _orthonormal frames_[8], _surfaces, curves_, and _networks_[28]. The underlying space where these general objects belong falls in the general category of _manifolds_ whose geometry is generally well-characterized, which should be utilized and incorporated for learning and inference. Thus, there is a natural need and motivation for developing deep neural network models over manifolds. This work aims to develop general deep neural network architectures on manifolds and take some steps toward understanding their theoretical foundations.
The key challenge lies in incorporating the underlying geometry and structure of manifolds in designing deep neural networks. Although some recent works propose deep neural networks for specific manifolds [42; 14; 21; 22], there is a lack of general frameworks or paradigms that work for arbitrary manifolds. In addition, the theoretical understanding of deep neural networks on manifolds remains largely unexplored. To fill in these gaps, in this work, we make the following contributions: (1) we develop _extrinsic deep neural networks (eDNNs)_ on manifolds to generalize the popular feedforward networks in the Euclidean space to manifolds via equivariant embeddings. The extrinsic framework is conceptually simple and computationally easy and works for general manifolds where nice embeddings such as _equivariant embeddings_ are available; (2) we develop _intrinsic deep neural networks (iDNNs)_ for deep learning on manifolds, employing a Riemannian structure of the manifold; (3) we study theoretical properties such as approximation properties and estimation error of both eDNNs and iDNNs; and (4) we implement various DNNs over a large class of manifolds in simulation studies and real data analyses, including eDNNs, iDNNs and _tangential deep neural networks (tDNNs)_, which are a special case of iDNNs with only one tangent space. The rest of the paper is organized as follows. In Section 2, we introduce the eDNNs on manifolds and study their theoretical properties. In Section 3, we propose the iDNNs on manifolds that take into account the intrinsic geometry of the manifold. The simulation study and the real data analysis are carried out in Section 4. Our work ends with a discussion.

## 2 Extrinsic deep neural networks (eDNNs) on manifolds

### eDNNs and equivariant embeddings

Let \(M\) be a \(d\)-dimensional manifold. Let \((x_{i},y_{i})\), \(i=1,\ldots,n\) be a sample of data from some regression model with input \(x_{i}\in\mathcal{X}=M\) and output \(y_{i}\in\mathcal{Y}=\mathbb{R}\), and we propose deep neural networks for learning the underlying function \(f:M\rightarrow\mathbb{R}\). The output space can be \(\mathcal{Y}=\{1,\ldots,k\}\) for a classification problem. In this work, we propose to develop two general deep neural network architectures on manifolds based on an extrinsic and an intrinsic framework, respectively. The first framework employs an equivariant embedding of a manifold into the Euclidean space and builds a deep neural network on its image after embedding, which is the focus of this section, while the intrinsic framework utilizes Riemannian or intrinsic geometry of the manifold for designing the deep neural networks (Section 3). Our initial focus will be on proposing appropriate analogs of feed-forward neural networks on manifolds, which are popular DNNs in the Euclidean space and suitable objects for theoretical analysis. The theoretical properties of the proposed geometric DNNs will be studied. Before describing our proposed frameworks, we introduce our mathematical definition of DNNs and related classes.
A DNN \(\tilde{f}\) with depth \(L\) and a width vector \(\mathbf{p}=(p_{0},\cdots,p_{L+1})\in\mathbb{N}^{L+2}\) is a function of the form \[\tilde{f}(\tilde{x}):=A_{L+1}\circ\sigma_{L}\circ A_{L}\circ\cdots\circ\sigma _{1}\circ A_{1}(\tilde{x}), \tag{1}\] where \(A_{l}:\mathbb{R}^{p_{l-1}}\rightarrow\mathbb{R}^{p_{l}}\) is an affine linear map defined by \(A_{l}(\tilde{x})=\mathbf{W}_{l}\tilde{x}+\mathbf{b}_{l}\) for a \(p_{l}\times p_{l-1}\) weight matrix \(\mathbf{W}_{l}\) and a \(p_{l}\)-dimensional bias vector \(\mathbf{b}_{l}\), and \(\sigma_{l}:\mathbb{R}^{p_{l}}\rightarrow\mathbb{R}^{p_{l}}\) is an element-wise nonlinear activation map with the ReLU activation function \(\sigma(z)=\max\{0,z\}\) as a popular choice. We refer to the maximum value \(\max_{j=1,\ldots,L}p_{j}\) of the width vector as the width of the DNN. We denote \(\mathbf{\theta}\) as the collection of all weight matrices and bias vectors: \(\mathbf{\theta}:=\left((\mathbf{W}_{1},\mathbf{b}_{1}),\ldots,(\mathbf{W}_{L+1},\mathbf{b}_{L+1})\right)\), the parameters of the DNN. Moreover, we denote by \(\|\mathbf{\theta}\|_{0}\) the number of non-zero parameter values (i.e., the sparsity) and by \(\|\mathbf{\theta}\|_{\infty}\) the maximum of parameters. We denote by \(\mathcal{F}(L,(p_{0}\sim P\sim p_{L+1}),S,B)\) the class of DNNs with depth \(L\), input dimension \(p_{0}\), width \(P\), output dimension \(p_{L+1}\), sparsity \(S\) and the maximum of parameters \(B\). For simplicity, if the input and output dimensions are clear in the context, we write \(\mathcal{F}(L,P,S,B)=\mathcal{F}(L,(p_{0}\sim P\sim p_{L+1}),S,B)\).

Let \(J:M\rightarrow\mathbb{R}^{D}\) be an embedding of \(M\) into some higher dimensional Euclidean space \(\mathbb{R}^{D}\) (\(D\geq d\)) and denote the image of the embedding as \(\tilde{M}=J(M)\). By definition of an embedding, \(J\) is a smooth map such that its differential \(dJ:T_{x}M\to T_{J(x)}\mathbb{R}^{D}\) at each point \(x\in M\) is an injective map from its tangent space \(T_{x}M\) to \(T_{J(x)}\mathbb{R}^{D}\), and \(J\) is a homeomorphism between \(M\) and its image \(\tilde{M}\). Our idea of building _an extrinsic DNN_ on a manifold relies on building a DNN on the image of the manifold after the embedding. The geometry of the manifold \(M\) can be well-preserved with a good choice of embedding, such as an equivariant embedding, which will be defined rigorously in Remark 2.2 below. The extrinsic framework has been adopted for the estimation of Frechet means [5], regression on manifolds [31], and construction of Gaussian processes on manifolds [30], which have enjoyed some notable features such as ease of computations and accurate estimations. The key idea of proposing an extrinsic feedforward neural network on a manifold \(M\) is to build a DNN on its image after the embedding. More specifically, we say that \(f\) is an _extrinsic deep neural network (eDNN)_ if \(f\) is of the form \[f(x)=\tilde{f}(J(x)), \tag{2}\] with a DNN \(\tilde{f}\). We denote the eDNN class induced by \(\mathcal{F}(L,P,S,B)\) as \[\mathcal{F}_{eDNN}(L,P,S,B):=\{f=\tilde{f}\circ J:\tilde{f}\in\mathcal{F}(L,P, S,B)\}.\] The extrinsic framework is very general and works for any manifold where a good embedding, such as an equivariant embedding, is available.
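To make the construction concrete, below is a minimal PyTorch sketch of an eDNN \(f=\tilde{f}\circ J\) as in eq. (2), together with a toy least-squares fit in the spirit of the ERM introduced in Section 2.3; the architecture sizes, the synthetic data on \(S^{2}\), and the training loop are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EDNN(nn.Module):
    """Extrinsic DNN f = f_tilde ∘ J (eq. (2)): embed, then feed forward."""

    def __init__(self, embedding, D, width=100, depth=5):
        super().__init__()
        self.embedding = embedding  # J: M -> R^D, supplied by the user
        layers, p = [], D
        for _ in range(depth):
            layers += [nn.Linear(p, width), nn.ReLU()]
            p = width
        layers.append(nn.Linear(p, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(self.embedding(x)).squeeze(-1)

# Toy regression on S^2, where J is the inclusion map (identity on coordinates).
x = nn.functional.normalize(torch.randn(256, 3), dim=1)
y = x[:, 0] * x[:, 1] + 0.05 * torch.randn(256)
model = EDNN(embedding=lambda z: z, D=3)

# Empirical risk minimization with squared loss (cf. eq. (4) below).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```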
Under this framework, training algorithms in the Euclidean space, such as the stochastic gradient descent (SGD) with backpropagation algorithms, can be utilized working with the data \((J(x_{i}),y_{i}),i=1,\ldots,n\), with the only additional computational burden potentially induced from working in the higher-dimensional ambient space. In our simulation Section 4, the extrinsic DNN yields better accuracy than the Naive Bayes classifier, kernel SVM, logistic regression classifier, and the random forest classifier for the planar shape datasets. Due to its simplicity and generality, there is a potential for applying eDNNs in medical imaging and machine vision for broader scientific impacts.

**Remark 2.1**.: _In [36] and [6], a feedforward neural network was used for nonparametric regression on a lower-dimensional submanifold embedded in some higher-dimensional ambient space. They showed that with appropriate conditions on the neural network structures, the convergence rates of the ERM would depend on the dimension \(d\) of the submanifold instead of the dimension of the ambient space \(D\). In their framework, they assume the geometry of the submanifold is unknown. From a conceptual point of view, our extrinsic framework can be viewed as a special case of theirs by ignoring the underlying geometry. In this case, the image of the manifold \(\tilde{M}=J(M)\) can be viewed as a submanifold in \(\mathbb{R}^{D}\), so their results follow. On the other hand, our embedding framework allows us to work with very complicated manifolds, such as the quotient manifolds for which no natural ambient coordinates are available. An example is the planar shape, which is the quotient of a typically high-dimensional sphere consisting of orbits of equivalent classes, with the submanifold structure only arising after the embedding. And such an embedding is typically not isometric._

_In [6], the charts were constructed by intersecting small balls in \(\mathbb{R}^{D}\) with the submanifold \(M\). In our case, we provide explicit charts of the submanifold based on the knowledge of the geometry of the original manifold \(M\) and the embedding map \(J\) that works with the ambient coordinates in \(\mathbb{R}^{D}\)._

**Remark 2.2**.: _One of the essential steps in employing an eDNN is the choice of the embedding \(J\), which is generally not unique. It is desirable to have an embedding that preserves as much geometry as possible. An equivariant embedding is one type of embedding that preserves a substantial amount of geometry. Figure 1 provides a visual illustration of equivariant embedding. Suppose \(M\) admits an action of a (usually 'large') Lie group \(H\). Then we say that \(J\) is an equivariant embedding if we can find a Lie group homomorphism \(\phi:H\to GL(D,\mathbb{R})\) from \(H\) to the general linear group \(GL(D,\mathbb{R})\) of degree \(D\) acting on \(\tilde{M}\) such that_ \[J(hp)=\phi(h)J(p)\] _for any \(h\in H\) and \(p\in M\). The definition seems technical at first sight. However, the intuition is clear. If a large group \(H\) acts on the manifold, such as by rotation, before embedding, such an action can be preserved via \(\phi\) on the image \(\tilde{M}\), thus potentially preserving many of the geometric features of \(M\), such as its symmetries. Therefore, the embedding is geometry-preserving in this sense._
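As a quick numerical illustration of the defining property \(J(hp)=\phi(h)J(p)\), the sketch below checks it for the matrix-log embedding of SPD matrices discussed next, restricted to an orthogonal \(h\) acting by conjugation — a special case chosen here (an assumption for the illustration) so that the identity \(\log(HPH^{T})=H\log(P)H^{T}\) holds exactly:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(0)

A = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)                       # a random SPD matrix
H, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthogonal: H.T == inv(H)

# Equivariance of the log embedding under conjugation by an orthogonal H.
print(np.allclose(logm(H @ P @ H.T), H @ logm(P) @ H.T))  # True
```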
_For the case of the planar shape, a collection of shapes consisting of \(k\) landmarks modulo Euclidean motions such as rotation, scaling, and translation, the space is a quotient manifold of the sphere \(S^{2k-3}\), and the embedding can be given by the Veronese-Whitney embedding, which is equivariant under the special unitary group. Another example that's less abstract to understand is the manifold of symmetric positive definite matrices, whose embedding can be given as the \(\log\) map (the matrix \(\log\) function) into the space of symmetric matrices, and this embedding is equivariant with respect to the group action of the general linear group via the conjugation group action. See Section 4 for some concrete examples of equivariant embeddings for well-known manifolds, such as the sphere, symmetric positive definite matrices, and planar shapes._

### Approximation analysis for eDNNs

In this section, we study the ability of the eDNN class in approximating an appropriate smooth class of functions on manifolds. First, we define the ball of \(\beta\)-Holder functions on a set \(U\subset\mathbb{R}^{D}\) with radius \(K\) as \[\mathcal{C}^{\beta}_{D}(U,K)=\{f:\|f\|_{\mathcal{C}^{\beta}_{D}(U)}\leq K\},\] where \(\|\cdot\|_{\mathcal{C}^{\beta}_{D}(U)}\) denotes the \(\beta\)-Holder norm defined as \[\|f\|_{\mathcal{C}^{\beta}_{D}(U)}=\sum_{m\in\mathbb{N}^{D}_{0}:\|m\|_{1}\leq [\beta]}\|\partial^{m}f\|_{\infty}+\sum_{m\in\mathbb{N}^{D}_{0}:\|m\|_{1}=[ \beta]}\sup_{x_{1},x_{2}\in U,x_{1}\neq x_{2}}\frac{|\partial^{m}f(x_{1})- \partial^{m}f(x_{2})|}{\|x_{1}-x_{2}\|_{\infty}^{\beta-[\beta]}}.\] Here, \(\partial^{m}f\) denotes the partial derivative of \(f\) of order \(m\) and \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). To facilitate smooth function approximation on manifolds, following [36], we impose an additional smoothness assumption on local coordinates which project inputs in an ambient space to a lower dimensional space.

**Definition 1**.: _We say that a compact \(d\)-dimensional manifold \(M\subset\mathbb{R}^{D}\) has smooth local coordinates if there exist charts \((V_{1},\psi_{1}),\ldots,(V_{r},\psi_{r})\), such that for any \(\gamma>0\), \(\psi_{j}\in\mathcal{C}^{\gamma}_{D}(\psi_{j}(V_{j}))\)._

The next theorem reveals the approximation ability of the eDNN class. For a measure of approximation, we consider the sup norm defined as \(\|f_{1}-f_{2}\|_{L^{\infty}(M)}:=\sup_{x\in M}|f_{1}(x)-f_{2}(x)|\) for two functions \(f_{1},f_{2}:M\to\mathbb{R}\).

**Theorem 1**.: _Let \(M\subset\mathbb{R}^{D}\) be a \(d\)-dimensional compact manifold and \(J:M\to\mathbb{R}^{D}\) be an embedding map. Assume that \(J(M)\) has smooth local coordinates. Then there exist positive constants \(c_{1},c_{2}\) and \(c_{3}\) depending only on \(D,d,\beta,K\) and the surface area of \(M\) such that for any \(\eta\in(0,1)\),_ \[\sup_{f_{0}:M\to[-1,1]\,\text{s.t.}\,f_{0}\circ J^{-1}\in\mathcal{C}^{\beta}_{D}(J(M),K)}\ \inf_{f\in\mathcal{F}_{eDNN}(L,P,S,B=1)}\|f-f_{0}\|_{L^{\infty}(M)}\leq\eta\] _with \(L\leq c_{1}\log\frac{1}{\eta}\), \(P\leq c_{2}\eta^{-\frac{d}{\beta}}\) and \(S\leq c_{3}\eta^{-\frac{d}{\beta}}\log\frac{1}{\eta}\)._

Figure 1: A simple illustration of equivariant embeddings

Proof.: Let \(\tilde{f}_{0}=f_{0}\circ J^{-1}\), then \(\tilde{f}_{0}\) is a function on the \(d\)-dimensional manifold \(\tilde{M}=J(M)\subset\mathbb{R}^{D}\).
Since \(\tilde{M}\) has smooth local coordinates, we can apply Theorem 2 in [36]: there exists a network \(\tilde{f}\in\mathcal{F}(L,(D\sim P\sim 1),S,1)\) such that \(\|\tilde{f}-\tilde{f}_{0}\|_{L^{\infty}(\tilde{M})}<\eta\) with \(L\leq c_{1}\log\frac{1}{\eta}\), \(P\leq c_{2}\eta^{-\frac{d}{\beta}}\) and \(S\leq c_{3}\eta^{-\frac{d}{\beta}}\log\frac{1}{\eta}\) for some \(c_{1}>0,c_{2}>0\) and \(c_{3}>0\). Now, let \(f=\tilde{f}\circ J\in\mathcal{F}_{eDNN}(L,(D\sim P\sim 1),S,1)\). Then \[\|f-f_{0}\|_{L^{\infty}(M)}=\|\tilde{f}\circ J-\tilde{f}_{0}\circ J\|_{L^{ \infty}(M)}=\|\tilde{f}-\tilde{f}_{0}\|_{L^{\infty}(\tilde{M})}.\] Therefore, we get the desired result.

### Statistical risk analysis for eDNNs

In this section, we study the statistical risk of the empirical risk minimizer (ERM) based on the eDNN class. We assume the following regression model \[y_{i}=f_{0}(x_{i})+\epsilon_{i} \tag{3}\] for \(i=1,\ldots,n\), where \(x_{1},\ldots,x_{n}\in M\) are i.i.d. inputs following a distribution \(P_{x}\) on the manifold and \(\epsilon_{1},\ldots,\epsilon_{n}\) are i.i.d. sub-Gaussian errors. We consider the ERM over the eDNN class such that \[\hat{f}_{eDNN}=\operatorname*{argmin}_{f\in\mathcal{F}_{eDNN}(L,P,S,B)}\frac{ 1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}. \tag{4}\] A natural question to ask is whether ERM-type estimators such as \(\hat{f}_{eDNN}\) defined above achieve minimax optimal estimation of \(\beta\)-Holder smooth functions on manifolds, in terms of the excess risk \[R(\hat{f}_{eDNN},f_{0}):=E(\hat{f}_{eDNN}(x)-f_{0}(x))^{2}\] where the expectation is taken over the random variable \(x\sim P_{x}\).

**Theorem 2**.: _Assume the model (3) with a \(d\)-dimensional compact manifold \(M\subset\mathbb{R}^{D}\) and an embedding map \(J:M\rightarrow\mathbb{R}^{D}\). Moreover, assume that \(J(M)\) has smooth local coordinates. Then the ERM estimator \(\hat{f}_{eDNN}\) over the eDNN class \(\mathcal{F}_{eDNN}(L,P,S,B=1)\) in (4) with \(L\asymp\log(n)\), \(n\gtrsim P\gtrsim n^{d/(2\beta+d)}\) and \(S\asymp n^{d/(2\beta+d)}\log n\) satisfies_ \[\sup_{f_{0}:M\rightarrow[-1,1]\,\text{s.t.}\,f_{0}\circ J^{-1}\in C^{\beta}_{D}(\mathbb{R}^{D},K)}R(\hat{f}_{eDNN},f_{0})\lesssim n^{-\frac{2\beta}{2\beta+d}}\log^{3}n.\]

Proof.: For any \(\tilde{f}_{1},\tilde{f}_{2}\in\mathcal{F}(L,P,S,B=1)\), we have \(\|\tilde{f}_{1}\circ J-\tilde{f}_{2}\circ J\|_{L^{\infty}(M)}=\|\tilde{f}_{1}-\tilde{f}_{2}\|_{L^{\infty}(\tilde{M})}\leq\|\tilde{f}_{1}-\tilde{f}_{2}\|_{L^{\infty}(\mathbb{R}^{D})}\). Hence the entropy of the eDNN class \(\mathcal{F}_{eDNN}(L,P,S,B=1)\) is bounded by that of \(\mathcal{F}(L,P,S,B=1)\). Thus, by Lemmas 4 and 5 of [37], we have \[R\left(\hat{f}_{eDNN},f_{0}\right)\lesssim\inf_{f\in\mathcal{F}_{eDNN}(L,P,S,B =1)}\|f-f_{0}\|_{L^{\infty}(M)}^{2}+\frac{(S+1)\log\left(2n(L+1)P^{2L}(D+1)^{2} \right)+1}{n}.\] Therefore, by Theorem 1, if we take \(L,P\) and \(S\) as in the theorem, we get the desired result.

## 3 Intrinsic deep neural networks (iDNNs) on manifolds

### The iDNN architectures on a Riemannian manifold

Despite the generality and computational advantage enjoyed by the eDNNs on manifolds proposed in the previous section, one potential drawback is that an embedding is not always available on complex manifolds, such as certain spatial domains with intrinsic structure. In this section, we propose a class of intrinsic DNNs on manifolds (iDNNs) by employing the intrinsic geometry of a manifold to utilize its exponential and log maps with respect to a Riemannian structure.
Some works construct a DNN on the manifold via mapping the points on the manifold to a _single tangent space_ (e.g., with respect to some central point of the data) or propose DNNs on specific manifolds, in particular, matrix manifolds [19; 14]. A DNN on a single tangent space cannot provide a good approximation of a function on the whole manifold. Below we provide a rigorous framework for providing a local approximation of a function on a Riemannian manifold via Riemannian exponential and logarithm maps and thoroughly investigate its theoretical properties. The key ideas here are to first cover the manifold with images of subsets of tangent spaces \(U_{1},\ldots,U_{K}\) under the exponential map, approximate a local function over each tangent space using DNNs, which are then patched together via the transition maps and a partition of unity on the Riemannian manifold. Specifically, let \(\{x_{1},\ldots,x_{K}\in M\}\) be a finite set of points, such that for open subsets \(U_{k}\subset T_{x_{k}}M\), \(k=1,\ldots,K\), one has \(\bigcup_{k=1}^{K}\exp_{x_{k}}(U_{k})=M\). Namely, one has \(\big\{\big(\exp_{x_{k}}(U_{k}),\ \exp_{x_{k}}\big),\ \ k=1,\ldots,K\big\}\) as the charts of the manifold \(M\). For each \(k=1,\ldots,K\) one has an orthonormal basis \(v_{k1},\ldots,v_{kd}\in T_{x_{k}}M\) and respectively the normal coordinates of \(x\in\exp_{x_{k}}(U_{k})\) \[v_{k}^{j}(x)=\big\langle\log_{x_{k}}x,v_{kj}\big\rangle\quad\text{for}\quad j =1,\ldots,d.\] Thus \[v_{k}(x)=\big(v_{k}^{1}(x),\ldots,v_{k}^{d}(x)\big)=\sum_{j=1}^{d}v_{k}^{ j}(x)v_{kj}\in T_{x_{k}}M.\] The normal coordinates allow one to perform elementwise non-linear activation on tangent vectors easily. For any \(1\leq k<l\leq K\) one has the transition map on \(\exp_{x_{l}}(U_{l})\cap\exp_{x_{k}}(U_{k})\) \[v_{k}^{j}(x)=\big\langle\log_{x_{k}}x,v_{kj}\big\rangle=\big\langle\log _{x_{k}}\exp_{x_{l}}v_{l}(x),v_{kj}\big\rangle\quad\text{for}\quad j=1, \ldots,d.\] A compact manifold \(M\) always admits a _finite partition of unity_ \(\big\{\tau_{k},\ k=1,\ldots,K\big\}\), \(\tau_{k}(\cdot):M\to\mathbb{R}_{+}\) such that \(\sum_{k=1}^{K}\tau_{k}(x)=1\), and for every \(x\in M\) there is a neighbourhood of \(x\) where all but a finite number of the functions are \(0\) (e.g., Proposition 13.9 of [40]). Therefore, for each function \(f:M\to\mathbb{R}\), we can write \[f(x)=\sum_{k=1}^{K}\tau_{k}(x)f\Big(\exp_{x_{k}}\big(\log_{x_{k}}x\big)\Big) \doteq\sum_{k=1}^{K}\tau_{k}(x)f_{k}(\log_{x_{k}}(x)). \tag{5}\] As a result, one can model the compositions \(f_{k}=f\circ\exp_{x_{k}}:U_{k}\to\mathbb{R}\) instead of \(f\), for which we propose to use DNNs. This idea gives rise to our iDNN architecture \(f(x)=\sum_{k=1}^{K}\tau_{k}(x)f_{k}\left(\log_{x_{k}}(x)\right)\). Figure 2 illustrates the core ideas of the iDNN architecture. Given a set of points \(\{x_{1},\ldots,x_{K}\}\subset M\), we define the iDNN class with depth \(L\), width \(P\), sparsity \(S\) and the maximum of parameters \(B\) as \[\mathcal{F}_{iDNN}(L,P,S,B)=\left\{\sum_{k=1}^{K}\tau_{k}(x)f_{k}\left(\log_{x _{k}}(x)\right):f_{k}\in\mathcal{F}(L,(d\sim P\sim 1),S,B)\right\}. \tag{6}\]
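To make eqs. (5)-(6) concrete, below is a small NumPy sketch of the iDNN forward pass on \(S^{2}\) with two charts based at the poles (the configuration used in the experiments of Section 4). The closed-form sphere log map is standard; the chart radius and the normalization of the \(\tau_{k}\) are illustrative choices, and the per-chart networks are stand-in callables rather than trained DNNs.

```python
import numpy as np

def sphere_log(p, x):
    # Log map on the unit sphere: tangent vector at p pointing toward x,
    # with length equal to the geodesic distance arccos(<p, x>).
    c = np.clip(x @ p, -1.0, 1.0)
    v = x - c[:, None] * p
    nv = np.linalg.norm(v, axis=1, keepdims=True)
    return np.arccos(c)[:, None] * v / np.maximum(nv, 1e-12)

def bump(t):
    # Compactly supported bump e^{-1/(1-t)} for t < 1, zero otherwise.
    out = np.zeros_like(t)
    m = t < 1.0
    out[m] = np.exp(-1.0 / (1.0 - t[m]))
    return out

def idnn_forward(x, base_points, nets):
    # Eq. (5): f(x) = sum_k tau_k(x) f_k(log_{x_k}(x)); tau_k is built from
    # bump functions of squared chordal distance (rescaled so the two
    # charts overlap) and normalized to sum to one.
    w = np.stack([bump(np.sum((x - p) ** 2, axis=1) / 4.0)
                  for p in base_points], axis=1)
    tau = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    preds = np.stack([nets[k](sphere_log(p, x))
                      for k, p in enumerate(base_points)], axis=1)
    return np.sum(tau * preds, axis=1)

poles = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
nets = [lambda v: v[:, 0], lambda v: -v[:, 0]]   # stand-ins for trained f_k
x = np.random.default_rng(1).standard_normal((5, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
print(idnn_forward(x, poles, nets))
```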
### Approximation analysis for iDNNs

In this section, we investigate the approximation theory for the iDNN for smooth functions on manifolds.

**Theorem 3**.: _Let \(M\subset\mathbb{R}^{D}\) be a \(d\)-dimensional compact manifold. Assume that \(\exp_{x_{k}}\in\mathcal{C}_{D}^{\gamma}(U_{k})\) for \(\gamma>\beta\) for every \(k=1,\ldots,K\). Then there exist positive constants \(c_{1},c_{2}\) and \(c_{3}\) depending only on \(D,d,\beta,K\) and the surface area of \(M\) such that for any \(\eta\in(0,1)\),_ \[\sup_{f_{0}:M\to[-1,1]\,\text{s.t.}\,f_{0}\in\mathcal{C}_{D}^{\beta}(M,K)}\ \inf_{f\in\mathcal{F}_{iDNN}(L,P,S,B=1)}\|f-f_{0}\|_{L^{\infty}(M)}\leq\eta\] _with \(L\leq c_{1}\log\frac{1}{\eta}\), \(P\leq c_{2}\eta^{-\frac{d}{\beta}}\) and \(S\leq c_{3}\eta^{-\frac{d}{\beta}}\log\frac{1}{\eta}\)._

Proof.: We construct a DNN approximating \(f_{0k}=f_{0}\circ\exp_{x_{k}}\) for each \(k=1,\ldots,K\). Note that \(f_{0k}\) is \(\beta\)-Holder smooth by assumption. Therefore, by Theorem 1 in [36], there exist DNNs \(f_{1},\ldots,f_{K}\in\mathcal{F}(L,(d\sim P\sim 1),S,1)\) such that \(\|f_{k}-f_{0k}\|_{L^{\infty}(U_{k})}<\eta\) with \(L\leq c_{1}\log\frac{1}{\eta}\), \(P\leq c_{2}\eta^{-\frac{d}{\beta}}\) and \(S\leq c_{3}\eta^{-\frac{d}{\beta}}\log\frac{1}{\eta}\) for some \(c_{1}>0,c_{2}>0\) and \(c_{3}>0\). Now, let \(f=\sum_{k=1}^{K}\tau_{k}(x)f_{k}(\log_{x_{k}}(x))\in\mathcal{F}_{iDNN}(L,P,S,1)\). Then \[\|f-f_{0}\|_{L^{\infty}(M)} =\sup_{x\in M}\left|\sum_{k=1}^{K}\tau_{k}(x)f_{k}(\log_{x_{k}}( x))-\sum_{k=1}^{K}\tau_{k}(x)f_{0k}(\log_{x_{k}}(x))\right|\] \[\leq\sup_{x\in M}\sum_{k=1}^{K}\tau_{k}(x)\left|f_{k}(\log_{x_{k}} (x))-f_{0k}(\log_{x_{k}}(x))\right|\] \[\leq\max_{1\leq k\leq K}\|f_{k}-f_{0k}\|_{L^{\infty}(U_{k})}<\eta,\] which completes the proof.

**Remark 3.1**.: _[36] and [6] propose feedforward neural networks on a manifold that's embedded in a higher-dimensional Euclidean space. In the approximation theory of [36] and [6], they utilize local charts and partitions of unity, but due to the unknown geometry of the manifold, they need to use DNNs to approximate the local charts \(\psi_{j}\)s, the partition of unity functions, as well as the mappings \(f\circ\psi_{j}^{-1}\). Under our iDNN framework, we utilize the Riemannian geometry of the manifold and the \(\log\) map. Further, the partition of unity functions can be constructed explicitly, so there is no need to approximate them with DNNs._

### Statistical risk analysis for iDNNs

In this section, we study the statistical risk of the ERM over the iDNN class given by \[\hat{f}_{iDNN}=\operatorname*{argmin}_{f\in\mathcal{F}_{iDNN}(L,P,S,B)}\frac{ 1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2} \tag{7}\] for the nonparametric regression model (3), where the true function \(f_{0}\) is \(\beta\)-Holder smooth on the manifold. The following theorem shows that the iDNN estimator attains the optimal rate. We omit the proof since it is almost the same as the proof of Theorem 2 except using the approximation result for the iDNN class given in Theorem 3.

**Theorem 4**.: _Assume the model (3) with a \(d\)-dimensional compact manifold \(M\) isometrically embedded in \(\mathbb{R}^{D}\)._
_Then the ERM estimator \(\hat{f}_{iDNN}\) over the iDNN class \(\mathcal{F}_{iDNN}(L,P,S,B=1)\) in (7) with \(L\asymp\log(n)\), \(n\gtrsim P\gtrsim n^{d/(2\beta+d)}\) and \(S\asymp n^{d/(2\beta+d)}\log n\) satisfies_ \[\sup_{f_{0}:M\mapsto[-1,1]\,\text{s.t.}\,\,f_{0}\in\mathcal{C}_{D}^{\beta}(M, K)}R(\hat{f}_{iDNN},f_{0})\lesssim n^{-\frac{2\beta}{2\beta+d}}\log^{3}n.\]

Proof.: For any two iDNNs \(f(\cdot)=\sum_{k=1}^{K}\tau_{k}(\cdot)f_{k}(\log_{x_{k}}(\cdot))\) and \(f^{\prime}(\cdot)=\sum_{k=1}^{K}\tau_{k}(\cdot)f^{\prime}_{k}(\log_{x_{k}}( \cdot))\) in \(\mathcal{F}_{iDNN}(L,P,S,B)\), we have \[\|f-f^{\prime}\|_{L^{\infty}(M)} \leq\sup_{x\in M}\sum_{k=1}^{K}\tau_{k}(x)\left|f_{k}(\log_{x_{k} }(x))-f^{\prime}_{k}(\log_{x_{k}}(x))\right|\] \[\leq\max_{1\leq k\leq K}\|f_{k}-f^{\prime}_{k}\|_{L^{\infty}(\mathbb{R}^{ d})}\,.\] Therefore, the entropy of \(\mathcal{F}_{iDNN}(L,P,S,B)\) is bounded by \(K\) times the entropy of the class \(\mathcal{F}(L,P,S,B)\). In the same way as in the proof of Theorem 2, we get the desired result.

Figure 2: The iDNN architecture on a Riemannian manifold \(M\). Given the base points \(\{x_{1},\dots,x_{K}\in M\}\) and the charts \(\{U_{k}\subset T_{x_{k}}M,k=1,\dots,K\}\) on the manifold \(M\), the input data \(X\) is mapped to the \(k\)th chart \(U_{k}\) by the log map \(\log_{x_{k}}(\cdot)\). Afterward, the transformed data is fed into the DNN \(f_{k}\) on each chart \(k\). The final prediction \(Y\) is given by the partition of unity \(\tau(\cdot)\) as \(Y=\sum_{k=1}^{K}\tau_{k}(x)f_{k}\left(\log_{x_{k}}(x)\right)\).

## 4 Simulation study and real data analysis

We illustrate the practical impact and utility of our methods on simulated data sets and some important real data sets, including the AFEW database, the HDM05 database, the ADHD-200 dataset, an HIV study, and others. The proposed eDNNs, tDNNs, and iDNNs are applied to learning problems such as regression and classification on various manifolds, including the sphere, the planar shapes, and the manifold of symmetric positive definite matrices, which are the most popular classes of manifolds encountered in medical diagnostics using medical imaging and image classification in digital imaging analysis. For the eDNN models, we list explicit embeddings below and the corresponding Lie groups that act on them equivariantly. For the iDNN models, we elaborate the exponential map and inverse-exponential (log) map on those manifolds. As mentioned before, the tDNN model is the special case of the iDNN model with \(K=1\), which utilizes the exponential map and inverse-exponential map as well.

### Sphere

One of the simplest manifolds of interest is the sphere, which arises in particular in directional statistics and spatial statistics [12, 32, 11, 23, 18]. Statistical analysis of data from the two-dimensional sphere \(S^{2}\), often called directional statistics, has a fairly long history [12, 32, 11]. Modeling on the sphere has also received recent attention due to applications in spatial statistics, for example, global models for climate or satellite data [23, 18]. To build the eDNN on the sphere, first note that \(S^{d}\) is a submanifold of \(\mathbb{R}^{d+1}\), so that the inclusion map \(J\) serves as a natural embedding of \(S^{d}\) into \(\mathbb{R}^{d+1}\). It is easy to check that \(J\) is an equivariant embedding with respect to the Lie group \(H=SO(d+1)\), the group of \(d+1\) by \(d+1\) special orthogonal matrices.
Intuitively speaking, this embedding preserves a lot of the symmetries of the sphere. On the other hand, one can use the geodesics (in this case, the great circles on the sphere), for which closed-form exponential and inverse-exponential maps are available, to construct the iDNN model. Furthermore, given the base points \(x_{i},i=1,\ldots,k\), one can take \(\tau_{i}(x)=e^{-\frac{1}{1-\|x-x_{i}\|^{2}}}\) by utilizing the bump function on the sphere. In this simulation study, we consider the classification problem in terms of the von Mises-Fisher distribution (MF) on the sphere \(S^{2}\), which has the following density: \[f_{\rm MF}(y;\mu,\kappa)\propto\exp\left(\kappa\mu^{T}y\right), \tag{8}\] where \(\kappa\) is a concentration parameter and \(\mu\) a location parameter. Then we simulate the data from \(K\) different classes on the sphere \(S^{d}\) via a mixture of MF distributions as: \[u_{i1},...,u_{i10}\sim{\rm MF}(\mu_{i},\kappa_{1}),\quad i=1,..,K, \tag{9}\] \[m_{ij}\sim{\rm unif}\{u_{i1},...,u_{i10}\},\quad x_{ij}\sim{\rm MF}(m_{ij},\kappa_{2}), \tag{10}\] \[i=1,..,K,\quad j=1,...,N. \tag{11}\] Here \(x_{ij}\) is the \(j\)th sample from the \(i\)th class, \(\mu_{i}\) is the mean for the \(i\)th class, and \(\kappa_{1},\kappa_{2}\) are the dispersion parameters shared by all classes. We first generated \(10\) means \(u_{i1},...,u_{i10}\) from the \({\rm MF}\) distribution for the \(i\)th class. Then for each class, we generated \(N\) observations as follows: for each observation \(x_{ij}\), we randomly picked \(m_{ij}\) from \(u_{i1},...,u_{i10}\) with probability \(1/10\), and then generated an observation from \({\rm MF}(m_{ij},\kappa_{2})\), thus leading to a mixture of \({\rm MF}\) distributions. Moreover, \(\kappa_{1}\) controls the dispersion of the intermediate variables \(m_{ij}\) while \(\kappa_{2}\) controls the dispersion of the observations \(x_{ij}\). Figure 3 shows observations from the mixture model on the sphere under different dispersions.
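Below is a direct (unoptimized) transcription of the generative model (9)-(11) using SciPy's von Mises-Fisher sampler. Note that `scipy.stats.vonmises_fisher` requires SciPy >= 1.11, and the class means \(\mu_{i}\) are drawn at random here purely for illustration.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # SciPy >= 1.11

rng = np.random.default_rng(0)
d, K, N = 3, 2, 2000          # ambient dimension (S^2), classes, samples/class
kappa1, kappa2 = 4, 20

X, labels = [], []
for i in range(K):
    mu = rng.standard_normal(d)
    mu /= np.linalg.norm(mu)                     # a random class mean on S^2
    # Step (9): ten intermediate means per class.
    u = vonmises_fisher(mu, kappa1).rvs(10, random_state=rng)
    # Steps (10)-(11): pick a mean uniformly, then draw the observation.
    picks = u[rng.integers(0, 10, size=N)]
    X.append(np.stack([vonmises_fisher(m, kappa2).rvs(1, random_state=rng)[0]
                       for m in picks]))
    labels.append(np.full(N, i))
X, labels = np.concatenate(X), np.concatenate(labels)
print(X.shape, labels.shape)  # (4000, 3) (4000,)
```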
In the following simulation, we follow the mixture model on the hyperspheres \(S^{2},S^{10},S^{50}\), and \(S^{100}\) with \(K=2\), \(N=2000\), \(\kappa_{1}=4\), \(\kappa_{2}=20\), and divide the data into a 75 percent training set and a 25 percent test set. We repeat this split \(50\) times. Then we compare the eDNN, tDNN, and iDNN models to other competing estimators via the classification accuracy on the test set in Table 1. For competing estimators, we consider the k-nearest neighbors (kNN), the random forest (RF), the logistic regression (LR), and the support vector machine (SVM) with the radial basis function (RBF) kernel. The tuning parameters in each method are selected by evaluation on a validation data set whose size is \(25\%\) of the training set. For all DNN models, we apply a network architecture of \(5\) hidden layers with widths \((100,100,100,100,100)\). The DNN model is the same as the eDNN model here since the embedding map from the sphere to the higher-dimensional Euclidean space is the inclusion map. In the tDNN model, we consider the Frechet mean of the training set as the base point and transform all data in the batch to tangent vectors before feeding them to the neural network. In the iDNN model, we consider the north and south poles \((\pm 1,0,..,0)\) as base points and use a neural network with the same structure for all tangent spaces. All models are trained with the Adam optimizer [26]. As shown in Table 1, our tDNN and iDNN models outperform the other competing estimators. Specifically, our tDNN models achieve the best accuracy, \(94.88\pm 0.53\) and \(97.13\pm 0.39\), in the low dimensional cases. Our iDNN models obtained the best results, \(80.72\pm 0.94\) and \(68.43\pm 1.20\), in the high dimensional cases.

\begin{table} \begin{tabular}{l l l l l} \hline & \(S^{2}\) & \(S^{10}\) & \(S^{50}\) & \(S^{100}\) \\ \hline DNN & \(94.12\pm 0.67\) & \(96.22\pm 0.63\) & \(75.93\pm 1.07\) & \(62.53\pm 1.35\) \\ tDNN & \(94.88\pm 0.53\) & \(97.13\pm 0.39\) & \(80.07\pm 0.95\) & \(68.26\pm 1.16\) \\ iDNN & \(94.69\pm 0.65\) & \(97.11\pm 0.41\) & \(80.72\pm 0.94\) & \(68.43\pm 1.20\) \\ \hline kNN & \(92.16\pm 0.77\) & \(94.98\pm 0.60\) & \(69.18\pm 1.44\) & \(56.24\pm 1.30\) \\ LR & \(92.98\pm 0.76\) & \(88.64\pm 0.76\) & \(72.38\pm 1.14\) & \(66.73\pm 1.37\) \\ RF & \(93.66\pm 0.83\) & \(89.93\pm 0.65\) & \(70.29\pm 1.48\) & \(62.29\pm 1.45\) \\ SVM & \(94.07\pm 0.1\) & \(96.85\pm 0.44\) & \(79.38\pm 1.15\) & \(68.25\pm 1.18\) \\ \end{tabular} \end{table} Table 1: The test accuracy is calculated over \(50\) random splits. The \(5\)-layer network (with \(100\) hidden nodes in each layer) is used for our DNN models in all experiments. Our tDNN model achieved the best result when the dimension was low (\(S^{2},S^{10}\)), while our iDNN is the best in the high-dimension cases (\(S^{50},S^{100}\)). Moreover, our tDNN and iDNN models show better accuracy than the classical DNN, especially in the high-dimensional cases.

Figure 3: Observations for \(K=2\) classes from the mixture \(\mathrm{MF}\) distribution, \(N=100\). The nonlinear boundary between the two classes becomes hard to see with the naked eye as the variance of the data surges when \(\kappa_{1},\kappa_{2}\) drop, which makes the classification problem harder.

### The planar shape

Let \(z=(z_{1},\ldots,z_{k})\), with \(z_{1},\ldots,z_{k}\in\mathbb{R}^{2}\), be a set of \(k\) landmarks. The planar shape \(\Sigma_{2}^{k}\) is the collection of \(z\)'s modulo the Euclidean motions, including translation, scaling, and rotation. One has \(\Sigma_{2}^{k}=S^{2k-3}/SO(2)\), the quotient of the sphere by the action of \(SO(2)\) (i.e., rotations), the group of \(2\times 2\) special orthogonal matrices. A point in \(\Sigma_{2}^{k}\) can be identified as the orbit of some \(u\in S^{2k-3}\), which we denote as \(\sigma(z)\). Viewing \(z\) as a vector of complex numbers, one can embed \(\Sigma_{2}^{k}\) into \(S(k,\mathbb{C})\), the space of \(k\times k\) complex Hermitian matrices, via the Veronese-Whitney embedding (see, e.g., [4]): \[J(\sigma(z))=uu^{*}=((u_{i}\bar{u}_{j}))_{1\leq i,j\leq k}. \tag{12}\] One can verify that \(J\) is equivariant (see [24]) with respect to the Lie group \[H=SU(k)=\{A\in GL(k,\mathbb{C}):AA^{*}=I,\det(A)=1\},\] with its action on \(\Sigma_{2}^{k}\) induced by left multiplication. We consider a planar shape data set, which involves measurements of a group of typically developing children and a group of children suffering from ADHD (attention deficit hyperactivity disorder). ADHD is one of the most common psychiatric disorders for children that can continue through adolescence and adulthood. Symptoms include difficulty staying focused and paying attention, difficulty controlling behavior, and hyperactivity (over-activity). In general, ADHD has three subtypes: (1) ADHD hyperactive-impulsive, (2) ADHD-inattentive, (3) combined hyperactive-impulsive and inattentive (ADHD-combined).
ADHD-200 ([http://fcon_1000.projects.nitrc.org/indi/adhd200/](http://fcon_1000.projects.nitrc.org/indi/adhd200/)) is a data set that records both anatomical and resting-state functional MRI data of 776 labeled subjects across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages: 7-21 years old). The planar Corpus Callosum (CC) shape data are extracted, with 50 landmarks on the contour of the Corpus Callosum of each subject (see [17]); see Figure 4 for a plot of the raw landmarks of a typically developing child and an ADHD child. After quality control, 647 CC shape data out of 776 subjects were obtained, which included 404 (\(n_{1}\)) typically developing children, 150 (\(n_{2}\)) diagnosed with ADHD-Combined, 8 (\(n_{3}\)) diagnosed with ADHD-Hyperactive-Impulsive, and 85 (\(n_{4}\)) diagnosed with ADHD-Inattentive. Therefore, the data lie in the space \(\Sigma_{2}^{50}\), which has a high dimension of \(2\times 50-4=96\). As shown in Table 2, we consider the classification problem with 4 different classes. We also divided the dataset into a \(75\) percent training set and a \(25\) percent test set and evaluated the classification accuracy on the test set compared to other learning methods. Since the sample sizes are unbalanced and the total number in some classes is too small (i.e., the ADHD-Hyperactive/Impulsive case), we also considered the classification with two classes by combining those ADHD samples into one class, as shown in the right panel of Figure 4. Similar to the sphere case, we select the k-nearest neighbors (kNN), the random forest (RF), the logistic regression (LR), and the support vector machine (SVM) with the radial basis function (RBF) kernel as competing estimators. The tuning parameters in each method are selected by evaluation on a validation data set whose size is \(25\%\) of the training set. For all DNN models, we utilize the same network architecture of \(5\) hidden layers with widths \((100,100,100,100,100)\).

\begin{table} \begin{tabular}{c c c c} \hline Disease status & Num. & Range of age in years (mean) & Gender (female/male) \\ \hline Typically Developing Children & 404 & \(7.09-21.83(12.43)\) & \(179/225\) \\ ADHD-Combined & 150 & \(7.17-20.15(10.96)\) & \(39/111\) \\ ADHD-Hyperactive/Impulsive & 8 & \(9.22-20.89(14.69)\) & \(1/7\) \\ ADHD-Inattentive & 85 & \(7.43-17.61(12.23)\) & \(18/67\) \\ All data & 647 & \(7.09-21.83(12.09)\) & \(237/410\) \\ \hline \end{tabular} \end{table} Table 2: Demographic information about the processed ADHD-200 CC shape dataset, including disease status, age, and gender.

Figure 4: CC shapes

The DNN model is applied to the raw data, while the eDNN model is applied to the data embedded by the Veronese-Whitney embedding. The preshape data (normalized raw data) lying in the hypersphere \(S^{100}\) are used for the tDNN model and iDNN model. In the iDNN model, we chose the north pole and south pole \((\pm 1,0,..,0)\) as base points and utilized the geometry of the hypersphere as before. In the tDNN model, we pick the Frechet mean of the training set as the base point and transform all data in a batch to tangent vectors before feeding them to the neural network. All models are trained with the Adam optimizer. The comparison results can be observed in Table 3. Our tDNN model achieves the best accuracy of \(65.84\pm 3.10\) among 50 splits in the 2-class case. Also, our iDNN model showed the best result of \(63.55\pm 3.80\) in the 4-class case.
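For reference, here is a small NumPy sketch of the Veronese-Whitney embedding in eq. (12): landmarks are treated as complex numbers, translation and scale are quotiented out to reach the preshape sphere, and the outer product \(uu^{*}\) cancels the residual rotation. The random landmarks are illustrative only; a real eDNN would feed the real and imaginary parts of the resulting Hermitian matrix to the network.

```python
import numpy as np

def veronese_whitney(z):
    # z: complex vector of k landmarks. Center (remove translation),
    # normalize (remove scale), then form u u*, which is invariant to
    # the residual rotation u -> e^{i a} u.
    u = z - z.mean()
    u = u / np.linalg.norm(u)
    return np.outer(u, u.conj())

rng = np.random.default_rng(0)
z = rng.standard_normal(50) + 1j * rng.standard_normal(50)  # k = 50 landmarks
E = veronese_whitney(z)
# Invariance under rotation, scaling, and translation of the landmarks:
print(np.allclose(E, veronese_whitney(2.0 * np.exp(0.7j) * z + (1 + 1j))))  # True
```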
### Symmetric positive definite (SPD) matrices

Covariance matrices are ubiquitous and attractive in machine learning applications due to their capacity to capture the structure inside the data. The main challenge is to take the particular geometry of the Riemannian manifold of symmetric positive definite (SPD) matrices into consideration. The space \(\text{SPD}(d)\) of all \(d\times d\) symmetric positive definite matrices belongs to an important class of manifolds that possess particular geometric structures, which should be taken into account for building the DNNs. [13] investigates its Riemannian structure and provides concrete forms of all its geometric quantities. [9] studies different notions of means and averages in \(\text{SPD}(3)\) with respect to different distance metrics and considers applications to DTI data and covariance matrices. Under the Riemannian framework of tensor computing [35], several metrics play an important role in machine learning on SPD matrices. Generally, the Riemannian distance \(d(P_{1},P_{2})\) between two points \(P_{1}\) and \(P_{2}\) on the manifold is defined as the length of the geodesic \(\gamma_{P_{1}\to P_{2}}\), i.e., the shortest parameterized curve connecting them. In the SPD manifold, the distance under the affine metric can be computed as [35]: \[d\left(P_{1},P_{2}\right)=\frac{1}{2}\left\|\log\left(P_{1}^{-\frac{1}{2}}P_{ 2}P_{1}^{-\frac{1}{2}}\right)\right\|_{F}.\] Other important natural mappings between the manifold and its tangent bundle are the logarithmic mapping \(Log_{P_{0}}\) and the exponential mapping \(Exp_{P_{0}}\) at the point \(P_{0}\). Under the affine metric, those two mappings are known in closed form: \[\forall S\in\mathcal{T}_{P_{0}},\quad Exp_{P_{0}}(S)=P_{0}^{\frac{1}{2}}\exp\left(P _{0}^{-\frac{1}{2}}SP_{0}^{-\frac{1}{2}}\right)P_{0}^{\frac{1}{2}}\in\text{ SPD}(d),\] \[\forall P\in\text{SPD}(d),\quad Log_{P_{0}}(P)=P_{0}^{\frac{1}{2}}\log\left(P_{0}^{ -\frac{1}{2}}PP_{0}^{-\frac{1}{2}}\right)P_{0}^{\frac{1}{2}}\in\mathcal{T}_{P _{0}},\] where \(\mathcal{T}_{P_{0}}\) denotes the tangent space at \(P_{0}\). Furthermore, we consider the matrix log map as the embedding \(J\), mapping \(\text{SPD}(d)\) to \(Sym(d)\), the space of symmetric matrices. For example, let \(P\in\text{SPD}(d)\) with a spectral decomposition \(P=U\Sigma U^{T}\); then the log map of \(P\) is \(\log(P)=U\log(\Sigma)U^{T}\), where \(\log(\Sigma)\) denotes the diagonal matrix whose diagonal entries are the logarithms of the diagonal entries of \(\Sigma\). Moreover, the embedding \(J\) is a diffeomorphism, equivariant with respect to the actions of \(GL(d,\mathbb{R})\), the \(d\) by \(d\) general linear group. That is, for \(H\in GL(d,\mathbb{R})\), we have \(\log(HPH^{T})=H\log(P)H^{-1}\).

\begin{table} \begin{tabular}{l c c} & 4 Classes & 2 Classes \\ \hline DNN & \(56.40\pm 10.83\) & \(61.09\pm 8.44\) \\ eDNN & \(62.98\pm 3.91\) & \(63.81\pm 3.72\) \\ tDNN & \(63.20\pm 3.70\) & \(\mathbf{65.84\pm 3.10}\) \\ iDNN & \(\mathbf{63.55\pm 3.80}\) & \(65.42\pm 3.41\) \\ \hline kNN & \(57.62\pm 3.37\) & \(61.26\pm 3.84\) \\ LR & \(61.35\pm 3.54\) & \(59.58\pm 3.44\) \\ RF & \(61.38\pm 3.50\) & \(63.20\pm 3.13\) \\ SVM & \(61.80\pm 3.92\) & \(64.89\pm 3.64\) \\ \end{tabular} \end{table} Table 3: The average accuracy on the test dataset is calculated over \(50\) random splits. The \(5\)-layer network (with \(100\) hidden nodes in each layer) is used for our DNN models in all experiments.

Consequently, our tDNN model obtains the best accuracy in the 2-class case, while our iDNN model achieves the best accuracy in the 4-class case. Furthermore, all our eDNN, tDNN, and iDNN models outperform the classical DNN model, indicating the advantages of our frameworks.
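Returning to the SPD geometry above, the following NumPy/SciPy sketch evaluates the affine-metric quantities — the distance with the paper's \(1/2\) factor, and the Exp/Log maps at \(P_{0}\). Computing matrix square roots via `scipy.linalg.sqrtm` is an implementation choice for this sketch, not the paper's.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm, inv

def affine_distance(P1, P2):
    S = inv(sqrtm(P1))                              # P1^{-1/2}
    return 0.5 * np.linalg.norm(logm(S @ P2 @ S))   # Frobenius norm

def spd_exp(P0, S):
    # Exp map at P0 under the affine metric.
    R = sqrtm(P0); Ri = inv(R)
    return R @ expm(Ri @ S @ Ri) @ R

def spd_log(P0, P):
    # Log map at P0 under the affine metric.
    R = sqrtm(P0); Ri = inv(R)
    return R @ logm(Ri @ P @ Ri) @ R

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
P0, P = A @ A.T + 3 * np.eye(3), B @ B.T + 3 * np.eye(3)
# Exp and Log are mutually inverse at P0.
print(np.allclose(spd_exp(P0, spd_log(P0, P)), P))  # True
```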
In the context of deep networks on SPD matrices, we build up our model in terms of the SPDNet introduced by [20], which mimics classical neural networks with a first stage computing an invariant representation of the input data points and a second stage devoted to performing the final classification. The SPDNet exploits the geometry through three types of layers. The BiMap (bilinear transformation) layer, analogous to the usual dense layer; the induced dimension reduction eases the computational burden often found in learning algorithms on SPD data: \[X^{(l)}={W^{(l)}}^{T}P^{(l-1)}W^{(l)}\text{ with }W^{(l)}\text{ semi-orthogonal.}\] The ReEig (rectified eigenvalues activation) layer, analogous to the ReLU activation, which can also be seen as an eigen-regularization, protecting the matrices from degeneracy: \[X^{(l)}=U^{(l)}\max\left(\Sigma^{(l)},\epsilon I_{n}\right){U^{(l)}}^{T},\text { with }P^{(l)}=U^{(l)}\Sigma^{(l)}{U^{(l)}}^{T}.\] The LogEig (log eigenvalues Euclidean projection) layer: \(X^{(l)}=\operatorname{vec}\left(U^{(l)}\log\left(\Sigma^{(l)}\right){U^{(l)}} ^{T}\right)\), with again \(U^{(l)}\) the eigenspace of \(P^{(l)}\). Under our framework, the SPDNet is both an eDNN and a tDNN model: the intrinsic logarithmic mapping at the identity, \(\log_{I}(P)=\operatorname{vec}\left(U^{(l)}\log\left(\Sigma^{(l)}\right){U^{ (l)}}^{T}\right)\), is identical to the transformation performed by the LogEig layer. Thus, the SPDNet can also be viewed as a tDNN model. In our experiments, we only consider tDNN models, as one tangent space at the base point is sufficient to cover the entire manifold. Our eDNN models on SPD\((p)\) consist of 3 BiMap layers, 3 ReEig layers, one LogEig layer (for embedding), and a 5-layer DNN with 100 hidden nodes per layer. In the tDNN models, we replace the LogEig layer with the intrinsic logarithmic mapping under different metrics. In our experiments, we evaluate the performance of the tDNN and eDNN models on the AFEW and HDM05 datasets using the same setup and protocol as in [20]. The AFEW dataset [7] includes 600 video clips with per-frame annotations of valence and arousal levels and 68 facial landmarks, depicting 7 classes of emotions. The HDM05 dataset [33] contains over three hours of motion capture data in C3D and ASF/AMC formats, covering more than 70 motion classes across multiple actors. We divide the data into a 75-25 percent training-test split, with 10 repetitions, and use the validation set (25 percent of the training data) to tune hyperparameters. We implement tDNN models under both the affine metric and the log-Euclidean metric, using the Frechet mean of the batch as the base point. As shown in Table 4, our tDNN model under the log-Euclidean metric achieves the best results on both datasets, with a \(35.85\pm 1.49\) accuracy on the AFEW dataset and a \(62.59\pm 1.35\) accuracy on the HDM05 dataset.
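A PyTorch sketch of the three SPDNet layer types described above follows; in the actual SPDNet [20] the BiMap weights are optimized on a Stiefel manifold, which this sketch omits (a plain QR initialization is used instead, as an assumption for illustration).

```python
import torch

def bimap(P, W):
    # BiMap: X = W^T P W with a semi-orthogonal W of shape (d_in, d_out).
    return W.transpose(-1, -2) @ P @ W

def reeig(P, eps=1e-4):
    # ReEig: rectify eigenvalues from below (a ReLU-like nonlinearity
    # that also keeps the matrix away from degeneracy).
    lam, U = torch.linalg.eigh(P)
    return U @ torch.diag_embed(torch.clamp(lam, min=eps)) @ U.transpose(-1, -2)

def logeig(P):
    # LogEig: matrix logarithm followed by vectorization (the projection
    # to the tangent space at the identity).
    lam, U = torch.linalg.eigh(P)
    L = U @ torch.diag_embed(torch.log(lam)) @ U.transpose(-1, -2)
    return L.flatten(start_dim=-2)

# One forward pass on a batch of random SPD inputs.
X = torch.randn(8, 20, 20)
P = X @ X.transpose(-1, -2) + 1e-3 * torch.eye(20)
W = torch.linalg.qr(torch.randn(20, 10)).Q        # semi-orthogonal init
features = logeig(reeig(bimap(P, W)))             # shape (8, 100)
```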
## 5 Discussion

In this work, we develop intrinsic and extrinsic deep neural network architectures on manifolds and characterize their theoretical properties in terms of the approximation error and the statistical error of the ERM-based estimators. The neural networks explore the underlying geometry of the manifolds for learning and inference. Future work will focus on developing convolutional neural networks on manifolds for image classification of manifold-valued images, which has abundant applications in medical imaging and computer vision.

## Acknowledgments

We would like to thank Dong Quan Nguyen, Steve Rosenberg, and Bayan Saparbayeva for the very helpful discussions. We acknowledge the generous support of NSF grants DMS CAREER 1654579 and DMS 2113642. The second author was supported by an INHA UNIVERSITY Research Grant.

\begin{table} \begin{tabular}{l c c} \hline Data & AFEW & HDM05 \\ \hline \((n,d)\) & \((2135,400^{2})\) & \((2086,93^{2})\) \\ \hline \hline eDNN (SPDNet) & \(34.23\pm 1.44\) & \(61.35\pm 1.12\) \\ tDNN-Log & \(\mathbf{35.85\pm 1.49}\) & \(\mathbf{62.59\pm 1.35}\) \\ tDNN-Affine & \(35.31\pm 1.68\) & \(62.23\pm 1.43\) \\ \hline \end{tabular} \end{table} Table 4: The accuracy on the test set is reported. We follow the setup and protocols in [20], and our tDNN models outperform the eDNN (SPDNet) under both the log and affine metrics.
2301.09048
Discovering explicit Reynolds-averaged turbulence closures for turbulent separated flows through deep learning-based symbolic regression with non-linear corrections
This work introduces a novel data-driven framework to formulate explicit algebraic Reynolds-averaged Navier-Stokes (RANS) turbulence closures. Recent years have witnessed a blossom in applying machine learning (ML) methods to revolutionize the paradigm of turbulence modeling. However, due to the black-box essence of most ML methods, it is currently hard to extract interpretable information and knowledge from data-driven models. To address this critical limitation, this work leverages deep learning with symbolic regression methods to discover hidden governing equations of Reynolds stress models. Specifically, the Reynolds stress tensor is decomposed into linear and non-linear parts. While the linear part is taken as the regular linear eddy viscosity model, a long short-term memory neural network is employed to generate symbolic terms on which tractable mathematical expressions for the non-linear counterpart are built. A novel reinforcement learning algorithm is employed to train the neural network to produce best-fitted symbolic expressions. Within the proposed framework, the Reynolds stress closure is explicitly expressed in algebraic forms, thus allowing for direct functional inference. On the other hand, the Galilean and rotational invariance are craftily respected by constructing the training feature space with independent invariants and tensor basis functions. The performance of the present methodology is validated through numerical simulations of three different canonical flows that deviate in geometrical configurations. The results demonstrate promising accuracy improvements over traditional RANS models, showing the generalization ability of the proposed method. Moreover, with the given explicit model equations, it can be easier to interpret the influence of input features on generated models.
Hongwei Tang, Yan Wang, Tongguang Wang, Linlin Tian
2023-01-22T04:16:58Z
http://arxiv.org/abs/2301.09048v1
# Discovering explicit Reynolds-averaged turbulence closures for turbulent separated flows through deep learning-based symbolic regression with non-linear corrections

###### Abstract

This work introduces a novel data-driven framework to formulate explicit algebraic Reynolds-averaged Navier-Stokes (RANS) turbulence closures. Recent years have witnessed a blossom in applying machine learning (ML) methods to revolutionize the paradigm of turbulence modeling. However, due to the black-box essence of most ML methods, it is currently hard to extract interpretable information and knowledge from data-driven models. To address this critical limitation, this work leverages deep learning with symbolic regression methods to discover hidden governing equations of Reynolds stress models. Specifically, the Reynolds stress tensor is decomposed into linear and non-linear parts. While the linear part is taken as the regular linear eddy viscosity model, a long short-term memory neural network is employed to generate symbolic terms on which tractable mathematical expressions for the non-linear counterpart are built. A novel reinforcement learning algorithm is employed to train the neural network to produce best-fitted symbolic expressions. Within the proposed framework, the Reynolds stress closure is explicitly expressed in algebraic forms, thus allowing for direct functional inference. On the other hand, the Galilean and rotational invariance are craftily respected by constructing the training feature space with independent invariants and tensor basis functions. The performance of the present methodology is validated through numerical simulations of three different canonical flows that deviate in geometrical configurations. The results demonstrate promising accuracy improvements over traditional RANS models, showing the generalization ability of the proposed method. Moreover, with the given explicit model equations, it can be easier to interpret the influence of input features on generated models.

## I Introduction

Accurate predictions of turbulent flows are of paramount importance for many engineering applications of computational fluid dynamics (CFD). With the increasing availability of computational resources over the last two decades, scale-resolving simulations such as large eddy simulation (LES) and direct numerical simulation (DNS) have gained widespread applications. While these methods indeed provide plenty of detailed insights into fluid flow physics, an immediate difficulty relates to the computational cost, which could be prohibitively large for many practical applications in physics and engineering sciences. By contrast, the lower computational cost and superior robustness leave the Reynolds-averaged Navier-Stokes (RANS) models still the most widely used tool in industrial simulations. This scenario will remain unchanged into the coming decades [1]. However, the predictive accuracy of RANS equations could be severely plagued by the unsolved closure problem deeply rooted in the RANS governing equations. Such deficiency is strikingly serious for flows with complex geometries and large separations [2]. Since the pioneering work of Boussinesq to develop a mathematical description of Reynolds stress by introducing the concept of eddy viscosity, there has been a continuous attempt to formulate more accurate RANS turbulence models.
In recent years, thanks to the rapid advancement of high-performance computing architectures and the increasingly accessible high-fidelity flow data, the emerging machine learning (ML) and data science techniques provide a new alternative in the analysis and understanding of turbulent flows [3; 4]. This paradigm has been blossoming in turbulence modeling research. For example, Cheung _et al._[5] applied Bayesian techniques to calibrate the parameters of a traditional turbulence model against experimental data. Yan, Zhang, and Chen [6] used the ML method to augment a turbulence model to enhance its prediction in separation flow. These works try to improve the turbulence model performance without breaking the Boussinesq assumption. Rather than correcting the governing equations of existing turbulence models, another research subject is constructing new constitutive models for Reynolds stress tensors. The related ideas are generally conceptualized from the insightful hypothesis made by Pope [7], where the anisotropic Reynolds stress is expressed as an algebraic tensor polynomial. For example, Wu, Xiao, and Paterson [8] built a framework for enhancing RANS turbulence models with carefully designed input features to ensure Galilean and rotational invariance of the model predictions. Following this work, Yin _et al._[9] then improved the approach by reconsidering the criterion for selecting input features. Another school of thought is to recast the neural network architectures. The pioneering effort could trace back to the work by Ling, Kurzawski, and Templeton [10], in which they proposed a novel deep neural network architecture that integrated a set of tensor basis functions into neural networks to model the anisotropic Reynolds stress tensor. Following this effort, Jiang _et al._[11] recently presented an ML-based turbulence modeling framework in which two parallel ML modules are combined to directly infer structural and parametric representations of turbulence physics, respectively. Despite the improved performance presented in the above-mentioned works and many others [12; 13], the black-box nature of most ML methods hampers the understanding of the obtained data-driven models, making it difficult to provide physical interpretations and infer new flow physics. On the other hand, it may also increase the difficulty of disseminating the learned models to end users since there are no explicit mathematical equations. Recently, symbolic regression approaches have gained a renaissance in the ML community. The task of symbolic regression is to find a symbolic function that best predicts the target given input variables. Some researchers have applied these approaches to detect hidden fluid properties, and the relevant results show considerable promise and preliminary success [14; 15]. Driven by such prevalence, various symbolic regression methods, such as gene expression programming (GEP) and sparse regression, have been introduced to derive new Reynolds stress models that can be expressed in explicit algebraic forms. Weatheritt and Sandberg [16] proposed an expansion of the GEP method to model the Reynolds stress tensor from high-fidelity data. This work was then further developed by Zhao _et al._[17] by integrating the CFD simulations with the GEP training process. The learning target was set as the velocities rather than the Reynolds stress itself. However, due to the GEP being by nature a non-deterministic method, the mathematical form of the model it discovers can vary from run to run.
By contrast, Schmelzer, Dwight, and Cinnella [18] introduced a deterministic symbolic regression method to infer algebraic stress models by leveraging the fast function extraction method. This method can identify the relevant candidate functions by imposing a sparsity constraint. Similarly, the sparse identification of non-linear dynamics approach was employed by Beetham and Capecelatro [19] to formulate new turbulence closures with affordable interpretability and transportability. The success of symbolic regression methods in discovering turbulence models motivates the present work, in which a new model discovery strategy based on the deep symbolic regression (DSR) method [20] is introduced to formulate Reynolds stress models that can be explicitly expressed in algebraic forms. The DSR method is a deterministic methodology and is built on the combination of deep learning and symbolic regression techniques. During the model training process, the DSR approach ensures that all samples adhere to all constraints, without rejecting samples post hoc. In contrast, the GEP method may produce operations that violate the imposed constraints, thus requiring post hoc rejections, which can be problematic. In addition, most symbolic regression methods suffer from the expectation problem. That is, these methods are fundamentally designed to optimize the expected performance. However, symbolic regression generally aims to maximize the performance of the few (or single) best-fitting samples. This problem can be solved by leveraging the risk-seeking reinforcement learning (RL) algorithm, as is done in the present work. In a sense, the utilization of RL is one of the most appealing features of the DSR method. It is a natural choice since the symbolic space can be considered as an environment where states and actions are given as symbolic tokens [21]. Unlike supervised learning, which needs a knowledgeable external supervisor to provide labeled data, RL is designed for decision-making problems. It generally does not need data to be labeled and can learn through trial and error [22]. RL has gained unprecedented interest in many domains, such as robotics [23], mathematics [24], and games [25]. Recent years have also witnessed a blossoming of RL in fluid mechanics [26]. Some typical applications include the behavior adaptation of swimmers [27; 28], active flow control for drag reduction [29; 30; 31; 32] and conjugate heat transfer [33; 34], and aerodynamic shape optimization [35; 36]. The application of RL to formulating turbulence models is at a very early stage, with most works focusing on the development of reliable subgrid-scale models for LES. A pioneering study was conducted by Novati, de Laroussilhe, and Koumoutsakos [37], in which RL is leveraged to adjust the coefficients of the eddy-viscosity closure model in order to reproduce the energy spectrum of homogeneous isotropic turbulence (HIT) predicted by DNS. This approach was then further developed by Kim _et al._[38] for wall-bounded turbulence. The main advantage of RL over supervised learning is that RL can address the distinction between a priori and a posteriori evaluation and account for compounding modeling errors [37]. In a similar but conceptually different study, Bae and Koumoutsakos [39] applied RL to the discovery of wall models for LES. Moreover, supervised learning could be ill-posed for turbulence modeling in implicitly filtered LES because the exact filter form is not available to generate labeled data. 
This challenge can be alleviated by RL, as is done by Kurz, Offenhauser, and Beck [40], who applied RL with convolutional neural networks to find an optimal eddy-viscosity for implicitly filtered LES of HIT. By integrating the invariants and tensor bases into the DSR model training method, the objective of this work is to develop a new method for discovering turbulence closures in explicit algebraic forms, with emphases on 1. Interpretability: the obtained model can be explicitly expressed as a finite tensor polynomial, making it easier to infer the underlying physics and reveal the constitutive relationship between input features and output targets. 2. Galilean and rotational invariance: by constructing the feature space and the neural network architecture with an integrity tensor basis and invariants, both Galilean and rotational invariance can be achieved. 3. Transportability: it is straightforward to implement the learned model in RANS solvers since it is expressed as mathematical expressions. Consequently, there is no need to redeploy the production systems. The remainder of this paper is organized as follows. First, a brief introduction to the main governing equations used for performing the simulations is provided in Section II, along with the general theory and implementation details underlying the construction of the training methods. Next, three canonical flows with notably different geometries are selected as testing cases, and the simulation results that highlight the robustness and generalization ability of the proposed approach are detailed in Section III. Finally, a summary of the contributions is presented in Section IV. ## II Methodology ### Governing equations In this work, the incompressible turbulent flow is taken to illustrate the modeling framework. The governing RANS equations read \[\nabla\cdot\mathbf{u}=0, \tag{1}\] \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}=-\nabla p+\nu\nabla^{2}\mathbf{u}-\nabla\cdot\mathbf{\tau}, \tag{2}\] where \(\mathbf{u}\), \(p\), and \(\nu\) are the mean velocity, mean pressure (normalized by density), and viscosity, respectively. \(\mathbf{\tau}\) is the Reynolds stress accounting for unresolved turbulence. Herein the Reynolds stress is unknown and needs to be closed by a mathematical model. Hence, setting aside the effects of discretization schemes and numerical algorithms, the turbulence closure model is the most critical factor in the prediction accuracy of the RANS equations. In the present work, a symbolic regression method based on deep learning techniques is leveraged to produce extra anisotropic terms for the Reynolds stress, in order to improve the predictive accuracy of the RANS equations. ### Baseline RANS turbulence model The most commonly used theory for the closure of the Reynolds stress tensor is the linear eddy viscosity model (LEVM) proposed by Boussinesq \[\mathbf{\tau}=\frac{2}{3}k\mathbf{I}-\nu_{t}\left[\nabla\mathbf{u}+(\nabla\mathbf{u})^{\mathrm{T}}\right], \tag{3}\] where \(\mathbf{I}\) denotes the identity matrix and \((\cdot)^{\mathrm{T}}\) denotes transposition. \(\nu_{t}\) indicates the turbulent viscosity, which is unknown and needs to be estimated by a RANS turbulence model. This assumption is widely used at the expense of accuracy and is susceptible to influences from certain flow configurations, as will be discussed in the following sections. 
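To make the closure concrete, a minimal NumPy sketch of the Boussinesq stress (Eq. 3), combined with the \(k-\varepsilon\) eddy viscosity of Eq. 7 below, is given here; the function names and the toy shear-flow input are illustrative only and are not part of any reference implementation.

```python
import numpy as np

C_MU = 0.09  # standard k-epsilon coefficient (see Eq. 8 below)

def levm_reynolds_stress(grad_u, k, eps):
    """Boussinesq linear eddy viscosity model (Eq. 3) at one grid point.

    grad_u : (3, 3) array, mean velocity gradient d(u_i)/d(x_j)
    k      : turbulent kinetic energy
    eps    : turbulent dissipation rate
    """
    nu_t = C_MU * k**2 / eps                       # eddy viscosity (Eq. 7)
    strain2 = grad_u + grad_u.T                    # grad(u) + grad(u)^T
    return (2.0 / 3.0) * k * np.eye(3) - nu_t * strain2

# toy usage: simple shear du/dy = 1
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 1.0
print(levm_reynolds_stress(grad_u, k=1.0, eps=0.5))
```

Note that for this simple shear input the predicted normal stresses are all equal to \(2k/3\), which is exactly the isotropy deficiency the non-linear correction is meant to address.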
The data-driven turbulence model framework developed in the present study is based on the widely applied standard \(k-\varepsilon\) model [41], in which the equations for the turbulent kinetic energy \(k\) and the turbulent dissipation rate \(\varepsilon\) are written as \[\frac{\partial k}{\partial t}+\mathbf{u}\cdot\nabla k=\nabla\cdot\left(\nu_{eff,k }\nabla k\right)+P_{k}-\varepsilon, \tag{4}\] \[\frac{\partial\varepsilon}{\partial t}+\mathbf{u}\cdot\nabla\varepsilon=\nabla \cdot\left(\nu_{eff,\varepsilon}\nabla\varepsilon\right)+C_{\varepsilon 1}\frac{ \varepsilon}{k}P_{k}-C_{\varepsilon 2}\frac{\varepsilon^{2}}{k}, \tag{5}\] where the other relations for closing the transport equations are \[\nu_{eff,k}=\nu+\frac{\nu_{t}}{\sigma_{k}},\quad\nu_{eff,\varepsilon}=\nu+ \frac{\nu_{t}}{\sigma_{\varepsilon}},\quad P_{k}=-\mathbf{\tau}:\nabla\mathbf{u}. \tag{6}\] Starting with the Boussinesq eddy viscosity assumption, the turbulent eddy viscosity \(\nu_{t}\) is given as \[\nu_{t}=\frac{C_{\mu}k^{2}}{\varepsilon}. \tag{7}\] Furthermore, the five model coefficients \(C_{\mu},C_{\varepsilon 1},C_{\varepsilon 2},\sigma_{k},\sigma_{\varepsilon}\) are flow-specific empirical tuning parameters. In the context of this work, the standard values due to Launder and Sharma [41] are adopted. The whole set of these coefficients is shown below \[C_{\mu}=0.09,C_{\varepsilon 1}=1.44,C_{\varepsilon 2}=1.92,\sigma_{k}=1.0, \sigma_{\varepsilon}=1.3. \tag{8}\] It is worth stressing that these constant coefficients are calibrated against simplified theories and limited experiments. In addition, the standard \(\varepsilon\) equation (Eq. 5) is not based on the exact transport equation. It is best viewed as an empirical equation, since only large-scale flow motions are considered in its derivation. As a consequence, the accuracy of the standard \(k-\varepsilon\) model can be impeded in turbulent separated flow simulations, a deficiency that the data-driven turbulence model developed in this work aims to remedy. ### Data-driven RANS turbulence framework Although the Boussinesq assumption has been widely used in turbulence modeling research, it cannot produce an adequate approximation to the Reynolds stress. In addition, it can become invalid in flow simulations featuring large separations [1; 2]. The goal of this work is to introduce a data-driven turbulence model with explicit mathematical expressions to increase the accuracy of traditional RANS simulations. Rather than simply correcting the parameters of traditional turbulence models, this work makes a more comprehensive effort to improve the Boussinesq assumption by adding an extra stress tensor \(\mathbf{b}^{\perp}\) as \[\mathbf{\tau}=\frac{2}{3}k\mathbf{I}-\nu_{t}\left[\nabla\mathbf{u}+(\nabla\mathbf{u})^{\mathrm{T}} \right]+k\mathbf{b}^{\perp}. \tag{9}\] Here \(\mathbf{b}^{\perp}\) represents the non-linear part of the Reynolds stress tensor (detailed in the next section). Taking \(\mathbf{b}^{\perp}\) as the learning target, the framework used in this work is illustrated in Fig. 1. This framework consists of two key phases. During the training phase, a number of RANS and DNS/LES flow pairs are utilized to build a regression model. The training process aims to minimize the discrepancy (the training error in Fig. 1) between ML predictions and high-fidelity data. Then, the obtained ML model is used to predict the target flow fields for new, unseen flows. This process corresponds to the prediction phase. 
The corrected fields are subsequently propagated through the modified RANS solver to improve the baseline RANS predictions. To avoid divergence of the data-driven simulations, a common practice is to freeze the predictions of the ML model. More specifically, the ML model is only called at the initial time, and its predictions are then kept as constants when solving the RANS equations. This mode is usually referred to as loose coupling and is used in the present work. On the contrary, tight coupling requires that the ML model participate in the iterative process of the RANS simulations. Readers who are interested in this subject can refer to the review by Duraisamy [4]. The framework shown in Fig. 1 has permeated the area of data-driven turbulence modeling research, with various learning targets used by different researchers. In the context of this work, the symbolic regression method based on deep learning is selected to serve as the ML model. The details about the construction of the DSR method and the turbulence modeling schemes will be outlined in the following sections. #### II.3.1 Symbolic regression based on deep learning The deep neural network is probably the most commonly used regression model for building data-driven models. An immediate difficulty relates to its interpretability, which tends to be poor since deep learning models are highly recursive [42]. By contrast, taking compact symbolic formulations to describe physical systems can provide inherently interpretable insights, thus making it easier to connect them with existing theories. In this study, based on the DSR approach initially proposed by Petersen _et al._[20], a data-driven framework is developed for understanding the mathematical relationships among variables in a turbulent flow system. DSR is a gradient-based approach for symbolic regression. The core idea of DSR is to use a large model to search the space of a small model. More specifically, DSR employs expressive recurrent neural networks (RNNs) to generate best-fitting symbolic expressions under some pre-specified constraints. In DSR, each token for constructing symbolic expressions is sampled from the outputs of a special kind of RNN, the long short-term memory (LSTM) neural network. The fitness of symbolic expressions is then used as the reward function to train the neural networks by a novel RL algorithm. By this means, it is possible to seamlessly combine the representational capacity of deep learning models and the natural interpretability of symbolic expressions. An overview of the DSR approach is illustrated in Fig. 2, and the whole training process can be broadly summarized as follows 1. Design inputs to LSTM neural networks. LSTM neural networks can learn the dependence among data sequences, which means the obtained token is partially determined by previously sampled tokens. In this sense, the search space for symbolic expressions is inherently hierarchical. To ensure the hierarchical information is well-captured, DSR leverages the parent and sibling nodes of the token being sampled as inputs to the LSTM networks. Consequently, the sampled token is mainly determined by its adjacent nodes in the expression tree. 2. Impose a priori constraints on the search space. Under the DSR framework, several domain-agnostic constraints are applied to reduce the search space * The minimum and maximum length of symbolic expressions have to be specified before training starts. * A binary operator must have at least one non-constant child. 
* The child of a unary operator should not be the inverse of that operator. * The descendants of trigonometric operators cannot contain trigonometric operators. These constraints are concurrently applied in the training process by discarding the tokens that would violate any of the constraints. Such a sampling process craftily respects all pre-specified limits without rejecting samples post hoc, which makes the DSR method more amenable to complex tasks. 3. Choose the reward function. Within the DSR approach, the neural networks are trained by a risk-seeking policy gradient algorithm, and the performance of sampled expressions is evaluated with a reward function. The symbolic expressions that best fit the training dataset can vary with different reward functions. This feature will be discussed in detail in a later section. 4. Optimize constants in expressions. The sampled expressions may include constants that need to be optimized to maximize the reward function. In DSR, an inner optimization loop for each sampled expression is performed before executing each training epoch.

Figure 1: A schematic of the framework for data-driven turbulence modeling. The overall process includes two stages: training (left block) and prediction (right block).

As can be observed in Fig. 2, a softmax activation function is used in the output layer. Therefore, the LSTM neural networks emit a probabilistic distribution over symbolic expressions. The token with the highest probability is then sampled to constitute the mathematical expressions. If the token being sampled is against the pre-defined constraints, however, its sampling probability is reset to zero. After finishing the training process, the sampled tokens, read in pre-order, represent a symbolic expression tree in which internal nodes are mathematical operators and terminal nodes are input variables or constants. The mathematical expression can then be generated by traversing this tree. As aforementioned, within the DSR framework, a novel RL algorithm, known as the risk-seeking policy gradient algorithm, is chosen to train the neural network model. The objective of symbolic regression is to maximize the best-case performance. This objective is poorly served by traditional policy gradient methods, since they are fundamentally designed to optimize the expected performance (the performance is evaluated over a group of samples). The risk-seeking policy gradient formulation aims to increase the reward of the top \(\epsilon\) fraction of samples from the distribution, without regard for samples below that threshold. Therefore, this algorithm overcomes a performance bottleneck encountered in traditional policy gradient methods, enhancing its practical applicability. The update procedure to obtain best-fitted symbolic expressions is summarized in Algorithm 1. A brief introduction to the LSTM neural network and the RL algorithm is provided in Appendices A and B. For complete details about the DSR algorithm, the reader is invited to consult the original work by Petersen _et al._[20]. #### II.3.2 DSR for turbulence model development Rather than recovering exact physical models from data, in the context of this work, the DSR method is employed to identify turbulence closures by searching over the space of tractable mathematical expressions to best fit the dataset from high-fidelity flow simulations. 
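Before turning to the turbulence-specific setup, the constrained token sampling described above can be illustrated with a minimal, simplified sketch. The token library, the masking rules, and all function names below are hypothetical stand-ins covering only a subset of the constraints actually imposed by DSR.

```python
import numpy as np

# Hypothetical token library: operators and terminals.
TOKENS = ["+", "-", "*", "/", "exp", "log", "I1", "I2", "const"]
OPERATORS = {"+", "-", "*", "/", "exp", "log"}
UNARY_INVERSE = {"exp": "log", "log": "exp"}  # operator -> its inverse

def mask_probs(probs, parent, length, max_len=32):
    """Zero out tokens that would violate the a priori constraints,
    mimicking (in simplified form) the in-situ rejection used by DSR."""
    probs = probs.copy()
    for i, tok in enumerate(TOKENS):
        # the child of a unary operator must not be its inverse
        if parent in UNARY_INVERSE and tok == UNARY_INVERSE[parent]:
            probs[i] = 0.0
        # near the maximum length: forbid tokens that grow the tree
        if length >= max_len - 1 and tok in OPERATORS:
            probs[i] = 0.0
    s = probs.sum()
    return probs / s if s > 0 else probs

def sample_token(logits, parent, length, rng):
    """Softmax over the library, then constraint-masked sampling."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    probs = mask_probs(probs, parent, length)
    return TOKENS[rng.choice(len(TOKENS), p=probs)]

rng = np.random.default_rng(0)
# e.g. sampling the child of an "exp" node: "log" is masked out
print(sample_token(rng.normal(size=len(TOKENS)), parent="exp", length=5, rng=rng))
```

In the actual method, the logits come from the LSTM network conditioned on the parent and sibling nodes; here they are random only to make the sketch self-contained.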
The theoretical foundation for this framework is the stress tensor decomposition proposed by Pope [7]. Taking the turbulent kinetic energy \(k\) and turbulence dissipation rate \(\epsilon\) as two scaling parameters, the Reynolds stresses and the rates of strain can be normalized as follows \[\mathbf{b}=\mathbf{\tau}/k-\frac{2}{3}\mathbf{I}, \tag{10}\] \[\mathbf{S}=\frac{1}{2}\frac{k}{\epsilon}\left[\nabla\mathbf{u}+\left(\nabla\mathbf{u} \right)^{\mathrm{T}}\right],\quad\mathbf{R}=\frac{1}{2}\frac{k}{\epsilon}\left[ \nabla\mathbf{u}-\left(\nabla\mathbf{u}\right)^{\mathrm{T}}\right], \tag{11}\] where \(\mathbf{I}\) is the identity matrix and \(\left(\cdot\right)^{\mathrm{T}}\) denotes matrix transposition. It is postulated that \(\mathbf{S}\) and \(\mathbf{R}\) contain all information for determining \(\mathbf{b}\), and that any such tensor can be expressed as an infinite tensor polynomial. Leveraging the Cayley-Hamilton theorem, Pope proved that the normalized Reynolds stress tensor can be expressed as a finite polynomial linearly composed of tensor bases and scalar functions \[\mathbf{b}(\mathbf{S},\mathbf{R})=\sum_{m}G^{m}\left(I_{1},\ldots,I_{n}\right)\mathbf{T}^{m}, \tag{12}\] where \(G^{m}\) are several scalar coefficients determined by the invariants \(I_{n}\), and \(\mathbf{T}^{m}\) are independent, symmetric tensor basis functions. For statistically two-dimensional flows, as is the case in the present work, the coefficients depend on at most two invariants (i.e., \(n=2\)) and the bases can be simplified to only three tensors (i.e., \(m=3\)), as shown below \[\mathbf{T}^{1}=\mathbf{S},\ \ \mathbf{T}^{2}=\mathbf{S}\mathbf{R}-\mathbf{R}\mathbf{S},\ \ \mathbf{T}^{3}=\mathbf{S}^{2}-\frac{1}{3}\mathbf{I}\cdot\text{Tr}\left(\mathbf{S}^{2}\right), \tag{13}\] \[I_{1}=\text{Tr}\left(\mathbf{S}^{2}\right),\quad I_{2}=\text{Tr}\left(\mathbf{R}^{2} \right), \tag{14}\] where \(\text{Tr}\left(\cdot\right)\) denotes the trace of a matrix.

Figure 2: An overview of generating expressions with the DSR methodology.

The utilization of invariants and tensor basis functions is the key to keeping the Galilean and rotational invariance of the turbulence model [10]. More generally, embedding the invariance into the input features can further improve model performance, as has been highlighted by some earlier works [43; 44]. In the present work, the DSR approach is utilized to generate symbolic expressions for the coefficients, given the invariants and tensor bases. A simplified overview of the DSR framework used in the present study is schematically depicted in Fig. 3. Since the flow simulations in the present work are statistically two-dimensional, the input tokens are composed of the two invariants and several other constants and mathematical operators. The three mathematical expressions produced by DSR correspond to the scalar coefficients \(G^{1}\), \(G^{2}\), and \(G^{3}\), respectively. The Reynolds stress can then be represented by the linear combination of the mathematical expressions and tensor bases. The discrepancy between the Reynolds stress calculated by the symbolic expressions and that from high-fidelity simulations is taken as the reward signal to inform the training process, so as to maximize the performance of the best-fitting expressions. As is indicated in Fig. 3, however, only the non-linear part of the Reynolds stress will be built by the developed framework. 
More specifically, instead of taking \(\mathbf{b}\) directly as the modeling subject, the deviatoric Reynolds stress is split into two portions as expressed by \[\begin{split}\mathbf{b}&=\mathbf{b}^{\parallel}+\mathbf{b}^{ \perp}\\ &=-2C_{\mu}\mathbf{S}+\mathbf{b}^{\perp},\end{split} \tag{15}\] where \(\mathbf{b}^{\parallel}\) and \(\mathbf{b}^{\perp}\) denote the linear and non-linear parts of the Reynolds stress, respectively. Recalling that the standard \(k-\epsilon\) turbulence model is used as the starting point for building data-driven models, the constant parameter \(C_{\mu}\) belonging to the \(k-\epsilon\) turbulence model (see Eq. 8) is kept in the linear portion. This linear part will be taken as the classical LEVM and solved implicitly in the modified RANS solvers. As a consequence, the framework presented in Fig. 3 can be expressed mathematically as \[\mathbf{b}^{\perp}=\sum_{m=1}^{3}G^{m}\left(I_{1},I_{2}\right)\mathbf{T}^{m}, \tag{16}\] where the coefficients \(G\) will be represented by symbolic expressions consisting of invariants, constants, and mathematical operators. In other words, only the non-linear portion of the Reynolds stress tensor, \(\mathbf{b}^{\perp}\), will be predicted by the data-driven turbulence model. This strategy aligns with the stability and accuracy analysis of data-driven turbulence models. In some earlier works [10; 45], the predictions from ML models are directly injected into RANS solvers. The remaining flow quantities are subsequently propagated forward by solving the RANS equations with a frozen Reynolds stress field. Although prior assessments show that the discrepancy between ML predictions and high-fidelity data is fairly small, other derived flow variables of ultimate interest, such as velocity and pressure, can be far removed from the true values for flows with higher Reynolds numbers. Recently, a few studies [46; 47] showed that embedding the ML prediction, i.e. the Reynolds stress field, explicitly into the RANS equations would inevitably result in a loss of accuracy of data-driven turbulence models. The minor error in the predicted Reynolds stress is significantly magnified during the forward iterations of the RANS equations, thus leading to severe performance degradation of data-driven RANS simulations. To avoid being ill-conditioned, it is necessary to decompose the Reynolds stress before it is injected into RANS simulations, as is done herein. This strategy has been proven to be effective in improving the stability and accuracy of data-driven RANS simulations [46; 79; 47]. ### Numerical setup All simulations in the present work are carried out with the open-source CFD platform OpenFOAM [48]. The data-driven simulations can be divided into two rounds. In the first round, the baseline RANS simulation is performed by using the semi-implicit method for pressure-linked equations (SIMPLE) algorithm to achieve a converged solution. In the second round, a modified solver based on the SIMPLE algorithm accepts the newly corrected non-linear part of the anisotropic Reynolds stress field. These corrections are then injected into the RANS equations and iteratively solved until the residual reconverges. It should be emphasized that the turbulence closure consisting of symbolic expressions is only called once before the second round simulation starts, which means the predicted \(\mathbf{b}^{\perp}\) (non-linear part of the anisotropic Reynolds stress tensor) is kept constant during the iteration of the RANS equations. 
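For concreteness, the following is a short sketch of how the invariants and tensor bases of Eqs. 10-16 might be assembled at a single grid point. The function names are illustrative, and the coefficient functions `G` are placeholders for the learned symbolic expressions, not the actual discovered model.

```python
import numpy as np

def basis_and_invariants(grad_u, k, eps):
    """Normalized strain/rotation tensors, tensor bases and invariants
    for statistically 2D flow (Eqs. 11, 13, 14) at one grid point."""
    S = 0.5 * (k / eps) * (grad_u + grad_u.T)   # normalized strain rate
    R = 0.5 * (k / eps) * (grad_u - grad_u.T)   # normalized rotation rate
    I = np.eye(3)
    T1 = S
    T2 = S @ R - R @ S
    T3 = S @ S - (1.0 / 3.0) * I * np.trace(S @ S)
    I1 = np.trace(S @ S)
    I2 = np.trace(R @ R)
    return (T1, T2, T3), (I1, I2)

def nonlinear_anisotropy(grad_u, k, eps, G):
    """Assemble b_perp = sum_m G^m(I1, I2) T^m (Eq. 16); G is a list of
    three scalar-valued functions standing in for the learned expressions."""
    (T1, T2, T3), (I1, I2) = basis_and_invariants(grad_u, k, eps)
    return G[0](I1, I2) * T1 + G[1](I1, I2) * T2 + G[2](I1, I2) * T3

# toy usage with placeholder coefficient functions
G = [lambda I1, I2: 0.1, lambda I1, I2: 0.0, lambda I1, I2: 0.0]
grad_u = np.zeros((3, 3)); grad_u[0, 1] = 1.0
print(nonlinear_anisotropy(grad_u, k=1.0, eps=0.5, G=G))
```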
For discretizing the RANS equations, the second-order central difference scheme is chosen for all terms except for the convection term, for which the second-order upwind scheme is selected. For all flow cases, a mesh independence study is first carried out before the iterations start, and all meshes are non-uniform with increased resolution around the feature of interest. ### Summary of proposed approach The entire procedure for implementing the data-driven turbulence model framework based on deep learning-based symbolic regression can be summarized as follows 1. Obtain the training dataset consisting of baseline RANS and DNS simulation data pairs, and run baseline RANS simulations for the testing flow set using the standard \(k-\epsilon\) turbulence model (the baseline RANS solution will serve as the initial state for the second round simulation evolved with symbolic predictions). 2. Obtain the input features from the training flow set, i.e., the invariants \(I_{n}\) and tensor basis functions \(\mathbf{T}^{m}\) from the baseline RANS solutions. 3. Interpolate the DNS results to the RANS computational grids and then obtain the non-linear part of the anisotropic Reynolds stress tensor (training target), i.e., \(\mathbf{b}^{\perp}\). 4. Construct the mapping function \(g:(\mathbf{S},\mathbf{R})\mapsto\mathbf{b}^{\perp}\) by the DSR approach. The hyperparameters are carefully tuned before they are applied to the training. 5. Integrate the symbolic expressions with the simulation code. Replace the Reynolds stress term in the RANS solver with \(-2C_{\mu}\mathbf{S}+\mathbf{b}^{\perp}\), and then run a forward simulation until the residual re-converges. As aforementioned, the loose coupling strategy (the symbolic predictions are only called once) is adopted in this work, so the non-linear part \(\mathbf{b}^{\perp}\) is frozen while the linear part \(-2C_{\mu}\mathbf{S}\) is iteratively updated during the data-driven simulation. ## III Results In this section, different flow configurations are investigated to verify the method proposed in this work. These flows are characterized by massive separations, for which traditional RANS turbulence models struggle to produce accurate predictions. The corrected Reynolds stress tensor is substituted into the solver to obtain an improved mean flow field. The computational framework is verified by comparing the reconverged flow field with the high-fidelity mean flow, demonstrating generalization across markedly varying geometries. ### Case setup for training and testing dataset To assess the performance of the proposed framework, the classical cases of flow over parameterized periodic hills are selected, as shown in Fig. 4 (a). The left and right sides are connected to achieve periodicity. Hence, the cyclic boundary conditions are used at the inlet (left side) and outlet (right side). The geometries and DNS results are provided by Xiao _et al._[49]. A parameter \(\alpha\) is utilized to change the hill steepness and the overall length of the geometry. The shape parameters for these three cases correspond to \(\alpha=0.8\), \(0.5\), and \(1.0\), respectively. The former is used as a training dataset. The obtained symbolic expression for the turbulence closure is further applied to the latter two cases.

Figure 3: Illustration of the DSR framework utilized in the present work for building data-driven turbulence closures.
In addition, the flow through a backward-facing step, which is significantly different from the training data in geometry, is also selected to test the scope of application of the learned model, as shown in Fig. 4 (b). The geometry, DNS results, and experimental data are obtained from the works of Jovic and Driver [50] and Le, Moin, and Kim [51]. For this backward-facing step case, a fixed velocity condition is enforced at the inlet, and a zero gradient condition is applied for the outflow. The no-slip boundary condition is applied at the top and bottom walls for all flow cases. Other information about the training and testing cases, such as the Reynolds number and the iterative steps for achieving convergence (residual of streamwise velocity is around \(1\cdot 10^{-10}\)), is summarized in Tab. 1. A typical mesh description of the periodic hill with \(\alpha=0.8\) is presented in Fig. 5. The computational time may vary with hardware, but the simulation should achieve proper convergence within hundreds of seconds on a modern multi-core workstation. Additionally, it is noted that the convergence cost with the discovered turbulence model is essentially the same as that of the \(k-\epsilon\) model, as has been reported by previous studies [9]. ### Turbulence closure discovery The traditional RANS turbulence models share a common weakness in correctly predicting the Reynolds stress anisotropy, which is blamed for the poor accuracy in many RANS-based flow simulations, especially in those with strong flow separations. As previously mentioned, an RL algorithm, known as the risk-seeking policy gradient algorithm, is employed to train the neural network. The non-linear part of the anisotropic Reynolds stress tensor is set as the training target. Taking the root-mean-square error as the fitness measure, the reward function of the DSR approach is accordingly defined as \[\frac{1}{1+\frac{1}{\sigma}\sqrt{\frac{1}{n}\sum\limits_{i=1}^{n}\|\widehat{ \mathbf{b}_{i}^{\perp}}-\mathbf{b}_{i}^{\perp}\|^{2}}}, \tag{17}\] where \(\widehat{\mathbf{b}^{\perp}}\) is the true value of the non-linear part of the anisotropic Reynolds stress tensor obtained from DNS results and \(\mathbf{b}^{\perp}\) is the corresponding prediction by the symbolic expressions. \(n\) corresponds to the size of the training dataset. \(\sigma\) denotes the standard deviation of \(\widehat{\mathbf{b}^{\perp}}\). Here, the root-mean-square error is normalized by \(\sigma\) and squashed to bound the range of the reward value to (0, 1]. The main hyper-parameters used for symbolic training are listed in Tab. 2, and the reward curve is illustrated in Fig. 6, as a function of total expressions evaluated during training. The tokens used to build symbolic expressions contain the four regular arithmetic operators (+, \(-\), \(\times\), \(\div\)), constants, and input variables (i.e., the invariants). These tokens are selected to offer a compromise between the readability of the turbulence closure equation and prediction accuracy. Using non-linear functions such as sin and tan could slightly improve the accuracy of the DSR results, but the obtained expressions may be extremely complex. In addition, the computational cost could also be largely increased. Using the present configuration, the training takes about 100 hours on a workstation with 20 processor cores (Intel Xeon CPU E5-2650). The goal of the training effort is to determine the mathematical expressions of the scalar coefficients \(G^{m}(I_{1},I_{2})\). 
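A minimal sketch of this reward (Eq. 17) is shown below. It assumes the anisotropy tensors are flattened into component vectors per grid point and that \(\sigma\) is taken as the standard deviation over all components of the DNS target; these are interpretation choices of this sketch, not details confirmed by the text.

```python
import numpy as np

def nrmse_reward(b_true, b_pred):
    """Squashed, sigma-normalized RMSE reward (Eq. 17).

    b_true, b_pred : (n, m) arrays of the non-linear anisotropy
    components at n grid points (m unique tensor components each).
    Returns a value in (0, 1]; a perfect fit yields 1.
    """
    rmse = np.sqrt(np.mean(np.sum((b_true - b_pred) ** 2, axis=-1)))
    sigma = np.std(b_true)   # assumption: std over all target components
    return 1.0 / (1.0 + rmse / sigma)

# toy usage on synthetic data
rng = np.random.default_rng(1)
b_true = rng.normal(size=(1000, 6))
b_pred = b_true + 0.1 * rng.normal(size=(1000, 6))
print(nrmse_reward(b_true, b_pred))
```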
To improve the training efficiency, the input invariants to the neural network are re-scaled by a sigmoidal function because they vary strongly in magnitude \[\widehat{I}_{i}=\frac{1-e^{-I_{i}}}{1+e^{-I_{i}}}. \tag{18}\] This operation normalizes the input data to the range [-1, 1], thus reducing the negative impact of possible outliers on the model performance. Without causing ambiguity, the symbol \(\widehat{(\cdot)}\) of normalized quantities is dropped in the remainder of this work. As indicated in Algorithm 1, the constant placeholders in sampled expressions need to be optimized to maximize the reward function. In this work, a non-linear optimization algorithm, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [52], is leveraged to substitute the constant placeholders with optimized constants. Since running constant optimization can be prohibitively expensive, the number of constant placeholders in each expression is limited to 3 during training. The learned model takes the form (\(\beta=7\)) \[\mathbf{b}^{\perp}=\frac{\beta}{10}\left(G^{1}\mathbf{T}^{1}+G^{2}\mathbf{T}^{2}+G^{3}\bm {T}^{3}\right), \tag{19}\] where \[\begin{split} G^{1}&=0.1893I_{1}+0.2229I_{2}+0.1176 \\ G^{2}&=-0.1036I_{1}I_{2}^{3}-0.05182I_{1}^{2}I_{2}^{ 2}+0.1718I_{1}^{2}-0.2333\\ G^{3}&=-2.514I_{1}I_{2}^{4}-3.514I_{2}^{3}-0.01105I_{2}^ {2}-2I_{1}I_{2}+2.98I_{2}\end{split} \tag{20}\] In this learned expression for \(\mathbf{b}^{\perp}\) (Eq. 19), the damping factor \(\beta/10\) is selected following the suggestion by Shih [53]. While a detailed discussion can be found therein, this factor is leveraged to ensure realizability conditions of the Reynolds stress tensor. A similar strategy was also utilized by Beetham and Capecelatro [19]. As indicated by Eq. 20, the scalar coefficient multiplying a higher-order basis tensor exhibits a much more complex form than that multiplying a lower-order basis, as the former contains more high-order invariant terms. This result implies that the higher-order terms could yield relatively complex qualitative behavior in turbulent flow simulations. In addition, the coefficient expressions are functions of the invariants herein. By contrast, the turbulence closure models discovered by some other approaches, such as sparse regression and fast function extraction, only contain constant coefficients that remain unchanged over the whole computational domain. As a consequence, the method proposed in this work shows a degree of superiority, since the coefficients vary with the invariants at different grid points, which tends to produce a more robust closure model. ### Predictive results The discovered model is first validated _a priori_ for the training periodic hill flow case (\(\alpha=0.8\)). Within this effort, the accuracy of the learned model is evaluated against high-fidelity data based on the predicted anisotropic Reynolds stress tensor. In the following context, the unnormalized anisotropic Reynolds stress tensor \(\mathbf{a}=k\cdot\mathbf{b}(\mathbf{S},\mathbf{R})\) will be referred to as the tensor used for solving the RANS equations. 
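Before examining the predictions, note that the learned closure can be transcribed directly into code. The sketch below follows Eqs. 19 and 20 as printed above (the \(I_{2}^{3}\) term reflects this transcription); the function names are illustrative, and the bases \(\mathbf{T}^{m}\) are assumed to come from a routine like the one sketched in the methodology section.

```python
def learned_coefficients(I1, I2):
    """Scalar coefficients of the discovered closure (Eq. 20);
    I1, I2 are the sigmoid-rescaled invariants of Eq. 18."""
    G1 = 0.1893 * I1 + 0.2229 * I2 + 0.1176
    G2 = (-0.1036 * I1 * I2**3 - 0.05182 * I1**2 * I2**2
          + 0.1718 * I1**2 - 0.2333)
    G3 = (-2.514 * I1 * I2**4 - 3.514 * I2**3 - 0.01105 * I2**2
          - 2.0 * I1 * I2 + 2.98 * I2)
    return G1, G2, G3

def learned_b_perp(T, I1, I2, beta=7.0):
    """Eq. 19: damped linear combination of the tensor bases T = (T1, T2, T3)."""
    G = learned_coefficients(I1, I2)
    return (beta / 10.0) * sum(g * t for g, t in zip(G, T))
```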
\begin{table} \begin{tabular}{l l} LSTM architecture (3 hidden layers) & \(64\to 64\to 64\) \\ Train epochs & 20 \\ Batch size & 640 \\ Learning rate & 0.001 \\ Entropy coefficient & 0.005 \\ Risk factor & 0.05 \\ Minimum length of expressions & 4 \\ Maximum length of expressions & 32 \\ Mathematical operators & [+, -, \(\times\), \(\div\)] \\ Constant token repeats & 3 \\ \end{tabular} \end{table} Table 2: Main DSR hyper-parameters used for searching symbolic expressions of turbulence closures.

Figure 4: Schematic of the geometries of (a) periodic hills and (b) backward-facing step.

Figure 5: Computation mesh of the periodic hill with \(\alpha=0.8\).

As presented in Fig. 7, the baseline RANS simulation provides a reasonably good prediction for the shear component \(a_{12}\), even though some over-predictions can be observed at the downstream crest. However, its predictions for the three diagonal components are far removed from the high-fidelity DNS observations. Such inaccuracy is more obvious for \(a_{22}\) and \(a_{33}\), for which the prediction bias is particularly stark near the bottom wall. By contrast, the learned model captures the correct sign and presents a higher precision in the magnitude for all components of the anisotropic Reynolds stress tensor. Although the improvement is limited in certain flow areas, the discovered model generally provides better predictions than the LEVM. This superiority is relatively more significant for \(a_{11}\) and \(a_{22}\) in the region where flow separation can exist. The last two rows in Fig. 7 demonstrate the discrepancy between the highly-resolved DNS data and the results obtained from the LEVM and the learned model, respectively. They indicate that the learned model is, to some extent, more capable of reducing the predictive error in the anisotropic stress tensor. As aforementioned, the independent basis tensors and invariants representing key turbulence features are leveraged during the training process to guarantee both Galilean and rotational invariance of the discovered model. To make a deeper investigation into the realizability of the discovered turbulence model, the barycentric map [54] based on the combination of eigenvalues is used to provide a non-distorted visual representation of the turbulence anisotropy. The anisotropic Reynolds stress tensor predicted by the discovered symbolic model is shown in Fig. 8, compared with the baseline RANS and DNS results. The comparisons are performed at six streamwise locations, \(x/h=0.5\), \(x/h=1.0\), \(x/h=1.5\), \(x/h=2.0\), \(x/h=4.0\) and \(x/h=7.5\), respectively. As indicated in Fig. 8, the Reynolds stress anisotropy approaches the three-component limiting state for all three models as the point moves away from the bottom wall. This trend can be ascribed to the significantly weaker flow separation in the bulk region, where the turbulence gradually becomes isotropic. It is also noted that the baseline RANS Reynolds stress is close to the plane-strain limiting state at all six locations, showing significant discrepancies from the DNS results. The baseline RANS flow field is generally dominated by the shear layer, so the middle eigenvalue of the baseline RANS stress anisotropy is close to zero [54]. By contrast, the discovered model is more capable of capturing the stress anisotropy, achieving better agreement with the DNS data, even though there is a small minority of outliers, as shown in Fig. 8 (c). 
It is worth stressing that these few outliers hardly impede the performance of the learned model, as is discussed in the following context. The improved velocity and pressure fields, compared with the DNS data and the results obtained via the standard \(k-\epsilon\) model, are shown in Fig. 9 (both flow fields are non-dimensional). The streamlines resulting from the DNS data in Fig. 9 (a) show that the baseline RANS model underestimates the size of the separation bubble, revealing that the standard \(k-\epsilon\) model is not capable of precisely capturing the flow separation feature for this case. The learned model provides more accurate Reynolds stress anisotropy. Therefore, it successfully enlarges the RANS-predicted separation bubble. An analogous improvement is also reflected by the magnitude of the pressure in Fig. 9 (b). The baseline RANS model over-predicts the pressure magnitude at the upstream and downstream hillcrests. In addition, it also under-predicts the pressure in the downstream bulk region. By contrast, the result predicted by the discovered model is more consistent with the DNS data. The advantage of the discovered model is more specifically demonstrated by the velocity and pressure differences. At the upstream hillcrest where flow separation happens, the relative error of the baseline RANS results with respect to the DNS results is nearly twice as large as the error of the learned flow fields relative to the corresponding DNS flow fields. An additional quantitative investigation into the prediction performance of the learned model is described in Fig. 10, in which the predicted velocity profiles are compared against the DNS data. The result of the baseline RANS method shows a large departure from the DNS result. As a comparison, the velocity profiles predicted via the discovered model demonstrate remarkable improvement over the traditional RANS-based results in the whole flow region. By reviewing the RANS governing equation for momentum (Eq. 2), it is straightforward to deduce that the difficulty inherent in calculating the anisotropic Reynolds stress components \(a_{11}\) and/or \(a_{12}\) (note that the flow is two-dimensional) is perhaps the major contributing factor for the incapacity to accurately predict flow separations with traditional turbulence models. As can be seen in Fig. 7, while the standard \(k-\epsilon\) turbulence model generally makes good predictions for the shear component \(a_{12}\), it cannot capture the right sign and magnitude of the normal component \(a_{11}\). The learned model provides relatively better predictions for these two tensor components. Although the flow past the periodic hill is generally dominated by the shear layer (especially in the region where flow separation occurs), the deficiency in Reynolds stress anisotropy impairs the performance of the \(k-\epsilon\) turbulence model, rendering the predictions for other quantities of interest, such as velocity and pressure, inaccurate. To further investigate how the symbolic model contributes to the flow simulation, the contributions from the tensor basis functions constituting the learned model \(\mathbf{b}^{\perp}\) as well as the iteratively updated linear part \(\mathbf{b}^{\parallel}\) are presented in Fig. 11.

Figure 6: Reward training curve for using DSR to discover turbulence closure models. The curve shows the best reward value as a function of total expressions evaluated so far.
It can be observed that the linear part \(\mathbf{b}^{\parallel}\) and the first tensor basis \(\mathbf{T}^{1}\) dominate the calculation of \(a_{12}\), and the second tensor basis \(\mathbf{T}^{2}\) is the most important contributor for describing \(a_{11}\). Thus the linear part and the first two bases are the most important ingredients for accurately capturing the flow separation. More precisely, it is their gradients that determine the calculation of the recirculation region, since the momentum equation is solved with the divergence of the Reynolds stress. The third tensor basis \(\mathbf{T}^{3}\) makes the most important contribution to modeling \(a_{33}\), which enhances the anisotropic property of the predicted stress tensor. ### Application beyond the training scope It has been shown in Section III.3 that the model discovered with the present method yields promising improvements in the predictive accuracy of RANS-based simulations. To evaluate whether the learned model can achieve similar accuracy improvements for cases outside its training scope, the model in Eqs. 19 and 20 is applied to three test cases with different geometries, as shown in Fig. 4. As in the previous discussions, the performance of the learned model is assessed by comparing its predictions against the LEVM and high-fidelity data. Although the geometrical configurations of the first two test cases are similar to that of the training case, the flow behaviors are highly susceptible to the varying hill slope and channel length. Figs. 12 and 13 present barycentric maps of the Reynolds stress anisotropy predicted by the learned model for these two periodic hill test flows with \(\alpha=0.5\) and \(1.0\), respectively. For each test case, the Reynolds stress anisotropy at three typical streamwise locations, i.e., the upstream and downstream hill crests as well as the middle flat section, is selected to demonstrate the prediction performance of the learned model. Similar to the results shown in Fig. 8, the realizability condition for the anisotropic Reynolds stress tensor is well-maintained by the learned model. In addition, it clearly shows that the anisotropy predictions at all three cross-sections are markedly improved, albeit not in complete accord with the DNS predictions. After substituting the corrected Reynolds stress fields into the CFD solver, velocity improvements over the LEVM can be observed in Fig. 14 for the test cases. While the improvements are not distinguishable for case \(\alpha=0.5\) in the recirculation region, the discovered model indeed improves the accuracy of the flow separation prediction for case \(\alpha=1.0\). The observed inaccuracies in case \(\alpha=0.5\) can be ascribed to its steeper hill slope, which makes the flow separation more severe than in the training case. Moreover, the learned model generally produces better predictions for both test cases in the bulk region. The improvements can also be quantitatively verified by comparing the overall prediction error in streamwise velocity. Fig. 15 presents the distribution of error in streamwise velocity when applying the learned model, compared with the corresponding results obtained by the baseline simulations. For both test cases, it can be observed that the learned model provides a higher proportion of low relative error than the baseline turbulence model.

Figure 7: Comparison of anisotropic Reynolds stress components for periodic hill training flow (\(\alpha=0.8\)) using LEVM, DNS, and the learned model, respectively. 
From left to right: \(a_{11}\), \(a_{22}\), \(a_{33}\), and \(a_{12}\). From top to bottom: baseline RANS solutions, time-averaged DNS solutions, the learned model solutions, and the difference between exact data and predicted data by LEVM and the learned model, respectively. All results are normalized by the bulk velocity at crest \(\tilde{u}_{b}\).

As the relative error increases, the number of data points corresponding to the learned model quickly decreases. Although the learned model can produce higher errors in parts of the computational domain, it generally outperforms the baseline turbulence model. The last test case considered here is turbulent flow over a backward-facing step at \(Re=5000\). Its geometry is far removed from the flow scenario under which the symbolic model is discovered. Thus this case provides a more comprehensive assessment of the generalization performance of the learned model. Since full-field data are not available for this flow configuration, the performance of the discovered model is assessed by comparison with reported DNS data [50] and experimental data [51]. As shown in Fig. 16, the velocity profiles predicted by the learned model are much closer to the high-fidelity velocity data across the whole domain (the figure is zoomed into the near-step region due to the limitation of available DNS and experimental data). It should be emphasized that the symbolic model is only trained on one flow through a periodic hill, but it still achieves promising improvements for out-of-scope flow configurations. ### Effect of reward functions As introduced in Section II.3.1, the deep symbolic regression method uses an RL algorithm to train the neural network, so the choice of the reward function has a direct effect on the final form of the discovered models. The results discussed above are based on the model using a modified root-mean-square-error function (Eq. 17) as its reward function. In this section, a different reward function is utilized to investigate whether a different symbolic model can be discovered, and more importantly, whether similar accuracy improvements can be achieved by the new model. The new reward function is constructed as \[-\log(1+\frac{1}{n}\sum_{i=1}^{n}\|\widehat{\mathbf{b}_{i}^{\perp}}-\mathbf{b}_{i}^{ \perp}\|^{2}), \tag{21}\] where \(n\), \(\mathbf{b}^{\perp}\) and \(\widehat{\mathbf{b}^{\perp}}\) retain the same meanings as in Eq. 17. Compared with Eq. 17, the root-mean-square error is not normalized in this reward function. With the use of the logarithmic function, it would be expected that different closure models can be discovered.

Figure 8: Barycentric map of the predicted Reynolds stress anisotropy for periodic hill training flow (\(\alpha=0.8\)). The learned predictions on six streamwise locations at \(x/h=0.5\), \(x/h=1.0\), \(x/h=1.5\), \(x/h=2.0\), \(x/h=4.0\) and \(x/h=7.5\) are compared with the corresponding results from high-fidelity DNS simulations and the standard \(k-\epsilon\) RANS turbulence model in (a)-(f), respectively.

Figure 10: Normalized streamwise velocity profiles from the learned symbolic model for periodic hill training flow (\(\alpha=0.8\)), in comparison with the DNS data as well as the baseline RANS results obtained via the \(k-\epsilon\) model.

Using the same training dataset (i.e., case \(\alpha=0.8\) in Fig. 4 (a)) and parameters as listed in Tab. 2, the discovered model 
by taking Eq. 21 as the reward function is \[\mathbf{b}^{\perp}=\frac{\beta}{10} [(0.8603I_{1}^{2}I_{2}^{2}+0.8603I_{1}^{2}I_{2}+0.3014I_{1}^{2}+0.7206I_{1}I_{2}+0.5144I_{1})\mathbf{T}^{1} \tag{22}\] \[+(0.5544I_{1}^{2}I_{2}^{6}-0.5544I_{1}^{3}I_{2}^{4}-0.06858I_{1}^{ 2}I_{2}^{4}-0.5544I_{1}I_{2}^{2}-I_{1}I_{2}-0.3865)\mathbf{T}^{2}\] \[+(-1.328I_{1}^{3}I_{2}^{2}+1.328I_{1}^{2}I_{2}^{3}+I_{1}I_{2}^{2}- 3.289I_{1}I_{2}+3I_{2}^{2}+4.624I_{2})\mathbf{T}^{3}],\] where the damping factor \(\beta/10\) retains the same value as in Eq. 19. To simplify the discussion in the following context, the two discovered models in Eqs. 20 and 22 are referred to as model 1 and model 2, respectively. Comparing the two models discovered by different reward functions, the learned model 2 shows a more complex form, even though the training of both model 1 and model 2 is carried out using the same dataset and hyper-parameters. Specifically, for the scalar coefficient corresponding to the same tensor basis, the order of the invariants in model 2 is much higher than that in model 1, which means that there are more multiplication operations when applying model 2. By contrast, the higher-order terms, such as \(0.5544I_{1}^{2}I_{2}^{6}\), fail to survive in the training evolution of model 1. On the other hand, it is noted that the symbolic expressions in model 1 contain fewer terms, resulting in a more compact form of the turbulence closure model. The next step is to investigate the performance of model 2. Even though model 2 presents a different functional form from model 1, the use of a physics-based constraint of the tensor basis set helps ensure that reasonable accuracy improvements can still be achieved.

Figure 9: Comparison of (a) normalized streamwise velocity \(\bar{U}\) and (b) pressure coefficient \(C_{P}\) for periodic hill training flow (\(\alpha=0.8\)) using LEVM, DNS, and the learned model, respectively. From top to bottom: baseline RANS solutions, time-averaged DNS solutions, the learned model solutions, and the difference between exact data and predicted data by LEVM and the learned model, respectively.

Figure 11: Contribution to each component of the Reynolds stress anisotropy from each tensor basis for the learned model as well as the linear part. All results are normalized by the bulk velocity at crest \(\tilde{u}_{b}\).

Figure 12: Barycentric map of the predicted Reynolds stress anisotropy for periodic hill test flow (\(\alpha=0.5\)). The learned predictions on three streamwise locations at \(x/h=0.5\), \(x/h=3.5\), \(x/h=5.5\) are compared with the corresponding results from high-fidelity DNS simulations and the standard \(k-\epsilon\) RANS turbulence model in (a), (b) and (c), respectively.

Figure 13: Barycentric map of the predicted Reynolds stress anisotropy for periodic hill test flow (\(\alpha=1.0\)). The learned predictions on three streamwise locations at \(x/h=1.5\), \(x/h=4.5\), \(x/h=7.5\) are compared with the corresponding results from high-fidelity DNS simulations and the standard \(k-\varepsilon\) RANS turbulence model in (a), (b) and (c), respectively.

Figure 14: Normalized streamwise velocity profiles from the learned symbolic model for two periodic hill test cases, in comparison with the DNS data as well as the baseline RANS results obtained via the standard \(k-\varepsilon\) model. (a) \(\alpha=0.5\), (b) \(\alpha=1.0\)
For brevity, the periodic hill cases \(\alpha=0.5\) and \(\alpha=0.8\), corresponding to the extrapolated test case and the training case, are selected to demonstrate the predictive performance of model 2, in comparison with the results obtained by model 1. As can be seen in Figs. 17 and 18, both discovered models satisfy the realizability requirement. In a sense, model 2 performs even better than model 1, since there are no outliers in its predictions, as shown in Fig. 18 (c). In general, the two learned models outperform the baseline RANS turbulence model across the whole domain. Although model 2 has a more complex form, it has no distinct advantage over model 1 in improving the accuracy of the Reynolds stress anisotropy. Fig. 19 shows a comparison of the streamwise velocity predicted by the two learned models. It is noted that the two discovered models result in strikingly close velocity fields, even though model 2 bears little resemblance to model 1. This scenario can be ascribed to the statistical error associated with velocity and Reynolds stress. Specifically, the mean velocity is a first-order statistic, whereas the Reynolds stress is a second-order statistic that is less converged, as has been reported by Thompson _et al._[55]. Compared with the baseline RANS predictions, both discovered models achieve promising accuracy improvements for all flow cases, even if the corresponding configuration is outside the training scope.

Figure 15: Error distribution in normalized streamwise velocity for two periodic hill test cases. The relative error is calculated by \(U_{p}-U_{DNS}\), where \(U_{p}\) is predicted by the learned model or the standard \(k-\epsilon\) model and \(U_{DNS}\) indicates the DNS velocity. (a) \(\alpha=0.5\), (b) \(\alpha=1.0\).

Figure 16: Normalized streamwise velocity profiles (zoomed into the near-step region) from the learned symbolic model for the backward-facing step test case, in comparison with the reported DNS data as well as the experimental data.

## IV Conclusions In this work, a data-driven RANS turbulence modeling approach based on the combination of deep learning and symbolic regression techniques is proposed. This approach leverages the representational capability of deep learning to search the symbolic space for generating interpretable expressions. A risk-seeking RL algorithm is leveraged to train the LSTM neural network so as to maximize the reward of the best sampled expressions. The resultant turbulence closures are explicitly given in the form of algebraic polynomials via the embedded invariants and tensor basis functions, thus allowing direct functional inference and achieving promising Galilean invariance properties. In addition, with the use of the present method, it is straightforward to implement the discovered model equations into existing RANS solvers, without the need to deploy a deep learning environment for new data-driven simulations. The performance of the proposed approach is validated by three canonical flows that differ in geometrical configuration. Although the training dataset consists of only one flow through a periodically constricted channel, the learned turbulence model, on the whole, demonstrates promising accuracy improvements compared with the LEVM for all test flows, even for extrapolated cases that may hold flow features not seen during the training process. The algebraic form also makes the discovered model easier to deploy in practical RANS simulations. 
Additionally, the reward function plays an important role in discovering the symbolic turbulence model, since the neural network is trained by RL algorithms. In the context of this study, two reward functions are employed. While the two discovered turbulence models differ greatly in functional form, their predictions show a certain degree of similarity, especially for the low-order statistics. Despite the relative simplicity of the selected RANS simulations, the experience and insights gained from this work shed light on the future development of interpretable data-driven turbulence models. One of the most challenging issues is developing a reliable and interpretable model that can present universally good performance for complex industrial flow simulations. The model performance can also depend strongly on the policy used in RL training [37]. Another issue is the relatively high computational cost of the constant optimization in the DSR method, or more generally, of the training. Fuelled by the advances of symbolic regression methods and RL algorithms in the ML community, it is anticipated that more comprehensive frameworks can be constructed for turbulence modeling research. By integrating the RANS simulations with the RL-based training process, the reward can be defined by other quantities of interest, so that the discovered model would be more consistent with physical observations [37]. In addition, by extending RL with special types of neural networks, non-local constitutive models could be developed for improving the performance of data-driven turbulence models [40, 56].

Figure 17: Barycentric map of the Reynolds stress anisotropy predicted by learned model 2 (Eq. 22) for periodic hill test case \(\alpha=0.5\), compared with the corresponding results from model 1 (Eq. 20).

Figure 18: Barycentric map of the Reynolds stress anisotropy predicted by learned model 2 (Eq. 22) for periodic hill training case \(\alpha=0.8\), compared with the corresponding results from model 1 (Eq. 20).

## V Data availability The relevant code and data used in this project are publicly available at [https://github.com/thw1021/DSRRANS](https://github.com/thw1021/DSRRANS). The DSR method is implemented by Petersen _et al._[20] using the deep learning library TensorFlow[57], and the CFD simulations are carried out using OpenFOAM[48]. ## VI Acknowledgements This work is supported by the National Key Research and Development Program (Grant Nos. 2019YFE0192600 and 2019YFB1503700), Natural Science Foundation of China (Grant No. 52006098), and Priority Academic Program Development of Jiangsu Higher Education Institutions. Y. Wang acknowledges the support of the Natural Science Foundation of China (Grant Nos. 11902153 and 12272178), the Research Fund of State Key Laboratory of Mechanics and Control of Mechanical Structures (Grant No. MCMS-I-0122G01) and Key Laboratory of Computational Aerodynamics, AVIC Aerodynamics Research Institute. ## Appendix A Brief overview of LSTM neural network The LSTM network is a special variant of the RNN. It is designed to process data sequences and utilizes its internal memory to learn and harness information relative to what it has seen so far. As a consequence, the network predictions are not only determined by the current input but also conditionally dependent on the recent input sequence. A schematic of the LSTM network is shown in Fig. 20. A typical LSTM cell contains three gates: the input gate \(i\), output gate \(o\) and forget gate \(f\). 
The cell input and output are given by \(x\) and \(h\), respectively. Compared to the standard RNN, the LSTM network introduces an additional mechanism to carry information, i.e., the cell state \(c\), across many timesteps. The basic equations involved in the forward computation of LSTM networks are given as follows \[\begin{split} f_{t}&=\sigma\left(W_{f}\cdot[h_{t-1},x_{t}]+b_{f}\right),\\ i_{t}&=\sigma\left(W_{i}\cdot[h_{t-1},x_{t}]+b_{i}\right),\\ c_{t}^{\prime}&=\tanh\left(W_{c}\cdot[h_{t-1},x_{t}]+b_{c}\right),\\ c_{t}&=f_{t}*c_{t-1}+i_{t}*c_{t}^{\prime},\\ o_{t}&=\sigma\left(W_{o}\cdot[h_{t-1},x_{t}]+b_{o}\right),\\ h_{t}&=o_{t}*\tanh\left(c_{t}\right),\end{split} \tag{10}\] where \(\sigma\) is the sigmoid activation function, and \(W\) and \(b\) are the weights and biases, respectively. As indicated in Fig. 20, the cell state \(c\) is combined with the input connection and the recurrent connection, and it works together with the three gates to affect the next cell state by adding or removing information. Conceptually, the carry dataflow keeps reading from and writing to memory, thus allowing past information to be injected at a later time. This mechanism is the key ingredient that ensures the LSTM network does not suffer from vanishing or exploding gradients.

Figure 19: Normalized streamwise velocity profiles obtained by using the learned model 2, in comparison with the corresponding results from model 1. (a) Periodic hill test case \(\alpha=0.5\); (b) Periodic hill train case \(\alpha=0.8\).

RL is one of the main branches of machine learning. It is concerned with how to learn, through trial and error from environmental feedback, to maximize a numerical reward signal. A simplified overview of the RL framework is presented in Fig. 21. As illustrated, there are two core components in RL: the agent and the environment. The environment represents the problem, and the agent attempts to find the solution to the problem. In general, the learning agent interacts with the environment through three channels: the state, action and reward. The agent perceives the state of its environment and chooses actions to affect the environment. The environment then reacts to the actions by producing a reward signal. After training, the agent is expected to map situations to actions so that the reward signal is maximized. In the present work, the risk-seeking policy gradient algorithm is employed to train the LSTM neural networks to produce better-fitting symbolic expressions. The core idea of the risk-seeking policy gradient can be expressed as \[J(\theta;\epsilon)=\mathbb{E}_{\tau\sim p(\tau|\theta)}\left[R(\tau)\mid R(\tau)\geq R_{\epsilon}(\theta)\right], \tag{12}\] where \(R_{\epsilon}(\theta)\) denotes the \((1-\epsilon)\)-quantile of the reward distribution produced by the current policy. The learning objective \(J(\theta;\epsilon)\) is therefore to maximize the reward of the top \(\epsilon\) fraction of samples; samples below the threshold are not involved in the training process. This strategy has been proven to be instrumental in improving the performance of the sampled symbolic expressions. In the context of this work, the distribution over mathematical expressions \(p(\tau\mid\theta)\) plays the role of a policy. The algorithm agent samples new tokens when the parent and sibling inputs are observed. During every episode, the agent generates a sequence of expressions, and the reward value is then calculated by the reward function.
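To make the forward computation of Eq. (10) concrete, a single LSTM cell step can be sketched in NumPy as follows. This is a minimal transcription for illustration only; the actual networks in this work are built with TensorFlow.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One forward step of an LSTM cell, mirroring Eq. (10).

    W and b are dicts keyed by 'f', 'i', 'c', 'o'; each W[k] has shape
    (hidden_dim, hidden_dim + input_dim) and acts on [h_{t-1}, x_t].
    """
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W['f'] @ z + b['f'])     # forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])     # input gate
    c_bar = np.tanh(W['c'] @ z + b['c'])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_bar       # carry: new cell state
    o_t = sigmoid(W['o'] @ z + b['o'])     # output gate
    h_t = o_t * np.tanh(c_t)               # new hidden state
    return h_t, c_t
```

Likewise, the risk-seeking selection behind Eq. (12) amounts to keeping only the top \(\epsilon\) fraction of the sampled expressions in each batch; a sketch of this filtering step is:

```python
import numpy as np

def risk_seeking_filter(rewards, epsilon=0.05):
    """Keep only samples with R(tau) >= R_epsilon(theta), the
    (1 - epsilon)-quantile of the batch rewards; only these
    contribute to the policy-gradient update."""
    r = np.asarray(rewards, dtype=float)
    threshold = np.quantile(r, 1.0 - epsilon)  # R_epsilon(theta)
    keep = r >= threshold
    # Each kept sample is weighted by its reward in excess of the
    # threshold, (R - R_epsilon); the rest are discarded.
    return keep, r[keep] - threshold
```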
2302.10049
Tail recursion transformation for invertible functions
Tail recursive functions allow for a wider range of optimisations than general recursive functions. For this reason, much research has gone into the transformation and optimisation of this family of functions, in particular those written in continuation passing style (CPS). Though the CPS transformation, capable of transforming any recursive function to an equivalent tail recursive one, is deeply problematic in the context of reversible programming (as it relies on troublesome features such as higher-order functions), we argue that relaxing (local) reversibility to (global) invertibility drastically improves the situation. On this basis, we present an algorithm for tail recursion conversion specifically for invertible functions. The key insight is that functions introduced by program transformations that preserve invertibility need only be invertible in the context in which the functions subject to transformation call them. We show how a bespoke data type, corresponding to such a context, can be used to transform invertible recursive functions into a pair of tail recursive functions acting on this context, in a way where calls are highlighted, and from which a tail recursive inverse can be straightforwardly extracted.
Joachim Tilsted Kristensen, Robin Kaarsgaard, Michael Kirkedal Thomsen
2023-02-20T15:54:19Z
http://arxiv.org/abs/2302.10049v3
# Tail recursion transformation for invertible functions

###### Abstract

Tail recursive functions allow for a wider range of optimisations than general recursive functions. For this reason, much research has gone into the transformation and optimisation of this family of functions, in particular those written in continuation passing style (CPS). Though the CPS transformation, capable of transforming any recursive function to an equivalent tail recursive one, is deeply problematic in the context of reversible programming (as it relies on troublesome features such as higher-order functions), we argue that relaxing (local) reversibility to (global) invertibility drastically improves the situation. On this basis, we present an algorithm for tail recursion conversion specifically for invertible functions. The key insight is that functions introduced by program transformations that preserve invertibility need only be invertible in the context in which the functions subject to transformation call them. We show how a bespoke data type, corresponding to such a context, can be used to transform invertible recursive functions into a pair of tail recursive functions acting on this context, in a way where calls are highlighted, and from which a tail recursive inverse can be straightforwardly extracted.

**Keywords:** tail recursion, CPS transformation, program transformation, program inversion

## 1 Introduction

When a function calls itself, either directly or indirectly, we say that the function is recursive. Furthermore, when the last operation of all branches in the definition of a recursive function is the recursive call, we say that the function is tail recursive. Unlike generally recursive functions, tail recursive functions can be easily compiled into loops in imperative languages (in particular assembly languages), doing away with the overhead of function calls entirely. This makes tail recursion a desirable programming style. Recall that a program is _reversible_ when it is written such that it consists only of invertible combinations of invertible atomic operations; this is the idea of reversibility as a _local_ phenomenon. While every reversible program is also _invertible_ (in the sense that it has an inverse), the converse is not the case, as an invertible program may consist of a number of non-invertible functions that simply happen to interact in such a way as to make the program invertible. As such, invertibility is a _global_ phenomenon. While recursion has been employed in both imperative and functional reversible programming languages [12, 24] for many years, tail recursion has been more cumbersome to handle. Here, we argue that relaxing (local) reversibility to (global) invertibility can drastically simplify the handling of tail recursion and even make it possible to use (adaptations of) conventional CPS transformation methods for transforming general recursive functions to tail recursive ones. To see this, consider the list reversal and list append functions

```
reverse1 []       = []
reverse1 (x : xs) =
  let ys = reverse1 xs in
  let (zs:_x) = snoc1 (ys, x) in (zs:_x)

snoc1 ([], x)     = (x : [])
snoc1 (y : ys, x) =
  let (zs:_x) = snoc1 (ys, x) in
  (y : zs:_x)
```

The careful reader will have already realised that reverse1 is its own inverse. Here, we will refrain from clever realisations and focus on purely mechanical ways of providing inverse functions.
For instance, the inverses

```
unsnoc1 (y : zs:_x) =
  let (ys, x) = unsnoc1 (zs:_x) in
  (y : ys, x)
unsnoc1 (x : [])    = ([], x)

unreverse1 []      = []
unreverse1 (zs:_x) =
  let (ys, x) = unsnoc1 (zs:_x) in
  let xs = unreverse1 ys in (x : xs)
```

are produced by rewriting "let y = f x in t" to "let x = unf y in t", and then swapping the order of bindings in the remaining program t, starting from the last line and ending with the first, much in the style of Romanenko [19]. To transform these recursive functions into tail recursive functions, the standard technique is to introduce an iterator that passes around an explicit argument for accumulating the deferred part of the computation, e.g.,

```
reverse2 xs = reverse2_iter (xs, [])

reverse2_iter ([], accum)     = accum
reverse2_iter (x : xs, accum) = reverse2_iter (xs, x : accum)
```

Implementing list reversal in this style makes it tail recursive, but it also loses an important property, namely _branching symmetry_. This is crucial, since branching symmetry was the entire reason why we could mechanically invert the implementations of snoc1 and reverse1 so easily: the leaves of their cases are syntactically orthogonal. For instance, in reverse1, when the input is an empty list, the result is also an empty list, and when the input is nonempty, the result is also nonempty. As a consequence of this loss of symmetry, the iterator function reverse2_iter is not considered _well-formed for inversion_ as defined by Glück & Kawabe [6]. Consequently, it cannot be implemented in a reversible functional programming language such as RFun [20, 24] or CoreFun [8], as it breaks the symmetric first match policy: the base case returning accum will also return the same value from the iterative case. Even worse, reverse2_iter cannot be inverted to a deterministic function using known methods [5, 16, 18]. Of course, this is because reverse2_iter is not injective, so the output of a particular input is not unique. It does not take much effort to show that reverse1 and reverse2 are semantically equivalent. Thus, since the latter does nothing but call reverse2_iter, it is surprising that we cannot invert it. A brief analysis of the problem concludes that reverse2 restricts itself to a subset of the domain of reverse2_iter, and since reverse2 is clearly injective, reverse2_iter as restricted to this smaller domain must be injective as well. By further analysis, we realise that the second component of the arguments to reverse2_iter, as called by reverse2, is static and can be ignored. In this context, reverse2_iter is in one of three configurations: accepting the restricted input, iterating, or returning an output. By introducing a data type, we can explicitly restrict reverse2_iter to this smaller domain:

```
data Configuration a = Input a | Iteration (a, a) | Output a

reverse3 xs = let (Output ys) = iterate (Input xs) in ys

iterate (Input xs)               = iterate (Iteration (xs, []))
iterate (Iteration (x : xs, ys)) = iterate (Iteration (xs, x : ys))
iterate (Iteration ([], ys))     = Output ys
```

Such programming languages usually introduce the notion of tail recursion by introducing an imperative-style language feature.
For instance, Mogensen's language for guarded equations [15] features a loop construct that allows it to call a partial function until it fails (by the pattern matching not being exhaustive), as illustrated by the function reverse4, defined by:

```
reverse4 (xs) = let ([], acc) = loop revStep (xs, []) in acc
  where revStep (x : xs, acc) = (xs, x : acc);
```

Likewise, the Theseus programming language [9] provides a trace operation encoded via so-called _iteration labels_, as demonstrated in reverse5 below.

```
iso reverse5 :: [a] <-> [a]
\ xs                       = iterate $ inL xs, []
iterate $ inL (x : xs), ys = iterate $ inL xs, (x : ys)
iterate $ inL [], ys       = iterate $ inR ys
iterate $ inR ys           = ys
  where
    iterate :: ([a] * [a]) + [a]
```

We _do not_ introduce a new language feature, but instead relax the requirement that _all functions_ must be well-formed for inversion. Instead, we require only that _the subject of the transformation_ must be well-formed for inversion. For instance, recall that the function snoc1 from Section 1 is well-formed for inversion, and consider Nishida & Vidal's CPS transformation of first-order functions [17]:

```
data Continuation a = Id | F a (Continuation a)

snoc2 p = snoc2_iter (p, Id)
  where
    snoc2_iter (([], x), g)     = snoc2_call (g, x : [])
    snoc2_iter ((y : ys, x), g) = snoc2_iter ((ys, x), F y g)
    snoc2_call (Id, zs:_x)      = zs:_x
    snoc2_call (F y g, zs:_x)   = snoc2_call (g, y : zs:_x)
```

Here, the computation has been split into two parts: one that computes a structure corresponding to the closure of the usual continuation function, and another that corresponds to evaluating the call to said continuation. Now, just as with reverse2_iter, snoc2_iter and snoc2_call are not injective functions, but they can be restricted to such by recognizing that one or more of their arguments are static (Id and [] respectively). Consequently, we can introduce a datatype that does away with these, and invert snoc2 as

```
data Configuration' input acc arg output =
    Input' input
  | Iterate (input, Continuation acc)
  | Call (Continuation acc, arg)
  | Output' output

snoc6 (ys, x) = let (Output' (zs:_x)) = snoc6_call (snoc6_iter (Input' (ys, x))) in (zs:_x)

snoc6_iter (Input' (ys, x))           = snoc6_iter (Iterate ((ys, x), Id))
snoc6_iter (Iterate ((y : ys, x), g)) = snoc6_iter (Iterate ((ys, x), F y g))
snoc6_iter (Iterate (([], x), g))     = Call (g, [x])
```
```
snoc6_call (Call (g, [x]))              = snoc6_call (Iterate ([x], g))
snoc6_call (Iterate ((zs:_x), F y g))   = snoc6_call (Iterate ((y : zs:_x), g))
snoc6_call (Iterate ((zs:_x), Id))      = Output' (zs:_x)

unsnoc6_call (Output' (zs:_x))          = unsnoc6_call (Iterate ((zs:_x), Id))
unsnoc6_call (Iterate ((y : zs:_x), g)) = unsnoc6_call (Iterate ((zs:_x), F y g))
unsnoc6_call (Iterate ([x], g))         = Call (g, [x])
```

**Definition 2**.: _A function \(f\), as defined by the equation \(f\ p=t\), is well-formed for inversion if \(t\) is closed under \(p\). Moreover,_
* _If_ \(t\) _is an application, then_ \(t\equiv g\ p_{0}\)_, where_ \(g\) _is well-formed for inversion as well._
* _If_ \(t\) _is a case-expression, then_ \(t\equiv\texttt{case}\ t_{0}\ \texttt{of}\ p_{i}\to t_{i}\)_, where_ \(t_{0}\) _is well-formed for inversion, each_ \(t_{i}\) _is closed under the corresponding pattern_ \(p_{i}\)_, and for all indices_ \(j\) _and_ \(k\)_, if_ \(j<k\) _then_ \(p_{j}\) _is syntactically distinguishable from_ \(p_{k}\) _and the leaf terms of_ \(t_{j}\) _are all syntactically distinguishable from the corresponding leaf terms of_ \(t_{k}\)_._

When a function is _well-formed for inversion_ in this way, we know how to invert it using existing methods, even though such methods may require some expensive search. However, functions that do not contain a case-expression are all trivially and efficiently invertible, so we can focus on the hard part, namely conditionals. Functions that are well-formed for inversion will be implemented with function clauses of the following two forms \[f\ p_{k}=t_{k}\] \[f\ p_{i}=g_{i}\ (t_{i0},t_{i1}).\] Each term \(t_{k}\) is well-formed for inversion and does not contain recursive calls to \(f\), \(k\) is less than \(i\), and \(g_{i}\) is well-formed for inversion. Furthermore, \(t_{i0}\) may contain recursive calls to \(f\), but \(t_{i1}\) is free of such calls. Moreover, the result of calling \(g_{i}\) with these arguments yields patterns that are distinguishable from the results of calling \(g_{j}\) on \((t_{j0},t_{j1})\) whenever \(i<j\). The first-order CPS transformation proposed by Nishida & Vidal essentially defers the call to \(g_{i}\) by storing the unused parts of \(p_{i}\) and \(t_{i1}\) in a data structure, yielding the program transforms \[\texttt{data}\ \tau=\texttt{Id}\ \mid\texttt{G}_{i}(\tau_{t_{i1}},\tau)\] \[f_{0}\ x=f_{1}\ (\texttt{Id},x)\] \[f_{j}\ (g,p_{k})=f_{n}\ (g,t_{k})\] \[f_{j}\ (g,p_{i})=f_{l}\ (\texttt{G}_{i}(t_{i1},g),t_{i0})\] \[f_{n}\ (\texttt{Id},y)=y\] \[f_{n}\ (\texttt{G}_{i}(p_{i1},g),p_{i0})=f_{n}\ (g,g_{i}\ (p_{i0},p_{i1}))\,,\] where \(1\leq j\leq n\) and \(1\leq l\leq n\). This transformation clearly preserves semantics (in the sense that \(f\) is semantically equivalent to \(f_{0}\)), since \(f_{j}\) essentially builds up a stack of calls to the respective \(g_{i}\)'s, while \(f_{n}\) performs these calls in the expected order. The only problem is that each \(f_{j}\) may not be well-formed for inversion, and \(f_{n}\) is certainly not well-formed; the variable pattern in the first case is a catch-all that later cases cannot be syntactically orthogonal to. Consequently, we cannot use existing methods to invert these functions. Instead, we realize that their origins are well-formed for inversion, so we should have been able to invert them in a way that is "well-formed enough".
The idea is to represent each intermediate function with a datatype, and use the fact that each \(g_{i}\) is well-formed for inversion to construct the invertible program as \[\texttt{data}\;\tau=\texttt{Id}\;\mid\texttt{G}_{i}(\tau_{t_{i1}},\tau).\] \[\texttt{data}\;\tau^{\prime}=\texttt{In}(\tau_{x})\mid\texttt{F}_{j0}(\tau,\tau_{i0})\mid\texttt{H}_{k1}(\tau,\tau_{k}).\] \[\texttt{data}\;\tau^{\prime\prime}=\texttt{H}_{k2}(\tau,\tau_{k})\mid\texttt{Eval}(\tau,\tau_{k})\mid\texttt{Out}(\tau_{y}).\] \[f^{\prime}_{0}=f^{\prime}_{2}\circ h\circ f^{\prime}_{1}.\] \[h\;(\texttt{H}_{k1}(f,x))=\texttt{H}_{k2}(f,x).\] \[f^{\prime}_{1}\;(\texttt{In}(p_{l}))=f^{\prime}_{1}\;(\texttt{F}_{l}(\texttt{Id},p_{l}))\] \[f^{\prime}_{1}\;(\texttt{F}_{j}(g,p_{j}))=f^{\prime}_{1}\;\texttt{F}_{j0}(\texttt{G}_{j}(t_{i1},g),t_{i0}).\] \[f^{\prime}_{1}\;(\texttt{F}_{j}(g,p_{k}))=f^{\prime}_{1}\;\texttt{H}_{k1}(g,p_{k}).\] \[f^{\prime}_{2}\;(\texttt{H}_{k2}(g,p_{k}))=f^{\prime}_{2}\;\texttt{Eval}(g,p_{k}).\] \[f^{\prime}_{2}\;(\texttt{Eval}(\texttt{F}_{i0}(\texttt{G}_{i}(p_{i1},g)),p_{i0}))=f^{\prime}_{2}\;(g,y_{i})\;\texttt{where}\;y_{i}=g_{i}(p_{i0},p_{i1}).\] \[f^{\prime}_{2}\;(\texttt{Eval}(\texttt{Id},y))=\texttt{Out}(y).\] Now, just as with the CPS transformation, \(f^{\prime}_{0}\) is semantically equivalent to \(f\), because \(f^{\prime}_{1}\) collects the calls \(\texttt{G}_{j}\) and \(f^{\prime}_{2}\) evaluates them. As such, the only difference is that the input is wrapped in In and Out. However, this time we can derive an inverse program as \[f^{\prime-1}_{0}=f^{\prime-1}_{1}\circ h^{-1}\circ f^{\prime-1}_{2} \tag{1}\] \[h^{-1}\;(\texttt{H}_{k2}(f,x))=\texttt{H}_{k1}(f,x) \tag{2}\] \[f^{\prime-1}_{2}\;(\texttt{Out}(y))=f^{\prime-1}_{2}\;(\texttt{Eval}(\texttt{Id},y)) \tag{3}\] \[f^{\prime-1}_{2}\;(g,y_{i})=f^{\prime-1}_{2}\;(\texttt{Eval}(\texttt{F}_{i0}(\texttt{G}_{i}(p_{i1},g)),p_{i0})) \tag{4}\] \[\texttt{where}\;(p_{i0},p_{i1})=g^{-1}_{i}(y_{i}) \tag{5}\] \[f^{\prime-1}_{2}\;(\texttt{Eval}(g,p_{k}))=\texttt{H}_{k2}(g,p_{k}) \tag{6}\] \[f^{\prime-1}_{1}\;(\texttt{H}_{k1}(g,p_{k}))=f^{\prime-1}_{1}\;(\texttt{F}_{j}(g,p_{k})) \tag{7}\] \[f^{\prime-1}_{1}\;\texttt{F}_{j0}(\texttt{G}_{j}(p_{i1},g),p_{i0})=f^{\prime-1}_{1}\;(\texttt{F}_{j}(g,p_{j})) \tag{8}\] \[f^{\prime-1}_{1}\;(\texttt{F}_{l}(\texttt{Id},p_{l}))=(\texttt{In}(p_{l})) \tag{9}\] The correctness of this technique can be shown as follows.

**Theorem 1**.: _The function \(f^{\prime-1}_{0}\) is inverse to \(f_{0}\)._

Proof.: We remark the following for each step of the transformation:
* (1) \(f^{\prime-1}_{0}\) is inverse to \(f^{\prime}_{0}\) (by definition of function composition) precisely if \(h^{-1}\), \(f^{\prime-1}_{1}\), and \(f^{\prime-1}_{2}\) are inverse to \(h\), \(f^{\prime}_{1}\), and \(f^{\prime}_{2}\), respectively.
* (2) \(h^{-1}\) is trivially inverse to \(h\), since it does not contain application or case-expressions.
* (3) There is only one way of constructing the arguments to \(f^{\prime-1}_{2}\), namely using the constructor Out on the output of \(f^{\prime}_{2}\).
* (4-5) Since \(g_{i}\) was well-formed for inversion, the output \(y_{i}\) is syntactically orthogonal to the outputs of \(g_{j}\) when \(i\neq j\). The patterns it takes as arguments \((p_{i0},p_{i1})\) are syntactically orthogonal to all other such patterns, so the choice of constructors \(\mathtt{F}_{i0}\) and \(\mathtt{G}_{i}\) has to be unique as well.
* (6) \(p_{k}\) is trivially recognized as one of the syntactically orthogonal parts of the left-hand side of \(f\), which was well-formed for inversion.
* (7) There is only one way of constructing \(\mathtt{H}_{k1}\), namely using \(h^{-1}\).
* (8) These are exactly the arguments of \(g_{j}\). Since \(f\) was well-formed for inversion, they must be closed under \(p_{j}\) (Definition 2), which we may now reconstruct by copying.
* (9) Finally, the first argument of \(\mathtt{F}_{l}\) could only have been \(\mathtt{Id}\) at one program point, and the result has to be constructed using \(\mathtt{In}\), and we are done.

By equations (7)-(9), \(f_{1}^{\prime-1}\) is inverse to \(f_{1}^{\prime}\), and by equations (3)-(6), \(f_{2}^{\prime-1}\) is inverse to \(f_{2}^{\prime}\). Since, by equation (2), \(h^{-1}\) is inverse to \(h\), it follows by equation (1) that \(f_{0}^{\prime-1}\) is inverse to \(f_{0}^{\prime}\). Now, since \(f_{0}^{\prime}\) was semantically equivalent to \(f_{0}\), \(f_{0}^{\prime-1}\) must be inverse to \(f_{0}\) as well, and we are done.

## 4 Known limitations

In Definition 1 we required linearity, which is slightly stronger than it needs to be. The reason why we chose this restriction is that it commonly occurs in reversible programming [20, 24] and makes it easy to reject programs that are trivially non-invertible. However, the linearity restriction could be relaxed to _relevance_ (i.e., that variables must occur _at least once_ rather than _exactly once_) as in [8]. Moreover, we might even want to relax this restriction further to say that all values that were available to a particular function of interest must be used at least once on every execution path. We do not believe that it can be relaxed further than that, as an invertible program cannot lose information when it is not redundant. Additionally, one may want to relax the constraint of local invertibility to allow operations for which an inverse is symbolically derivable. For instance, consider extending the syntax for patterns with integer literals, and terms with addition and subtraction. Hence, the following formulation of the Fibonacci-pair function is possible.

```
fib (a, b) = (a + b, a)
dec n      = n - 1

fib_pair 0 = (1, 1)
fib_pair n = fib (fib_pair (dec n))
```

While this program is invertible, it requires a bit of inference to derive the inverse: for instance, that one of the arguments of fib is preserved in its output, which is needed to infer unfib. Likewise for dec and undec, the compiler must infer that subtracting a constant can be automatically inverted.
Additionally, while the algebraic data representation of natural number constants is syntactically distinguishable, with integer constants and variables the compiler has to insert guards, as in

```
fib_pair = unoutput . call7 . glue . iterate7 . input

iterate7 (Input2 n)                     = iterate7 (Iterate2 (n, Id))
iterate7 (Iterate2 (n, f)) | n /= 0     = iterate7 (Iterate2 (n', F () f)) where n' = dec n
iterate7 (Iterate2 (0, f))              = Output2 (f, (1, 1))

call7 (Input3 (f, (1, 1)))              = call7 (Iterate3 (f, (1, 1)))
call7 (Iterate3 (F () f, pair))         = call7 (Iterate3 (f, y)) where y = fib pair
call7 (Iterate3 (Id, x))                = Output3 x

uncall7 (Output3 x)                     = uncall7 (Iterate3 (Id, x))
uncall7 (Iterate3 (f, y)) | y /= (1, 1) = uncall7 (Iterate3 (F () f, pair)) where pair = unfib y
uncall7 (Iterate3 (f, (1, 1)))          = Input3 (f, (1, 1))

uniterate7 (Output2 (f, (1, 1)))        = uniterate7 (Iterate2 (0, f))
uniterate7 (Iterate2 (n', F () f))      = uniterate7 (Iterate2 (n, f)) where n = undec n'
uniterate7 (Iterate2 (n, Id))           = Input2 n

unfib_pair = uninput . uniterate7 . unglue . uncall7 . output
```

The entry assertion must only be true on entry to the loop, while the exit condition will only be true in the final iterations. For completeness, there are two statements in the loop: the upper (called the _pre/post statement_) we can use to transform between the input/output state and the iterative state, while the lower (called the _iterative statement_) is the most widely used, as it has semantics similar to the normal while-loop. We will show the translation based on the reverse3 example from before.

```
data Configuration a = Input a | Iteration (a, a) | Output a

reverse3 xs = let (Output ys) = interpret (Input xs) in ys

1 interpret (Input xs)               = interpret (Iteration (xs, []))
2 interpret (Iteration (x : xs, ys)) = interpret (Iteration (xs, x : ys))
3 interpret (Iteration ([], ys))     = Output ys
```

The first step is to apply our transformation to yield a tail recursive function; here, this has already been done. Next, we must translate the functional abstract data types to imperative values. The Configuration type will be translated into an enumeration type, with the values Input, Iteration, and Output encoded as integers (e.g. 1, 2, and 3). We would also need to encode the function data (here the two lists), which could be done with arrays and a given length. We will, however, not dwell on the data encoding, as our focus is the translation of the code that our transformation generates. We can now construct the reverse3 procedure that will contain the loop. This will be given the encoded list and return the reversed encoded list. Here a full compiler (again outside our scope) should also be aware that e.g.
Janus restricts to call-by-reference, making it necessary to compile the function to inline data handling. This is, however, not a restriction in reversible assembly languages. At the beginning of reverse3 we will create a local variable configuration that is initialised to Input. After the loop, this variable will be delocalised with the value Output. At the entry to the loop, the available variables will thus be configuration and the function data (i.e. the encoding of the two lists). The reversible loop will implement the interpret function. We assume that there exists a translation of the data handling, meaning that we have the two procedures

* empty, which checks if the encoded list is empty, and
* move, which moves the first element of one encoded list to the other.

With this, we mechanically derive the four components of the loop as follows.

**Entry assertion:** configuration = Input. We have defined that this is the only valid value at entrance. Afterwards it will not be used.

**Exit condition:** configuration = Output. Similar to before, this value is only used on exit from the function.

**Pre/post statement: Lines 1 and 3.** These two lines can be implemented as two conditionals in sequence, similar to

```
if (configuration = Input)
then configuration++   // Update from enum Input to Iteration
fi (configuration = Iteration and empty(ys))
if (configuration = Iteration and empty(xs))
then configuration++   // Update from enum Iteration to Output
fi (configuration = Output)
```

Here, the first conditional transforms the Input value to an Iteration value, with the assertion that the resulting list is empty, while the second conditional transforms an Iteration value with an empty list to an Output value, with an assertion that we now have an output value.

**Iterative statement: Line 2.** This performs the iterative computation, generating code similar to

```
if (configuration = Iteration and (not empty(xs)))
then move(xs, ys)   // Move the head of xs onto ys
fi (configuration = Iteration and (not empty(ys)))
```

For completeness we check and assert that configuration = Iteration, though this is clear from the translation. We also assure correct data handling by checking that the relevant lists are non-empty (matching the pattern matching of the function) and implement the relevant data handling (the move procedure). The generated program could be more efficient, but it clearly demonstrates how the datatype Configuration translates to a reversible loop. The hard work is in the encoding of the data. This approach also applies to functions that have more than one function clause with Input and Output cases, and more iterative clauses.

## 6 Discussion and related work

While it is possible to invert all injective functions [14, 1], inverse programs constructed this way are often not very efficient. In spite of this, specific inversion methods tend to have well-defined subsets of programs for which they can produce efficient inverses. Precisely classifying the problems which can be efficiently inverted is hard, so the problem is usually approached from a program-specific perspective. One approach is to restrict programs to be formulated in a way that is particularly conducive to inversion. Another approach is grammar-based inversion, which works by classifying how hard it is to invert a function based on the properties of a grammar, derived from the function body, that decides whether or not a given value is in its range [7, 13].
An alternative perspective on finding efficient inverse programs is to acknowledge the huge body of knowledge that has been produced in order to optimize programs running in the forward direction for time complexity, and to see if we can bring those optimizations into the realm of reversible computing. In doing so, we have not found a need to invent a new class of programs to invert. Instead, we enable existing techniques for optimizing CPS-transformed programs to be leveraged on programs which do not naturally allow for CPS transformation. The technique we use for transforming programs into tail recursive form is essentially Nishida & Vidal's method for continuation passing style for first-order programs [17]. In doing so, we introduce an extra function that evaluates a data type representing a continuation. In related work on grammar/syntax based inversion techniques [6, 18], _well-formed with respect to inversion_ means that the function is linear in its arguments (and so does not throw anything away), and that cases are syntactically orthogonal. Programs that are well-formed in this sense allow inversion by applying known inversion methods to the iteration function, which then becomes a non-deterministic inverse program (since it need not be injective). However, existing methods for non-determinism elimination can be applied to solve this problem, since the original program was _well-formed_.

## 7 Conclusion

In this work we have shown that invertible programs admit a tail recursion transformation, provided that they are syntactically well-formed. This was achieved using a version of the first-order CPS transformation tailored to invertible programs. Alternatives that do not have tail recursion optimisation must instead rely on search, which can be prohibitively expensive. Instead of searching, we can enforce determinism by pattern matching. That is, for transformations where the non-injective part is introduced by the compiler, we can use a "_new datatype trick_". Finally, we have shown the correctness of our transformation and how the transformed programs can be efficiently compiled to the reversible loops found in reversible flowchart languages, which in turn may serve as a basis for efficient implementations in reversible abstract machines.

### Future work

Currently, the transformation is implemented for a subset of Haskell. Future work will be to integrate this into an invertible functional programming language such as Jeopardy [10, 11]. This work avoids the need for a symbolic and relational intermediate representation. Perhaps future iterations of such an approach will enable a relaxation of the existing methods' very strict requirements (such as linearity), and thus a less restrictive notion of well-formedness, but also a less syntactic notion of the complexity of function invertibility. A major improvement to the complexity of function invertibility would also be to eschew classifying _programs_ that are hard to invert in favor of classifying _problems_. One approach could be to see if the grammar-based approach from [13] can be relaxed to grammars that recognize the _output_ of the function, rather than grammars _generated by the syntactic structure of the output_ of a program. An example of such a relaxation would be to allow existential variables. That is, to split the mechanism of introducing a variable symbol from the mechanism that associates it with a value (its binder).
This is customary in logic programming languages such as Prolog, where programs express logical relationships that are solved for all possible solutions by backtracking that redefines variable bindings. In a functional language, such a mechanism could try to postpone the need to use a free variable until as late as possible, allowing partially invertible functions that accept and return partial data structures (containing logical variables) which may be combined into complete ones (free of logical variables) when composed in certain ways. We are currently exploring this concept further in related work on the Jeopardy programming language [11]. The use of existential variables could further enable the relaxation of the linearity constraint beyond relevance, such that an iterator function may reconstruct a partial term (containing free variables) which is then unified with the available knowledge about its origin, if it is possible to unify it to a complete term (not containing free variables). We have developed an analysis to infer per-program-point sets of such information [10], which may be combined with control flow analysis to decide on a suitable program point at which to unify.
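As a concrete, runnable companion to the technique developed above, the following Python transliteration sketches reverse3 and its mechanically derived inverse, with the constructors of the Configuration datatype modeled as plain tags. This is an illustration of the idea only, not code from our Haskell implementation.

```python
# Tags standing in for the constructors of the Configuration datatype.
INPUT, ITER, OUTPUT = "Input", "Iteration", "Output"

def interpret(conf):
    """Tail-recursive (here: loop-based) forward evaluator for reverse3."""
    while True:
        tag, payload = conf
        if tag == INPUT:
            conf = (ITER, (payload, []))
        elif tag == ITER:
            xs, ys = payload
            if xs:                                 # iterative clause
                conf = (ITER, (xs[1:], [xs[0]] + ys))
            else:                                  # exit clause
                return (OUTPUT, ys)

def uninterpret(conf):
    """Mechanical inverse: runs the clauses of interpret backwards."""
    while True:
        tag, payload = conf
        if tag == OUTPUT:
            conf = (ITER, ([], payload))
        elif tag == ITER:
            xs, ys = payload
            if ys:                                 # undo one iterative step
                conf = (ITER, ([ys[0]] + xs, ys[1:]))
            else:                                  # back at the initial state
                return (INPUT, xs)

def reverse3(xs):
    _, ys = interpret((INPUT, xs))
    return ys

def unreverse3(ys):
    _, xs = uninterpret((OUTPUT, ys))
    return xs

assert unreverse3(reverse3([1, 2, 3])) == [1, 2, 3]
```

The two evaluators branch deterministically for the same reason the clauses above are syntactically orthogonal: the forward evaluator exits exactly when the input component is exhausted, and the inverse evaluator exits exactly when the accumulator component is exhausted.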
2307.02293
Improvement of image-type very-low-energy-electron-diffraction spin polarimeter
Spin- and angle-resolved photoemission spectroscopy (SARPES) with high efficiency and resolution plays a crucial role in exploring the fine spin-resolved band structures of quantum materials. Here we report the performance of a SARPES instrument with a second-generation home-made multichannel very-low-energy-electron-diffraction (VLEED) spin polarimeter. Its energy and angular resolutions achieve 7.2 meV and 0.52°. We present the results of SARPES measurements of a Bi(111) film to demonstrate its performance. Combined with density functional theory (DFT) calculations, the spin polarization of the bulk states was confirmed to arise from the spin-layer locking caused by the local inversion asymmetry. The surface states at a binding energy of 0.77 eV are found with 1.0 ± 0.11 spin polarization. The better resolutions and stability compared with the first-generation instrument provide a good platform to investigate the spin-polarized electronic states in materials.
Heming Zha, Wenjing Liu, Deyang Wang, Bo Zhao, XiaoPing Shen, Mao Ye, Shan Qiao
2023-07-05T13:47:44Z
http://arxiv.org/abs/2307.02293v1
# Improvement of image-type very-low-energy-electron-diffraction spin polarimeter ###### Abstract Spin- and angle-resolved photoemission spectroscopy (SARPES) with high efficiency and resolution plays a crucial role in exploring the fine spin-resolved band structures of quantum materials. Here we report the performance of a SARPES instrument with a second-generation home-made multichannel very-low-energy-electron-diffraction (VLEED) spin polarimeter. Its energy and angular resolutions achieve 7.2 meV and 0.52\({}^{\circ}\). We present the results of SARPES measurements of a Bi(111) film to demonstrate its performance. Combined with density functional theory (DFT) calculations, the spin polarization of the bulk states was confirmed to arise from the spin-layer locking caused by the local inversion asymmetry. The surface states at a binding energy of 0.77 eV are found with \(1.0\pm 0.11\) spin polarization. The better resolutions and stability compared with the first-generation instrument provide a good platform to investigate the spin-polarized electronic states in materials. + Footnote †: _1 Center for Excellence in Superconducting Electronics, State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, People's Republic of China_ _2 Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China_ _3 School of Physical Science and Technology, ShanghaiTech University, Shanghai 201210, People's Republic of China_ _4 Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China_ _5 State Key Laboratory of Surface Physics, Department of Physics, Fudan University, Shanghai 200433, People's Republic of China_ \({}^{\dagger}\)These authors contributed equally to this work. \({}^{\ddagger}\)The author to whom correspondence may be addressed: [email protected]

## I Introduction

Angle-resolved photoelectron spectroscopy, which can measure the energy and momentum of electrons in materials, is one of the most powerful tools to investigate the origin of physical phenomena by studying their electronic states. In recent years, high-temperature superconductors,[1, 2] the giant magnetoresistance effect,[3] and topological materials[4] have been hotspots of research in condensed matter physics. The peculiar characters of these materials are usually related to spin-orbit or exchange interactions, in which the electron spin plays an integral role. SARPES is the best method to investigate the spin-dependent band structures and to explore the mechanisms behind these peculiar characters. With the recent development of high-performance electron analyzers, band structures can be investigated precisely enough to resolve hallmark features such as gaps or kinks. For spin-resolved band structure measurements, however, the resolutions and efficiency are still too low to study such fine structures. The Mott-type spin polarimeter achieved an efficiency of only \(10^{-4}\).[5, 6] The first step to improve the efficiency was the VLEED-type spin polarimeter,[7] whose efficiency was about 100 times higher than that of the Mott-type. The second step was the image-type multichannel spin polarimeter.[8, 9] The first-generation image-type VLEED spin polarimeter based on exchange scattering was developed by our group five years ago.
It makes the observation of complex spin structures possible.[10] However, it has many defects that need to be improved.

## II The design of the second-generation image-type VLEED spin polarimeter

The analyzer in the system is a VG Scienta R3000 hemispherical energy analyzer (HEA), and its specified energy and angular resolutions are 3.0 meV and \(0.1^{\circ}\) with an entrance slit width of 0.2 mm and a pass energy of 2 eV, respectively. The primary function of the electron optics of the second-generation image-type VLEED polarimeter is the same as that of the first generation. On the exit plane of the HEA, photoelectrons with the same energy and emergence angle are focused on the same position, where positions with different x- and y-coordinates denoted in Fig. 1 correspond to different energies and emergence angles, and their intensities form an image. As shown in Fig. 1(e), the image is point-to-point transferred by lenses (LS) 1 and 2 to the ferromagnetic Fe(001)-p(1 \(\times\) 1)-O target, and LS 2 and 3 point-to-point transfer the image on the target to the entrance plane of the electron detector, which consists of two microchannel plates (MCP) and a fluorescent screen. Meanwhile, the electron beam is turned 180 degrees twice by the magnetic field to ensure normal incidence[11] on the target and electron detector. Under point-to-point transfer, the information on the momentum, i.e., emergence angle, and energy of the photoelectrons is not lost. The spin polarization of the electron beam can be obtained from the asymmetry of the scattering rates, resulting from the exchange interaction, for two measurements with opposite magnetizations of the target set through a Helmholtz coil. For spin-resolved measurements, the voltage of the target is set to 5.78 V to achieve the maximum efficiency. For spin-integrated measurements, the voltage of the target is adjusted to -1 V to prevent electrons from hitting the target. In the first-generation polarimeter, the dipole magnetic field was generated by electromagnets, shown by the black parts in Fig. 1(c). The different optical characters of the dipole magnetic field along the two directions, namely the bending of the electron beam in the plane perpendicular to the magnetic field and the linear motion along the y direction, mean that the electron beam cannot be focused to the same point in both the x and y directions simultaneously. To compensate for this asymmetry, a couple of correction coils, indicated by the gray slashed parts in Fig. 1(c), were added. The dipole magnetic field was found to be unstable because of the large current of 18 A required, which raises the temperature of the electromagnets and causes an image shift in the x-direction. As illustrated in Fig. 1(c), the electromagnets are situated outside the vacuum. To prevent the vacuum chamber from distorting the magnetic field, only non-magnetic materials can be selected; here stainless steel was chosen instead of mu-metal. However, this arrangement cannot shield the environmental magnetic field, which can also introduce instability in the electron optics. Furthermore, the gap d\({}_{1}\), as shown in Fig. 1(c), between the electromagnets is as large as 60 mm, resulting in a strong edge effect: the spatial variation of the magnetic field in the edge area can cause the electrons, initially moving horizontally in the x-z plane, to experience a vertical Lorentz force along the y direction, which increases the aberrations.
Moreover, the magnetic field of magnets with a large gap can extend into the nearby electric lenses, causing the deflection of electrons in the lenses to deviate from their axes, which also increases the aberrations. In addition, after the construction, we found that deflectors were needed to compensate for alignment errors. However, because of space limitations, the deflectors could only be set in the magnetic field, as shown by the red parts in Fig. 1(c). The second-generation polarimeter was constructed to overcome the above insufficiencies. Firstly, the electromagnets were replaced with permanent ones to make the magnetic field more stable. The whole polarimeter was enclosed by a mu-metal vacuum chamber to shield the environmental magnetic field. Meanwhile, the magnet poles were extended into the vacuum chamber to reduce the gap, attenuating the edge effect and the magnetic field inside the nearby lenses. Some lens elements were divided into four parts, as shown by the oblique gray lines and the inset in Fig. 1(e). Their functions are twofold: acting as a quadrupole lens to compensate for the different optical characters of the dipole magnet along the x and y directions, and as deflectors to compensate for alignment errors.

## III The performance characterization

The size of the entrance aperture of the spin polarimeter on the exit plane of the HEA is 20 \(\times\) 20 mm\({}^{2}\), corresponding to an acceptance angle of 20\({}^{\circ}\) and an energy window of 150 meV for 2 eV pass energy. To examine the energy resolution of the spectrometer, the Fermi edge of polycrystalline Au was measured by SARPES, as shown in Fig. 2(a), with an analyzer pass energy of 2 eV and an entrance slit width of 0.2 mm. The sample was an Au column bombarded by Ar ions _in situ_, and its temperature was kept at 5.6 K during the measurements. The instrumental energy resolution was determined by fitting the Fermi edge, as shown in Fig. 2(c). The fitting process involved the convolution of a Gaussian function with the addition of a linear background and the Fermi-Dirac function at the sample temperature. The full width at half maximum (FWHM) of the Gaussian function provided a direct measurement of the instrumental resolution. Based on this fitting procedure, the instrumental resolution was found to be 7.2 \(\pm\) 0.12 meV. To evaluate the angular resolution, an aperture with a sharp edge was set between the sample, a 40 \(\mu\)m diameter Au wire, and the analyzer; the distance between the sample and the aperture is 7 mm. The SARPES spectrum for different emergence angles was observed as shown in Fig. 2(b). In general, the angular resolution refers to the ability of the system to distinguish between two beams with close emission angles. After a knife-edge aperture, the ideal case of infinite angular resolution corresponds to an intensity distribution of a step function along the angular direction. In this ideal scenario, the corresponding first-order derivative of the intensity distribution is a delta function. However, in a practical situation, the finite resolution of the system leads to a broadening of the intensity distribution.

Figure 1: Diagram of VLEED-type polarimeter structures. 3D schematic diagram of the (a) first-generation and (b) second-generation polarimeters. (c), (d) The cross-sections along the white dashed lines shown in (a) and (b), respectively; the black parts represent the magnets. In (c), the gray slashed part is the correction coils and the red part is the deflection electrodes.
(e) Schematic drawing of the second-generation VLEED polarimeter. The equipotential box is set to ensure that the electrons have a certain speed and go through a perfect circular orbit with a suitable radius. The bunch of red lines shows the schematic electron trajectories. The slashed parts refer to the quadrupole lenses. The inset shows the schematic of the quadrupole lens. The Fe target was magnetized by a Helmholtz coil in the x or y direction.

Consequently, the intensity distribution can no longer be described as a step function. In this case, the first-order derivative of the broadened intensity distribution can be fitted using a Gaussian function. The FWHM of this Gaussian function can then be used as the measurement of the angular resolution of the system. Here, by fitting the first-order derivative curve with a Gaussian function, as shown in Fig. 2(d), the angular resolution of the SARPES instrument was evaluated to be about 0.52\({}^{\circ}\).

## IV SARPES measurements of Bi(111)

Bi(111) film grown on a Si(111) substrate is a typical material with Rashba-split surface states, which is suitable to demonstrate and quantitatively characterize the performance of the polarimeter. We prepared the film by molecular beam epitaxy and transferred it _in situ_[12] to the analysis chamber with an ultrahigh vacuum of \(10^{-8}\) Pa. The pass energy of the HEA was set to 2 eV and the slit of the HEA was set to 0.8 mm in the following experiments for more photoelectron intensity, corresponding to a resolution of 12 meV in spin-resolved mode. Fig. 3(a) shows the spin-integrated Fermi surface mapping of Bi(111) with a 20 meV integration window and a 0.25\({}^{\circ}\) angular step. There are six petal-shaped electron pockets along the \(\bar{\Gamma}-\bar{M}\) directions and an electron pocket at the \(\bar{\Gamma}\) point. The energy-momentum dispersion along \(\bar{M}-\bar{\Gamma}-\bar{M}\) (Fig. 3b) shows the Rashba-type surface state \(\alpha\) near the Fermi surface and another surface state \(\gamma\) at binding energies of 0.5 eV to 0.8 eV, which are consistent with previous reports [13, 14]. Figs. 3(c) and (d) compare the spin-resolved E-k images along the \(\bar{\Gamma}-\bar{M}\) direction, with the spin direction in plane and perpendicular to the momentum, measured by the first- and second-generation VLEED polarimeters, respectively. Comparing the images, we can clearly see the significant improvement of the second-generation polarimeter in resolution, with clearer spin splitting. The bulk state \(\beta\) at a binding energy of 0.25 eV also shows a clear polarization that was not observed with the first-generation polarimeter. The effective Sherman function (\(S_{eff}\)) of the system can be obtained from \(S_{eff}=A/P\), where P is the spin polarization of the injected electrons and the asymmetry \(A=(I_{+}-I_{-})/(I_{+}+I_{-})\) is measured from the intensities \(I_{+}\) and \(I_{-}\) of the reflected electrons with reversed magnetizations of the iron target. The momentum distribution curves of the asymmetries of the \(\alpha\) and \(\gamma\) bands are shown in Figs. 3(e) and (f) along the cuts indicated by the thick dashed lines A and B shown in Fig. 3(d), and the maximum asymmetries of the \(\alpha\) and \(\gamma\) bands were found to be 0.33 \(\pm\) 0.023 and 0.35 \(\pm\) 0.036, respectively. By assuming 100 % polarization of the \(\alpha\) band [15], the effective Sherman function of the spin polarimeter can be determined as 0.33.
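In practice, turning a pair of measurements with reversed target magnetization into a spin polarization is a one-line computation; the following sketch simply encodes the formulas and the \(S_{eff}=0.33\) value quoted above, with illustrative example intensities.

```python
S_EFF = 0.33  # effective Sherman function determined above

def polarization(I_plus, I_minus, s_eff=S_EFF):
    """P = A / S_eff, with asymmetry A = (I+ - I-) / (I+ + I-)."""
    A = (I_plus - I_minus) / (I_plus + I_minus)
    return A / s_eff

# Example: I+ = 1.35, I- = 0.65 gives A = 0.35 (the gamma-band maximum),
# so P = 0.35 / 0.33 ~ 1.06, consistent with the quoted 1.0 +/- 0.11.
print(polarization(1.35, 0.65))
```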
The efficiency of the spin polarimeter is 2.2 \(\times\) 10\({}^{-2}\), in terms of the figure of merit (FOM) \(\varepsilon=S_{eff}^{2}I/I_{0}\), where \(I\) and \(I_{0}\) are the intensities of the scattered and incident electrons, and \(I/I_{0}\) was measured as 0.20. From these results, the polarization of the \(\gamma\) band in Fig. 3(f) can be determined as 1.0 \(\pm\) 0.11 at -0.082 \(\mathrm{\AA}^{-1}\) and -1.1 \(\pm\) 0.11 at 0.083 \(\mathrm{\AA}^{-1}\). The spin-orbit coupling constant \(\alpha_{R}\) (Rashba parameter) was determined by linear fitting of the splitting magnitude to be 1.69 eV \(\mathrm{\AA}\) for the \(\gamma\) band, which is about 3 times that of the \(\alpha\) band [9]. With both 100 % spin polarization and larger splitting, it is more convenient to choose the \(\gamma\) band at a binding energy of 0.77 eV to evaluate the effective Sherman function in the future, to avoid the underestimation of \(S_{eff}\) caused by the limited energy resolution. It should be mentioned that the acquisition time of the SARPES image shown in Fig. 3(d) is only two hours in total.

Figure 3: (a) Spin-integrated Fermi surface map near the \(\bar{\Gamma}\) point at 6.0 K. (b) Spin-integrated band dispersion along the cut marked by the dashed line in figure (a). (c), (d) Spin-resolved E-k images with the spin direction in-plane and perpendicular to the momentum, measured by the first- and second-generation VLEED polarimeters, respectively. (e), (f) The momentum distribution curves at binding energies of 0.02 eV and 0.77 eV obtained from figure (d); the cuts of binding energies are indicated by the thick dashed lines. The momentum distribution curve here is obtained by integration of spectra over 8 meV and 0.23 degree ranges in the energy and angular directions. Error bars are standard deviations of the results from five consecutive measurements.

Figure 2: (a), (b) E-k images of polycrystalline Au and of the angular device, respectively. (c) Fermi distribution curve measured in fixed mode with polycrystalline Au. (d) The intensity distribution cut from (b) along the angle direction. Inset shows the schematic of the angular device.

First-principles calculations were done to confirm the origin of the spin polarization and verify that the \(\alpha\) and \(\gamma\) bands have 100 % polarization. The DFT calculations are performed with the Vienna ab initio simulation package (VASP) using the Perdew-Burke-Ernzerhof (PBE) method [16, 17, 18]. A kinetic energy cutoff of 500 eV and a uniform \(9\times 9\times 1\) k-point grid in the Brillouin zone proved to be sufficiently accurate for the calculations of the bands. Fig. 4(a) shows the side view of the slab model of 3 bilayer (BL) Bi, from the top first atom to the bottom sixth atom. Figs. 4(b) and (c) are the calculation results of the 18 BL slab model with and without the vacuum layer. The azure curves in Fig. 4(b) can be identified as surface states corresponding to the observed \(\alpha\) and \(\gamma\) bands shown in Fig. 3(d). As shown in Fig. 4(d), the in-plane spin polarizations perpendicular to the momentum of the bulk states of the first six Bi atomic layers are layer dependent. Since Bi is centrosymmetric, the layer-dependent spin polarizations confirm that the spin polarizations of the bulk states result from the local inversion asymmetry with the Bi-site point group (\(C_{3v}\)). The polarization of a spin-layer locked state can be compensated by its inversion counterpart with opposite polarization to ensure the whole energy band remains doubly Kramers degenerate [19, 20, 21, 10].
Under the photon energy of 21.2 eV, the detection depth of photoelectrons is about 0.6 nm, which corresponds to nearly 1.5 BL. Thus, we can observe the residual spin polarization of the \(\beta\) bands attributed to the topmost three atoms here. From the above discussion, it is clear that the experimental and calculated results are consistent with each other.

## V Summary

The first-generation image-type VLEED spin polarimeter has been upgraded by using permanent magnets and quadrupole lenses. The effective Sherman function was determined by measuring Bi(111)/Si(111). Combined with the density functional theory calculations, we confirmed that the spin polarization of the bulk \(\beta\) band originates from the spin-layer locking caused by the local inversion asymmetry, and that both the \(\alpha\) and \(\gamma\) surface states have 100 % spin polarization. The excellent performance and stability of the second-generation VLEED spin polarimeter provide a platform to investigate complex spin-polarized electronic states in materials.

###### Acknowledgements.

This work is supported by the National Key R&D Program of China (2022YFB3608000), and the National Natural Science Foundation of China (No. U1632266, No. 11927807, No. U2032207).

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2310.10258
Minimal surfaces over harmonic shears
Harmonic mappings have long intrigued researchers due to their intrinsic connection with minimal surfaces. In this paper, we investigate shearing of two distinct classes of univalent conformal mappings which are convex in the horizontal direction, with appropriate dilatations. Subsequently, we present a family of minimal surfaces constructed by lifting the harmonic mappings obtained through the shear construction method given by Clunie and Sheil-Small. Furthermore, we contribute to partially addressing an open problem proposed by Boyd and Dorff, by identifying the resulting minimal surfaces for certain values of the parameters in one of the classes of mappings. Notably, this family of minimal surfaces transforms from the well-established Enneper's surface to a Helicoid.
Simran Bedi, Sanjay Kumar
2023-10-16T10:33:11Z
http://arxiv.org/abs/2310.10258v1
# Minimal surfaces over harmonic shears

###### Abstract

Harmonic mappings have long intrigued researchers due to their intrinsic connection with minimal surfaces. In this paper, we investigate the shearing of two distinct classes of univalent conformal mappings which are convex in the horizontal direction with appropriate dilatations. Subsequently, we present a family of minimal surfaces constructed by lifting the harmonic mappings obtained through the shear construction method given by Clunie and Sheil-Small. Furthermore, we partially address an open problem proposed by Boyd and Dorff, by identifying the resulting minimal surfaces for certain values of the parameters in one of the classes of mappings. Notably, this family of minimal surfaces transforms from the well-established Enneper's surface to a helicoid.

_Keywords_: Harmonic univalent mappings, Harmonic shear, Convex along real directions, Minimal Surfaces

## 1 Introduction

The study of minimal surfaces has fascinated differential geometers, not solely because of the exotic structures they exhibit but also due to their inherent relationship with harmonic mappings. Results and properties of harmonic univalent functions are generally used to investigate the properties of minimal surfaces. We emphasize univalence because, geometrically, it means that the image of the map shows no overlapping or self-intersection. Further, when we lift such a univalent map from the complex plane to \(\mathbb{R}^{3}\), the resulting surface does not self-intersect; it is called a "minimal graph". The construction of such rather special functions is an intricate task. One approach is to use the shearing technique, introduced by Clunie and Sheil-Small in 1984 [1], whose fundamental building blocks are a conformal mapping and a dilatation. It proved to be a revolutionary paper in the study of harmonic mappings, which gave rise to a great deal of research in this field. Another revolutionary discovery was the Weierstrass-Enneper representation, which allows us to take a harmonic univalent function with an appropriate dilatation and lift it to a minimal graph. Several researchers have used this technique [2]; however, it is often difficult to identify the resulting minimal graphs. One approach to recognizing the minimal surface is to use a change of variables. In [3], Dorff and Muir constructed and identified a family of minimal graphs associated with the generalized Koebe function using this approach. In [4], Boyd and Dorff proposed the following open problem, which is the prime object of our investigation:

**Problem**.: _Determine the minimal graphs formed by lifting harmonic univalent mappings in the paper "Gauss curvature estimates for minimal graphs" by Nowak and Woloszkiewicz [5]._

We have contributed to answering this open problem partially, by recognizing the surfaces formed by lifting the harmonic maps, for certain parameter values, obtained through the shearing method applied to the conformal univalent mapping \(F(z)=z/(1+c\,z+z^{2})\) given in the aforementioned paper. We have also constructed a two-parameter family of harmonic mappings by shearing the same map with a different dilatation, \(\omega(z)=z(z+a)/(1+az)\). In addition, we have chosen another family of conformal univalent maps which are convex in the horizontal direction (CHD), given by \(F(z)=z-(1/n^{2})z^{n}\), \(n=2,3,4,\ldots\) [6].
We have investigated its shearing for \(\omega(z)=z^{n}\) and have then lifted the resulting harmonic maps to construct the corresponding minimal graphs for even \(n\). We have plotted the images of the open unit disk under the conformal mappings, the images of the harmonic mappings, and the resulting minimal surfaces they give rise to. The projections of these minimal surfaces can be observed in the images showing their corresponding harmonic mappings.

## 2 Background

Now that we have outlined the essential ideas, we collect the background from the theory of harmonic mappings. This section covers important definitions and theorems, which we will utilize throughout the rest of the paper. Let \(\mathbb{D}=\{z:|z|<1\}\) be the open unit disk in the complex plane.

**Definition 1**.: Let \(S\) denote the family of analytic functions on \(\mathbb{D}\) that are normalized and univalent; that is, \[S=\{f:\mathbb{D}\rightarrow\mathbb{C}\mid\text{$f$ is analytic and univalent with $f(0)=0,f^{\prime}(0)=1$}\}.\]

The next theorem tells us that a harmonic function defined on \(\mathbb{D}\) can be expressed in terms of analytic functions, known as its canonical decomposition [6].

**Theorem 1**.: _Let \(f=u+iv\) be a complex-valued harmonic function such that \(f:D\rightarrow\mathbb{C}\), where \(D\) is a simply-connected domain; then there exist analytic functions \(h\) and \(g\) such that \(f=h+\bar{g}\)._

**Definition 2**.: Let \(S_{H}^{0}\) be the family of complex-valued harmonic mappings on \(\mathbb{D}\) that are univalent and normalized; that is, \[S_{H}^{0}=\{f:\mathbb{D}\rightarrow\mathbb{C}\mid\text{$f$ is harmonic, univalent with $h(0)=0,g(0)=0,h^{\prime}(0)=1,g^{\prime}(0)=0$}\}.\]

Note that the harmonic mapping \(f=h+\bar{g}\) can also be expressed as \(f=\operatorname{Re}\left(h+g\right)+i\operatorname{Im}\left(h-g\right)\). By a result of Lewy [7], \(f=h+\bar{g}\) is locally univalent and sense-preserving in \(\mathbb{D}\) if and only if the Jacobian of \(f\), \(J_{f}(z)=|h^{\prime}(z)|^{2}-|g^{\prime}(z)|^{2}\), is positive for all \(z\in\mathbb{D}\). The _dilatation_ of \(f\) is defined as \(\omega(z)=g^{\prime}(z)/h^{\prime}(z)\). The Jacobian being positive is equivalent to the condition \(|\omega(z)|<1\) for all \(z\in\mathbb{D}\).

**Definition 3**.: A domain \(\Omega\) is said to be convex in the horizontal direction (CHD) if it has a connected intersection with every line parallel to the real axis. It is also called convex in the direction of the real axis (CRA).

The following theorem [1] establishes the construction of a harmonic map with a specified dilatation, which is a crucial tool for the construction of minimal surfaces:

**Theorem 2** (Clunie and Sheil-Small).: _A harmonic map \(f=h+\bar{g}\) locally univalent in \(\mathbb{D}\) is a univalent mapping of \(\mathbb{D}\) such that \(f(\mathbb{D})\) is a CHD domain if and only if \(h-g\) is a conformal univalent mapping of \(\mathbb{D}\) onto a CHD domain._

This process is known as shearing, or the shear method, and can be presented step by step as:

1. Choose a conformal univalent map \(F\) and decompose it as \(F=h-g\).
2. Choose an appropriate dilatation \(\omega\) and write it as \(\omega=g^{\prime}/h^{\prime}\).
3. Solve for \(h\) and \(g\) using the equations obtained in the preceding two steps.
4. Write \(f=h+\bar{g}\), which is the required harmonic univalent map convex in the horizontal direction, obtained by _shearing_ a conformal map along parallel lines.

Now, we lay some background about minimal surfaces before discussing their connection with harmonic mappings.
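Before that, the four steps above can be made concrete with a short symbolic computation. The following minimal sketch (assuming SymPy is available; it is ours for illustration and not part of the original paper) shears the slit mapping \(F(z)=z/(1-z)^{2}\), which reappears in Section 3 as \(F_{-2}\), with the square dilatation \(\omega(z)=z^{2}\):

```python
import sympy as sp

z, t = sp.symbols('z t')

def shear(F, omega):
    """Shear construction: given a CHD conformal map F = h - g and a
    dilatation omega = g'/h', the equations h' - g' = F' and g' = omega*h'
    give h' = F'/(1 - omega); normalise so that h(0) = g(0) = 0."""
    h_prime = sp.simplify(sp.diff(F, z) / (1 - omega))
    h = sp.integrate(h_prime.subs(z, t), (t, 0, z))
    g = sp.simplify(h - F)        # step 4 then reads f = h + conj(g)
    return sp.simplify(h), g

# Shear the slit mapping F(z) = z/(1-z)^2 with the square dilatation z^2:
F = z / (1 - z)**2
h, g = shear(F, z**2)
print(h)   # h' = F'/(1 - z^2) = 1/(1-z)^4, hence h = ((1-z)**(-3) - 1)/3
print(g)
```

Up to simplification, the output agrees with \(h_{-2,a}\) and \(g_{-2,a}\) for \(a=0\) computed in Section 3, where the dilatation \(\omega_{a}\) reduces to \(z^{2}\).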
**Definition 4**.: A minimal surface is a surface whose mean curvature vanishes at all its points. At each point, the bending upward in one direction is matched by the bending downward in the orthogonal direction; every point on the surface is a saddle point. We can also define minimal surfaces in terms of the first fundamental form, the second fundamental form, the area functional and isothermal coordinates [8].

In this paper, we will come across two well-known minimal surfaces: Enneper's surface and the helicoid. The helicoid is parametrized on \(\mathbb{D}\setminus(-1,0)\) as:

\[Y_{0}(z)=\left(\operatorname{Re}\left(z-\frac{1}{z}\right),\operatorname{Im}\left(z+\frac{1}{z}\right),2\ \operatorname{Im}(\log z)\right)\]

Enneper's surface is parametrized on \(\mathbb{D}\) as:

\[Y_{2}(z)=\left(\operatorname{Re}\left(z-\frac{z^{3}}{3}\right),\operatorname{Im}\left(z+\frac{z^{3}}{3}\right),2\ \operatorname{Re}(-z^{2})\right)\]

Note that the negative sign in the third component function has the effect of reflecting the surface through the xy-plane. Moreover, scaling by a factor, substitution by a Möbius transformation and reflection across planes containing two axes do not affect the geometry of the surface. Karl Weierstrass and Alfred Enneper discovered a way to easily construct minimal surfaces by demonstrating their connection to harmonic mappings in the next theorem [9].

**Theorem 3** (Weierstrass-Enneper representation).: _Let \(\Omega\subset\mathbb{C}\) be a simply connected domain containing the origin. If a minimal graph_ \[\{(u,v,F(u,v)):u+iv\in\Omega\}\] _is parameterized by sense-preserving isothermal parameters \(z=x+iy\in\mathbb{D}\), the projection onto its base plane defines a harmonic mapping \(w=u+iv=f(z)\) of \(\mathbb{D}\) onto \(\Omega\) whose dilatation is the square of an analytic function. Conversely, if \(f=h+\bar{g}\) is a harmonic univalent mapping of \(\mathbb{D}\) onto \(\Omega\) with dilatation \(\omega=g^{\prime}/h^{\prime}\) being the square of an analytic function, then with \(z=x+iy\in\mathbb{D}\), the parameterization_ \[\mathbf{X}(z)=\left(\operatorname{Re}\{h(z)+g(z)\},\operatorname{Im}\{h(z)-g(z)\},2\operatorname{Im}\left\{\int_{0}^{z}\sqrt{g^{\prime}(\zeta)h^{\prime}(\zeta)}d\zeta\right\}\right)\] _defines a minimal graph whose projection into the complex plane is \(f(\mathbb{D})\). Except for the choice of sign and an arbitrary additive constant in the third coordinate function, this is the only such surface._

## 3 Shearing of one-slit and two-slit conformal mappings

In this section, we apply the shearing technique to a family of conformal mappings given in [5], denoted by \(F_{c}(z)\), with two different dilatations \(\omega_{a}(z)=z(z+a)/(1+az)\) and \(\omega(z)=z^{n}\). We further lift the obtained two-parameter family of harmonic mappings \(f_{c,a}\) from the complex plane to minimal surfaces in \(\mathbb{R}^{3}\) for \(a=0\) and even \(n\), using Theorem 3. Consider the family of univalent conformal maps \(F_{c}\) defined by,

\[F_{c}(z)=\frac{z}{1+cz+z^{2}} \tag{1}\]

where \(c\in(-2,2)\). It maps the unit disk univalently onto a domain convex in the horizontal direction [10].
For the special case \(c=-2\), the image \(F_{c}(\mathbb{D})\) is a single-slit domain, like the Koebe domain, depicted in Fig. 1a: the entire complex plane except for a slit along the negative real axis, represented as:

\[\mathbb{C}\setminus\{x:x\in(-\infty,-1/4)\}\]

and similarly a slit along the positive real axis for \(c=2\), shown in Fig. 1d, given by:

\[\mathbb{C}\setminus\{x:x\in(1/4,\infty)\}.\]

Figure 1: Image of the conformal map \(F_{c}(|z|=0.999)\)

For \(c=0\), it maps the unit disk onto a two-slit domain, represented in Fig. 1b, that is, the entire complex plane except for two half-lines given by:

\[\mathbb{C}\setminus\{x:x\in(-\infty,-1/2)\cup(1/2,\infty)\}.\]

Let us consider the harmonic shear of \(F_{c}\) defined in (1), with the dilatation

\[\omega_{a}(z)=\frac{z(z+a)}{(1+az)},\]

such that \(-1\leq a\leq 1\). In particular, for \(a=-1\), \(a=1\) and \(a=0\), \(\omega(z)\) takes the values \(-z\), \(z\) and \(z^{2}\), respectively. This requires the solution of the pair of differential equations given by:

\[h^{\prime}_{c,a}(z)-g^{\prime}_{c,a}(z)=F^{\prime}_{c}(z)\;\;\text{and}\;\;g^{\prime}_{c,a}(z)=z\frac{(z+a)}{(1+az)}h^{\prime}_{c,a}(z).\]

After straightforward but tedious computations for the normalized \(h_{c,a}\) and \(g_{c,a}\), the solution is expressed as:

\[h_{c,a}(z)=-\frac{p(z)}{q(z)}, \tag{2}\]

\[\text{where}\;\;p(z)= 16\sigma_{2}+16\sigma_{1}-4c^{2}\sigma_{2}+16z^{2}\sigma_{2}-4c^{2}\sigma_{1}+16z^{2}\sigma_{1}+2z\sigma_{3}\sigma_{4}-4c^{2}z^{2}\sigma_{2}\] \[+2ac^{3}\sigma_{1}-4c^{3}z\sigma_{1}-8ac\sigma_{2}+16cz\sigma_{2}-4c^{2}z^{2}\sigma_{1}+2ac^{3}\sigma_{2}-4c^{3}z\sigma_{2}\] \[-8ac\sigma_{1}+16cz\sigma_{1}+2ac^{3}z^{2}\sigma_{1}-8acz^{2}\sigma_{2}-8ac^{2}z\sigma_{2}\] \[+2ac^{4}z\sigma_{2}+2ac^{3}z^{2}\sigma_{2}-8acz^{2}\sigma_{1}-8ac^{2}z\sigma_{1}\] \[+2ac^{4}z\sigma_{1}+2az^{2}\sigma_{3}\sigma_{4}-cz^{2}\sigma_{3}\sigma_{4}\] \[-c^{2}z\sigma_{3}\sigma_{4}+acz\sigma_{3}\sigma_{4},\] \[q(z)= (c^{2}-4)\sigma_{3}\,\sigma_{4}(1+cz+z^{2}),\]

such that \(\sigma_{1}=\text{atanh}\left(\frac{\left(c^{2}-4\right)\left(c+2\,z\right)}{\sigma_{3}\,\sigma_{4}}\right),\;\sigma_{2}=\text{atanh}\left(\frac{4\,c-c^{3}}{\sigma_{3}\,\sigma_{4}}\right),\;\sigma_{3}=(c-2)^{3/2}\) and \(\sigma_{4}=(c+2)^{3/2}\). Similarly, solving for \(g_{c,a}(z)\) gives,

\[g_{c,a}(z)=-r(z)/q(z), \tag{3}\]

\[\text{where}\quad r(z) =16\sigma_{2}+16\sigma_{1}-4c^{2}\sigma_{2}+16z^{2}\sigma_{2}-4c^{2}\sigma_{1}+16z^{2}\sigma_{1}-2z\sigma_{3}\sigma_{4}-4c^{2}z^{2}\sigma_{2}\] \[+2ac^{3}\sigma_{1}-4c^{3}z\sigma_{1}-8ac\sigma_{2}+16cz\sigma_{2}-4c^{2}z^{2}\sigma_{1}+2ac^{3}\sigma_{2}-4c^{3}z\sigma_{2}\] \[-8ac\sigma_{1}+16cz\sigma_{1}+2ac^{3}z^{2}\sigma_{1}-8acz^{2}\sigma_{2}-8ac^{2}z\sigma_{2}\] \[+2ac^{4}z\sigma_{2}+2ac^{3}z^{2}\sigma_{2}-8acz^{2}\sigma_{1}-8ac^{2}z\sigma_{1}\] \[+2ac^{4}z\sigma_{1}+2az^{2}\sigma_{3}\sigma_{4}-cz^{2}\sigma_{3}\sigma_{4}\] \[+a\,c\,z\sigma_{3}\sigma_{4}\]

and \(q(z)\), \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\), \(\sigma_{4}\) are as above. So, \(f_{c,a}(z)=h_{c,a}(z)+\overline{g_{c,a}(z)}\) is the desired map corresponding to the conformal map \(F_{c}\). By Theorem 2, we have \(f_{c,a}\in S^{0}_{H}\) and \(f_{c,a}(\mathbb{D})\) is convex in the direction of the real axis. So, \(f_{c,a}\) is the required harmonic map obtained by applying the shear construction. Next we discuss the special cases obtained by taking \(c=-2\) and \(c=2\) in (1).
By shearing

\[F_{-2}(z)=\frac{z}{(1-z)^{2}}\text{ with dilatation }\omega_{a}(z)=\frac{z(z+a)}{1+az},\]

and solving for \(h\) and \(g\), we get

\[h(z)=-\frac{6\,z+3\,a\,z^{2}-a\,z^{3}-6\,z^{2}+2\,z^{3}}{6\left(z-1\right)^{3}}\;\;\text{and}\;\;g(z)=-\frac{3\,a\,z^{2}-a\,z^{3}+2\,z^{3}}{6\left(z-1\right)^{3}}.\]

So, the required map \(f_{-2,a}\) can be expressed as \(f_{-2,a}(z)=\operatorname{Re}\{h(z)+g(z)\}+i\,\operatorname{Im}\{h(z)-g(z)\}\), i.e.,

\[f_{-2,a}(z)=\operatorname{Re}\left\{\frac{a}{3}-\frac{z^{2}+(a-1)z-\frac{a}{3}+\frac{2}{3}}{(z-1)^{3}}-\frac{2}{3}\right\}+\frac{i}{4}\,\operatorname{Im}\left\{\left(\frac{1+z}{1-z}\right)^{2}-1\right\} \tag{4}\]

By Theorem 2, we see that \(f_{-2,a}\in S^{0}_{H}\). For \(a=-1\), \(f(\mathbb{D})\) is a right half-plane. In order to study the mapping properties of \(f_{-2,a}\), let \(w=(1+z)/(1-z)\), i.e., \(z=(w-1)/(w+1)\), which leads us to:

\[f_{-2,a}(z) =\operatorname{Re}\left\{\frac{1}{4}\left(\frac{1+a}{3}w^{3}+(1-a)w-\frac{2(2-a)}{3}\right)\right\}+i\operatorname{Im}\left\{\frac{1}{4}(w^{2}-1)\right\}\] \[=\frac{1}{4}\left(\frac{1+a}{3}(x^{3}-3xy^{2})+(1-a)x-\frac{2(2-a)}{3}\right)+i\frac{1}{2}xy,\ x>0.\]

Note that each point \(z\neq 1\) on the unit circle is carried onto a point \(w\) on the imaginary axis, so that \(x=0\) and \(f_{-2,a}=-(2-a)/6\). A discussion similar to that for the harmonic Koebe function \(K(z)\) in [9] shows that for \(-1<a\leq 1\), \(f(\mathbb{D})\) is a slit domain, i.e., it maps onto the entire plane minus the interval \((-\infty,-(2-a)/6)\) on the negative real axis. For each \(a\), the tip of the slit is located at \(-(2-a)/6\), which we can visualize in Fig. 2a, 2b and 2c. Similarly, proceeding in the same way for \(F_{2}(z)=z/(1+z)^{2}\), the harmonic shear is given by:

\[f_{2,a}(z)=\operatorname{Re}\left\{\frac{a}{3}-\frac{z^{2}+(a+1)z+\frac{a}{3}+\frac{2}{3}}{(z+1)^{3}}+\frac{2}{3}\right\}+i\operatorname{Im}\left\{\frac{z}{(1+z)^{2}}\right\}. \tag{5}\]

Here \(f(\mathbb{D})\) is the left half-plane for \(a=1\), and a slit domain whose slit tip is located at \((a+2)/6\) for each \(-1\leq a<1\), as can be seen in Fig. 2g, 2h and 2i. Notice that the images of the harmonic shears in Fig. 2 change only horizontally, which is the direction of the shear. We also observe that for \(c=2\), \(a=1\), the plotted image in Fig. 2i collapses onto the point \(1/2\): the unit circle is mapped onto this single point, while \(f(\mathbb{D})\) itself is the half-plane \(\{w:\operatorname{Re}w<1/2\}\). Therefore, we get the following result:

**Theorem 4**.: _Consider the conformal univalent map \(F_{c}\) of the unit disk \(\mathbb{D}\) onto a domain convex in the direction of the real axis given by (1), where \(c\in[-2,2]\). Let the dilatation function be given by \(\omega_{a}(z)=z(z+a)/(1+az)\), where \(-1\leq a\leq 1\). Then the horizontal shear of \(F_{c}\) with \(c\in(-2,2)\) and dilatation \(\omega_{a}\) is given by a two-parameter family of harmonic mappings \(f_{c,a}(z)=h_{c,a}(z)+\overline{g_{c,a}(z)}\), where \(h_{c,a}(z)\) and \(g_{c,a}(z)\) are given by (2) and (3), such that \(f_{c,a}\in S^{0}_{H}\). For the special cases \(c=-2\) and \(c=2\) with \(-1\leq a\leq 1\), the shear is given by (4) and (5). Additionally, \(f(\mathbb{D})\) is a slit domain whose slit tip is located at \(-(2-a)/6\) for \(c=-2\) and at \((a+2)/6\) for \(c=2\), such that \(f_{-2,a},f_{2,a}\in S^{0}_{H}\)._

### Minimal surfaces for \(a=0\)

For the special case \(a=0\), we have \(\omega(z)=z^{2}\), which is the square of an analytic function. We can therefore apply the Weierstrass-Enneper formula to lift the harmonic mapping to a minimal graph on \(\mathbb{D}\).
Thus, we have the following result:

**Theorem 5**.: _For \(c\in(-2,2)\), define \(f_{c}=h_{c}+\bar{g_{c}}:\mathbb{D}\rightarrow\mathbb{C}\) to be the harmonic mapping satisfying \(h_{c}(z)-g_{c}(z)=F_{c}(z)\) and \(g^{\prime}_{c}(z)=\omega(z)h^{\prime}_{c}(z)\), normalized by \(h_{c}(0)=g_{c}(0)=g^{\prime}_{c}(0)=h^{\prime}_{c}(0)-1=0\), where \(F_{c}\) is given by_ \[F_{c}(z)=\frac{z}{1+cz+z^{2}},\] \[\omega(z)=z^{2}.\] _Then \(f_{c}\in S^{0}_{H}\) and \(f_{c}(\mathbb{D})\) is convex in the horizontal direction. As \(c\) varies from 0 to 2, \(f_{c}(\mathbb{D})\) transforms from an infinite vertical strip mapping to a single-slit mapping along the positive real axis, and as \(c\) varies from \(-2\) to 0, it transforms from a single-slit mapping along the negative real axis to a strip mapping. Furthermore, since \(\omega\) is the square of an analytic function, \(f_{c}\) lifts to a minimal graph \(X_{c}\) on \(\mathbb{D}\) for each \(c\in(-2,2)\). For the special cases \(c=-2\), \(c=0\) and \(c=2\): \(X_{-2}(\mathbb{D})\) is part of Enneper's surface, \(X_{0}(\mathbb{D})\) is part of a helicoid, and \(X_{2}(\mathbb{D})\) is again part of Enneper's surface._

Proof.: By Theorem 2, \(f_{c}\in S_{H}^{0}\) and \(f_{c}(\mathbb{D})\) is convex in the horizontal direction. Then, the solution of the pair of differential equations

\[h_{c}^{\prime}-g_{c}^{\prime}=F_{c}^{\prime}\] \[\omega h^{\prime}-g^{\prime}=0\]

defines a family of harmonic mappings \(f_{c}=\operatorname{Re}\{h_{c}+g_{c}\}+i\operatorname{Im}\{h_{c}-g_{c}\}\). For \(c\in(-2,2)\),

\[u=\operatorname{Re}\{h_{c}+g_{c}\}=\operatorname{Re}\left\{\frac{z}{(c-2)(z+1)}+\frac{z}{(c+2)(z-1)}-\frac{8\left(\operatorname{atanh}\left(\frac{c\left(c^{2}-4\right)}{\sigma_{1}}\right)-\operatorname{atanh}\left(\frac{\left(c^{2}-4\right)(c+2z)}{\sigma_{1}}\right)\right)}{\sigma_{1}}\right\}-\operatorname{Re}\left\{\frac{z\left(z^{2}+1\right)}{\left(z^{2}-1\right)\left(z^{2}+cz+1\right)}\right\},\] \[\text{where }\sigma_{1}=(c-2)^{3/2}(c+2)^{3/2},\]

\[v=\operatorname{Im}\{h_{c}-g_{c}\}=\operatorname{Im}\left\{\frac{z}{z^{2}+cz+1}\right\}.\]

Figure 3: Minimal surfaces over the harmonic map \(f_{c}\), \(c=0,2,-2,1\), with dilatation \(z^{2}\).

Since \(\omega(z)=z^{2}\), by Theorem 3, \(f_{c}\) lifts to a minimal graph corresponding to each \(c\in(-2,2)\). Applying this theorem yields the following representations of minimal graphs for \(c\in(-2,2)\):

\[X_{c}(z)=(u,v,F(u,v)),\]

where \(u\) and \(v\) are as above, and

\[F(u,v)=2\operatorname{Im}\left\{\frac{\frac{2}{c^{2}-4}+\frac{cz}{c^{2}-4}}{z^{2}+cz+1}-\frac{2}{c^{2}-4}+\frac{2c\operatorname{atanh}\left(\frac{c\left(c^{2}-4\right)}{\sigma_{1}}\right)}{\sigma_{1}}-\frac{2c\operatorname{atanh}\left(\frac{\left(c^{2}-4\right)(c+2z)}{\sigma_{1}}\right)}{\sigma_{1}}\right\}\]

where \(\sigma_{1}=(c-2)^{3/2}(c+2)^{3/2}\). We can observe in Fig. 2e and 2h how \(f_{c}(\mathbb{D})\) transforms from an infinite strip to a slit as \(c\) goes from 0 to 2; the reverse transition, as \(c\) goes from \(-2\) to 0, can be seen in Fig. 2b and 2e.
We establish the claim of identifying the surfaces as the helicoid and Enneper's surface by considering the following cases:

Case 1: For \(c=-2\), the analytic function becomes \(F_{-2}(z)=\frac{z}{(z-1)^{2}}.\) Applying the shearing technique with dilatation \(\omega(z)=z^{2}\) and then lifting the resulting harmonic map, we have the representation of the corresponding minimal graph \(X_{-2}(\mathbb{D})=(u,v,F(u,v))\) as:

\[X_{-2}(z)=\left(\operatorname{Re}\left\{-\frac{z^{2}-z+\frac{2}{3}}{(z-1)^{3}}-\frac{2}{3}\right\},\operatorname{Im}\left\{\frac{z}{(z-1)^{2}}\right\},2\,\operatorname{Im}\left\{\frac{1}{6}-\frac{3z-1}{6(z-1)^{3}}\right\}\right).\]

To recognize the surface, we apply the change of variables \(w=(1+z)/(1-z)\), i.e., \(z=(w-1)/(w+1)\); the surface representation becomes:

\[X_{-2}(z)=\left(\operatorname{Re}\left\{\frac{w^{3}}{12}+\frac{w}{4}-\frac{1}{3}\right\},\operatorname{Im}\left\{\frac{w^{2}}{4}-\frac{1}{4}\right\},2\,\operatorname{Im}\left\{\frac{w^{3}}{24}-\frac{w}{8}+\frac{1}{12}\right\}\right).\]

Interchanging the second and third coordinates, translating by \(1/3\), and scaling by a factor of 4, we get

\[X_{-2}(\mathbb{D})=\left(\operatorname{Re}\left\{\frac{w^{3}}{3}+w\right\},\operatorname{Im}\left\{\frac{w^{3}}{3}-w\right\},\operatorname{Im}\left\{w^{2}\right\}\right).\]

We see that our original surface \(X_{-2}(\mathbb{D})\) is part of Enneper's surface, formed by using the right half-plane as the domain instead of the standard unit disk.

Case 2: For \(c=0\), the function \(F_{c}(z)\) becomes \(F(z)=z/(1+z^{2})\) with dilatation \(\omega(z)=z^{2}\). Solving for \(h\) and \(g\), we get \(h(z)=(z+\operatorname{atan}\left(z\right)+z^{2}\operatorname{atan}\left(z\right))/(2\left(z^{2}+1\right))\) and \(g(z)=(\operatorname{atan}\left(z\right)-z+z^{2}\operatorname{atan}\left(z\right))/(2\left(z^{2}+1\right))\), such that \(f=h+\overline{g}\) is the harmonic map required to lift to a minimal graph. So, we have

\[X_{0}(z)=(x_{1},x_{2},x_{3}) =\left(\operatorname{Re}\{\operatorname{atan}\left(z\right)\},\operatorname{Im}\left\{\frac{z}{z^{2}+1}\right\},2\operatorname{Im}\left\{\frac{1}{2}-\frac{1}{2\left(z^{2}+1\right)}\right\}\right)\] \[=\left(\operatorname{Re}\left\{\frac{i}{2}\,\log\left(\frac{i+z}{i-z}\right)\right\},\operatorname{Im}\left\{\frac{z}{z^{2}+1}\right\},2\operatorname{Im}\left\{\frac{1}{2}-\frac{1}{2\left(z^{2}+1\right)}\right\}\right),\]

which is hardly recognizable as a representation of part of the helicoid. However, applying the change of variables \(w=(i+z)/(i-z)\), i.e., \(z=i(w-1)/(w+1)\),

\[X_{0}(z)=\left(\operatorname{Re}\left\{\frac{i}{2}\,\log\left(w\right)\right\},\operatorname{Im}\left\{\frac{i}{4}\left(w-\frac{1}{w}\right)\right\},2\operatorname{Im}\left\{-\frac{1}{8}\left(w+\frac{1}{w}-2\right)\right\}\right).\]

Scaling by 4 gives,

\[X_{0}(z)=\left(\operatorname{Im}\{2\log{(w)}\},\operatorname{Re}\left\{w-\frac{1}{w}\right\},\operatorname{Im}\left\{-\left(w+\frac{1}{w}\right)\right\}\right).\]

Since the negative sign over \(x_{3}\), the interchange of the second and third coordinates, and then of the first and third coordinates do not alter the geometry, this gives us the formula of the helicoid. So we see that \(X_{0}(\mathbb{D})\) is the same surface as \(Y_{0}(\{z\in\mathbb{C}:\operatorname{Re}z>0\})\), i.e., \(X_{0}(\mathbb{D})\) is part of the helicoid.
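As a complement to the change-of-variables identifications in Cases 1 and 2, minimality can also be checked numerically. The sketch below (our illustration, assuming NumPy; not part of the paper) evaluates the closed-form lift \(X_{0}\) for \(c=0\) and verifies by finite differences that the parametrization is isothermal (\(E=G\), \(F=0\)) and has harmonic coordinates; together these properties characterize a minimal surface:

```python
import numpy as np

def X0(z):
    """Lift for c = 0 (cf. Case 2): h + g = atan(z) and h - g = z/(1+z^2),
    so the three coordinates reduce to the closed forms below."""
    w = 1.0 + z * z
    return np.array([np.real(np.arctan(z)),
                     np.imag(z / w),
                     np.imag(-1.0 / w)])   # x3 = 2 Im{1/2 - 1/(2(1+z^2))}

def check_minimal(z, eps=1e-4):
    # First and second central differences in the parameter z = x + iy.
    Xx = (X0(z + eps) - X0(z - eps)) / (2 * eps)
    Xy = (X0(z + 1j * eps) - X0(z - 1j * eps)) / (2 * eps)
    lap = (X0(z + eps) + X0(z - eps) + X0(z + 1j * eps)
           + X0(z - 1j * eps) - 4 * X0(z)) / eps**2
    E, F, G = Xx @ Xx, Xx @ Xy, Xy @ Xy
    print(f"E-G = {E - G:.1e}, F = {F:.1e}, |Laplacian| = {np.linalg.norm(lap):.1e}")

for z0 in (0.3 + 0.2j, -0.5 + 0.4j, 0.1 - 0.6j):
    check_minimal(z0)   # all three quantities vanish up to discretization error
```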
Case 3: For \(c=2\), the conformal map takes the form \(F_{2}(z)=\dfrac{z}{(1+z)^{2}}.\) After similar computations we get,

\[X_{2}(z)=\left(\operatorname{Re}\left\{\frac{2}{3}-\frac{z^{2}+z+\frac{2}{3}}{(z+1)^{3}}\right\},\operatorname{Im}\left\{\frac{z}{(1+z)^{2}}\right\},2\,\operatorname{Im}\left\{\frac{1}{6}-\frac{\frac{z}{2}+\frac{1}{6}}{(z+1)^{3}}\right\}\right).\]

Substituting \(w=(1-z)/(1+z)\), i.e., \(z=(1-w)/(1+w)\), then translating by \(-1/3\), scaling by a factor of \(-1/4\), and finally adjusting the negative sign over \(x_{3}\), the surface representation becomes:

\[X_{2}(z)=\left(\operatorname{Re}\left\{\frac{w^{3}}{3}+w\right\},\operatorname{Im}\left\{w^{2}\right\},\operatorname{Im}\left\{\frac{w^{3}}{3}-w\right\}\right).\]

We see that suitably interchanging the coordinates gives the parameterization of Enneper's surface. We have plotted the minimal surfaces formed for \(c=-2,0,1\) and \(2\) in Fig. 3a, 3b, 3c and 3d. Their projections onto the \(x\)-\(y\) plane can be observed and matched with the images of the corresponding harmonic maps in Fig. 2b, 2e and 2h.

Further, this idea of lifting can be generalized by changing the dilatation to \(\omega(z)=z^{2n}\), \(n\in\mathbb{N}\). In 2014, Ponnusamy et al. constructed a two-parameter family of harmonic mappings \(f_{n,c}\) with dilatation \(z^{n}\) by shearing the four-slit conformal mapping defined by \(\phi(z)\) in [11]. We will further lift the resulting shears to a family of minimal surfaces for even \(n\) and \(A=0\).

\[\phi(z)=A\,\log{\left(\frac{1+z}{1-z}\right)}+B\,\frac{z}{1+cz+z^{2}}.\]

Taking \(A=0\), \(B=1\) serves our purpose. Writing \(c=-2\,\cos\gamma\), where \(\gamma\in(0,\pi)\), and applying the shearing technique gives,

\[h(z)=-\frac{i}{2\,\sin\gamma}(e^{-i\gamma}I_{2}-e^{i\gamma}I_{3}),\]

\[\text{where }I_{2}=\int_{0}^{z}\frac{d\zeta}{(\zeta-e^{-i\gamma})^{2}(1-\zeta^{n})}\,\text{ and }\,I_{3}=\int_{0}^{z}\frac{d\zeta}{(\zeta-e^{i\gamma})^{2}(1-\zeta^{n})}.\]

Using \(\dfrac{1}{1-z^{n}}=-\frac{1}{n}\sum\limits_{k=0}^{n-1}\dfrac{z_{k}}{z-z_{k}}\), where \(z_{k}=e^{\frac{2\pi ik}{n}}\) for \(k=0,1,2,\ldots,n-1\), and assuming that \(\eta=e^{i\gamma}\neq z_{k}\), consider

\[I_{\eta} =\int_{0}^{z}\frac{d\zeta}{(\zeta-\eta)^{2}(1-\zeta^{n})}\] \[=-\frac{1}{n}\sum\limits_{k=0}^{n-1}\left\{\frac{1}{\eta-z_{k}}\left(\frac{1}{\eta-z}-\frac{1}{\eta}\right)-\frac{1}{(\eta-z_{k})^{2}}\left[\log{\left(\frac{\eta-z}{\eta}\right)}-\log{\left(\frac{z_{k}-z}{z_{k}}\right)}\right]\right\}.\]

Figure 4: Minimal surfaces over harmonic shears of \(F(z)=z/(1+cz+z^{2})\) with dilatation \(\omega(z)=z^{n}\) for even \(n\).

Let \(\mathbf{N}=\{0,1,\ldots,n-1\}\) be an index set. Suppose that \(a\in\mathbf{N}\); then we define \(\mathbf{N}_{a}=\mathbf{N}\setminus\{a\}\). In the case of \(\gamma=2\pi m/n\), where \(m=0,1,2,\ldots,n-1\),

\[I_{3,m} =\int_{0}^{z}\frac{d\zeta}{(\zeta-z_{m})^{2}(1-\zeta^{n})}\] \[=\frac{1}{n}\sum_{k\in\mathbf{N}_{m}}\int_{0}^{z}\frac{z_{k}\,d\zeta}{(\zeta-z_{m})^{2}(\zeta-z_{k})}+\frac{1}{n}\int_{0}^{z}\frac{z_{m}\,d\zeta}{(\zeta-z_{m})^{3}}.\]

The sum above can be computed as before, and the last integral is,

\[\frac{1}{n}\int_{0}^{z}\frac{z_{m}\,d\zeta}{(\zeta-z_{m})^{3}}=\frac{z_{m}}{2n}\left[\frac{1}{z_{m}^{2}}-\frac{1}{(z-z_{m})^{2}}\right].\]

We can readily find \(g\) as \(g=h-\phi\) for \(A=0\). Thus, we get the harmonic map \(f=h+\overline{g}\), which we can lift to a minimal surface for a square dilatation by taking \(\omega(z)=z^{2n}\).
Since \(x_{3}=2\operatorname{Im}\left\{\int_{0}^{z}\sqrt{\omega(\zeta)}\,h^{\prime}(\zeta)\,d\zeta\right\}\), we compute

\[\int_{0}^{z}\sqrt{\omega(\zeta)}\,h^{\prime}(\zeta)\,d\zeta=-\frac{i}{2\,\sin\gamma}\left(e^{-i\gamma}\int_{0}^{z}\frac{\zeta^{n}}{(\zeta-e^{-i\gamma})^{2}(1-\zeta^{2n})}d\zeta-e^{i\gamma}\int_{0}^{z}\frac{\zeta^{n}}{(\zeta-e^{i\gamma})^{2}(1-\zeta^{2n})}d\zeta\right).\]

Using \(\frac{z^{n}}{1-z^{2n}}=\frac{z^{n}}{(1-z^{n})(1+z^{n})}=\frac{1}{2}\left[\frac{1}{1-z^{n}}-\frac{1}{1+z^{n}}\right],\) the above expression becomes:

\[=\frac{-i}{2\,\sin\gamma}\left(e^{-i\gamma}\int_{0}^{z}\frac{1}{2(\zeta-e^{-i\gamma})^{2}(1-\zeta^{n})}-e^{-i\gamma}\int_{0}^{z}\frac{1}{2(\zeta-e^{-i\gamma})^{2}(1+\zeta^{n})}\right)d\zeta\] \[\quad-\frac{-i}{2\,\sin\gamma}\left(e^{i\gamma}\int_{0}^{z}\frac{1}{2(\zeta-e^{i\gamma})^{2}(1-\zeta^{n})}-e^{i\gamma}\int_{0}^{z}\frac{1}{2(\zeta-e^{i\gamma})^{2}(1+\zeta^{n})}\right)d\zeta.\]

We observe that the first and third integrals in the above expression are \(I_{2}\) and \(I_{3}\). For the second (say \(I_{4}\)) and the fourth integral (\(I_{5}\)), again by partial fractions we can express:

\[\frac{1}{1+z^{n}}=-\frac{1}{n}\sum_{l=0}^{n-1}\frac{z_{l}}{z-z_{l}},\]

where the \(z_{l}\) are the \(n\)th roots of the equation \(z^{n}+1=0\), i.e., \(z_{l}=e^{\frac{2\pi il}{n}+\frac{\pi i}{n}}\), where \(l=0,1,2,\ldots,n-1\); then

\[\int_{0}^{z}\sqrt{\omega(\zeta)}\,h^{\prime}(\zeta)\,d\zeta=\frac{-i}{2\,\sin\gamma}\left(\frac{e^{-i\gamma}}{2}I_{2}-\frac{e^{-i\gamma}}{2}I_{4}-\frac{e^{i\gamma}}{2}I_{3}+\frac{e^{i\gamma}}{2}I_{5}\right),\]

where,

\[I_{4}=\int_{0}^{z}\frac{1}{(\zeta-e^{-i\gamma})^{2}(1+\zeta^{n})}d\zeta\;\;\text{and}\;\;I_{5}=\int_{0}^{z}\frac{1}{(\zeta-e^{i\gamma})^{2}(1+\zeta^{n})}d\zeta.\]

Assuming that \(\rho=e^{i\gamma}\neq z_{l}\), i.e., \(\gamma\neq(2l+1)\frac{\pi}{n}\), and computing in a similar manner to \(I_{\eta}\), we get

\[I_{\rho} =\int_{0}^{z}\frac{d\zeta}{(\zeta-\rho)^{2}(1+\zeta^{n})}\] \[=-\frac{1}{n}\sum_{l=0}^{n-1}\left\{\frac{1}{\rho-z_{l}}\left(\frac{1}{\rho-z}-\frac{1}{\rho}\right)-\frac{1}{(\rho-z_{l})^{2}}\left[\log\left(\frac{\rho-z}{\rho}\right)-\log\left(\frac{z_{l}-z}{z_{l}}\right)\right]\right\}.\]

In the case \(\gamma=(2s+1)\frac{\pi}{n}\) for \(s=0,1,2,\ldots,n-1\), computing in a similar manner to \(I_{3,m}\) above, we get \(I_{3,s}=I_{5}\):

\[I_{3,s}=\int_{0}^{z}\frac{d\zeta}{(\zeta-z_{s})^{2}(1+\zeta^{n})}=-\frac{1}{n}\sum_{l\in{\bf N}_{s}}\int_{0}^{z}\frac{z_{l}\,d\zeta}{(\zeta-z_{s})^{2}(\zeta-z_{l})}+\frac{1}{n}\int_{0}^{z}\frac{z_{s}\,d\zeta}{(\zeta-z_{s})^{3}}.\]

This last integral can again be computed as was done for \(I_{3,m}\) above. In Fig. 4, we have plotted the minimal surfaces obtained after calculating the \(x_{3}\) coordinate, for different values of \(c\) and even \(n\).

## 4 Harmonic shears of the inner region of an epicycloid

In this section, we apply the shear construction method to a new class of conformal mappings, denoted by \(F_{n}(z)\), with dilatation \(\omega(z)=z^{n}\), and further lift the shears to a family of minimal surfaces using Theorem 3. We have created images of these minimal surfaces for \(n=2,3,4\) in Fig. 6. We have also plotted the images of the conformal mappings \(F_{n}(z)\) alongside the images of the corresponding harmonic mappings \(f_{n}(z)\) for \(n=2,3,4\) in Fig. 5.
Let

\[F_{n}(z)=h_{n}(z)-g_{n}(z)=z-\frac{1}{n^{2}}\,z^{n}, \tag{6}\]

be the class of conformal univalent mappings of the unit disk \(\mathbb{D}\) onto a domain convex in the horizontal direction, where \(n\) is any natural number greater than or equal to 2. We observed that the images of the open unit disk under these conformal maps form the interior region of an _epicycloid_ characterized by \(n-1\) cusps. Let

\[\omega(z)=\frac{g^{\prime}(z)}{h^{\prime}(z)}=z^{n},\]

be the dilatation. Differentiating (6) gives us,

\[h^{\prime}_{n}(z)-g^{\prime}_{n}(z)=1-\frac{1}{n}\cdot z^{n-1}.\]

Considering the pair of these differential equations, we get

\[h^{\prime}_{n}(z)=\frac{1-\frac{1}{n}\cdot z^{n-1}}{1-z^{n}}.\]

| \(\gamma\) | \(I_{2}\) | \(I_{3}\) |
| :--- | :--- | :--- |
| is not \(2\pi m/n\) | \(I_{\bar{\eta}}\) | \(I_{\eta}\) |
| is \(2\pi m/n\) | \(I_{3,n-m}\) | \(I_{3,m}\) |

Table 1: The integrals \(I_{2}\) and \(I_{3}\) for the analytic part \(h=-\frac{i}{2\sin\gamma}\,(e^{-i\gamma}I_{2}-e^{i\gamma}I_{3})\) of the harmonic shear \(f\) with a dilatation \(\omega(z)=z^{n}\).

Figure 5: Conformal mapping \(F_{n}\) of the unit disk onto the inner region of an _epicycloid_ with \(n-1\) cusps, alongside its harmonic shear \(f_{n}\) on the right, with \(n+2\) concave boundary arcs, for the dilatation \(z^{n}\), \(n=2,3,4\).

Integrating from \(0\) to \(z\) and normalizing such that \(h(0)=0\), and similarly solving for \(g(z)\), gives:

\[h_{n}(z) =z\,_{2}F_{1}\left(1,\frac{1}{n};\frac{1}{n}+1;z^{n}\right)+\frac{\log{(z^{n}-1)}}{n^{2}}-\frac{\pi i}{n^{2}}, \tag{7}\] \[g_{n}(z) =z\,_{2}F_{1}\left(1,\frac{1}{n};\frac{n+1}{n};z^{n}\right)-z+\frac{\log{(z^{n}-1)}+z^{n}}{n^{2}}-\frac{\pi i}{n^{2}}, \tag{8}\]

where \({}_{2}F_{1}(a,b;c;z)\) represents a hypergeometric function, which is a power series such that,

\[{}_{2}F_{1}(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{z^{k}}{k!},\]

where \(a,b,\) and \(c\in\mathbb{C}\) and \((x)_{k}=x(x+1)\cdots(x+k-1)\) is the Pochhammer symbol. So, the desired map corresponding to the analytic map \(F_{n}\) is \(f_{n}=h_{n}+\overline{g_{n}}\). By Theorem 2, \(f_{n}\in S_{H}^{0}\) and \(f_{n}(\mathbb{D})\) is convex in the horizontal direction. We see that the image of equally spaced radial segments and concentric circles under the sense-preserving harmonic univalent map \(f_{n}\) is a radial plot with \(n+2\) concave boundary arcs.
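For numerical work the hypergeometric forms (7) and (8) can be sidestepped: since \(1/(1-z^{n})=\sum_{k\geq 0}z^{nk}\) on \(\mathbb{D}\), integrating \(h^{\prime}_{n}\) and \(g^{\prime}_{n}=z^{n}h^{\prime}_{n}\) term by term gives elementary power series. The following sketch (ours, for illustration; assumes NumPy) evaluates \(f_{n}\) on concentric circles like those plotted in Fig. 5 and checks that \(h_{n}-g_{n}\) reproduces \(F_{n}\):

```python
import numpy as np

def shear_series(z, n, K=2000):
    """Evaluate h_n and g_n by integrating the geometric series of
    h'_n = (1 - z^{n-1}/n)/(1 - z^n) term by term (valid for |z| < 1)."""
    h = np.zeros_like(z, dtype=complex)
    g = np.zeros_like(z, dtype=complex)
    for k in range(K):
        h += z**(n*k + 1) / (n*k + 1) - z**(n*k + n) / (n * (n*k + n))
        g += z**(n*k + n + 1) / (n*k + n + 1) - z**(n*k + 2*n) / (n * (n*k + 2*n))
    return h, g

n = 3
theta = np.linspace(0.0, 2.0 * np.pi, 400)
for r in (0.2, 0.5, 0.8, 0.97):            # concentric circles inside the disk
    z = r * np.exp(1j * theta)
    h, g = shear_series(z, n)
    # Consistency check: h_n - g_n must reproduce F_n(z) = z - z^n/n^2.
    assert np.max(np.abs(h - g - (z - z**n / n**2))) < 1e-8
    f = h + np.conj(g)                     # image points of f_n on |z| = r
```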
Notice that, although the image of \(F_{n}\) is bounded, the image of \(f_{n}\) is unbounded. The harmonic map \(f_{n}\) can also be expressed as \(f_{n}(z)=x_{1}(z)+ix_{2}(z)\), where

\[x_{1}(z) =\operatorname{Re}\left\{h(z)+g(z)\right\}\] \[=\operatorname{Re}\left\{2z\,_{2}F_{1}\left(1,\frac{1}{n};\frac{n+1}{n};z^{n}\right)-z+\frac{2\log{(z^{n}-1)}+z^{n}}{n^{2}}-\frac{2\pi i}{n^{2}}\right\},\] \[x_{2}(z) =\operatorname{Im}\{h(z)-g(z)\}\] \[=\operatorname{Im}\left\{z-\frac{z^{n}}{n^{2}}\right\}.\]

For \(n=2m\), \(f_{n}\) lifts to a minimal graph where

\[x_{3} =2\,\operatorname{Im}\left\{\int_{0}^{z}\sqrt{\omega(\zeta)}h_{n}^{\prime}(\zeta)\,d\zeta\right\}\] \[=2\,\operatorname{Im}\left\{\int_{0}^{z}\zeta^{m}\left(\frac{1-\frac{1}{2m}\cdot\zeta^{2m-1}}{1-\zeta^{2m}}\right)\,d\zeta\right\}\] \[=2\,\operatorname{Im}\left\{\int_{0}^{z}\left(\frac{\zeta^{m}}{1-\zeta^{2m}}-\frac{1}{2m}\cdot\frac{\zeta^{3m}}{\zeta(1-\zeta^{2m})}\right)\,d\zeta\right\}\] \[=2\,\operatorname{Im}\left\{\int_{0}^{z}\left(\frac{1}{2}\left(\frac{1}{1-\zeta^{m}}-\frac{1}{1+\zeta^{m}}\right)-\frac{1}{2m}\cdot\left(\frac{\zeta^{m-1}}{2(\zeta^{m}+1)}-\frac{\zeta^{m-1}}{2(\zeta^{m}-1)}-\zeta^{m-1}\right)\right)\,d\zeta\right\}\] \[=2\,\operatorname{Im}\left\{z\,_{2}F_{1}\left(1,\frac{1}{m};\frac{1}{m}+1;z^{m}\right)-z\,_{2}F_{1}\left(1,\frac{1}{m};\frac{1}{m}+1;-z^{m}\right)-\frac{1}{2m}\left(\frac{\operatorname{atanh}{(z^{m})}-z^{m}}{m}\right)\right\}. \tag{9}\]

Thus, we obtain the following result:

**Theorem 6**.: _Consider the conformal univalent map of the unit disk \(\mathbb{D}\) onto a domain convex in the direction of the real axis given by (6). Let the dilatation function be given by \(\omega(z)=z^{n}\); then the horizontal shear of \(F_{n}\) with dilatation \(\omega\) is given by \(f_{n}(z)=h_{n}(z)+\overline{g_{n}(z)}\), where \(h_{n}(z)\) and \(g_{n}(z)\) are given by (7) and (8), such that \(f_{n}\in S_{H}^{0}\). In particular, when \(n\) is an even positive integer, \(f_{n}(\mathbb{D})\) lifts to a minimal graph \((x_{1},x_{2},x_{3})\), where \(x_{1}=\operatorname{Re}\left\{h(z)+g(z)\right\}\), \(x_{2}=\operatorname{Im}\{h(z)-g(z)\}\) and \(x_{3}\) is given by (9)._

The images of the unit disk under \(F_{n}\) and \(f_{n}\) for \(n=2,3\) and \(4\) are shown in Fig. 5 as plots of the images of equally spaced radial segments and concentric circles. The images of the resulting minimal surfaces obtained by lifting \(f_{n}\) into \(\mathbb{R}^{3}\) are depicted in Fig. 6.

Figure 6: Minimal surface over the harmonic shear \(f_{n}\) with dilatation \(\omega=z^{2n}\) for \(n=2,3,4\).
2305.14288
LLM-powered Data Augmentation for Enhanced Cross-lingual Performance
This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and target languages, as well as translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g. a notable 13.4 accuracy score improvement for the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages; however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency.
Chenxi Whitehouse, Monojit Choudhury, Alham Fikri Aji
2023-05-23T17:33:27Z
http://arxiv.org/abs/2305.14288v2
# LLM-powered Data Augmentation for Enhanced Crosslingual Performance

###### Abstract

This paper aims to explore the potential of leveraging Large Language Models (LLMs) for data augmentation in crosslingual commonsense reasoning datasets, where the available training data is extremely limited. To achieve this, we employ several LLMs, including Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we assess the effectiveness of fine-tuning smaller crosslingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and target languages, as well as translating the English-generated data into the target languages. Our experiments reveal the overall advantages of incorporating data generated by LLMs. Training on synthetic data generated by GPT-4, whether English or multilingual, improves performance consistently compared to the baseline. Other models also exhibit an overall increase in performance; however, their effectiveness decreases in some settings. We also ask native speakers to evaluate the naturalness and logical soundness of the generated examples for different languages. Human evaluation reveals that LLMs like ChatGPT and GPT-4 excel at generating natural text in most languages, except a few such as Tamil. Moreover, ChatGPT trails behind in generating plausible alternatives in comparison to the original dataset, while GPT-4 demonstrates competitive logical consistency in the synthesised data.

Footnote 1: Code and other materials are available at [https://github.com/mbzuai-nlp/Gen-X](https://github.com/mbzuai-nlp/Gen-X).

## 1 Introduction

The success of NLP models greatly depends on the availability and quality of training data. This poses a significant challenge for multilingual NLP, as data for languages other than English is typically limited (Ponti et al., 2019; Joshi et al., 2020). An approach to address this data scarcity challenge is through zero-shot cross-lingual transfer or multitask training, in which a model is trained across data of diverse tasks and languages, exhibiting the capability to handle unseen tasks, particularly in larger models (Artetxe and Schwenk, 2019; Nooralahzadeh et al., 2020; Huang et al., 2021). However, when aiming for task-specific objectives, a smaller, fine-tuned model dedicated to that particular task outperforms general-purpose, zero-shot larger models. In addition, a smaller task-specific model is more practical and cost-effective for deployment and training. Nevertheless, developing a powerful task-specific model becomes challenging in the absence of training data (Lauscher et al., 2020). Conversely, recent powerful large language models (LLMs) excel at handling general instructions and have shown promise in data generation tasks (Wang et al., 2022). In this work, we leverage LLMs to generate synthetic data for various multilingual commonsense reasoning tasks, XCOPA (Ponti et al., 2020), XWinograd (Tikhonov and Ryabinin, 2021), and XStoryCloze (Lin et al., 2022), where the training data is limited even for English (see Table 1). To augment the training data, we employ LLMs by providing them with instructions and examples from the original training data and then requesting the LLMs to generate diverse new examples. We explore the generation of synthetic data in English using different LLMs, including open-source models like Dolly-v2 and StableVicuna, as well as ChatGPT and GPT-4.
Although the weights and capabilities of the latter two models remain undisclosed, they can generate texts in languages beyond English.

Footnote 2: [https://github.com/databricsklabs/dolly](https://github.com/databricsklabs/dolly)

Footnote 3: [https://github.com/Stability-AI/StableLM](https://github.com/Stability-AI/StableLM)

We develop task-specific models by fine-tuning multilingual pre-trained language models such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), using the generated data. We then compare their performance against models trained on a limited set of human-created data in the target language whenever available, and otherwise through zero-shot transfer learning from manually created English training data. Our experiments demonstrate that training the models with _relatively large_ synthetically generated datasets yields better performance than training with _limited_ manually-created datasets. This finding empirically confirms the utility of synthetic data generated by LLMs for improving downstream task-specific models.

We expand the multilingual data synthesis using ChatGPT and GPT-4 on XCOPA and find that generating multilingual datasets generally surpasses the effectiveness of zero-shot cross-lingual transfer, with the exception of ChatGPT-generated multilingual data on a larger fine-tuned XLMR. We further carry out a manual annotation to assess the quality of the generated dataset in different languages by evaluating the naturalness and logical soundness of the generated dataset compared to the human-written one. The annotation results reveal that while ChatGPT and GPT-4 successfully generate natural text in most languages, they struggle with generating understandable text in certain languages such as Tamil. Moreover, a noticeable gap is observed in terms of commonsense coherence when comparing ChatGPT-generated data to human-constructed data; on the other hand, GPT-4 significantly narrows this difference.

In brief, our work has the following key contributions: (1) Augmenting three low-resource, crosslingual commonsense reasoning datasets by leveraging and instructing four LLMs; (2) Fine-tuning smaller models, mBERT and XLMR, using the synthesised data and showcasing the practical value of the LLM-generated data; (3) Performing an extensive analysis of the effects of various target languages in data generation and scaling, including a human evaluation of the naturalness and logical coherence of the generated data in different languages; (4) Releasing the synthesised datasets for public use and reproducibility.

## 2 Related Work

**Multilingual and Low-Resource NLP.** Recently, there has been increased attention on expanding NLP beyond English, including the development of multilingual models (Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021; Scao et al., 2022) as well as the creation of benchmarks to address multilingual challenges (Conneau et al., 2018; Artetxe et al., 2019; Adelani et al., 2021; Winata et al., 2023). Among the prevailing challenges faced across various languages, a common theme is the scarcity of available data. Consequently, when data is lacking, one approach is to employ zero-shot cross-lingual transfer. Studies conducted by Winata et al. (2023) have demonstrated the effectiveness of zero-shot cross-lingual transfer for related languages. Additionally, Muennighoff et al. (2022) show that models fine-tuned only with English instruction data are capable of understanding multilingual instructions.
In this work, we are tackling a similar scenario where the availability of data is limited.

**Multilingual Data Augmentation.** Lauscher et al. (2020) show that few-shot examples can drastically increase the cross-lingual performance of small models, proving that multilingual data augmentation is an effective strategy. A series of works try to predict the cross-lingual accuracy of models through measurements and modelling (Xia et al., 2020), and study strategies for multilingual data augmentation, such as choosing the transfer languages (Lin et al., 2019) and predicting multilingual few-shot accuracies leading to optimal data augmentation approaches (Srinivasan et al., 2022). Many works focus on synthetic data augmentation for code-mixing, including utilising linguistic theories (Lee et al., 2019; Pratapa et al., 2018), machine translation models (Tarunesh et al., 2021), parallel corpora and Wikipedia (Winata et al., 2019; Whitehouse et al., 2022), and employing ChatGPT (Dai et al., 2023). Our work explores data augmentation on crosslingual commonsense datasets with powerful instruction-tuned LLMs.

| **Dataset** | Train (en) | Train (xx) | Validation (en) | Validation (xx) | Test (en) | Test (xx) |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| XCOPA | 400 | 0 | 100 | 100 | 500 | 500 |
| XWinograd | 1858 | 0 | 233 | 0 | 233 | 424 |
| XStoryCloze | 300 | 300 | 60 | 60 | 1511 | 1511 |

Table 1: Number of examples available in XCOPA, XWinograd, and XStoryCloze. xx denotes the average number of non-English examples per language. Since a validation split is not specified in XStoryCloze, we take 60 random examples from the train split for validation.

## 3 Dataset Augmentation

This section explains the datasets used in the experiments and the detailed instruction setup.

### Dataset

Our experiments use XCOPA, XWinograd, and XStoryCloze, which are selected due to the limited availability of training data and the fact that commonsense reasoning datasets present greater challenges for data synthesis. Table 1 summarises the statistics of the datasets. XWinograd has no train/validation/test split, and we follow an 80/10/10 split for the experiments.

**XCOPA** is a crosslingual Choice of Plausible Alternatives dataset that translates and re-annotates the validation and test sets of English (EN) COPA (Roemmele et al., 2011) into 11 target languages (ET: Estonian, HT: Haitian Creole, ID: Indonesian, IT: Italian, QU: Quechua, SW: Swahili, TA: Tamil, TH: Thai, TR: Turkish, VI: Vietnamese, and ZH: Chinese). Each instance consists of a premise, a question (_cause/result_), and two alternatives, and the task is to predict the more plausible alternative.

**XWinograd** expands the original English Winograd Schema Challenge (WSC) (Levesque et al., 2012) to five other languages (FR: French, JA: Japanese, PT: Portuguese, RU: Russian, and ZH), which consists of pronoun resolution problems aiming to evaluate the commonsense reasoning ability of a machine. Given a statement with two noun phrases and a pronoun, the challenge of WSC is to determine the referent of the pronoun, which can only be inferred from the context.

Footnote 5: [https://huggingface.co/datasets/Xtopa](https://huggingface.co/datasets/Xtopa)

Footnote 6: We contacted the authors for access to XStoryCloze.

**XStoryCloze** is collected by Lin et al.
(2022) by translating the validation split of the original English StoryCloze dataset (Mostafazadeh et al., 2016) into 10 other typologically diverse languages (RU, ZH, ES: Spanish, AR: Arabic, HI: Hindi, ID, TE: Telugu, SW, EU: Basque, and MY: Burmese). Each example consists of a four-sentence commonsense story, one correct ending sentence which is a plausible continuation of the story, and one wrong ending sentence which is logically inconsistent with the context.

Table 2: Final instructions and example responses (ChatGPT shown as an example). The XCOPA instruction reads: "We are gathering more examples for the COPA dataset which will be used to test a system's ability of Commonsense Causal Judgments. The format of the data: A premise: a statement of something that happened, and two choices that could plausibly (_occur as the result / be the cause_) of the premise. The correct choice is the alternative that is more plausible than the wrong choice. Here are \(n\) examples in (language): Example 1: Premise: The man wanted to save money. What happened as a result? Correct choice: He cut back on making frivolous purchases. Wrong choice: He withdrew money from his savings account. ... Example \(n\): ... Based on the examples above, generate \(m\) new examples in (language)." An example response: "Premise: The politician made a controversial statement. What happened as a result? Correct choice: The politician faced criticism from the media. Wrong choice: The politician's approval ratings increased." The XWinograd instruction ("We are collecting more examples for the Winograd Schema Challenge...") and the XStoryCloze instruction follow the same pattern; the latter describes a 4-sentence story with one correct and one wrong ending (e.g. Tina, who is tired every morning because of her two jobs, quits one of them; correct ending: "Tina is well rested", wrong ending: "Tina is more tired than ever before") and adds the requirements that 1) the story should read like a coherent story, with a specific beginning and ending, where something happens in between, 2) both ending sentences should be entirely reasonable, realistic and sensible when read in isolation, and 3) both ending sentences should follow up the story by sharing at least one of the characters of the story. An example StoryCloze response: "Sent-1: Jordan was a high school student who wanted to become a doctor. Sent-2: He spent all his free time studying biology and chemistry. Sent-3: One day, his school hosted a science fair competition. Sent-4: Jordan's project won first place. Correct ending: Jordan went on to study medicine in college. Wrong ending: Jordan gave up his dream of becoming a doctor."
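To make the template concrete, the following sketch (illustrative only; the field names and exact wording paraphrase the XCOPA instruction above rather than reproduce the verbatim prompt) assembles such an instruction from sampled train examples:

```python
def build_xcopa_prompt(examples, m, language="English"):
    """Assemble a COPA-style generation instruction from sampled train
    examples, in the spirit of Table 2. Field names and exact wording
    are illustrative, not the verbatim prompt."""
    header = (
        "We are gathering more examples for the COPA dataset which will be "
        "used to test a system's ability of Commonsense Causal Judgments. "
        f"Here are {len(examples)} examples in {language}:\n"
    )
    body = "".join(
        f"Example {i}: Premise: {ex['premise']} "
        f"Correct choice: {ex['correct']} Wrong choice: {ex['wrong']}\n"
        for i, ex in enumerate(examples, 1)
    )
    footer = f"Based on the examples above, generate {m} new examples in {language}."
    return header + body + footer

train_examples = [{
    "premise": "The man wanted to save money. What happened as a result?",
    "correct": "He cut back on making frivolous purchases.",
    "wrong": "He withdrew money from his savings account.",
}]
print(build_xcopa_prompt(train_examples, m=5))
```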
Models such as BLOOMZ (Muennighoff et al., 2022) and Flan-T5 (Chung et al., 2022) struggle to follow the complex instructions. Conversely, more recent LLMs such as ChatGPT, GPT-4, Dolly-v2, and StableVicuna, which are designed to handle more intricate and general-purpose instructions, have demonstrated success in following our instructions for data generation. GPT-4 and ChatGPT also stand out with the capability of generating examples in non-English languages. We explore synthetic data generation with the four aforementioned LLMs, balancing between open-access models and closed models (see §5.1). Specifically, we use dolly-v2-12b, which is derived from EleutherAI's Pythia-12b (Biderman et al., 2023) and fine-tuned on \(\sim\)15K instructions generated by Databricks employees; and StableVicuna-13B, an RLHF (reinforcement learning from human feedback) fine-tuned Vicuna model trained on various conversational and instructional datasets, where Vicuna is an open-source LLaMA model (Touvron et al., 2023) fine-tuned on user-shared conversations collected from ShareGPT.

Footnote 7: [https://github.com/lm-sys/FastChat](https://github.com/lm-sys/FastChat)

### Instructions and Responses

We utilise LLMs to generate synthetic examples for all datasets by instructing them. We start constructing the instructions using the descriptions of the dataset papers as a reference, provide the LLMs with some examples randomly sampled from the _train (+validation)_ split of the original dataset, and ask them to generate similar data points. We experiment with various instructions, evaluate the synthesised data on a smaller scale, update the instructions based on the errors, and then choose the best instruction to generate the final datasets. The final instruction and responses (ChatGPT as an example) can be seen in Table 2.

We first request the LLMs to generate a total of 3\(\sim\)4K data points for each dataset, and then parse and filter the responses, keeping only the unique examples. LLMs tend to generate inconsistently formatted output and often generate fewer samples than requested. We report the success rate for the different LLMs on the three datasets in Table 3, which indicates that GPT-4 is the most robust. Among the datasets, XWinograd has the lowest generation success rate for LLMs, because the data requires both answers to come from the generated sentence, with only one pronoun being replaced. In addition, we observed pronoun inconsistency in the generated XWinograd data. Despite the requirement for interchangeable pronouns in the options, models frequently fail to comply. For example, "The dog bit the mailman because _ entered the yard." is generated by ChatGPT with the options "The dog" and "the mailman"; however, "_" in the sentence cannot be replaced by the same pronoun for the two given options, hence it may make the task easier and the example is considered suboptimal. We keep those instances in the dataset and discuss this further in §6.1.

## 4 Experimental Setups

We first generate synthetic English data for XCOPA, XWinograd, and XStoryCloze. We contrast data generation across Dolly-v2, StableVicuna, ChatGPT, and GPT-4, and compare them with a baseline of training models with the original English data. The size of the final synthesised data for the three datasets is 3.7K, 2K, and 1.7K, respectively.
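The parse-and-filter step described in §3.3 can be sketched as follows (the response layout and regular expression are assumptions mirroring the prompt sketch above; the authors' actual parsing code is not reproduced here):

```python
import re

# Assumed response layout mirroring the prompt sketch above; the parser
# keeps only well-formed, unique examples, as described in the text.
PATTERN = re.compile(
    r"Premise:\s*(?P<premise>.+?)\s*"
    r"Correct choice:\s*(?P<correct>.+?)\s*"
    r"Wrong choice:\s*(?P<wrong>.+?)(?=\s*Example|\s*Premise:|\Z)",
    re.DOTALL,
)

def parse_and_filter(response_text):
    seen, examples = set(), []
    for match in PATTERN.finditer(response_text):
        ex = {k: v.strip() for k, v in match.groupdict().items()}
        key = (ex["premise"], ex["correct"], ex["wrong"])
        if all(ex.values()) and key not in seen:   # drop empty or duplicate items
            seen.add(key)
            examples.append(ex)
    return examples

response = ("Example 1: Premise: The politician made a controversial statement. "
            "What happened as a result? "
            "Correct choice: The politician faced criticism from the media. "
            "Wrong choice: The politician's approval ratings increased.")
parsed, requested = parse_and_filter(response), 5
print(len(parsed) / requested)   # per-batch success rate: valid / requested
```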
We then fine-tune mBERT, XLMR-base and XLMR-large8 with the synthesised data, and measure the zero-shot cross-lingual transfer performance across different languages, where we use the original validation set in target languages. Footnote 8: See Appendix A for more details of models used in the paper. For XCOPA, we additionally experiment with generating data points directly in non-English languages, by providing examples in the target language and specifying the language desired for the generated data (see Table 2). However, since no examples for _cause_ are included in the TH and TR train/validation data (although they appear in the test split), we do not generate XCOPA data for these two languages. We use ChatGPT and GPT-4 for multilingual synthetic data generation, as both Dolly-v2 and StableVicuna exhibit limitations in effectively generating multilingual text. The size of the multilingual synthesised data is \(\sim\)3.6K in each language. We fine-tune models on all datasets as multiple-choice tasks9 by searching for the best learning rate from {\(5e^{-6}\), \(10e^{-6}\)} and batch size from {8, 16, 32}. All the fine-tuning experiments are conducted on a single 40G A100. For generating data with Dolly-v2 and StableVicuna, we use 2\(\times\)40G A100. Footnote 9: In our preliminary experiments, we find that formulating XWinograd as binary text classification performs poorly, in line with the observation from Liu et al. (2020) that the task formulation is essential to the performance of Winograd. \begin{table} \begin{tabular}{l|c c c} \hline \hline **Model** & **XCOPA** & **XWinograd** & **XStoryCloze** \\ \hline Dolly-v2 & 41.6\% & 22.4\% & 41.2\% \\ StableVicuna & 36.1\% & 33.8\% & 36.1\% \\ ChatGPT & 86.4\% & 43.8\% & 77.6\% \\ GPT-4 & 89.7\% & 85.0\% & 89.3\% \\ \hline \hline \end{tabular} \end{table} Table 3: Generation Success Rate in English (valid examples obtained / total examples requested) with different LLMs on the three datasets. ## 5 Results and Discussion This section presents the main results of fine-tuned models on the three datasets and compares performance across data generated by different LLMs, in different languages, and at different scales. ### General Result Table 4 presents the average accuracy of fine-tuned mBERT, XLMR-Base, and XLMR-Large models across all languages on the three datasets. The models are trained using original data (_ORI_), different LLM-generated data (_GEN_), as well as a combination of both sources (_O_+_G_) in English, comparing the zero-shot cross-lingual transfer. Across different datasets, LLMs, and fine-tuned models, consistent improvements are observed when using both original and LLM-generated data. Among the models, Dolly-v2 performs the best on XWinograd when fine-tuned on mBERT, while GPT-4 achieves the highest accuracy in other settings. The most significant improvement appears in XWinograd with XLMR-Base, where the addition of an extra 2K data points leads to an average accuracy enhancement of 12.8 compared to the baseline, across all four LLMs. When using only LLM-generated data, smaller models like mBERT and XLMR-Base generally outperform the baseline. However, with XLMR-Large, which achieves stronger baselines, e.g., >80 on XWinograd and XStoryCloze, the accuracy remains similar or even worse compared to using the original data. GPT-4-generated data demonstrates the best robustness but still experiences a decline in performance on XWinograd when the generated data size is similar to the original data.
This highlights the challenges of generating data at a human-level quality. ### Multilingual Data Generation The zero-shot cross-lingual approach is commonly used when a multilingual dataset is insufficient. In this subsection, we investigate whether a synthetically generated multilingual dataset outperforms training solely in English. We choose the XCOPA dataset and explore two settings: synthetic multilingual data by asking LLMs to generate responses in the target languages directly and translating the English-generated data to target languages with Google Translate API. We exclude Dolly-v2 and StableVicuna due to their limited effectiveness in generating non-English text. Although GPT-4 exhibits the most promising performance, it is significantly costlier compared to ChatGPT. Therefore, we also consider using ChatGPT as a contrasting \begin{table} \begin{tabular}{l l|c c c c c c c c c} \hline \hline \multicolumn{1}{l|}{**Fine-tuned**} & \multicolumn{2}{c}{**LLM for**} & \multicolumn{3}{c}{**XCOPA**} & \multicolumn{3}{c}{**XWinograd**} & \multicolumn{3}{c}{**XSToryCloze**} \\ \cline{3-10} \multicolumn{1}{c|}{**Model**} & \multicolumn{1}{c|}{**Generation**} & \multicolumn{1}{c|}{_ORI\({}_{400}\)_} & \multicolumn{1}{c}{_GEN\({}_{3,7k}\)_} & \multicolumn{1}{c}{_O+G\({}_{4,1k}\)_} & \multicolumn{1}{c}{_ORI\({}_{1,8k}\)_} & \multicolumn{1}{c}{_GEN\({}_{2k}\)_} & \multicolumn{1}{c}{_O+G\({}_{3,8k}\)_} & \multicolumn{1}{c}{_ORI\({}_{300}\)_} & \multicolumn{1}{c}{_GEN\({}_{1,7k}\)_} & \multicolumn{1}{c}{_O+G\({}_{2k}\)_} \\ \hline \multirow{4}{*}{mBERT} & Dolly-v2 & 47.9 & 53.3\(\pm\)5.4 & 54.0\(\pm\)6.1 & 52.9 & **59.6\(\pm\)**6.7 & **59.3\(\pm\)**6.4 & 65.0 & **68.7\(\pm\)**1.7 & 68.1\(\pm\)**3.1 \\ & StableVicuna & 47.9 & 52.9\(\pm\)5.0 & 54.7\(\pm\)7.6 & 52.9 & 53.7\(\pm\)0.8 & 55.5\(\pm\)5.6 & 65.0 & 64.6\(\pm\)0.4 & 67.3\(\pm\)**3.2 \\ & ChatGPT & 47.9 & 55.0\(\pm\)7.1 & 54.1\(\pm\)6.2 & 52.9 & 56.0\(\pm\)3.1 & 58.3\(\pm\)5.4 & 65.0 & 64.3\(\pm\)0.7 & 68.3\(\pm\)3.3 \\ & GPT-4 & 47.9 & **56.4\(\pm\)**1.8 & **57.2\(\pm\)**9.3 & 52.9 & 54.9\(\pm\)**9.2 & 57.5\(\pm\)**4.6 & 65.0 & 68.0\(\pm\)**3.0 & **69.8\(\pm\)**4.8 \\ \hline \multirow{4}{*}{XLMR-Base} & Dolly-v2 & 54.8 & 58.1\(\pm\)3.3 & 58.1\(\pm\)3.3 & 53.5 & 56.5\(\pm\)3.0 & 66.3\(\pm\)12.8 & 73.0 & 75.8\(\pm\)2.8 & 76.5\(\pm\)**5.5 \\ & StableVicuna & 54.8 & 57.6\(\pm\)2.8 & 59.3\(\pm\)5.5 & 53.5 & 59.0\(\pm\)5.5 & 66.0\(\pm\)12.5 & 73.0 & 69.6\(\pm\)3.4 & 74.2\(\pm\)1.2 \\ & ChatGPT & 54.8 & 58.2\(\pm\)3.4 & 59.4\(\pm\)1.6 & 53.5 & 62.7\(\pm\)9.2 & 65.9\(\pm\)12.4 & 73.0 & 67.4\(\pm\)5.6 & 74.5\(\pm\)1.5 \\ & GPT-4 & 54.8 & **62.7\(\pm\)**7.9 & **63.0\(\pm\)**8.2 & 53.5 & **63.3\(\pm\)**9.8 & **66.9\(\pm\)**13.4 & 73.0 & **74.6\(\pm\)**1.6 & **79.3\(\pm\)**1.6 \\ \hline \multirow{4}{*}{XLMR-Large} & Dolly-v2 & 63.0 & 58.6\(\pm\)4.4 & 65.0\(\pm\)1.2 & 80.1 & **76.9\(\pm\)**3.2 & 83.1\(\pm\)3.0 & 85.0 & 84.8\(\pm\)1.2 & 86.4\(\pm\)1.4 \\ & StableVicuna & 63.0 & 64.4\(\pm\)1.4 & 68.7\(\pm\)5.7 & 80.1 & 68.2\(\pm\)11.9 & 82.0\(\pm\)1.9 & 85.0 & 74.6\(\pm\)10.4 & 84.8\(\pm\)0.2 \\ \cline{1-1} & ChatGPT & 63.0 & 64.6\(\pm\)1.6 & 68.1\(\pm\)5.1 & 80.1 & 73.2\(\pm\)6.9 & 83.2\(\pm\)**7.1 & 85.0 & 77.3\(\pm\)1.7 & 85.8\(\pm\)0.8 \\ \cline{1-1} & GPT-4 & 63.0 & **72.1\(\pm\)**9.1 & **72.2\(\pm\)**9.2 & 80.1 & 76.4\(\pm\)3.7 & **83.5\(\pm\)**3.4 & 85.0 & **86.0\(\pm\)**1.0 & **88.4\(\pm\)**1.4 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of Average Accuracy across all languages for mBERT, XLMR-Base, and XLMR-Large on XCOPA, 
XStoryCloze, and XWinograd. Training datasets include _ORI_ (original EN data), _GEN_ (LLM-generated EN data), and _O_+_G_ (both), with the number of examples used for training indicated by the subscripts. The best results obtained with the same amount of training data are highlighted in bold. Green and red subscripts denote improvement and decline in performance compared to the baseline (_ORI_). See per-language results in Appendix C. experiment under resource-constrained conditions. Table 5 shows the results for the languages that are available for all settings, excluding TR and TH (unavailable for LLM-generation, refer to SS4), and QU (not supported by the Google Translate API). We can see the impact of the generated data varies across different fine-tuned models and languages, aligning with the findings of Kumar et al. (2022). Training on GPT-4 synthesized data displays consistent improvement across all scenarios and languages, except the zero-shot cross-lingual result on HT with XLMR-Large. More fluctuating results can be observed with ChatGPT-generated data. A comparison between \(GEN_{EN}+ORI\) and \(GEN_{XX}+ORI\) indicates that utilising data generated in target languages generally leads to improved performance with GPT-4 generated data, as well as in base models with ChatGPT-generated data. However, for XLMR-Large, employing ChatGPT-generated data in target languages mostly yields negative outcomes. In languages such as TA and VI, training on generated data in the target languages results in more performance degradation compared to zero-shot cross-lingual transfer. This suggests that ChatGPT performs worse in those languages than XLMR-Large (Ahuja et al., 2023). Translating the English dataset generally shows overall better results than training on the data generated directly in the target languages, with the exception of XLMR-Large with GPT-4. For SW, XLMR models fine-tuned with ChatGPT-generated data exhibit performance decline in most cases, even when the English-generated data benefits all other languages. This observation suggests that XLMR struggles with SW. In subsection 6.1 we select TA and SW, and the two best-performing languages ID and ZH, along with EN, for human evaluation. Additionally, we conduct experiments involving adding Target Languages in Validation (TLV). This results in minor variations in the performance, consistent with the findings of Ponti et al. (2020). We include the full results in Table 10 in Appendix C. ### Dataset Scaling Up We further investigate the impact of training on a larger scale of generated data on model performance. We focus on the XCOPA dataset and expand the generated data with ChatGPT to 28.6K examples in English. We also compare the results of zero-shot cross-lingual transfer with translating the English-generated data to target languages.
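The \(GEN^{Trans}\) setting is produced by machine-translating the English-generated examples into each target language. A minimal sketch, assuming a `translate(text, target_lang)` wrapper around the Google Translate API; the wrapper and the field names are hypothetical:

```python
def translate_dataset(examples, target_lang, translate):
    """Translate English-generated XCOPA-style examples field by field."""
    translated = []
    for ex in examples:
        translated.append({
            "premise": translate(ex["premise"], target_lang),
            "choice1": translate(ex["choice1"], target_lang),
            "choice2": translate(ex["choice2"], target_lang),
            "question": ex["question"],  # 'cause'/'effect' marker kept as-is
            "label": ex["label"],        # labels are language-independent
        })
    return translated
```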
\begin{table} \begin{tabular}{l l|l|c c c c c c c c c} \hline \hline **Fine-tuned** & **LLM** & **Training data** & **AVG** & **EN** & **ET** & **HT** & **ID** & **IT** & **SW** & **TA** & **VI** & **ZH** \\ \hline \multirow{8}{*}{mBERT} & Baseline & \(ORI\) & 47.2 & 53.8 & 44.2 & 48.6 & 47.2 & 46.2 & 45.4 & 48.4 & 43.6 & 47.4 \\ \cline{2-11} & \multirow{4}{*}{ChatGPT} & \(GEN_{EN}+ORI\) & 54.6 & 59.6 & **56.4** & 53.6 & 53.8 & 51.4 & 51.6 & 50.4 & 55.0 & 59.2 \\ & & \(GEN_{XX}+ORI\) & 56.8 & 59.6 & 58.8 & 54.6 & 56.2 & 61.2 & 54.6 & 53.6 & 52.0 & 60.2 \\ & & \(GEN_{EN}^{Trans}+ORI\) & **58.7** & 59.6 & **59.8** & 58.2 & **62.8** & 61.0 & 52.6 & 56.8 & 58.2 & 59.4 \\ \cline{2-11} & \multirow{4}{*}{GPT-4} & \(GEN_{EN}+ORI\) & 59.3 & 72.6 & 58.8 & 53.0 & 62.0 & 61.0 & 50.0 & 54.0 & 57.6 & 64.6 \\ & & \(GEN_{XX}+ORI\) & 61.8 & 72.6 & 61.2 & 58.2 & 62.2 & 66.4 & 57.4 & 52.6 & 63.0 & 62.2 \\ & & \(GEN_{EN}^{Trans}+ORI\) & 62.6 & 72.6 & 58.6 & 55.2 & 65.6 & 65.4 & 53.8 & 62.6 & 64.6 & 65.4 \\ \hline \multirow{8}{*}{XLMR-Base} & Baseline & \(ORI\) & 55.6 & 57.6 & 54.6 & 50.6 & 59.6 & 54.8 & 55.0 & 53.4 & 54.8 & 59.6 \\ \cline{2-11} & \multirow{4}{*}{ChatGPT} & \(GEN_{EN}+ORI\) & 59.8 & 63.8 & 61.6 & 51.6 & 62.6 & 59.8 & 51.6 & 60.4 & 64.8 & 62.0 \\ & & \(GEN_{XX}+ORI\) & 59.9 & 63.8 & 60.6 & 55.0 & 64.6 & 59.6 & 54.6 & 56.4 & 59.6 & 64.8 \\ & & \(GEN_{EN}^{Trans}+ORI\) & 61.1 & 63.8 & 60.0 & 58.0 & 65.0 & 60.8 & 53.8 & 60.2 & 62.6 & 66.0 \\ \cline{2-11} & \multirow{4}{*}{GPT-4} & \(GEN_{EN}+ORI\) & 63.6 & **69.6** & 63.8 & 51.2 & 67.2 & 62.4 & 58.4 & 63.8 & 66.8 & 69.4 \\ & & \(GEN_{XX}+ORI\) & 64.0 & 69.6 & 62.2 & 56.2 & 68.6 & 63.8 & 57.8 & 60.8 & 66.8 & 70.0 \\ & & \(GEN_{EN}^{Trans}+ORI\) & 63.9 & 69.6 & 61.6 & 56.6 & 68.4 & 65.2 & 58.2 & 60.2 & 66.0 & 69.6 \\ \hline \multirow{8}{*}{XLMR-Large} & Baseline & \(ORI\) & 64.4 & 71.4 & 62.8 & 51.4 & 69.0 & 65.8 & 60.6 & 62.0 & 69.4 & 66.8 \\ \cline{2-11} & \multirow{4}{*}{ChatGPT} & \(GEN_{EN}+ORI\) & 69.5 & 76.4 & 69.8 & 48.2 & 76.0 & 72.8 & 63.4 & 67.8 & 73.4 & 77.8 \\ & & \(GEN_{XX}+ORI\) & 65.2 & 76.4 & 62.4 & 55.2 & 75.0 & 62.2 & 58.2 & 55.4 & 66.2 & 76.2 \\ & & \(GEN_{EN}^{Trans}+ORI\) & 67.0 & 76.4 & 60.0 & 59.6 & 66.2 & 66.6 & 59.0 & 64.8 & 74.8 & 75.6 \\ \cline{2-11} & \multirow{4}{*}{GPT-4} & \(GEN_{EN}+ORI\) & 73.7 & **84.6** & 70.4 & 50.0 & 80.8 & 80.2 & 65.8 & 72.8 & 78.4 & 80.4 \\ & & \(GEN_{XX}+ORI\) & 74.6 & **84.6** & **77.0** & 56.0 & 82.2 & 77.0 & 65.0 & 73.0 & 76.2 & 80.0 \\ & & \(GEN_{EN}^{Trans}+ORI\) & 74.1 & 84.6 & **74.2** & 57.2 & 82.0 & 77.4 & 62.2 & 75.0 & 74.4 & 79.6 \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy on XCOPA. \(ORI\) corresponds to the original data, \(GEN_{EN}\) and \(GEN_{XX}\) represent data generated in English and target languages. \(Trans\) denotes translations of the English-generated data. We show languages that are available in all settings. Improvement and decline in performance are represented with green and red shadows. The results in Table 6 demonstrate the positive impact of scaling up the generated data on model performance. Particularly, XLMR-Large exhibits the most significant improvement. ## 6 Human Evaluation To better evaluate the quality of the generated datasets and compare them with the human-created data, we ask native speakers to annotate the multilingual data generated by ChatGPT and GPT-4.
For each dataset, we first select 50 generated examples in English, and then request two annotators to evaluate the examples in two categories: 1) **Text Naturalness**. The annotators are asked to choose one of the following options for each example: "the text sounds natural", "the text sounds awkward but understandable", or "the text is not understandable", and 2) **Logic Soundness**. This category focuses on the commonsense aspect of the examples. The annotators are required to select the most appropriate description from: "the correct option is (clearly) more plausible", "both options are equally plausible", "both options are implausible", or "the wrong option is actually more plausible". We only ask the annotators to evaluate the logic if the text is at least understandable. For XWinograd, we introduce an additional evaluation criterion. Annotators are asked to determine whether the two noun phrases in the examples can be replaced by the same pronoun (refer to SS3.3). For XCOPA, we extend the annotations to non-English languages, where we choose the two languages that demonstrate the most notable improvement, namely ZH and ID, as well as the two languages that exhibit the least improvement or regression in performance with ChatGPT-generated data, namely TA and SW (see Table 5). In addition to the original examples and the generated examples in the target languages, we include 50 examples that are translated from the same English-generated examples (that were selected for annotation). To ensure impartiality, all the examples are shuffled, and the annotators are not provided with information regarding the source of the examples (human-created, LLM-generated, or translated). ### Human Evaluation Results Figure 1 presents the annotation results for XCOPA, averaged from two annotators for each language. Looking at the Text Naturalness plot, we can see that for EN, ID, ZH, and SW, both ChatGPT and GPT-4 achieved higher naturalness than the original dataset. This is particularly prominent in ID, revealing the fluency issue in the original ID data in XCOPA, which is also confirmed by a native speaker. In contrast, TA demonstrates surprisingly low performance, with most examples being classified as "not understandable". This accounts for the significant decline of XLMR-Large performance on TA when trained on ChatGPT-generated data in TA. However, intriguingly, models trained on TA data generated by GPT-4 showed improvement over the baselines, despite the poor quality evaluation from human annotators. This result is unexpected. Upon further investigation with native speakers, they noted that the text sometimes contains some unrelated, nonsensical words. Nevertheless, readers can intuitively grasp the meaning. Therefore, although the text is extremely difficult to comprehend and appears unnatural, it is not strictly impossible to understand. We hypothesise that the trained model can still learn from such text. The translated text is typically less natural than the original and generated data (apart from ID due to issues in the original data). This result affirms that LLMs generally excel in generating fluent text for the languages it supports. In terms of logic soundness, ChatGPT falls short compared to the original dataset. We further illustrate the categorised issues in the last column of the plots in Figure 1. 
We can see that for ChatGPT, the majority of the examples are labelled as "both options are equally plausible"; only SW has more problematic examples with "the wrong option is actually more plausible". We suspect that this issue arises from the instruction provided (taken from the description of the original COPA dataset), which states that "both options could be plausible, but one is more plausible." In some cases, ChatGPT generates two choices that are excessively similar in terms of plausibility. On the other hand, GPT-4 \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{\(GEN_{EN}+ORI_{EN}\)} & \multicolumn{2}{c}{\(GEN_{EN}^{Trans}+ORI_{EN}\)} \\ \cline{2-5} & 3.7K & 28.6K & 3.7K & 28.6K \\ \hline mbert & 54.3 & 56.0 & 58.0 & **60.1** \\ xlmr-base & 60.1 & **61.8** & 61.2 & 61.7 \\ xlmr-large & 69.7 & **72.4** & 67.2 & 71.4 \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy on XCOPA when scaling up the generated data to over 28K with ChatGPT. We report average results on all XCOPA languages excl. QU, since it is not available with the Google Translate API. tends to generate options with more clear-cut differences in plausibility, mirroring the original data. We note that despite the description/instruction that both alternatives could happen, both the original dataset and the data synthesised by GPT-4 tend to present one plausible and one _implausible_ option. For English XWinograd and XStoryCloze, the majority of the examples in both original and generated examples are evaluated as natural and logically sound. For XWinograd, although more than 47 examples are evaluated to exhibit high text quality and follow commonsense logic, only 23 ChatGPT-generated examples fulfil the requirement that both noun phrases should be interchangeable with the same pronoun. GPT-4 examples demonstrate better consistency, with 36 following this rule, whereas all original examples are found satisfactory. ## 7 Conclusions This paper explores the effectiveness of utilising LLMs for data augmentation in cross-lingual datasets with limited training data. We specifically focus on commonsense reasoning tasks that are challenging for data synthesis. Our experiments, including four LLMs for data generation on three datasets, showcase enhanced cross-lingual zero-shot transfer on smaller fine-tuned task-specific language models. However, the impact varies across different datasets and languages. Notably, larger models such as XLMR-Large, which have higher baselines, demonstrate more difficulty in achieving performance improvements with LLM-generated data. Among the four LLMs, GPT-4-generated data exhibits mostly consistent superior performance. Expanding data generation directly in target languages also shows general improvements compared to cross-lingual zero-shot with the English-generated data. Human evaluation of the synthesised multilingual dataset shows that the ChatGPT and GPT-4 generated data demonstrate high naturalness in most languages, even surpassing the original data. However, in certain languages like TA, both models fail to generate natural text. Additionally, when assessing the logical soundness of the dataset, examples synthesised by ChatGPT reveal notable inconsistencies regarding more plausible options compared to the original human-created data. In contrast, GPT-4 exhibits a performance on par with human-written data. In conclusion, leveraging LLMs for data augmentation shows promise.
However, the choice of LLM used for data generation significantly influences the quality of the resulting data, as well as its applicability to the language under consideration. In circumstances where a more advanced model such as GPT-4 cannot be accessed, other models can be utilised, though this might result in performance difficulties in certain non-English languages - a challenge that also exists for GPT-4 - and concerns regarding the logical coherence. Figure 1: Human evaluation of 50 random examples from the original XCOPA, ChatGPT (top) and GPT-4 (bottom) generated data in target languages, and translation of English generated data. Examples are annotated by two native speakers in each language. The subplots in the last column show the logic issues of the XCOPA data, where the three bars for each language represent _Original_, \(Gen_{XX}\), and \(Gen_{EN}^{Trans}\) (from left to right).
2308.12605
APLA: Additional Perturbation for Latent Noise with Adversarial Training Enables Consistency
Diffusion models have exhibited promising progress in video generation. However, they often struggle to retain consistent details within local regions across frames. One underlying cause is that traditional diffusion models approximate Gaussian noise distribution by utilizing predictive noise, without fully accounting for the impact of inherent information within the input itself. Additionally, these models emphasize the distinction between predictions and references, neglecting information intrinsic to the videos. To address this limitation, inspired by the self-attention mechanism, we propose a novel text-to-video (T2V) generation network structure based on diffusion models, dubbed Additional Perturbation for Latent noise with Adversarial training (APLA). Our approach only necessitates a single video as input and builds upon pre-trained stable diffusion networks. Notably, we introduce an additional compact network, known as the Video Generation Transformer (VGT). This auxiliary component is designed to extract perturbations from the inherent information contained within the input, thereby refining inconsistent pixels during temporal predictions. We leverage a hybrid architecture of transformers and convolutions to compensate for temporal intricacies, enhancing consistency between different frames within the video. Experiments demonstrate a noticeable improvement in the consistency of the generated videos both qualitatively and quantitatively.
Yupu Yao, Shangqi Deng, Zihan Cao, Harry Zhang, Liang-Jian Deng
2023-08-24T07:11:00Z
http://arxiv.org/abs/2308.12605v2
# APLA: Additional Perturbation for Latent Noise with Adversarial Training Enables Consistency ###### Abstract Diffusion models have exhibited promising progress in video generation. However, they often struggle to retain consistent details within local regions across frames. One underlying cause is that traditional diffusion models approximate Gaussian noise distribution by utilizing predictive noise, without fully accounting for the impact of inherent information within the input itself. Additionally, these models emphasize the distinction between predictions and references, neglecting information intrinsic to the videos. To address this limitation, inspired by the self-attention mechanism, we propose a novel text-to-video (T2V) generation network structure based on diffusion models, dubbed Additional Perturbation for Latent noise with Adversarial training (APLA). Our approach only necessitates a single video as input and builds upon pre-trained stable diffusion networks. Notably, we introduce an additional compact network, known as the Video Generation Transformer (VGT). This auxiliary component is designed to extract perturbations from the inherent information contained within the input, thereby refining inconsistent pixels during temporal predictions. We leverage a hybrid architecture of transformers and convolutions to compensate for temporal intricacies, enhancing consistency between different frames within the video. Experiments demonstrate a noticeable improvement in the consistency of the generated videos both qualitatively and quantitatively. 1University of Electronic Science and Technology of China, Chengdu, China 2Massachusetts Institute of Technology, Cambridge, USA [email protected], [email protected], [email protected], [email protected], [email protected] ## Introduction Generating video is a challenging task in computer vision, whose aim is to generate high-fidelity, diverse videos from various inputs like text, images, audio, or sketches. Recent works in deep learning have spurred considerable advancements in this domain, most notably through the advent of diffusion models [14]. These models craft videos by iteratively introducing noise to an initial input, then undoing this noise through a series of denoising stages. Their strength is particularly notable in generating high-definition, extended-duration videos with intricate semantics and dynamics. With the advancing capabilities of diffusion models, cross-modality generation tasks, including text-to-image (T2I) and text-to-video (T2V), have made substantial progress. Yet, the substantial data size of videos presents challenges in training video generation models from scratch. A recent approach, "Tune-A-Video" [26], aimed to utilize pre-trained T2I models for video synthesis. However, its outcomes exhibited inconsistencies across video frames. Ensuring frame consistency in video generation, especially within fine-tuned models, remains a hurdle. While some efforts have achieved acceptable frame consistency using diffusion models [14], intricate video details, especially in complex scenarios, are often absent. For instance, even with identical prompts and inputs, the "Tune-A-Video" approach still manifests inconsistencies in generated videos. Moreover, extending fine-tuning epochs could potentially compromise the semantic coherence between successive frames. In light of these challenges, we propose a novel architecture, _i.e._, APLA.
Using self-attention [21], APLA is geared to capture inherent video characteristics and establish connections between frames by adaptively generating parameters. To facilitate efficient information extraction from inputs, we devise a much smaller model compared to diffusion models, as demonstrated by the success of style transfer and multi-task learning [11]. The result can be observed visually in Fig. 2. Consequently, we engineer a decoder-only structure for our Video Generation Transformer (VGT). With masking and self-attention mechanisms, VGT exhibits an improvement in predicting unknown frames, demonstrating its ability to distill intrinsic input information. Figure 1: **The comparison (by the same prompt: "A man is skiing") between Tune-A-Video and the proposed APLA.** (a) In the result of Tune-A-Video, the snowboard splits into multiple parts across frames. (b) The outcome of our APLA method, which keeps a single snowboard in all frames. We also introduce a novel loss function, hyper-loss, to encourage the model to focus on nuanced input details. Lastly, to further enhance the quality of the generated video and improve the consistency between different frames, we introduce adversarial training to improve the quality of the output while strengthening robustness. The contributions of this work can be summarized as follows: 1. A novel architecture, _i.e._, VGT, builds on top of pre-trained diffusion models, which enhances the consistency between video frames by learning the correlation information between input frames. 2. A fusion of the diffusion model and adversarial training is employed for video generation, where adversarial training is directly applied to the discrepancy in noise distributions, rather than judging the similarity between input and output images. 3. Quantitative and qualitative experiments that demonstrate the effectiveness of the proposed approach, which achieves SOTA performance in frame consistency of generated videos. ## Related Work ### Cross-Modal Video Generation Generating videos using multiple input modalities presents a formidable challenge within the realm of deep learning. A particularly prominent endeavor in video generation is the synthesis of videos through text-to-video (T2V) techniques, which entails creating videos grounded in natural language descriptions. This innovative synthesis process can be conceptualized as an evolution beyond the established domain of text-to-image (T2I) synthesis. In a broader context, models for text-to-image (T2I) synthesis can be systematically categorized into two distinctive classes: transformer-based models and diffusion-based models. The former category, as exemplified by references such as [14], leverages the power of extensively trained large-scale language models such as GPT-3 or T5. These models adeptly transform textual input into latent vectors, which are subsequently employed for downstream image generation. Conversely, the latter category, represented by models like [13], also incorporates a text encoder in a similar vein, yet diverges in its approach by integrating the text-encoded information into the diffusion process. The prowess of diffusion-based models in crafting intricate images stands evident. This capability has been the foundation for their evolution from text-to-image (T2I) synthesis to the more dynamic realm of text-to-video (T2V) synthesis. However, these initial methods had shortcomings like detail deficiency, temporal inconsistencies, and limited control.
Recent approaches have emerged to address these issues and enhance text-to-video (T2V) synthesis. ### Diffusion Model The inception of the Denoising Diffusion Probabilistic Model (DDPM), documented in [11], took inspiration from thermodynamic frameworks. This model, functioning as a Markov chain, exhibits an auto-encoder structure. Nevertheless, its inference was slow owing to the many denoising steps involved. To address this issue, Denoising Diffusion Implicit Models (DDIM) [15] introduced an innovative strategy: by introducing variable variance for the predicted noise, they achieved swift inference for the diffusion model within a small number of steps. Building upon the foundations laid by DDIM, subsequent endeavors [23] aimed to propel DDPM inference to even greater speeds. Broadly speaking, the Diffusion model's key strength lies in its remarkable capacity for handling tasks involving cross-modality and multi-modality interactions. This strength is vividly demonstrated by the Latent Diffusion Model (LDM) [17], which not only exhibited the potential of diffusion for high-resolution image generation through the utilization of latent spaces but also showcased the model's ability to excel in cross-modality scenarios. The application of the Diffusion model to video generation has prompted various approaches [11]. Among these, Tune-A-Video [22] stands out for introducing a novel perspective on video generation via diffusion modeling. This innovative methodology views video generation as a process of refining a pre-trained stable diffusion model. In doing so, Tune-A-Video reshapes our understanding of how diffusion models can be harnessed to address the challenges of video generation with a fresh and effective approach. ## Methodology In this section, we begin by presenting the overall structure of APLA. Subsequently, we delve into the details of VGT, designed to extract intrinsic information. Notably, we introduce two versions of VGT, each showcasing distinct advantages in the experiment. Figure 2: Visual demonstrations of APLA using different prompts. Following this, we discuss hyper-loss and introduce our adversarial training strategy. ### APLA To enhance inter-frame consistency, alongside optimizing high-quality outputs, it is imperative to account for the inter-connections among distinct frames. While prior research assumed that inherent information could be naturally grasped by the model without supplementary steps, the challenge becomes more pronounced in complex tasks like video generation, where depicting high-level temporal features during inference proves more intricate compared to image generation. In light of this, we introduce a novel architecture (depicted in Fig. 3) that builds upon the diffusion model and integrates an additional module. This module is specifically designed to capture intrinsic information and foster inter-frame connections within the temporal domain. This approach sets us apart from previous methods and addresses the nuances presented by video generation. A visual comparison between APLA and Tune-A-Video is illustrated in Fig. 1. Specifically, the added module incorporates a self-attention mechanism aimed at extracting information directly from inputs, all without introducing any additional loss. Our hypothesis is rooted in the potential of self-attention mechanisms to gather relevant details from the input itself.
This implies the ability to dynamically generate parameters based on input, empowering the model with strong inductive capabilities. To ensure content consistency with the introduced module, we propose that the module's output should manifest as subtle perturbations, significantly smaller in magnitude than the output of the pre-trained model (referred to as U-Net in this paper). The evolving ratio of perturbation to U-Net output over epochs can be visually observed in Fig. 5(b). Let the input be denoted as \(x\in\mathbb{R}^{B\times H\times W\times F\times C}\), where \(B\), \(H\), \(W\), \(F\), and \(C\) represent batch, height, width, frame, and channel dimensions, respectively. We define the encoder as \(\mathcal{E}\), the decoder as \(\mathcal{D}\), and \(\tilde{x}=\mathcal{D}(z)=\mathcal{D}(\mathcal{E}(x))\) signifies the intended outcome. Furthermore, considering the process in the latent space, let \(Z\) denote the latent space with \(z\in Z\), where \(z\in\mathbb{R}^{B\times h\times w\times F\times c}\). The diffusion process and U-Net are represented by \(\phi\) and \(\pi\) respectively. The term \(z_{T}\) in the denoising stage signifies the latent variable's evolution over \(T\) steps with added noise. A pivotal addition is our module, termed Video Generation Transformer (VGT). Denoted as \(\varphi\), it encapsulates an abstract function. The series of denoising autoencoders is represented as \(\epsilon_{\theta}(x_{t},t)\), with \(t=1,\ldots,T\), where \(t\) corresponds to the specific step in the sequence. Then, we have \[z_{t}=\phi(z_{t-1},t-1), \tag{1}\] Figure 3: **The process of training the networks. VGT extracts intrinsic information from latent variables, considering various time steps for noise incorporation, and especially including the clean latent variable devoid of noise, namely the original latent variable \(z\). As VGT is never pre-trained, its output is tiny; thus the change to the output is small, which helps improve the consistency of different frames without changing the content a lot. The discriminator receives the predicted noise and the noise residuals for corresponding time steps in the diffusion stage.** \[\hat{z}_{T-1}=\pi(z_{T},T). \tag{2}\] \[\hat{z}_{t-1}=\pi(\hat{z}_{t},t), \tag{3}\] where \(\hat{z}_{t}\) represents the predicted output of the denoising U-Net at the \(t\)-th step. In fact, we add the perturbation to the U-Net, which aims to capture intrinsic information: \[\hat{z}_{t-1}^{*}=\pi(\hat{z}_{t},t)+\varphi(z_{t},t), \tag{4}\] thus the original objective function: \[\mathcal{L}_{MSE}:=\mathbb{E}_{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t}\left[ \left\|\epsilon-\epsilon_{\theta}\left(\hat{z}_{t},t\right)\right\|_{2}^{2} \right], \tag{5}\] can be rewritten as: \[\mathcal{L}_{MSE}:=\mathbb{E}_{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t}\left[ \left\|\epsilon-\epsilon_{\theta}\left(\hat{z}_{t}^{*},t\right)\right\|_{2}^{2 }\right]. \tag{6}\] For clarity, the intuitive pseudocode is illustrated in Algorithm 1. With the adversarial training that enhances the robustness and quality of the generator output, a discriminator is set to receive the predicted noise and the noise residuals of the corresponding step. More elaborate discussions are presented in the following sections. ### Video Generation Transformer (VGT) We introduce the proposed VGT, designed as a decoder-only architecture. With self-attention mechanisms, this Transformer efficiently focuses on input features.
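Before detailing the VGT internals, the perturbed objective of Eqs. (4) and (6) can be made concrete. A minimal PyTorch-style sketch of one training step, under the simplified reading that the perturbation is added to the noise prediction; `unet` and `vgt` stand for \(\pi\) and \(\varphi\), and `scheduler` is assumed to follow the diffusers-style `add_noise` API:

```python
import torch
import torch.nn.functional as F

def training_step(unet, vgt, z0, t, scheduler):
    """One denoising-training step with the VGT perturbation (cf. Eq. 4)."""
    eps = torch.randn_like(z0)              # target Gaussian noise
    z_t = scheduler.add_noise(z0, eps, t)   # forward diffusion to step t
    eps_pred = unet(z_t, t) + vgt(z_t, t)   # U-Net output + small perturbation
    return F.mse_loss(eps_pred, eps)        # MSE term of Eq. (6)
```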
Compared to pure encoder-based structures like BERT [4], the pure Transformer decoder architecture [13] accommodates more tokens, boosting its processing capacity. The design of the decoder within the Transformer stands out as a unique case in the realm of autoregressive models [14], showcasing potential in unsupervised time series prediction [1]. Besides, this design empowers the Transformer decoder to aptly extract temporal information from input data. For tasks involving time sequences, the self-attention mechanisms of the Transformer decoder enhance output distribution coherence by extracting contextual insights. Notably, even with incomplete sequence inputs, Transformer decoders promote model diversity [1], thereby bolstering generalization capabilities. Nonetheless, in previous video generation models employing Transformer architectures, like the Video Vision Transformer (ViViT) [1], solely the Transformer encoder is utilized to derive a latent variable. This variable is subsequently fed into another network, often a classifier, to fulfill the downstream task. Consequently, ViViT lacks the direct capability to generate videos. It is precisely due to this limitation that we introduce the Video Generation Transformer (VGT). Our framework aims to reconstruct or generate videos, leading to the proposal of two distinct VGT variants. The first is a pure Transformer decoder approach, referred to as VGT-Pure, while the second combines self-attention with 3D convolution, termed VGT-Hyper. Figure 4: **An illustration of VGT-Pure and VGT-Hyper.** The left side shows the transformer decoder structure, which notably adopts a mask operation on the self-attention mechanism. The right side shows the two versions of VGT. The Temporal Transformer Decoder only receives the class (_i.e._, _cls_) token of the output sequences of the Spatial Transformer Decoder. In VGT-Pure, the remaining tokens of the Spatial Transformer Decoder output are multiplied with the tokens of the Temporal Transformer Decoder output excluding the _cls_ token, while in VGT-Hyper the whole output of the Temporal Transformer Decoder is transmitted directly to the Transposed Convolution Block. ### VGT-Pure The initial model variant is a pure Transformer decoder. We denote the input sequence in the \(\ell\)-th layer as \(\mathbf{z}_{s}^{\ell}\) and \(\mathbf{z}_{t}^{\ell}\), where \(s\) and \(t\) stand for spatial and temporal aspects respectively. Furthermore, we define a token as \(\mathbf{z}_{cls,s}^{k,\ell}\), with "cls" signifying the class token, and \(k\) representing the \(k\)-th token in the sequence, excluding the class token. In this context, we replace Multi-Headed Self-Attention (MSA) with Masked Multi-Headed Self-Attention (MMSA). Likewise, each transformer block encompasses layer normalization (LN). The spatial decoder block can be succinctly represented as follows: \[\mathbf{z}_{s}^{\ell+1}=\mathtt{MMSA}(\mathtt{LN}(\mathbf{z}_{s}^{\ell}))+ \mathbf{z}_{s}^{\ell}, \tag{7}\] and similarly the temporal decoder block as: \[\mathbf{z}_{t}^{\ell+1}=\mathtt{MMSA}(\mathtt{LN}(\mathbf{z}_{t}^{\ell}))+ \mathbf{z}_{t}^{\ell}. \tag{8}\] We consider \(L\) layers of the spatial decoder in all, and \(\mathbf{z}_{s}^{L}\) can be represented as: \[\mathbf{z}_{s}^{L}=\left[\mathbf{z}_{cls,s}^{L},\mathbf{z}_{s}^{1,L},\mathbf{ z}_{s}^{2,L},\dots,\mathbf{z}_{s}^{F,L}\right]+\mathbf{p}, \tag{9}\] where \(\mathbf{p}\) denotes the positional embedding, and \(\mathbf{z}\) is divided into tokens.
Among these tokens, the first one is \(\mathbf{z}_{cls,s}\in\mathbb{R}^{F\times(H_{patch}\times W_{patch})\times L}\), serving as a compact feature commonly used for categorical embedding. Here, \(H_{patch}\) represents the patch's height, and \(W_{patch}\) is the width of a patch. When we split \(\mathbf{z}_{cls,s}\) along the temporal frame dimension, we obtain individual tokens \(\mathbf{z}_{cls,s}^{1,L},\mathbf{z}_{cls,s}^{2,L},\dots,\mathbf{z}_{cls,s}^{F,L}\), where \(\mathbf{z}_{cls,s}^{i,L}\in\mathbb{R}^{1\times(H_{patch}\times W_{patch})\times L}\) for \(i=1,2,\dots,F\). Simultaneously, we maintain the collection \(\left[\mathbf{z}_{s}^{1,L},\mathbf{z}_{s}^{2,L},\dots,\mathbf{z}_{s}^{F,L}\right]\) as \(\mathbf{z}_{s}^{C,L}\), facilitating skip connections. This arrangement leads us to use \(\mathbf{z}_{cls,s}^{L}\) as the input for the temporal decoder, setting it apart from the rest. This ultimately yields the expression for \(\mathbf{z}_{t}^{1}\): \[\mathbf{z}_{t}^{1}=\left[\mathbf{z}_{cls,t}^{1},\mathbf{D}\mathbf{z}_{cls,s}^ {1,L},\mathbf{D}\mathbf{z}_{cls,s}^{2,L},\dots,\mathbf{D}\mathbf{z}_{cls,s}^{ F,L}\right]+\mathbf{p}, \tag{10}\] where \(\mathbf{D}\) is the decoder block. Suppose we have \(M\) layers of the temporal decoder in total. Similarly, the output of the \(M\)-th layer can be written as: \[\mathbf{z}_{t}^{M}=\left[\mathbf{z}_{cls,t}^{M},\mathbf{z}_{cls,t}^{1,M}, \mathbf{z}_{cls,t}^{2,M},\dots,\mathbf{z}_{cls,t}^{F,M}\right]+\mathbf{p}, \tag{11}\] Similarly, we denote \(\left[\mathbf{z}_{cls,t}^{1,M},\mathbf{z}_{cls,t}^{2,M},\dots,\mathbf{z}_{cls,t}^{F,M}\right]\) as \(\mathbf{z}_{t}^{C,M}\); then the output of VGT-Pure can be written as: \[\mathbf{y}=\mathbf{z}_{s}^{C,L}\odot\mathbf{z}_{t}^{C,M}, \tag{12}\] \begin{table} \begin{tabular}{l|c|c} \hline \hline \multicolumn{3}{c}{**VGT version**} \\ \hline **Version** & **PSNR** & **Trainable Parameters** \\ \hline VGT-Pure & 52.746 & **60.362M** \\ VGT-Hyper & **58.552** & 97.136M \\ VGT-Pure-EN & 54.236 & **60.362M** \\ VGT-Hyper-EN & 42.736 & 97.136M \\ \hline \hline \end{tabular} \end{table} Table 1: The different versions of VGT, where "EN" indicates that a transformer encoder is used instead of the decoder, i.e., without the mask operation. We compare the PSNR of the reconstruction quality, with a single randomly generated video as input. We also compare the trainable parameters of the different variants: VGT-Pure and VGT-Pure-EN have the fewest trainable parameters, since the mask operation does not change the number of trainable parameters. Figure 5: (a) is the comparison of different versions of VGT. "EN" represents the use of a transformer encoder instead of a decoder, which means the mask operation was not included. As the picture shows, VGT-Hyper performs the best while the encoder version of VGT-Hyper performs the worst. For VGT-Pure, the encoder version performs similarly to the decoder version, while the performance of the two versions is between VGT-Hyper and VGT-Hyper-EN. (b) shows the ratio of the VGT output to the U-Net output in the denoising step. The result shows that the norm of the VGT output is very small compared with the U-Net output, indicating that the output of VGT does not change the original output much while improving the consistency of different frames. \[\hat{\mathbf{z}}_{t}=\texttt{MLP}(\mathbf{y}), \tag{13}\]
where \(\odot\) represents the Hadamard product, MLP represents a multilayer perceptron, and \(t\) is the \(t\)-th step. We get \(\hat{\mathbf{z}}_{t}\in\mathbb{R}^{B\times F\times(H_{patch}\times W_{patch})\times (C\times P\times P)}\); then we rearrange \(\hat{\mathbf{z}}_{t}\) into \(\varphi(z_{t})\), where \(\varphi(z_{t})\in\mathbb{R}^{B\times F\times C\times(H_{patch}\times P)\times(W_{patch }\times P)}\), \(\varphi(\cdot)\) is the function representation of VGT-Pure, \(z_{t}\) is the input of VGT-Pure, and \((H_{patch}\times P)=H\) and \((W_{patch}\times P)=W\). ### VGT-Hyper In this section, we introduce the second variant of VGT, named VGT-Hyper, which leverages 3D convolution (Tran et al., 2015). In particular, in Eq. 12, rather than employing element-wise multiplication, we opt for 3D convolution. We represent the convolution block with the matrix \(\mathbf{M}\), leading to the following expression: \[\mathbf{y}^{*}=\mathbf{M}\mathbf{z}_{t}^{M}, \tag{14}\] \[\hat{\mathbf{z}}_{t}=\texttt{MLP}(\mathbf{y}^{*}). \tag{15}\] Unlike VGT-Pure, VGT-Hyper demonstrates superior performance in the reconstruction task, as indicated in Tab. 1, all the while maintaining a higher number of trainable parameters. VGT-Hyper capitalizes on the benefits inherent in a transformer decoder, underscoring the efficacy of the mask operation for time series tasks, as depicted in Fig. 5(a). ### Hyper-Loss for Latent Noise Fitting Recognizing the limitations of Mean Squared Error (MSE) for certain generative tasks (Zhang et al., 2018), we introduce a novel loss function tailored for video generation. We adopt a perceptual loss approach within the diffusion model, akin to prior studies (Lugmayr et al., 2022). In detail, we formalize the \(\ell_{1}\) loss and the perceptual loss separately as follows: \[\mathcal{L}_{L1}:=\mathbb{E}_{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t}\left[ \left\|\epsilon-\epsilon_{\theta}\left(z_{t}^{*},t\right)\right\|_{1}\right], \tag{16}\] and \[\mathcal{L}_{per}:=\mathbb{E}_{\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t}\left[ dist_{per}(\epsilon,\epsilon_{\theta}\left(z_{t}^{*},t\right))\right]. \tag{17}\] Expanding upon this, we incorporate a hyper-loss that encompasses the weighted combination of the Mean Squared Error (MSE), the \(\ell_{1}\) loss, and the perceptual loss. The \(\ell_{1}\) loss serves as a regularization term to promote sparsity in the solution, while the perceptual loss encourages the model to generate more photorealistic images. This concept is illustrated as follows: \[\mathcal{L}_{hyper}:=\alpha\cdot\mathcal{L}_{MSE}+\beta\cdot\mathcal{L}_{L1}+\gamma\cdot \mathcal{L}_{per}, \tag{18}\] where \(\alpha\), \(\beta\) and \(\gamma\) represent the weights. ### Adversarial Training with 1\(\times\)1 Convolution In our approach, we view adversarial training as a valuable form of regularization. For video generation, the distinction between integrating Generative Adversarial Networks (GANs) and employing perceptual loss lies in their treatment of temporal information. Perceptual loss is primarily concerned with the structural attributes of individual frames, whereas reconstruction loss focuses on pixel-level closeness. In contrast, GAN loss centers around maintaining consistency across frames, promoting temporal coherence. Besides, it is important to recognize that GANs have a propensity for capturing global information by treating all frames holistically. This approach leads to an enhancement in video quality through adversarial training. The discriminator in APLA receives the output of the generator, namely the predicted noise, to compare with the noise residual obtained in the diffusion process.
More concretely, the diffusion process takes \(T\) steps, while the denoising process aims to invert it by predicting the noise residual, i.e., the difference between the \(t\)-th and \((t-1)\)-th steps, namely the added noise. In the denoising process, for instance, the generator predicts the \(t\)-th step noise residual; the discriminator then receives the corresponding \(t\)-th step noise from the diffusion process, aiming to reduce the distance between the two noise distributions (noise residual and predicted noise residual). The proposed discriminator structure is streamlined, comprising only a 1\(\times\)1 convolutional layer. This kernel comprehensively considers frame positional data, aiding temporal similarity extraction.
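A minimal sketch of such a 1\(\times\)1-convolution discriminator, assuming latents shaped (batch, channels, frames, height, width); the class name and interface are ours, not the released code:

```python
import torch
import torch.nn as nn

class NoiseDiscriminator(nn.Module):
    """1x1 convolutional discriminator over (predicted) noise residuals."""
    def __init__(self, channels):
        super().__init__()
        # kernel_size=1 mixes channels only, preserving per-frame positions
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, noise):
        # per-position real/fake probability for the given noise tensor
        return torch.sigmoid(self.score(noise))
```

The generator (U-Net plus VGT) is then trained against this critic, which is fed matched pairs of diffusion-step noise and predicted noise, as formalized next.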
This metric serves as an indicator of the semantic coherence within the generated videos. In terms of frame consistency, we employ the flow consistency index (FCI) for comparison. UnlikeVarghese et al. (2020), we directly compute the flow field between consecutive frames, independent of the input video. Specifically, we determine the optical flow field between two adjacent frames, assess alterations in the local optical flow field concerning each pixel's value and its domain, and subsequently average all the computed changes. The results underscore that, in comparison to our baseline model, enhancements are observed in both content consistency and frame consistency. ### Ablation Studies We conduct ablation studies to assess the importance of different components of ALVA, as shown in Tab. 3. The proposed full APLA model performs the best considering context consistency and frame consistency together. Without some components, APLA's performance degrades but still performs better than Tune-A-Video. We also discuss the influence of the number of epochs. From observing Tab. 3, we see that with the epoch increasing, the quantitative score is increasing. However, from the visual result, we see that too many epochs can cause overfitting, which destroys the result influenced by the prompt. Without the Discriminator, as the number of epochs increases, it is easy to fall into local minima which decreases the CLIP score and FCI. For w\o VGT&Discriminator, although the final FCI is decent, it cannot retain the semantic consistency and needs too many epochs to reach a good result. For w\o HyperLoss&Discriminator, the single VGT can just reach a normal level and it is hard for it to approach a better score because of the limitation of convergence. As w\o VGT&Hyper-Loss, the model performance is close to w\o Discriminator and even better. However, it still needs too many epochs to reach such a good result. ## Conclusion In this study, we introduce APLA, which includes a compact module for capturing intrinsic or temporal information, and the novel VGT architecture, a pure transformer decoder similar to GPT. To fortify the robustness and quality \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multicolumn{5}{c}{**Frame Consistency**} \\ \hline **Method** & **CLIP Score** & **FCI** & **CLIP Score (1500 epochs)** & **FCI (1500 epochs)** \\ \hline Full Model (ours) & **96.21** & 0.2764 & **96.76** & 0.2470 \\ w\o Discriminator & 94.42 & 0.2714 & 93.70 & 0.2178 \\ w\o VGT\&Discriminator & 91.44 & **0.1918** & 96.13 & 0.2655 \\ w\o Hyper-Loss\&Discriminator & 93.97 & 0.2476 & 93.06 & 0.2588 \\ w\o VGT\&Hyper-Loss & 94.83 & 0.2534 & 96.38 & **0.2172** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies on APLA’s different components. Compared via CLIP score and FCI on each model variation trained with 750 epochs(default) and 1500 epochs respectively. of our APLA model, we employ adversarial training during its training process. Through experiments, our model achieves state-of-the-art performance in video reconstruction and videos from textual prompts (T2V).
2303.00173
BP-NTT: Fast and Compact in-SRAM Number Theoretic Transform with Bit-Parallel Modular Multiplication
Number Theoretic Transform (NTT) is an essential mathematical tool for computing polynomial multiplication in promising lattice-based cryptography. However, costly division operations and complex data dependencies make efficient and flexible hardware design to be challenging, especially on resource-constrained edge devices. Existing approaches either focus on only limited parameter settings or impose substantial hardware overhead. In this paper, we introduce a hardware-algorithm methodology to efficiently accelerate NTT in various settings using in-cache computing. By leveraging an optimized bit-parallel modular multiplication and introducing costless shift operations, our proposed solution provides up to 29x higher throughput-per-area and 2.8-100x better throughput-per-area-per-joule compared to the state-of-the-art.
Jingyao Zhang, Mohsen Imani, Elaheh Sadredini
2023-03-01T02:02:47Z
http://arxiv.org/abs/2303.00173v3
# BP-NTT: Fast and Compact in-SRAM Number Theoretic Transform with Bit-Parallel Modular Multiplication ###### Abstract Number Theoretic Transform (NTT) is an essential mathematical tool for computing polynomial multiplication in promising lattice-based cryptography. However, costly division operations and complex data dependencies make efficient and flexible hardware design challenging, especially on resource-constrained edge devices. Existing approaches either focus on only limited parameter settings or impose substantial hardware overhead. In this paper, we introduce a hardware-algorithm methodology to efficiently accelerate NTT in various settings using in-cache computing. By leveraging an optimized bit-parallel modular multiplication and introducing costless shift operations, our proposed solution provides up to 29\(\times\) higher throughput-per-area and 10-138\(\times\) better throughput-per-power compared to the state-of-the-art. ## I Introduction With the rise of cloud computing and the Internet of Things (IoT), concerns about data privacy and security are escalating, especially for vulnerable edge devices. Lattice-based cryptography is the most promising candidate to serve as the foundation of future information security due to its superior balance of security and operational speed. Currently, three of the four NIST-standardized post-quantum cryptography algorithms (PQCs) [1] and almost all homomorphic encryption (HE) schemes are based on lattice-based cryptography. Typically, lattice-based cryptography is based on the hardness of two problems: module learning with errors (for PQCs) and ring learning with errors (for HEs). The algorithms based on these two problems involve polynomial operations, such as modular addition and modular multiplication. With a complexity of \(O(N^{2})\), the modular multiplication of polynomials is the most time-consuming operation. To mitigate this computing bottleneck, the number-theoretic transform (NTT) is commonly employed to accelerate polynomial modular multiplication using the principles of the Fast Fourier Transform (FFT), which lowers the computing complexity of polynomial multiplication to \(O(N\log N)\). However, the complicated data dependencies among different NTT stages and the required division operations make efficient hardware-based acceleration challenging. To accelerate NTT effectively, ASIC/FPGA-based hardware acceleration designs have been proposed [2, 3, 4]. Although performance is enhanced, they still suffer from frequent data movement between processing components and memory, which inhibits further performance growth. To eliminate the data movement bottleneck, in-memory computing techniques for cryptography algorithms have been proposed [5, 6, 7, 8, 9, 10]. The challenges with existing solutions are that they (1) expand the trusted computing base to off-chip memories [8], thus introducing security vulnerabilities; (2) introduce complex peripheral circuits [8, 9], thus incurring high area overhead; or (3) are specialized only for NTT processing [10], thus sacrificing generality and flexibility. These restrictions make it even more challenging to enable secure computing on vulnerable and resource-constrained edge devices.
To analyze the computational bottleneck of the NTT, modular multiplication, and reduction (these kernels account for more than 50% of the computation in PQC algorithms based on our profiling on CPU) and to answer the question of _"where is the best place to compute in the memory hierarchy?"_, we first generate the roofline model of lattice-based cryptography algorithms, such as CRYSTALS-Dilithium [11] and CRYSTALS-Kyber [12], using Intel Advisor [13] (Fig. 1). Our observation is that the main performance bottleneck for these kernels is the L1 and L2 cache bandwidth; they are not bounded by off-chip memory bandwidth. Based on these insights, we re-purpose existing on-chip 6T SRAM arrays into large vector computation units and co-design them with a novel bit-parallel modular multiplication algorithm and our proposed implicit shift operations to enable energy-efficient, fast, and low-overhead NTT acceleration, especially for IoT devices. Our proposed solution, BP-NTT (Bit-Parallel NTT), addresses the inefficiency of existing schemes while preserving security and flexibility. Since only the chip itself can be considered a trusted computing base, with any off-chip data requiring encryption [14], our proposed solution provides data confidentiality by not offloading plaintext to off-chip memories. Enabled by our proposed data organization and modular multiplication, a single 256\(\times\)256 SRAM subarray in the BP-NTT design can support up to a 250-point polynomial with 256-bit coefficients or a 4500-point polynomial with 14-bit coefficients, which covers the requirements of lattice-based cryptography schemes.
Fig. 1: Roofline model for lattice-based cryptography.
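The key algorithmic ingredient is a modular multiplication that avoids division entirely, using only additions, shifts, and conditional subtractions — primitives that map naturally onto SRAM bitline operations. The scalar Python sketch below shows the double-and-add-with-conditional-subtraction idea; it is an illustration of the principle only, not the paper's bit-parallel in-SRAM implementation, which carries out many such operations across bitlines in parallel.

```python
def modmul(a, b, q):
    """Division-free modular multiplication: scan the bits of b from MSB to
    LSB, doubling the accumulator (a shift) and adding a when the bit is set.
    The accumulator stays below q using conditional subtraction only."""
    assert 0 <= a < q and 0 <= b < q
    acc = 0
    for i in reversed(range(b.bit_length())):
        acc <<= 1                 # costless shift in hardware
        if acc >= q:
            acc -= q              # conditional subtraction, never a division
        if (b >> i) & 1:
            acc += a
            if acc >= q:
                acc -= q
    return acc

assert modmul(123, 200, 257) == (123 * 200) % 257
```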
2304.08588
Designing Policies for Truth: Combating Misinformation with Transparency and Information Design
Misinformation has become a growing issue on online social platforms (OSPs), especially during elections or pandemics. To combat this, OSPs have implemented various policies, such as tagging, to notify users about potentially misleading information. However, these policies are often transparent and therefore susceptible to being exploited by content creators, who may not be willing to invest effort into producing authentic content, causing the viral spread of misinformation. Instead of mitigating the reach of existing misinformation, this work focuses on a solution of prevention, aiming to stop the spread of misinformation before it has a chance to gain momentum. We propose a Bayesian persuaded branching process ($\operatorname{BP}^2$) to model the strategic interactions among the OSP, the content creator, and the user. The misinformation spread on OSP is modeled by a multi-type branching process, where users' positive and negative comments influence the misinformation spreading. Using a Lagrangian induced by Bayesian plausibility, we characterize the OSP's optimal policy under the perfect Bayesian equilibrium. The convexity of the Lagrangian implies that the OSP's optimal policy is simply the fully informative tagging policy: revealing the content's accuracy to the user. Such a tagging policy solicits the best effort from the content creator in reducing misinformation, even though the OSP exerts no direct control over the content creator. We corroborate our findings using numerical simulations.
Ya-Ting Yang, Tao Li, Quanyan Zhu
2023-04-17T20:08:33Z
http://arxiv.org/abs/2304.08588v2
# Designing Policies for Truth: Combating Misinformation with Transparency and Information Design ###### Abstract Misinformation has become a growing issue on online social platforms (OSPs), especially during elections or pandemics. To combat this, OSPs have implemented various policies, such as tagging, to notify users about potentially misleading information. However, these policies are often transparent and therefore susceptible to being exploited by content creators, who may not be willing to invest effort into producing authentic content, causing the viral spread of misinformation. Instead of mitigating the reach of existing misinformation, this work focuses on a solution of prevention, aiming to stop the spread of misinformation before it has a chance to gain momentum. We propose a Bayesian persuaded branching process (\(\mathrm{BP}^{2}\)) to model the strategic interactions among the OSP, the content creator, and the user. The misinformation spread on the OSP is modeled by a multi-type branching process, where users' positive and negative comments influence the misinformation spreading. Using a Lagrangian induced by Bayesian plausibility, we characterize the OSP's optimal policy under the perfect Bayesian equilibrium. The convexity of the Lagrangian implies that the OSP's optimal policy is simply the fully informative tagging policy: revealing the content's accuracy to the user. Such a tagging policy solicits the best effort from the content creator in reducing misinformation, even though the OSP exerts no direct control over the content creator. We corroborate our findings using numerical simulations. ## I Introduction Misinformation has become a growing concern on online social platforms (OSPs), as false information can spread rapidly and have significant consequences [1]. For instance, false stories about candidates were shared widely through OSPs during the 2016 US presidential election; misinformation about the virus, mask-wearing policies, and vaccine concerns spread through social networks during the COVID-19 pandemic. To address this issue, OSPs have implemented policies such as labeling, tagging, or notifying to alert users to potentially false or misleading information [2, 3]. Previous studies have shown that these policies are effective (to some extent) in curbing the spread of misinformation [4]. However, as mandated by related regulations or ethical standards [5, 6], these policies are often transparent, meaning that they are publicly announced to the content creators (creators) and users. Aware of the OSP's policies, creators take advantage of this transparency by spending the least possible effort to make their posts look as trustworthy as possible so as to pass the screening. On the other hand, OSPs are constantly upgrading their policies to combat such tactics. As the two parties fall into an endless arms race caused by the conflict of interests, it is natural to ask: _Do the two reach an equilibrium in the end? Does the transparency requirement give content creators the upper hand?_ To address these questions, we propose a persuasion game that captures the interactions among the OSP, the content creator, and the user, as illustrated in Fig. 1. The OSP designs a tagging policy whose realized tags indicate the content accuracy of an arbitrary post. Such a policy does not directly control any decisions or utilities but influences others' behaviors through information provision.
Hence, this tagging policy is referred to as the information structure [7], and the OSP's problem is termed information design. Fully aware of this policy, the content creator exerts a private effort in creating the content, with the assumption that the more effort exerted, the more accurate the content is. Finally, the user observes the tagging policy and the realized tags, then decides on their views and comments. The OSP aims to persuade the user not to facilitate the misinformation circulation and to incentivize the content creator to spend the highest possible effort (i.e., not to create misinformation). The proposed model differs from the seminal Bayesian persuasion model [8] in that the user cannot directly observe the prior distribution. As a result, the receiver must form a conjecture about the content creator's behavior to update their beliefs. This conjecture must be consistent with the agent's equilibrium behavior, which leads to the concept of perfect Bayesian equilibrium (PBE) as the natural solution concept for our game.
Fig. 1: The Bayesian Persuaded Branching Process. The OSP first commits to an information structure \(\pi\), followed by a private effort \(\lambda\) exerted by the content creator, influencing the distribution of true/fake posts \(\omega\). The users offer positive/negative comments to the post after observing the realized tag/label \(s\), and then forward it to others.
In addition, the user's action (e.g., comment and share) might influence the trends on the social platform [9]. Hence, we use the branching process (BP) to capture misinformation spreading [10], which affects the content creator's reputation and the OSP's payoff. This research demonstrates that a simple information structure can be a powerful tool in combating misinformation spread. By adopting a fully informative policy, such as using tagging to indicate content accuracy, content creators are incentivized to produce trustworthy material. Although the OSP may not have direct control over content creators, it can nudge user perceptions through the information structure. The collective behaviors of users, under these perceptions, determine the content creators' reputations, effectively making users the OSP's proxy in terms of incentive provision. Our contributions can be summarized as follows. * We propose a three-player persuasion game to capture the interactions induced by the conflict of interests between the OSP, the content creator, and the user, where the multi-type branching process is utilized to model the spread of misinformation content; * We develop a Lagrangian approach to identify players' strategies under the perfect Bayesian equilibrium, which is known to be challenging to solve [11, 12, 13]. Through Bayesian plausibility, we transform the equilibrium problem into the posterior belief space and develop an equality-constrained nonlinear program associated with the equilibria, facilitating the study of the optimal information structure. **Related works** Existing research on misinformation typically focuses on a finite set of players connected by a graph, with the reliability of articles, news, and content drawn from a known distribution [14, 15]. For instance, [14] has proposed a model for online sharing behavior of fully Bayesian users under potential misinformation and studies the impact of network structure, demonstrating that social media platforms that aim to maximize engagement might help propagate misinformation.
[15] has investigated how the platform should design a signaling mechanism to influence users' engagement, with the aim of either maximizing engagement or minimizing misinformation. In contrast, our approach considers the population-wide effects of misinformation circulation. Specifically, we analyze the proportion of individuals receiving negative comments among all receivers using branching processes, which have been shown to match the statistical characteristics of information cascades well [10] and have been utilized in studying the determinants behind misinformation spreading [16]. Rather than analyzing misinformation circulation [17] through branching processes, we aim to prevent it from being created in the first place. To study this preemptive solution, we introduce a third player, i.e., the content creator, into the Bayesian persuasion framework, where the OSP incentivizes the content creator to produce accurate content. The ultimate goal is to curb misinformation spread by promoting the creation of truthful and reliable content. ## II Online Misinformation Circulation: Modeling and Information Design This section introduces a three-player persuasion game that models the interactions among the online social platform, the content creators, and the users. Naturally, misinformation circulation on an OSP involves a population of content creators and users. To simplify the exposition, we consider a representative creator and a population of users with identical utilities. We pick a representative user, referred to as "the user," when discussing strategic reasoning in the persuasion game, as the population all share the same interest. In contrast, we consider "users" when treating population-level misinformation dissemination using branching processes. ### _The Bayesian Persuaded Branching Processes Model_ In the persuasion game, the OSP designs a tagging policy about a state variable that reflects the accuracy of the content of the post. Fully aware of this tagging policy, the content creator then exerts a private effort, which is unobservable to both the OSP and the user, to determine the distribution of the accuracy of the content. Less effort leads to more misinformation prevailing over social media. Finally, the user takes action by commenting on the post and sharing it after knowing the tagging policy and observing the tag realization. Note that the state variable remains hidden from the user throughout the game, as individuals lack the necessary resources to verify the authenticity. The action taken by the user results in a _trend_ (negative or positive about the post) in social media. To understand this notion, we consider a multi-type branching process (introduced later in Section III-A). Denote by \(X(t)\) the number of users who have just received the post with a negative comment at time \(t\) (\(x\)-type users). Similarly, \(Y(t)\) denotes the number of users who have received a positive comment (\(y\)-type users). After reading the received post, users forward it to some of their followers/friends with their own (either negative or positive) comments, producing "offspring" (the new \(x/y\)-type users). The trend is measured through the proportion of negative comments over all the comments: \(\eta(t)=X(t)/(X(t)+Y(t))\). For the rest of the paper, we refer to the OSP, the content creator, and the user as the sender, the agent, and the receiver, respectively, following the convention in the persuasion literature [8].
To summarize the discussion above, the persuasion game is given by the tuple \(\langle\Omega,\Sigma,\Lambda,p,\mathcal{A},\eta^{*},u_{S},u_{A},u_{R}\rangle\), where 1. \(\Omega\) is the state space endowed with its Borel algebra, and \(\omega\in\Omega\) reflects how accurate the content of the post is; 2. \(\Sigma\) is the signal space of the sender (with its Borel algebra), and \(s\in\Sigma\) denotes the tag associated with the post; 3. \(\Lambda\) is the action set of the agent, and each \(\lambda\in\Lambda\) represents how much effort the agent exerts in producing trustworthy content; 4. \(p:\Lambda\rightarrow\Delta(\Omega)\) is the control function of the agent, whose effort \(\lambda\) is turned into the state distribution \(p(\cdot|\lambda)\) over the accuracy of the content \(\Omega\); 5. \(\mathcal{A}\) is the action set of the receiver, which is the continuum \([0,1]\), and \(a\in\mathcal{A}\) denotes the probability of offering a positive comment; 6. \(\eta^{*}\) is the limit of the proportion of negative comments \(\eta(t)\) as \(t\rightarrow\infty\) obtained from the stabilized branching process, which is related to the reputation of the agent and the impact of misinformation spreading; 7. \(u_{S}:\Omega\times\mathcal{A}\rightarrow\mathbb{R}\), \(u_{A}:\mathcal{A}\times\Lambda\rightarrow\mathbb{R}\), \(u_{R}:\Omega\times\mathcal{A}\rightarrow\mathbb{R}\) are the utility functions of the sender, the agent, and the receiver, respectively. The definitions of these utilities are as follows. _The Receiver's Utility._ To minimize the mismatch between the comment and the truth, the receiver's utility is \(u_{R}(\omega,a)=-(a-\omega)^{2}\). Suppose that the receiver believes that the state variable is subject to \(\mu\in\Delta(\Omega)\); their best response under this belief is \[a^{*}(\mu)=\operatorname*{arg\,max}_{a\in[0,1]}\mathbb{E}_{\omega\sim\mu}-(a-\omega)^{2}=\mathbb{E}_{\mu}[\omega]. \tag{1}\] _The Agent's Utility._ The agent is concerned with the effort and its reputation measured through \(\eta^{*}\) (the proportion of negative comments on its post). Denote by \(c(\lambda)\) the cost induced by the effort \(\lambda\), and by \(u_{A}(a)=1-\eta^{*}(a)\) the agent's reputation when the receiver responds with \(a\). The agent's utility is given by \[u_{A}(a,\lambda)=u_{A}(a)-c(\lambda). \tag{2}\] _The Sender's Utility._ The sender's goal is to mitigate the influence of misinformation: if the content is misleading, the sender prefers fewer positive comments. Define \[u_{S}(\omega,a)=-(1-\omega)(1-\eta^{*}(a))+\omega(1-\eta^{*}(a)). \tag{3}\] The game unfolds in three stages. 1) In the first stage, the sender designs and commits to a tagging policy (also termed a signaling) \(\pi:\Omega\rightarrow\Delta(\Sigma)\), specifying a conditional distribution \(\pi(\cdot|\omega)\) over the possible tags when the authenticity of a post \(\omega\) is revealed. 2) Second, observing the tagging policy \(\pi\), the agent chooses an effort \(\lambda\) that leads to a distribution over the accuracy of the post \(p(\cdot|\lambda)\in\Delta(\Omega)\). 3) Finally, when encountering an arbitrary post \(\omega\) drawn from \(p(\cdot|\lambda)\), the receiver receives a tag from the tagging policy and subsequently determines their view. Note that the tagging policy \(\pi\) is transparent, whereas the agent's \(\lambda\) is hidden from the user.
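As a compact reference for the binary-state specialization used below, the following Python snippet encodes the three utilities and the receiver's best response (1); here \(\mu\) is the scalar posterior probability that the content is accurate, and the formula \(\eta^{*}(a)=1-a\) anticipates Corollary 1. This is our own illustrative encoding, not code from the paper.

```python
def best_response(mu):
    """Eq. (1): a*(mu) = E_mu[omega]; in the binary case this is just mu."""
    return mu

def eta_star(a):
    """Limiting share of negative comments (anticipating Corollary 1)."""
    return 1 - a

def u_R(omega, a):
    """Receiver: penalize the mismatch between comment and truth."""
    return -(a - omega) ** 2

def u_A(a, lam, c):
    """Eq. (2): agent's reputation 1 - eta* minus the effort cost c(lam)."""
    return (1 - eta_star(a)) - c(lam)

def u_S(omega, a):
    """Eq. (3): the sender dislikes positive trends on inaccurate content."""
    return -(1 - omega) * (1 - eta_star(a)) + omega * (1 - eta_star(a))

mu = 0.7                          # receiver believes the post is 70% accurate
a = best_response(mu)             # optimal positive-comment probability: 0.7
print(a, u_S(1, a), u_A(a, 0.4, lambda l: l ** 2))
```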
### _Perfect Bayesian Equilibrium_ It is worth noting that what distinguishes the introduced model from classical Bayesian persuasion [8] is that the receiver now does not explicitly acquire the prior distribution \(p(\lambda)\), as \(\lambda\) is unobservable. Hence, when the receiver acts, they must resort to a conjecture on the agent's action to update the posterior beliefs \(\mu\). This conjecture must be consistent with the agent's equilibrium choice, which naturally leads to the perfect Bayesian equilibrium (PBE). The formal definition and associated details are presented in the following. A Perfect Bayesian Equilibrium (PBE) of the proposed persuasion game consists of a tagging policy \(\pi\), the agent's effort \(\lambda\), and a belief system \(\{\mu_{s},s\in\Sigma\}\)1, which satisfies the following properties: Footnote 1: A belief system is a collection of posterior beliefs \(\mu_{s}\), and \(\mu_{s}\) denotes the belief when the receiver receives signal \(s\). * given a tagging policy \(\pi\) (sender) and a belief system \(\{\mu_{s},s\in\Sigma\}\) (receiver), the agent's effort \(\lambda\) maximizes the expected utility, i.e., \[\lambda=\arg\max\sum_{\omega}p(\omega|\lambda)[\sum_{s}\pi(s|\omega)u_{A}(\mu_{s},\lambda)]\] (4) \[u_{A}(\mu_{s},\lambda):=u_{A}(a^{*}(\mu_{s}),\lambda)\] * the receiver's belief is consistent with the agent's effort \(\lambda\) and the tagging policy \(\pi\), i.e., \[\mu_{s}=\frac{\pi(s|\cdot)\odot p(\cdot|\lambda)}{\langle\pi(s|\cdot),p(\cdot|\lambda)\rangle},\] (5) \[\pi(s|\cdot)=[\pi(s|\omega_{1}),\ldots,\pi(s|\omega_{N})]\in\mathbb{R}^{|\Omega|}\] (6) \[p(\cdot|\lambda)=[p(\omega_{1}|\lambda),\ldots,p(\omega_{N}|\lambda)]\in\mathbb{R}^{|\Omega|}\] (7) where \(\odot\) denotes the point-wise product. * the tagging policy maximizes the sender's expected utility, i.e., \[\pi\in\arg\max\sum_{\omega}p(\omega|\lambda)\sum_{s}\pi(s|\omega)u_{S}(a^{*}(\mu_{s}),\omega).\] (8) ### _Binary-State Model_ For simplicity, we focus on a binary case study. Under such a circumstance, the state space consists of only two elements, \(\Omega=\{0,1\}\), with \(0\) indicating that the content contains misinformation while \(1\) represents that the content is accurate. Hence, the signal space is also assumed to be binary: \(\Sigma=\{0,1\}\), where \(0\) and \(1\) denote the "fake" and "real" tags, respectively2. Since the state space is binary, the corresponding prior distribution of the accuracy of the content lives in the simplex spanned by \(p_{0}=[1,0]\) and \(p_{1}=[0,1]\). Hence, we assume that the effort spent by the agent \(\lambda\) is a scalar from \([0,1]\), and the resulting prior distribution is the convex combination of \(p_{0}\) and \(p_{1}\): \(p(\lambda)=(1-\lambda)p_{0}+\lambda p_{1}\). Footnote 2: In general, a sufficient signal space needs to be of \((|\Omega|+1)\)-cardinality [11]. Yet, as we later show in Proposition 8, the binary signal space suffices. As the state space is finite, the players' strategies are finite-dimensional vectors, and hence, we can "vectorize" our analysis so that convex analysis tools can be utilized. We impose the following customary assumption [8, 18] to ensure that the agent's equilibrium problem is well-behaved. **Assumption 1**: _For the agent's utility given by (2), we assume that \(u_{A}(\cdot)\) is non-negative and bounded, and \(c(\cdot)\in C^{2}\) is strictly increasing and convex.
In addition, \(c(0)=\nabla c(0)=0\), and \(\nabla c(1)>1\)._ Let \(v_{A}(\mu)=u_{A}(a^{*}(\mu))\) denote the agent's payoff under the receiver's belief \(\mu\). Moreover, let \(\bar{v}_{A}(\omega|\pi):=\sum_{s}\pi(s|\omega)v_{A}(\mu_{s})\) denote the agent's expected payoff conditional on the generated state \(\omega\) under the tagging policy \(\pi\), and let \(\vec{v}_{A}(\pi)\) be the corresponding vector: \(\vec{v}_{A}(\pi)=[\bar{v}_{A}(0|\pi),\bar{v}_{A}(1|\pi)]\). Similarly, we have the following notations for the sender. Given the receiver's belief \(\mu\), the sender's expected payoff is denoted by \(v_{S}(\mu):=\mathbb{E}_{\omega\sim\mu}[u_{S}(a^{*}(\mu),\omega)]\). Then, let \(\bar{v}_{S}(\omega|\pi)=\sum_{s}\pi(s|\omega)v_{S}(\mu_{s})\), and therefore \(\vec{v}_{S}(\pi):=[\bar{v}_{S}(0|\pi),\bar{v}_{S}(1|\pi)]\). To characterize the PBE in the proposed model, we use backward induction, i.e., first analyzing the optimal actions of the receiver, then the agent, and finally the sender. To begin with, the receiver's best response (comment) under the belief \(\mu_{s}\) is given by (1). The best response \(a^{*}(\mu_{s})\) then affects the spread of misinformation in social media through the branching processes presented in Section III-A. ## III Content Spreading Through Branching Process This section treats the spread of misinformation through branching processes. Specifically, we focus on the evolution of the trend \(\eta(t)\), the proportion of negative comments, as the receiver forwards the post to others. One key finding is that the evolutionary dynamics of \(\eta(t)\) under the branching process stabilizes in the limit, and the receiver's belief completely determines the stationary point \(\eta^{*}\). ### _Multi-type Branching Processes_ Suppose that the numbers of friends \(N\) of the receivers are independent and identically distributed with finite expectation \(\mathbb{E}[N]=m_{N}\). The receiver shares the post with \(Bin(N,q)\) friends, where \(q\in[0,1]\) represents the impact or attractiveness of the post (assumed to be constant). Hence, the number of "offspring" (friends receiving the sharing) of the receiver, denoted by \(\xi\), is subject to a binomial distribution: \(\xi\sim Bin(N,q)\) with \(\mathbb{E}[\xi]=m_{N}\cdot q:=m\). Denote by \(\tau_{n}\) the time when the \(n\)-th individual "wakes up", meaning that such an individual becomes active on the OSP and is ready to share the post. Let \(X_{n}=X(\tau_{n}^{+})\), \(Y_{n}=Y(\tau_{n}^{+})\), \(Z_{n}=X_{n}+Y_{n}\), and \(\xi_{n}\stackrel{{ i.i.d.}}{{\sim}}Bin(N,q)\). If an \(x\)-type receiver (who receives negative comments) wakes up, then \[\begin{split} X_{n+1}&=X_{n}-1+\textbf{1}_{x}\xi_{n},\\ Y_{n+1}&=Y_{n}+\textbf{1}_{y}\xi_{n},\end{split} \tag{9}\] and if a \(y\)-type receiver wakes up, \[\begin{split} X_{n+1}&=X_{n}+\textbf{1}_{x}\xi_{n},\\ Y_{n+1}&=Y_{n}-1+\textbf{1}_{y}\xi_{n},\end{split} \tag{10}\] where the indicator function \(\textbf{1}_{x}\) means that the receiver makes a negative comment while \(\textbf{1}_{y}\) indicates the opposite (a positive comment). The total population is updated by \(Z_{n+1}=Z_{n}-1+\xi_{n}\). The probability that a receiver who receives the post with a negative comment also comments negatively is characterized by a negative-to-negative factor \(\alpha_{xx}(s)\) depending on the tag \(s\). Likewise, we denote by a positive-to-negative factor \(\alpha_{yx}(s)\) the probability that a receiver who receives a positive comment comments negatively.
As the receiver's comment only depends on the belief \(\mu_{s}\) [see the best response in (1)], \(\alpha_{xx}(s)=\alpha_{yx}(s)=1-a^{*}(\mu_{s})=1-\mathbb{E}_{\mu_{s}}[\omega]\). Simply put, the higher \(\mathbb{E}_{\mu_{s}}[\omega]\) is, the more confident the receiver is about the content accuracy, and hence, the less likely the receiver is to give a negative comment. ### _Stochastic Approximation Analysis_ To analyze the limit behavior of the process, we apply stochastic approximation [19] and consider the continuous-time dynamics of the multi-type branching. Since there are only two types in the branching process, it suffices to consider the dynamics of the total population and that of the \(x\)-type. Toward this end, let \(\bar{Z}_{n}=\frac{Z_{n}}{n}\), \(\bar{X}_{n}=\frac{X_{n}}{n}\), and \(\gamma_{n}=\frac{1}{n+1}\), and then we aggregate the branching equations in (9) and (10), leading to the following: \[\begin{split}\bar{Z}_{n+1}&=\bar{Z}_{n}+\gamma_{n}\big{(}\xi_{n}-1-\bar{Z}_{n}\big{)}\textbf{1}_{\{\bar{Z}_{n}>0\}},\\ \bar{X}_{n+1}&=\bar{X}_{n}+\gamma_{n}\big{[}\textbf{1}_{\{x-wakes\}}\big{(}\textbf{1}_{x}\xi_{n}-1\big{)}\\ &\qquad\qquad+\textbf{1}_{\{y-wakes\}}\textbf{1}_{x}\xi_{n}-\bar{X}_{n}\big{]}\textbf{1}_{\{\bar{Z}_{n}>0\}},\end{split} \tag{11}\] where \(\mathbb{E}[\textbf{1}_{\{x-wakes\}}]=\frac{X_{n}}{Z_{n}}\) and \(\mathbb{E}[\textbf{1}_{\{y-wakes\}}]=1-\frac{X_{n}}{Z_{n}}\) indicate the probabilities that an individual of \(x\)-type or \(y\)-type wakes up. Let \(\bar{X}_{0}=X_{0}\), \(\bar{Z}_{0}=X_{0}+Y_{0}\) be the initial conditions. As the discrete-time trajectory of (11) is an asymptotic pseudo-trajectory of the continuous-time system in (12) [19], the two systems share the same limiting behavior. Hence, we arrive at Proposition 1. \[\begin{split}\dot{z}&=h^{z}(z,x)=(m-1-z)\textbf{1}_{\{z>0\}},\\ \dot{x}&=h^{x}(z,x)=\big{[}\eta\big{(}\alpha_{xx}(s)\cdot m-1\big{)}\\ &+(1-\eta)\alpha_{yx}(s)\cdot m-x\big{]}\textbf{1}_{\{z>0\}},\qquad\eta=\frac{x}{z}\end{split} \tag{12}\] **Proposition 1**: _For \(\mathbb{E}[N^{2}]<\infty\), the \(\{\bar{Z}_{n}\},\{\bar{X}_{n}\}\) sequences converge to \(\bar{Z}^{*},\bar{X}^{*}\) almost surely, where \(\bar{Z}^{*}=m-1\) and \(\bar{X}^{*}=\eta^{*}(s)\bar{Z}^{*}\) with \(\eta^{*}(s)=\frac{\alpha_{yx}(s)}{1-\alpha_{xx}(s)+\alpha_{yx}(s)}\) are solutions to (12)._ The proof of the above proposition follows [17]. Note that \(\eta^{*}(s)\) and \(\eta^{*}(a)\) can be used interchangeably because the receiver decides an action \(a\) based on the posterior belief \(\mu_{s}\) with respect to the tag \(s\). Since the receiver's comment only depends on the belief, we can characterize the limiting trend under tag \(s\) by the following statement. **Corollary 1**: _As \(\alpha_{yx}(s)=\alpha_{xx}(s)=1-\mathbb{E}_{\mu_{s}}[\omega]\), the proportion of negative comments is \(\eta^{*}(s)=\eta^{*}(a(\mu_{s}))=\alpha_{yx}(s)=1-\mathbb{E}_{\mu_{s}}[\omega]\)._ ### _Optimality Conditions under Stable Branching_ Given the receiver's best response \(a^{*}(\mu_{s})\) and the stabilized branching process, one can simplify the agent's problem, as the trend \(\eta^{*}(s)\) admits a simple formula. Since \(\eta^{*}(a)=1-\mathbb{E}_{\mu}[\omega]\) from Corollary 1, we notice that \(v_{A}(\mu)=u_{A}(a^{*}(\mu))=1-\eta^{*}(a)=\mathbb{E}_{\mu}[\omega]=\mu(1)\), which is linear in \(\mu(1)\). In the binary-state case, the belief \(\mu_{s}\) is uniquely determined by its second entry \(\mu(1)\). Hence, the following discussion will treat \(\mu_{s}\) as a scalar.
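A quick Monte Carlo check of Proposition 1 and Corollary 1 can be run in a few lines of Python. The sketch below simulates the wake-up dynamics (9)-(10) with the parameters later used in Section V (\(X_{0}=Y_{0}=50\), \(m_{N}=50\), \(q=0.5\), \(1500\) wake-ups); for simplicity it fixes the number of friends at \(N=m_{N}\) rather than drawing it from a distribution, a special case covered by Proposition 1.

```python
import random

def simulate_trend(alpha_xx, alpha_yx, m_N=50, q=0.5, n_wakeups=1500,
                   X0=50, Y0=50, seed=0):
    """Simulate the two-type branching process (9)-(10) and return eta(t):
    X tracks users holding a negative comment, Y a positive one."""
    rng = random.Random(seed)
    X, Y = X0, Y0
    for _ in range(n_wakeups):
        if X + Y == 0:
            return 0.0        # extinction (improbable here: m = 25 > 1)
        x_wakes = rng.random() < X / (X + Y)            # which type wakes up
        xi = sum(rng.random() < q for _ in range(m_N))  # offspring ~ Bin(m_N, q)
        alpha = alpha_xx if x_wakes else alpha_yx
        negatives = sum(rng.random() < alpha for _ in range(xi))
        X += negatives - (1 if x_wakes else 0)
        Y += (xi - negatives) - (0 if x_wakes else 1)
    return X / (X + Y)

# With alpha_xx = alpha_yx = 1 - E[omega] (Corollary 1), Proposition 1
# predicts eta* = alpha_yx / (1 - alpha_xx + alpha_yx) = 1 - E[omega].
a = 0.3
runs = [simulate_trend(a, a, seed=s) for s in range(100)]
print(sum(runs) / len(runs), "vs predicted", a / (1 - a + a))
```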
The same treatment also applies to the prior \(p\). The agent's optimality condition under the signaling in (4) can be rewritten as \[\max_{\lambda\in[0,1]}\langle p(\lambda),\vec{v}_{A}(\pi)\rangle-c(\lambda).\] Due to the linearity of the first term and the convexity of the second term, the optimality follows the first-order condition: \[\langle p_{1}-p_{0},\vec{v}_{A}(\pi)\rangle=\nabla c(\lambda). \tag{13}\] As later shown in the ensuing section, the agent's marginal cost \(\nabla c\) plays a significant part in the feasibility of the sender's information structures. Since \(\eta^{*}(a)=1-\mathbb{E}_{\mu}[\omega]\), the sender's expected utility under the belief \(\mu\) is \(v_{S}(\mu)=-(1-\mathbb{E}_{\mu}[\omega])\mathbb{E}_{\mu}[\omega]+\mathbb{E}_{\mu}^{2}[\omega]\). In the binary-state case, \(v_{S}(\mu)=\mu(\mu-1)+\mu^{2}\). Hence, the sender's problem is given by \[\max_{\pi,\lambda} \langle p(\lambda),\vec{v}_{S}(\pi)\rangle\] s.t. \[\langle p_{1}-p_{0},\vec{v}_{A}(\pi)\rangle=\nabla c(\lambda) \tag{14}\] \[\text{ consistent belief system in (5)}\] Note that the agent's decision variable \(\lambda\) also appears in the maximization, as we assume that ties break in favor of the sender should there exist multiple effort levels \(\lambda\) satisfying the first constraint in (14). The consistency requirement in (5) involves a division operation, leading to a highly nonlinear programming problem. To simplify our analysis, the proposition in the following Section IV-A transforms the sender's problem into the posterior belief space using Bayesian plausibility. ## IV Perfect Bayesian Equilibrium Characterization: A Lagrangian Approach ### _Bayesian Plausibility_ Bayesian plausibility is a sanity check for any information structure: all possible posterior beliefs induced by the realized signals should be consistent with the prior under the information structure. Formally, the proposition below reformulates the sender's problem so that the decision variable is a distribution over posteriors \(\tau\in\Delta(\Delta(\Omega))\) instead of a policy \(\pi\). **Proposition 2** (Bayesian Plausibility): _Given an effort \(\lambda\), there exists a signaling mechanism (tagging policy) \(\pi\) satisfying the conditions in (14) if and only if there exists a distribution over posteriors \(\tau\in\Delta(\Delta(\Omega))\) such that_ \[\mathbb{E}_{\tau}[\mu]=p(\lambda),\] \[\mathbb{E}_{\tau}\left[\mathbb{E}_{\mu}[\nabla\log p(\lambda)]v_{A}(\mu)\right]=\nabla c(\lambda).\] Proof: We first prove the equivalence between the signaling mechanism \(\pi\) and the distribution \(\tau\). Without loss of generality, assume that for each signal \(s\in\Sigma\), the receiver has a distinct posterior belief \(\mu_{s}\). Starting from \(\pi\), and fixing \(\lambda\), the probability of generating \(\mu_{s}\) is \[\tau(\mu_{s})=\sum_{\omega}\pi(s|\omega)p(\omega;\lambda)=\langle\pi(s|\cdot),p(\lambda)\rangle.\] Following the above equation, one can compute the distribution of posteriors from the signaling. Conversely, recall that the Bayes rule gives \[\mu_{s}=\frac{\pi(s|\cdot)\odot p(\lambda)}{\langle\pi(s|\cdot),p(\lambda)\rangle}=\frac{\pi(s|\cdot)\odot p(\lambda)}{\tau(\mu_{s})},\] implying that \(\pi(s|\cdot)=\tau(\mu_{s})(\mu_{s}\oslash p(\lambda))\), where \(\oslash\) denotes point-wise division. The equation above indicates that one can recover the signaling from the distribution of posteriors \(\tau\).
Finally, note that \[\pi(s|\cdot)=\tau(\mu_{s})(\mu_{s}\oslash p(\lambda))\Leftrightarrow\sum_{s}\pi(s|\cdot)\odot p(\lambda)=\sum_{s}\tau(\mu_{s})\mu_{s},\] which proves the first equality in the proposition. The posterior distribution \(\tau\) associated with \(\pi\) is called the Bayesian-plausible distribution in the literature [8]. To recover the agent's optimality condition (also called the incentive-compatibility constraint), consider the constraint: \[\langle p_{1}-p_{0},\vec{v}_{A}(\pi)\rangle=\nabla c(\lambda).\] Plugging the above equation into the left-hand side gives \[\langle p_{1}-p_{0},\vec{v}_{A}(\pi)\rangle\] \[=\sum_{\omega}\left(\sum_{s}\pi(s|\omega)v_{A}(\mu_{s})\right)(p_{1}(\omega)-p_{0}(\omega))\] \[=\sum_{\omega}\left(\sum_{s}\tau(\mu_{s})\frac{\mu_{s}(\omega)}{p(\omega;\lambda)}v_{A}(\mu_{s})\right)(p_{1}(\omega)-p_{0}(\omega))\] \[=\sum_{s}\tau(\mu_{s})\sum_{\omega}\left(\frac{p_{1}(\omega)-p_{0}(\omega)}{p(\omega;\lambda)}\mu_{s}(\omega)\right)v_{A}(\mu_{s})\] \[=\sum_{s}\tau(\mu_{s})\sum_{\omega}\left(\frac{\nabla_{\lambda}p(\omega;\lambda)}{p(\omega;\lambda)}\mu_{s}(\omega)\right)v_{A}(\mu_{s})\] \[=\mathbb{E}_{\tau}[\mathbb{E}_{\mu}[\nabla_{\lambda}\log p(\omega;\lambda)]v_{A}(\mu)]\] Let \(f(\mu)=\mathbb{E}_{\mu}[\nabla_{\lambda}\log p(\omega;\lambda)]v_{A}(\mu)-\nabla c(\lambda)\); then the sender's problem can be rewritten as \[\max_{\tau\in\Delta(\Delta(\Omega)),\lambda} \mathbb{E}_{\tau}[v_{S}(\mu)],\] (15) s.t. \[\mathbb{E}_{\tau}[\mu]=p(\lambda), \tag{16}\] \[\mathbb{E}_{\tau}[f(\mu)]=0, \tag{17}\] where (16), referred to as the Bayesian plausibility constraint (BP), corresponds to the consistency in (5); and (17), referred to as the incentive-compatibility constraint (IC), rephrases the agent's optimality condition in (13). ### _The Lagrangian Characterization_ With Bayesian plausibility, the sender's problem becomes an equality-constrained nonlinear program, which naturally prompts one to consider the Lagrange multiplier method. In what follows, we present a PBE characterization through the lens of the Lagrangian. The discussion begins with the feasible domain of the maximization in (15). **Proposition 3** (Implementable Effort, Feasible Condition): _In the binary-state model, let \(\bar{\lambda}\) be the value such that \(\nabla c(\bar{\lambda})=p_{1}-p_{0}\). Then, \(\lambda\) is feasible if and only if \(\lambda\leq\bar{\lambda}\)._ Proof: We begin with the necessity. In the binary-state model, the incentive-compatibility (IC) constraint reduces to \[(p_{1}-p_{0})(\bar{v}_{A}(1)-\bar{v}_{A}(0))=\nabla c(\lambda),\] where \(\bar{v}_{A}(\omega|\pi)=\sum_{s}\pi(s|\omega)v_{A}(\mu_{s})\). Note that \(v_{A}(\mu)=\mu\in[0,1]\), implying that \(\bar{v}_{A}\) never exceeds \(1\), and neither does the difference \(\bar{v}_{A}(1)-\bar{v}_{A}(0)\). Hence, \(p_{1}-p_{0}\geq\nabla c(\lambda)\). As \(\nabla c\) is increasing by the convexity of the cost function, \(\nabla c(\lambda)>\nabla c(\bar{\lambda})=p_{1}-p_{0}\) for \(\lambda>\bar{\lambda}\), which means that such \(\lambda\) is not IC. For sufficiency, consider \(\lambda\in[0,\bar{\lambda}]\), and \(p(\lambda)=(1-\lambda,\lambda)\). Let \(\Delta p=p_{1}-p_{0}\); we construct a Bayesian-plausible \(\tau\) as follows and refer to it as a "hybrid tagging policy".
\(\mathrm{supp}(\tau)=\{0,\lambda,1\}\) (these scalars denote the second entries of beliefs), and \[\tau(0)=\frac{(1-\lambda)\nabla c(\lambda)}{\Delta p},\quad\tau(\lambda)=1-\frac{\nabla c(\lambda)}{\Delta p},\quad\tau(1)=\frac{\lambda\nabla c(\lambda)}{\Delta p}.\] It is straightforward to verify that this posterior distribution satisfies both constraints in the sender's problem. This construction implies that for any \(\lambda\in[0,\bar{\lambda}]\), one can find a feasible \(\tau\), and hence, \(\lambda\) is implementable. **Remark 1**: _The maximum effort \(\bar{\lambda}\) the agent is willing to exert in uncovering the truth depends solely on the marginal cost \(\nabla c(\bar{\lambda})\), regardless of their reputation. The higher the marginal cost \(\nabla c(\lambda)\) is, the smaller the upper bound \(\bar{\lambda}\) is, leading to a modest feasible set; in this case the agent cannot afford to create authentic content, regardless of their reputation._ The term "hybrid" for the constructed \(\tau\) in the proof above is due to the observation that \(\tau\) is a convex combination of two representative tagging policies. Consider \(\overline{\tau}\) and \(\underline{\tau}\): \(\mathrm{supp}(\overline{\tau})=\{0,1\}\), \(\overline{\tau}(0)=1-\lambda\), and \(\overline{\tau}(1)=\lambda\); \(\mathrm{supp}(\underline{\tau})=\{\lambda\}\) and \(\underline{\tau}(\lambda)=1\). \(\overline{\tau}\) is the fully informative tagging, where the receiver, upon receiving the tag, is certain about the accuracy: the post is either fake (\(0\)) or true (\(1\)). In contrast, \(\underline{\tau}\) is the opposite: the uninformative tagging. The corresponding belief system is degenerate, including only one belief that is exactly the prior distribution. This degeneracy indicates that the receiver does not acquire any helpful information from the tag to update the belief on the content accuracy. From the above construction, we arrive at the following proposition, stating that the sender strictly prefers and incentivizes the agent to exert a positive effort level. **Proposition 4** (Positive Effort Level): \(\lambda=0\) _is implementable under the uninformative signaling: \(\pi(\cdot|\omega)=\mathrm{unif}(\Sigma)\) for any \(\omega\in\Omega\). This uninformative signaling is strictly dominated by the hybrid signaling with \(\lambda\in(0,\bar{\lambda})\)._ From Assumption 1, \(\lambda=0\) implies \(\nabla c(\lambda)=0\), which further implies that \[(p_{1}-p_{0})\left(\sum_{s}\pi(s|1)v_{A}(\mu_{s})-\sum_{s}\pi(s|0)v_{A}(\mu_{s})\right)=0.\] The uninformative signaling naturally satisfies the above equation; hence, \(\lambda=0\) is implementable. This uninformative signaling (\(\lambda=0\)) induces a degenerate posterior distribution, \(\mathrm{supp}(\tau)=\{p_{0}\}\), and the sender's expected utility is 0. In contrast, consider the signaling in the proof above, \(\mathrm{supp}(\tau)=\{0,\lambda,1\}\), \(\lambda\in(0,\bar{\lambda})\), with \(\tau(0)\), \(\tau(\lambda)\), and \(\tau(1)\) as in Proposition 3. Since \(v_{S}(1)=1\), the sender's expected utility is \(\mathbb{E}_{\tau}[v_{S}(\mu)]=\lambda>0\). **Corollary 2**: _As long as the set \((0,\bar{\lambda})\) is not empty, the sender can always create a tagging policy that incentivizes the agent to invest positive effort in discovering the truth, regardless of the value of the cost function._ The above discussion addresses the agent's feasibility condition.
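The hybrid construction can be verified numerically in a few lines. The sketch below takes a quadratic cost \(c(\lambda)=k\lambda^{2}\) (our illustrative choice, consistent with Proposition 9) and the binary scalar convention \(p_{0}=0\), \(p_{1}=1\) (so \(\Delta p=1\) and \(v_{A}(\mu)=\mu\)), and checks that the hybrid \(\tau\) from Proposition 3 satisfies both the BP constraint (16) and the IC constraint (17), with \(f(\mu)=\mu(\mu-\lambda)/(\lambda(1-\lambda))-\nabla c(\lambda)\).

```python
def hybrid_tau(lam, k):
    """Hybrid posterior distribution of Proposition 3, supported on {0, lam, 1};
    here Delta p = 1 and grad c(lam) = 2*k*lam for c(lam) = k*lam**2."""
    dc = 2 * k * lam
    return {0.0: (1 - lam) * dc, lam: 1 - dc, 1.0: lam * dc}

def check_constraints(lam, k):
    tau = hybrid_tau(lam, k)
    dc = 2 * k * lam
    bp = sum(t * mu for mu, t in tau.items())               # Eq. (16): should be lam
    ic = sum(t * (mu * (mu - lam) / (lam * (1 - lam)) - dc)
             for mu, t in tau.items())                      # Eq. (17): should be 0
    return bp, ic

# k = 1 gives lam_bar = 1/2; any lam <= lam_bar should be implementable.
print(check_constraints(lam=0.4, k=1.0))   # -> (0.4, 0.0) up to float error
```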
In what follows, we shift the focus to the sender's problem, given an implementable effort \(\lambda\). Denote by \(\tau^{\lambda}\) and \(V^{\lambda}\) the optimal solution to the sender's problem (15) (fixing \(\lambda\)) and the corresponding objective value, respectively. Consider the following set \(F^{\lambda}\subset\mathbb{R}^{|\Omega|+2}\): \(F^{\lambda}=\{(\mu,f(\mu),v_{S}(\mu)):\mu\in\Delta(\Omega)\}\). By construction, each entry of any element in \(F^{\lambda}\) corresponds to the integrand of one of the three objects in the sender's problem (15). These integrands are referred to as ex-post values. Denote by \(co(F^{\lambda})\) the convex hull of \(F^{\lambda}\), including all the ex-ante values that can be generated using a probability \(\tau\in\Delta(\Delta(\Omega))\). A standard argument from constrained programming gives the following. **Proposition 5**: _Given an implementable effort \(\lambda\), the maximal utility the sender can attain is \(V^{\lambda}=\max\{v:(p(\lambda),0,v)\in co(F^{\lambda})\}\)._ It suffices to note that \(\mu=p(\lambda)\) naturally satisfies (16), and \(f(\mu)=0\) induces (17). Therefore, any point \((p(\lambda),0,v)\in co(F^{\lambda})\) is feasible for (15), and \(V^{\lambda}\), as the largest value attainable among such points, is the maximum. The above proposition gives a geometric intuition for where the solution should lie: \((p(\lambda),0,V^{\lambda})\) lies on the boundary of the convex set \(co(F^{\lambda})\). Hence, there exists a supporting hyperplane at \((p(\lambda),0,V^{\lambda})\), leading to the following. **Proposition 6** (Lagrangian Characterization): _Given an implementable \(\lambda\), a distribution of posteriors \(\tau^{\lambda}\) is a solution to the sender's problem if and only if it satisfies (16), (17), and there exist \(\psi\in\mathbb{R}\), \(\rho\in\mathbb{R}\), and \(\varphi\in\mathbb{R}^{|\Omega|}\) such that_ \[\mathcal{L}(\mu,\psi,\varphi)=v_{S}(\mu)+\psi f(\mu)-\langle\varphi,\mu\rangle\leq\rho,\text{ for all }\mu\in\Delta(\Omega),\] _where the equality holds for all \(\mu\) such that \(\tau^{\lambda}(\mu)>0\)._ We begin with the necessity. As \((p(\lambda),0,V^{\lambda})\) is a boundary point of a closed convex set, the supporting hyperplane theorem tells us that there exist a normal vector \(d=(-\varphi,\psi,1)\in\mathbb{R}^{|\Omega|+2}\) and a scalar \(\rho\) such that \(\langle d,y\rangle\leq\rho\) for all \(y\in co(F^{\lambda})\), where the equality holds for \(y=(p(\lambda),0,V^{\lambda})\). Rearranging terms in this inner product, we obtain \(\mathcal{L}(\mu,\psi,\varphi)\leq\rho\). It remains to show that \(\mathcal{L}(\mu,\psi,\varphi)=\rho\) for all \(\mu\in\{\mu:\tau^{\lambda}(\mu)>0\}\). Suppose, for the sake of contradiction, that there exists some \(\mu\in\mathrm{supp}(\tau^{\lambda})\) such that \(\mathcal{L}(\mu,\psi,\varphi)<\rho\). Since \(\mathcal{L}(\mu,\psi,\varphi)\leq\rho\) everywhere, this gives \(\mathbb{E}_{\tau^{\lambda}}[\mathcal{L}(\mu,\psi,\varphi)]<\rho\). Rearranging terms, we obtain \(\langle d,(p(\lambda),0,V^{\lambda})\rangle<\rho\), which contradicts the fact that the supporting hyperplane passes through the point \((p(\lambda),0,V^{\lambda})\).
For the sufficiency part, if \(v_{S}(\mu)+\psi f(\mu)\leq\rho+\langle\varphi,\mu\rangle\) for all \(\mu\in\Delta(\Omega)\), then for any \(\tau\), \[\mathbb{E}_{\tau}[v_{S}(\mu)]+\psi\mathbb{E}_{\tau}[f(\mu)]\leq\rho+\mathbb{E}_{\tau}[\langle\varphi,\mu\rangle].\] Since \(\tau^{\lambda}\) satisfies (16) and (17), the above reduces to \(\mathbb{E}_{\tau^{\lambda}}[v_{S}(\mu)]\leq\rho+\langle\varphi,p(\lambda)\rangle\). If \(\tau^{\lambda}\) is such that \(\mathcal{L}(\mu,\psi,\varphi)=\rho\) for all \(\mu\in\mathrm{supp}(\tau^{\lambda})\), then \(\mathbb{E}_{\tau^{\lambda}}[v_{S}(\mu)]=\rho+\langle\varphi,p(\lambda)\rangle\), meaning that the expected utility \(\mathbb{E}_{\tau}[v_{S}(\mu)]\) reaches the upper bound at \(\tau^{\lambda}\). Fixing \(\lambda\in(0,\bar{\lambda})\), consider the Lagrangian function \(\mathcal{L}\) introduced above. Its second-order derivative is given by \(\frac{\partial^{2}\mathcal{L}}{\partial\mu^{2}}=\nabla^{2}v_{S}(\mu)+\frac{2\psi}{\lambda(1-\lambda)}\). From the definition, \(\nabla^{2}v_{S}(\mu)>0\), and hence, the sign of \(\frac{\partial^{2}\mathcal{L}}{\partial\mu^{2}}\) depends on \(\psi\), for which we have the following characterization. **Proposition 7**: _For any \(\lambda\in(0,\bar{\lambda}]\), the Lagrange multiplier \(\psi\) associated with the solution \(\tau^{\lambda}\) is non-positive._ Consider a relaxation of the original problem without the IC constraint (17): \[\widetilde{V}^{\lambda}=\max_{\tau\in\Delta(\Delta(\Omega))}\mathbb{E}_{\tau}[v_{S}(\mu)]\text{ subject to }\mathbb{E}_{\tau}[\mu]=p(\lambda), \tag{18}\] which is exactly the standard Bayesian persuasion [8]. Denote by \(\tilde{\tau}^{\lambda}\) the solution to the relaxed problem when fixing \(\lambda\). Applying the Lagrangian characterization developed in Proposition 6, there exist \(\tilde{\rho}\) and \(\tilde{\varphi}\) such that \(v_{S}(\mu)\leq\tilde{\rho}+\tilde{\varphi}\mu\) for all \(\mu\in[0,1]\), with equality if \(\tilde{\tau}(\mu)>0\). Define \(g(\lambda)=\mathbb{E}_{\tilde{\tau}}[f(\mu)]\). Let \(\tau^{\lambda}\) be the solution to the original problem. We aim to prove \(\psi g(\lambda)\leq 0\) in the following. The definitions of the two Lagrangians give \[\rho+\varphi\lambda=\mathbb{E}_{\tau^{\lambda}}[v_{S}(\mu)]\leq\mathbb{E}_{\tilde{\tau}^{\lambda}}[v_{S}(\mu)]=\tilde{\rho}+\tilde{\varphi}\lambda. \tag{19}\] Finally, taking the expectation of the original Lagrangian in Proposition 6 with respect to \(\tilde{\tau}\), we obtain \[\mathbb{E}_{\tilde{\tau}}[v_{S}(\mu)]+\psi\mathbb{E}_{\tilde{\tau}}[f(\mu)]\leq\rho+\varphi\lambda\Leftrightarrow\tilde{\rho}+\tilde{\varphi}\lambda+\psi g(\lambda)\leq\rho+\varphi\lambda. \tag{20}\] Combining (20) and (19) leads to \(\psi g(\lambda)\leq 0\). The rest of the proof establishes that \(g(\lambda)\geq 0\) for \(\lambda\in(0,\bar{\lambda}]\). Note that the sender's expected utility \(v_{S}(\mu)=2\mu^{2}-\mu\) is convex in \(\mu\). The standard persuasion analysis gives that the unique optimal signaling is the fully informative one [20, Section 3], implying that \(\operatorname{supp}(\tilde{\tau})=\{0,1\}\), with \(\tilde{\tau}(0)=1-\lambda\) and \(\tilde{\tau}(1)=\lambda\). Direct calculation yields \(f(\mu)=\frac{\mu(\mu-\lambda)}{\lambda(1-\lambda)}-\nabla c(\lambda)\). Hence, \(g(\lambda)=\mathbb{E}_{\tilde{\tau}}[f(\mu)]=1-\nabla c(\lambda)\geq 0\) for \(\lambda\in(0,\bar{\lambda}]\), implying that \(\psi\leq 0\).
Even though it may seem that \(\frac{\partial^{2}\mathcal{L}}{\partial\mu^{2}}\) is not necessarily non-negative, the following proposition asserts that the Lagrangian must be a convex function of \(\mu\), which leads to the main conclusion of this work: the sender's optimal signaling is the fully informative one, under which the agent is incentivized to exert its best effort not to create misinformation. **Proposition 8**: _The Lagrangian function is convex with respect to \(\mu\), and hence, the optimal signaling is fully informative and implements \(\bar{\lambda}\)._ Suppose, for the sake of contradiction, that the multiplier \(\psi\) associated with the solution is such that \(\frac{\partial^{2}\mathcal{L}}{\partial\mu^{2}}<0\). Since \(\nabla^{2}v_{S}(\mu)=4\) and \(\frac{\partial^{2}\mathcal{L}}{\partial\mu^{2}}\) is a constant, the Lagrangian function would then be strictly concave everywhere. Therefore, the sender's optimal signaling would be degenerate (only one belief) and strictly dominated (see Proposition 4), which contradicts optimality. This contradiction establishes the convexity of the Lagrangian. Consequently, the standard argument from the Bayesian persuasion literature [20, Section 3] also applies to the proposed model, leading to the statement above. **Corollary 3**: _A viable "prevention rather than cure" solution to misinformation is simply the most straightforward tagging policy: revealing the truth to the user._ Lastly, we discuss the impact of the cost function \(c(\lambda)\) on the agent's reputation \(1-\eta^{*}\), from which we further elaborate on how the sender provides the agent with incentives to spend effort. Assume the cost function \(c(\lambda)\) takes the quadratic form \(c(\lambda)=k\lambda^{2}\), with \(k>\frac{1}{2}\) so that \(\nabla c(1)>1\). **Proposition 9**: _Under the hybrid tagging, the equilibrium trend \(\mathbb{E}_{\tau}[\eta^{*}(\mu)]\) admits the following characterizations, depending on the Hessian parameter \(k\): 1) for \(k\geq 1\), \(\bar{\lambda}\leq 1/2\) and \(\mathbb{E}_{\tau}[\eta^{*}(\mu)]=1-\lambda\geq 1/2\) for \(\lambda\in(0,\bar{\lambda}]\); 2) for \(1/2<k<1\), \(\bar{\lambda}>1/2\) and \(\mathbb{E}_{\tau}[\eta^{*}(\mu)]=1-\lambda<1/2\) for \(\lambda\in(1/2,\bar{\lambda})\)._ **Remark 2** (Indirect Incentive Provision): \(k=1\), \(\bar{\lambda}=1/2\) _is a turning point of the equilibrium trend. When it is costly for the agent to produce trustworthy content (\(k\geq 1\)), the average comment turns against them. The best the agent can hope for is to exert the highest effort \(\bar{\lambda}\) and to keep the user in a neutral position (\(\mathbb{E}[\eta^{*}(\mu)]=1-\bar{\lambda}=1/2\)). In contrast, the agent can earn a reputation as a reliable information source when \(\bar{\lambda}>1/2\), as the equilibrium trend under the maximum effort is in their favor: \(\mathbb{E}[\eta^{*}]=1-\bar{\lambda}<1/2\). To sum up, the agent is always willing to exert the highest effort, whatever the cost; and the sender achieves this incentive provision through the receiver's action._ ## V Numerical Studies This section studies the proposed Bayesian persuaded branching process model under three different tagging policies: the fully informative, the uninformative, and the hybrid informative tagging policy. For each experiment, the branching setup is given by \(X_{0}=Y_{0}=50\), \(m_{N}=50\), \(q=0.5\), and the process runs for \(1500\) wake-up events (up to time \(\tau_{1500}\)). The numerical results in this section are the average of \(1000\) independent simulations.
_Fully Informative Tagging Policy._ In this scenario, the policy for the OSP is to tag the post with its true state, i.e., \(s=\omega\). For an authentic post, \(\omega=1\), \(s=1\), \(\mathbb{E}_{\mu_{s}}[\omega]=1\), and thus \(\alpha_{yx}=\alpha_{xx}=1-\mathbb{E}_{\mu_{s}}[\omega]=0\). The result for the proportion of negative comments \(\eta^{*}\) is shown in Figure 2(a). On the other hand, for a misinformation post, \(\omega=0\), \(s=0\), \(\mathbb{E}_{\mu_{s}}[\omega]=0\), and thus \(\alpha_{yx}=\alpha_{xx}=1-\mathbb{E}_{\mu_{s}}[\omega]=1\). Under this tagging policy, each tag carries no ambiguity; consequently, the receiver is certain about the content's accuracy and comments on the post accordingly. As a result, the fully informative tagging leads to a positive trend for the authentic post [see Figure 2(a)], and a negative one for misinformation [see Figure 2(b)]. The shaded yellow region in the figure indicates the standard deviation of \(\eta^{*}\), while the blue line represents the mean. _Uninformative Tagging Policy._ Under the uninformative tagging, the OSP tags the post randomly, i.e., choosing \(s=0\) and \(s=1\) with probability \(\frac{1}{2}\) regardless of \(\omega\). According to Proposition 4, \(\lambda=0\) and \(\mathbb{E}_{\mu_{s}}[\omega]=0\), which leads to \(\alpha_{yx}=\alpha_{xx}=1-\mathbb{E}_{\mu_{s}}[\omega]=1\) and \(\eta^{*}=1\). The trend evolution is the same as in Figure 2(b). _Hybrid Tagging Policy._ We finally consider the hybrid tagging policy in Proposition 3. The cost function is of the quadratic form as in Proposition 9. For \(k=1\), the maximum feasible effort is \(\bar{\lambda}=\frac{1}{2}\), under which the trend is neutral: \(\eta^{*}=0.5\). For any other \(\lambda\in(0,\bar{\lambda})\), however, the resulting \(\eta^{*}\) is strictly greater than one half, as demonstrated in the upper part of Figure 2(c). The numerical results coincide with the analysis in Proposition 9, showing that the agent needs to exert the best effort to investigate the truth so as not to hurt their reputation. For \(\frac{1}{2}<k<1\), we take \(k=\frac{3}{5}\) as an example. In this case, the maximum effort is \(\bar{\lambda}=\frac{5}{6}\), and any implementable \(\lambda>\frac{1}{2}\) leads to positive trends, as shown in the lower part of Figure 2(c). The more effort the agent spends, the more positive the trend is; hence, the higher the reputation the agent earns. ## VI Conclusion This work has investigated a preemptive approach to mitigating misinformation spread on OSPs by disincentivizing the content creator from creating misleading content in the first place. We have developed a three-player persuasion game to model the strategic interaction among the OSP, the content creator, and the user. By transforming the perfect Bayesian equilibrium into the posterior belief space, we have reformulated the OSP's equilibrium problem as an equality-constrained nonlinear program (with a convex objective), which admits a concise Lagrangian characterization. The convexity of the Lagrangian implies that the OSP can solicit the best effort from the content creator in reducing misinformation, even though the OSP exerts no direct control over the content creator. One direction for future work would be to investigate other mitigation mechanisms, including verification of the accuracy of the content and the accountability of the content creators.
2310.03542
Localization of Dirac modes in the $\mathrm{SU}(2)$ Higgs model at finite temperature
We investigate the connection between localization of low-lying Dirac modes and Polyakov-loop ordering in the lattice $\mathrm{SU}(2)$ Higgs model at finite temperature, probed with the staggered Dirac operator. After mapping out the phase diagram of the model at a fixed temporal extension in lattice units, we study the localization properties of the low-lying modes of the staggered Dirac operator, how these properties change across the various transitions, and how these modes correlate with the gauge and Higgs fields. We find localized low modes in the deconfined and in the Higgs phase, where the Polyakov loop is strongly ordered, but in both cases they disappear as one crosses over to the confined phase. Our findings confirm the general expectations of the "sea/islands" picture, and the more detailed expectations of its refined version concerning the favorable locations of localized modes, also in the presence of dynamical scalar matter.
György Baranka, Matteo Giordano
2023-10-05T13:53:39Z
http://arxiv.org/abs/2310.03542v3
# Localization of Dirac modes in the \(\mathrm{SU}(2)\)-Higgs model at finite temperature ###### Abstract We investigate the connection between localization of low-lying Dirac modes and Polyakov-loop ordering in the lattice \(\mathrm{SU}(2)\)-Higgs model at finite temperature, probed with static external staggered fermions. After mapping out the phase diagram of the model at a fixed temporal extension in lattice units, we study the localization properties of the low-lying modes of the staggered Dirac operator, how these properties change across the various transitions, and how these modes correlate with the gauge and Higgs fields. We find localized low modes in the deconfined and in the Higgs phase, where the Polyakov loop is strongly ordered, but in both cases they disappear as one crosses over to the confined phase. Our findings confirm the general expectations of the "sea/islands" picture, and the more detailed expectations of its refined version concerning the favorable locations of localized modes, also in the presence of dynamical scalar matter. ## I Introduction Although it is well established that the finite-temperature QCD transition is an analytic crossover [1; 2], the microscopic mechanism that drives it is still being actively studied. The main goals of this line of research are a better understanding of the connection between deconfinement and restoration of chiral symmetry, both taking place in the crossover region; and of the fate of the anomalous \(\mathrm{U}(1)_{A}\) symmetry, especially in the chiral limit. In this context, the fact that the nature of the low-lying Dirac eigenmodes also changes radically in the crossover region has aroused some interest. While delocalized in the low-temperature, confined and chirally broken phase, these modes become in fact spatially localized in the high-temperature, deconfined and (approximately) chirally restored phase, up to a critical point in the spectrum known as the "mobility edge" [3; 4; 5; 6; 7; 8; 9] (see Ref. [10] for a recent review). Since the strength of chiral symmetry breaking is controlled by the density of low-lying Dirac modes [11], while the change in their localization properties is mainly due to the ordering of the Polyakov loop in the high-temperature phase [12; 13; 14; 15; 16; 10], low-lying eigenmodes could provide the link between deconfinement and restoration of chiral symmetry. The connection between low-mode localization and Polyakov-loop ordering is qualitatively explained by the "sea/islands" picture, initially proposed in Ref. [12] and further developed in Refs. [13; 14; 15; 16; 10]. In the deconfined phase, typical gauge configurations display a "sea" of ordered Polyakov loops, which on the one hand provides a spatially (approximately) uniform region where Dirac modes can easily delocalize, and on the other hand opens a (pseudo)gap in the near-zero spectrum. Polyakov-loop fluctuations away from order, and more generally gauge-field fluctuations with reduced correlation in the temporal direction, allow for eigenvalues below the gap; since in the deconfined phase these fluctuations typically form well-separated "islands", they tend to "trap" the low eigenmodes, causing their localization. The sea/islands mechanism is quite general, and requires essentially only the ordering of the Polyakov loop for low-mode localization to take place [17].
This leads one to expect localization of low Dirac modes to be a generic phenomenon in the deconfined phase of a gauge theory, an expectation so far fully confirmed by numerical results, both for pure gauge theories [12; 16; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] and in the presence of dynamical fermionic matter [28; 29]. An interesting aspect of the deconfinement/localization relation is that while the thermal transition can be a smooth, analytic crossover, the appearance of a mobility edge can only be sudden, taking place at a well-defined temperature. If the connection between deconfinement and localization is indeed general, one can then associate the (possibly smooth) thermal transition with a (definitely sharp) "geometric" transition (a similar suggestion, although in connection with deconfinement and center vortices, was made in Ref. [30], from which we borrowed the terminology). This point of view is supported by the fact that the geometric and the thermodynamic transition coincide when the latter is a genuine phase transition [23; 24; 25; 26; 27; 28; 29]. As a further test of the universality of the sea/islands mechanism, one can investigate whether a change in the localization properties of low modes takes place across other thermal transitions where the Polyakov loop gets ordered, besides the usual deconfinement transition. As an example, Ref. [26] studied low-mode localization across the "reconfinement" transition in trace-deformed \(\mathrm{SU}(3)\) gauge theory at finite temperature [31; 32; 33; 34; 35]. While localized modes are present in the deconfined phase also at nonzero deformation parameter, where the Polyakov-loop expectation value is different from zero, they disappear as the system reconfines and the Polyakov-loop expectation value vanishes. Yet another test of universality consists in changing the type of dynamical matter from fermionic to scalar. As long as a phase with ordered Polyakov loops exists, this should not affect the expectations of the sea/islands picture, and localized modes should appear in the spectrum of the Dirac operator in that phase. In this context, the Dirac operator can be seen simply as a mathematical probe of certain properties of the gauge fields or, more physically, as a probe of how these fields couple to external, static (i.e., infinitely heavy) fermion fields. A model allowing one to carry out both these tests at once is the lattice fixed-length SU(2)-Higgs model [36]. At zero temperature the phase diagram of this model has been studied in depth both with analytical [36] and numerical [37; 38; 39; 40; 41; 42] methods. This model has two parameters, namely the (inverse) gauge coupling \(\beta\) and the Higgs-gauge coupling \(\kappa\), and it displays two lines of transitions in the \((\beta,\kappa)\) plane as follows [42]: * a line of crossovers at \(\beta\approx\beta_{\rm bulk}\), starting from the bulk transition (crossover) of the pure gauge SU(2) theory [43] at \((\beta,\kappa)=(\beta_{\rm bulk},0)\), and ending at some point \((\beta_{e},\kappa_{e})\); * a line of crossovers coming down from large \(\kappa\) at small \(\beta\), meeting the first line at \((\beta_{e},\kappa_{e})\), turning into a line of first-order transitions at \((\beta_{f},\kappa_{f})\), and tending to \(\kappa\approx 0.6\) as \(\beta\to\infty\). These transition lines separate three phases of the system: a confined phase at low \(\beta\) and low \(\kappa\); a deconfined phase at high \(\beta\) and low \(\kappa\); and a Higgs phase at high \(\kappa\).
A similar phase diagram was found at finite temperature, although the transition lines were all identified as crossovers in that case [42]. The absence of a sharp transition between the confined and the Higgs phase at any \(\kappa\) at sufficiently low \(\beta\) was proved in Ref. [36], where it was also shown that in this region all local correlation functions, and so the spectrum of the theory, depend analytically on the couplings. While fermions are absent in the SU(2)-Higgs model, one can still probe this system using static external fermions coupled to the SU(2) gauge field, as pointed out above. One can then study how the corresponding Dirac spectrum behaves, and check what happens to the localization properties of its low modes across the various transitions, in particular as one crosses over to the Higgs phase starting from either the confined or the deconfined phase. Since eigenvalues and eigenvectors of the Dirac operator are nonlocal functions of the gauge fields, they can display non-analytic behavior even in the strip of the \((\beta,\kappa)\) plane where all local correlators are analytic functions of the couplings, and so they could allow one to sharply distinguish the confined and the Higgs phase. (A different approach to this issue, based on the analogies between gauge-Higgs theories and spin glasses, is discussed in the review Ref. [44] and references therein.) In this paper we study the spectrum and the eigenvectors of the staggered lattice Dirac operator in the SU(2)-Higgs model at finite temperature. After briefly describing the model, in section II we introduce the tools we use to investigate the localization properties of staggered eigenmodes. In section III we map out the phase diagram of the model at finite temperature, working at fixed temporal extension in lattice units. In section IV we analyze the staggered eigenmodes, focusing in particular on how their localization properties change across the transitions between the confined, deconfined, and Higgs phases. We then study in detail the correlation between eigenmodes and the gauge and Higgs fields, to identify the field fluctuations mostly responsible for localization. Finally, in section V we draw our conclusions and show some prospects for the future.

## II \(\mathrm{SU}(2)\)-Higgs model and localization

In this section we describe the fixed-length SU(2)-Higgs model, and discuss how to characterize the localization properties of Dirac modes, and how these correlate with the gauge and Higgs fields.

### \(\mathrm{SU}(2)\)-Higgs model on the lattice

We study the lattice SU(2)-Higgs model in 3+1 dimensions, defined by the action \[S=-\frac{\beta}{2}\sum_{n}\sum_{1\leq\mu<\nu\leq 4}\mathrm{tr}\,U_{\mu\nu}(n)-\frac{\kappa}{2}\sum_{n}\sum_{1\leq\mu\leq 4}\mathrm{tr}\,G_{\mu}(n)\,, \tag{1}\] where we omitted an irrelevant additive constant. Here \(n=(\vec{x},t)\), \(n_{\mu}=0,\ldots,N_{\mu}-1\), are the sites of a hypercubic \(N_{s}^{3}\times N_{t}\) lattice, i.e., \(N_{1,2,3}=N_{s}\) and \(N_{4}=N_{t}\), where \(\mu=1,\ldots,4\) denotes the lattice directions and \(\hat{\mu}\) the corresponding unit vectors.
The dynamical variables are the SU(2) matrices \(U_{\mu}(n)\) and \(\phi(n)\), representing respectively the gauge variables associated with the link connecting \(n\) and \(n+\hat{\mu}\), and the unit-length Higgs field doublet (recast as a unitary matrix) associated with site \(n\), and \[\begin{split} U_{\mu\nu}(n)&=U_{\mu}(n)U_{\nu}(n+\hat{\mu})U_{\mu}(n+\hat{\nu})^{\dagger}U_{\nu}(n)^{\dagger}\,,\\ G_{\mu}(n)&=\phi(n)^{\dagger}U_{\mu}(n)\phi(n+\hat{\mu})\,,\end{split} \tag{2}\] are the plaquette variables associated with the elementary lattice squares, and the nontrivial part of the discretized covariant derivative of the Higgs field, which we will refer to as the Higgs-gauge field coupling term. Periodic boundary conditions are imposed on \(U_{\mu}(n)\) and \(\phi(n)\) in all directions. In what follows we will also make use of the Polyakov loop winding around the temporal direction, \[P(\vec{x})=\mathrm{tr}\,\prod_{t=0}^{N_{t}-1}U_{4}(\vec{x},t)\,. \tag{3}\] Expectation values are defined as \[\begin{split}\langle O\rangle&=\frac{1}{Z}\int DU\int D\phi\,e^{-S(U,\phi)}O(U,\phi)\,,\\ Z&=\int DU\int D\phi\,e^{-S(U,\phi)}\,,\end{split} \tag{4}\] where \(DU\) and \(D\phi\) denote the products of the SU(2) Haar measures associated with \(U_{\mu}(n)\) and \(\phi(n)\). We study this model at finite temperature \(T=1/(aN_{t})\), where \(a\) is the lattice spacing, which can be set by suitably tuning the parameters of the model, namely the inverse gauge coupling \(\beta\) and the Higgs-gauge field coupling \(\kappa\). However, since we are not interested here in taking the continuum limit, we treat the model simply as a two-parameter anisotropic statistical mechanics system, keeping \(N_{t}\) fixed as we take the thermodynamic limit \(N_{s}\to\infty\), and as we change \(\beta\) and \(\kappa\) freely. To study the phase diagram in the \((\beta,\kappa)\) plane we use the average plaquette, Polyakov loop, and Higgs-gauge field coupling term, \[\begin{split}\langle U\rangle&=\frac{1}{N_{t}V}\sum_{n}\left\langle U(n)\right\rangle\,,\qquad\langle P\rangle=\frac{1}{V}\sum_{\vec{x}}\left\langle P(\vec{x})\right\rangle\,,\\ \langle G\rangle&=\frac{1}{N_{t}V}\sum_{n}\left\langle G(n)\right\rangle\,,\end{split} \tag{5}\] where \(V=N_{s}^{3}\) is the lattice volume, and the corresponding susceptibilities, \[\begin{split}\chi_{U}&=\frac{1}{N_{t}V}\left(\left\langle\left(\sum_{n}U(n)\right)^{2}\right\rangle-\left\langle\sum_{n}U(n)\right\rangle^{2}\right)\,,\\ \chi_{P}&=\frac{1}{V}\left(\left\langle\left(\sum_{\vec{x}}P(\vec{x})\right)^{2}\right\rangle-\left\langle\sum_{\vec{x}}P(\vec{x})\right\rangle^{2}\right)\,,\\ \chi_{G}&=\frac{1}{N_{t}V}\left(\left\langle\left(\sum_{n}G(n)\right)^{2}\right\rangle-\left\langle\sum_{n}G(n)\right\rangle^{2}\right)\,.\end{split} \tag{6}\]
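For concreteness, the Polyakov-loop observables of Eqs. (3), (5) and (6) are straightforward to evaluate on stored configurations. The following minimal Python sketch illustrates this; the array layout for the temporal links is a hypothetical choice made for illustration, not the format of any particular simulation code:

```python
import numpy as np

def polyakov_field(U4):
    """Polyakov-loop field P(x) = tr prod_t U_4(x,t), Eq. (3).

    U4 is assumed to have shape (Ns, Ns, Ns, Nt, 2, 2): the temporal
    SU(2) links of one configuration (hypothetical layout).
    """
    Ns = U4.shape[0]
    P = np.empty(U4.shape[:3])
    for idx in np.ndindex(Ns, Ns, Ns):
        loop = np.eye(2, dtype=complex)
        for t in range(U4.shape[3]):
            loop = loop @ U4[idx][t]
        P[idx] = loop.trace().real  # the trace of an SU(2) loop is real
    return P

def polyakov_obs(configs_U4):
    """<P> and chi_P as in Eqs. (5)-(6), from a list of configurations."""
    V = configs_U4[0].shape[0] ** 3
    sums = np.array([polyakov_field(U4).sum() for U4 in configs_U4])
    avg = sums.mean() / V
    chi = (np.mean(sums ** 2) - np.mean(sums) ** 2) / V
    return avg, chi
```

The plaquette and gauge-Higgs observables follow the same pattern, with the sums running over all sites and the extra \(1/N_{t}\) normalization of Eqs. (5) and (6).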
In Eqs. (5) and (6) we denoted with \(U(n)\) and \(G(n)\) the average plaquette and gauge-Higgs coupling term touching a lattice site \(n\), \[\begin{split} U(n)&=\frac{1}{24}\sum_{1\leq\mu<\nu\leq 4}\operatorname{tr}\left(U_{\mu\nu}(n)+U_{\mu\nu}(n-\hat{\mu})\right.\\ &\qquad\qquad\qquad\left.+U_{\mu\nu}(n-\hat{\nu})+U_{\mu\nu}(n-\hat{\mu}-\hat{\nu})\right),\\ G(n)&=\frac{1}{8}\sum_{1\leq\mu\leq 4}\operatorname{tr}\left(G_{\mu}(n)+G_{\mu}(n-\hat{\mu})\right).\end{split} \tag{7}\]

### Localization of staggered eigenmodes

We are interested in the spectrum of the staggered Dirac operator in the background of the SU(2) gauge fields for fermions in the fundamental representation, \[D^{\text{stag}}=\frac{1}{2}\sum_{\mu}\eta_{\mu}(U_{\mu}\mathrm{T}_{\mu}-\mathrm{T}_{\mu}^{\dagger}U_{\mu}^{\dagger})\,, \tag{8}\] where \(\eta_{\mu}\) are the usual staggered phases and \(\mathrm{T}_{\mu}\) are the translation operators with periodic (resp. antiperiodic) boundary conditions in space (resp. time), i.e., \[\begin{split}\eta_{\mu}(n)&=(-1)^{\sum_{\alpha<\mu}n_{\alpha}}\,,\\ (\mathrm{T}_{\mu})_{n,n^{\prime}}&=b_{\mu}(n_{\mu})\delta_{n_{\mu}+1,n^{\prime}_{\mu}}\prod_{\alpha\neq\mu}\delta_{n_{\alpha},n^{\prime}_{\alpha}}\,,\end{split} \tag{9}\] with \(n_{\mu}=N_{\mu}\) identified with \(n_{\mu}=0\), and \(b_{\mu}(n_{\mu})=1\), \(\forall\mu,n_{\mu}\), except for \(b_{4}(N_{t}-1)=-1\). Since the staggered operator is anti-Hermitian and anticommutes with \(\varepsilon(n)=(-1)^{\sum_{\alpha}n_{\alpha}}\), its spectrum is purely imaginary and symmetric about the origin. We write \[D^{\text{stag}}\psi_{l}(n)=i\lambda_{l}\psi_{l}(n)\,,\qquad\lambda_{l}\in\mathbb{R}\,, \tag{10}\] with eigenvectors \(\psi_{l}(n)\) carrying an internal "color" index, \(\psi_{l,c}(n)\), \(c=1,2\), that has been suppressed for simplicity, and focus on \(\lambda_{l}\geq 0\) only. Notice that since \(\sigma_{2}U_{\mu}(n)\sigma_{2}=U_{\mu}(n)^{*}\), \(D^{\text{stag}}\) commutes with the antiunitary "time-reversal" operator \(T=\sigma_{2}K\), where \(K\) denotes complex conjugation. Since \(T^{2}=-\mathbf{1}\), \(D^{\text{stag}}\) displays in this case doubly degenerate eigenvalues, and belongs to the symplectic class in the symmetry classification of random matrices [45; 46]. In the following it is understood that we work with the reduced spectrum, including only one eigenvalue from each degenerate pair.

_Participation ratio._ The localization properties of the staggered eigenmodes can be studied directly by looking at the eigenvectors, or indirectly by looking at the corresponding eigenvalues. In the first case one can study the volume scaling of the so-called participation ratio (PR) of the modes, \[\text{PR}_{l}=\frac{1}{N_{t}V}\text{IPR}_{l}^{-1}\,,\qquad\text{IPR}_{l}=\sum_{n}\|\psi_{l}(n)\|^{4}\,, \tag{11}\] where \(\|\psi_{l}(n)\|^{2}=\sum_{c=1}^{2}|\psi_{l,c}(n)|^{2}\), modes are normalized to \(1\), \(\sum_{n}\|\psi_{l}(n)\|^{2}=1\), and IPR is the inverse participation ratio. The quantity \(\text{PR}_{l}\) measures the fraction of lattice volume \(N_{t}V\) occupied by a given mode, and similarly \(N_{t}V\cdot\text{PR}_{l}=\text{IPR}_{l}^{-1}\) gives the "mode size".
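As an illustration, the eigenvector observables of Eqs. (11) and (14) amount to a few lines of Python. The sketch below assumes the eigenvector is stored as a complex array with one row per lattice site and one column per color (an arbitrary but convenient layout):

```python
import numpy as np

def participation_ratio(psi):
    """PR_l and mode size, Eq. (11), for one staggered eigenvector.

    psi: complex array of shape (Ntot, 2), one row per lattice site
    (Ntot = Nt * V), one column per color, normalized to 1.
    """
    dens = np.sum(np.abs(psi) ** 2, axis=1)   # ||psi_l(n)||^2
    dens = dens / dens.sum()                  # enforce normalization
    ipr = np.sum(dens ** 2)                   # IPR_l
    Ntot = len(dens)                          # Ntot = Nt * V
    return 1.0 / (Ntot * ipr), 1.0 / ipr      # PR_l and Nt*V*PR_l

def generalized_ipr(psi, q):
    """Generalized inverse participation ratio (IPR_q)_l, Eq. (14)."""
    dens = np.sum(np.abs(psi) ** 2, axis=1)
    dens = dens / dens.sum()
    return np.sum(dens ** q)
```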
After averaging over an infinitesimally small spectral bin around a point \(\lambda\) in the spectrum and over gauge configurations, as the spatial size \(N_{s}\) grows the resulting average \(\text{PR}(\lambda,N_{s})\) tends to a constant if modes near \(\lambda\) are delocalized on the entire lattice, and goes to zero as the inverse of the lattice volume if they are localized in a finite region. Equivalently, the similarly averaged mode size diverges linearly in the lattice volume for delocalized modes and tends to a constant for localized modes. In this paper we denote the average of any observable \(O_{l}\) associated with mode \(l\), following the procedure described above, as \[O(\lambda,N_{s})=\frac{\langle\sum_{l}\delta(\lambda-\lambda_{l})O_{l}\rangle}{\langle\sum_{l}\delta(\lambda-\lambda_{l})\rangle}\,, \tag{12}\] having made explicit the dependence on the spatial size of the lattice. The volume scaling of \(\text{PR}(\lambda,N_{s})\) defines the fractal dimension of modes in the neighborhood of \(\lambda\), \[\alpha(\lambda)=3+\lim_{N_{s}\to\infty}\frac{\log\text{PR}(\lambda,N_{s})}{\log N_{s}}\,. \tag{13}\] The multifractal properties of eigenmodes can be investigated by looking at the generalized inverse participation ratios, \[(\text{IPR}_{q})_{l}=\sum_{n}\|\psi_{l}(n)\|^{2q}\,, \tag{14}\] with \((\text{IPR}_{2})_{l}=\text{IPR}_{l}\) [47]. Their average according to Eq. (12) scales with the system size as \(\text{IPR}_{q}(\lambda,N_{s})\propto N_{s}^{-D_{q}(\lambda)(q-1)}\), with generalized fractal dimensions \(D_{q}\) (notice \(D_{2}=\alpha\)). One has \(D_{q}=3\) for delocalized modes and \(D_{q}=0\) for localized modes, while a nontrivial \(D_{q}\) signals eigenmode multifractality [48].

_Spectral statistics._ The localization properties of the eigenmodes reflect on the statistical properties of the eigenvalues [49]: for localized modes one expects independent fluctuations of the eigenvalues, while for delocalized modes one expects to find the correlations typical of dense random matrix models. It is convenient in this context to study the probability distribution of the so-called unfolded level spacings [45; 46], \[s_{l}=\frac{\lambda_{l+1}-\lambda_{l}}{\langle\lambda_{l+1}-\lambda_{l}\rangle_{\lambda}}\,, \tag{15}\] computed locally in the spectrum, i.e., \[p(s;\lambda,N_{s})=\frac{\langle\sum_{l}\delta(\lambda-\lambda_{l})\delta(s-s_{l})\rangle}{\langle\sum_{l}\delta(\lambda-\lambda_{l})\rangle}\,. \tag{16}\] In Eq. (15), \(\langle\lambda_{l+1}-\lambda_{l}\rangle_{\lambda}\) denotes the average spacing in the relevant spectral region, which for large volumes equals \(\langle\lambda_{l+1}-\lambda_{l}\rangle_{\lambda}\to\frac{1}{N_{t}V\rho(\lambda)}\), where \(\rho(\lambda)\) is the spectral density, \[\rho(\lambda)=\lim_{V\to\infty}\frac{1}{N_{t}V}\left\langle\sum_{l}\delta(\lambda-\lambda_{l})\right\rangle\,. \tag{17}\] The statistical properties of the unfolded spacings are expected to be universal [45], i.e., independent of the details of the model, and can be compared to the theoretical predictions obtained from exactly solvable models. As the system size increases, for localized modes \(p(s;\lambda,N_{s})\) should approach the exponential distribution, \(p_{\text{P}}(s)=e^{-s}\), appropriate for independent eigenvalues obeying Poisson statistics [45].
For delocalized modes \(p(s;\lambda,N_{s})\) should instead approach the distribution \(p_{\text{RMT}}(s)\) predicted by the appropriate Gaussian Ensemble of Random Matrix Theory, which is the Gaussian Symplectic Ensemble in the case at hand [45; 46]. This quantity is known exactly, but is not available in closed form. An accurate approximation is provided by the symplectic Wigner surmise, \[p_{\text{WS}}(s)=\left(\frac{64}{9\pi}\right)^{3}s^{4}e^{-\frac{64}{9\pi}s^{2}}\,. \tag{18}\]

_Mobility edge._ Localized and delocalized modes are generally found in disjoint spectral regions separated by critical points known as _mobility edges_, where the localization length diverges and the system undergoes a phase transition along the spectrum, known as Anderson transition [48]. At the mobility edge the critical eigenmodes display a fractal dimension different from those of localized or delocalized modes, as well as a rich multifractal structure. This is reflected in critical spectral statistics different from both Poisson and RMT statistics. To monitor how the localization properties change along the spectrum using its statistical properties, it is convenient to use the integrated unfolded level spacing distribution, \[I_{s_{0}}(\lambda,N_{s})=\int_{0}^{s_{0}}ds\,p(s;\lambda,N_{s})\,, \tag{19}\] where \(s_{0}\simeq 0.563\) is chosen so as to maximize the difference between the expectations for Poisson and RMT distributions, \(I_{s_{0},\text{P}}\simeq 0.431\) and \(I_{s_{0},\text{RMT}}\simeq 0.0797\), estimated using \(p_{\text{P}}\) and \(p_{\text{WS}}\), see Eq. (18) above. This quantity allows one to determine the mobility edge very accurately by means of a finite-size-scaling analysis [50]. In fact, as the system size increases \(I_{s_{0}}(\lambda,N_{s})\) tends to \(I_{s_{0},\text{P}}\) or \(I_{s_{0},\text{RMT}}\) depending on the localization properties of the modes in the given spectral region, except at the mobility edge where it is volume-independent and takes the value \(I_{s_{0},c}\) corresponding to the critical statistics. This, however, requires large-scale simulations to achieve a sufficient quality of the data, and several large volumes. One can give up some of the accuracy but save a lot in computing effort by using the critical value of the spectral statistic, expected to be universal, to determine the mobility edge simply by looking for the point where the curve for \(I_{s_{0}}\) crosses its critical value, \(I_{s_{0},c}\) (see, e.g., Refs. [23; 25; 28; 29]). This critical value is not known for the symplectic class, but it can be determined by identifying the scale-invariant point in the spectrum at some point in the parameter space of the model under study (if one can find an Anderson transition, of course); the corresponding critical value can then be used in the rest of the analysis. Notice that one could estimate the mobility edge in a finite volume as the point where \(I_{s_{0}}\) takes any chosen value intermediate between the RMT and the Poisson predictions, and this would converge to the correct value in the infinite-volume limit. In this respect, the choice of \(I_{s_{0},c}\) is only the most convenient, as it is expected to minimize the magnitude of finite-size effects.
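A minimal sketch of how Eqs. (15), (18) and (19) translate into code, assuming the (non-degenerate) positive eigenvalues of each configuration are available as sorted arrays; the convention of assigning a spacing to the bin containing its smaller eigenvalue is the one adopted later in section IV:

```python
import numpy as np

S0 = 0.563  # value of s_0 quoted in the text

def unfolded_spacings(evals_per_config, lam_lo, lam_hi):
    """Unfolded spacings, Eq. (15), restricted to one spectral bin.

    evals_per_config: list of 1d sorted arrays of positive,
    non-degenerate eigenvalues, one per configuration.
    """
    raw = []
    for lam in evals_per_config:
        d = np.diff(lam)
        keep = (lam[:-1] >= lam_lo) & (lam[:-1] < lam_hi)
        raw.append(d[keep])
    raw = np.concatenate(raw)
    return raw / raw.mean()       # divide by the local mean spacing

def I_s0(spacings, s0=S0):
    """Integrated unfolded spacing distribution, Eq. (19)."""
    return np.mean(spacings < s0)

# reference values: Poisson, and the symplectic Wigner surmise, Eq. (18)
I_P = 1.0 - np.exp(-S0)                      # ~ 0.431

def p_WS(s):
    c = 64.0 / (9.0 * np.pi)
    return c ** 3 * s ** 4 * np.exp(-c * s ** 2)
```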
_Correlation with bosonic observables._ To investigate the correlation between staggered eigenmodes and gauge and Higgs fields we considered the following observables, \[\begin{split}\mathcal{U}_{l}&=\sum_{n}U(n)\|\psi_{l}(n)\|^{2}\,,\qquad\mathcal{P}_{l}=\sum_{t,\vec{x}}P(\vec{x})\|\psi_{l}(\vec{x},t)\|^{2}\,,\\ \mathcal{G}_{l}&=\sum_{n}G(n)\|\psi_{l}(n)\|^{2}\,,\end{split} \tag{20}\] averaged according to Eq. (12). Recall that \(U(n)\) and \(G(n)\) are the average plaquette and gauge-Higgs coupling term touching a lattice site \(n\), defined in Eq. (7). For delocalized modes \(\|\psi_{l}\|^{2}\sim\frac{1}{V}\), and the averages \(\mathcal{U}(\lambda,N_{s})\), \(\mathcal{P}(\lambda,N_{s})\), and \(\mathcal{G}(\lambda,N_{s})\) of the observables in Eq. (20) are approximately equal to the average of the corresponding bosonic observable, i.e., \(\langle U\rangle\), \(\langle P\rangle\), and \(\langle G\rangle\), respectively [see Eq. (5)]. For localized modes \(\|\psi_{l}\|^{2}\) is non-negligible only inside a region of finite spatial volume, so \(\mathcal{P}(\lambda,N_{s})\) measures the average Polyakov loop inside the localization region, and \(\mathcal{U}(\lambda,N_{s})\) and \(\mathcal{G}(\lambda,N_{s})\) measure respectively the average plaquette and gauge-Higgs coupling term in a neighborhood of the localization region. One should, however, keep in mind that there are 24 neighboring squares and 8 neighboring links to each site, so that a possible correlation of modes with the plaquette and gauge-Higgs coupling term fluctuations gets diluted. More informative than the averages of the observables in Eq. (20) are the corresponding centered and rescaled averages, \[\begin{split}\widehat{\mathcal{U}}(\lambda,N_{s})&=\frac{\mathcal{U}(\lambda,N_{s})-\langle U\rangle}{\delta U}\,,\\ (\delta U)^{2}&=\langle U(n)^{2}\rangle-\langle U(n)\rangle^{2}\,,\\ \widehat{\mathcal{P}}(\lambda,N_{s})&=\frac{\mathcal{P}(\lambda,N_{s})-\langle P\rangle}{\delta P}\,,\\ (\delta P)^{2}&=\langle P(\vec{x})^{2}\rangle-\langle P(\vec{x})\rangle^{2}\,,\\ \widehat{\mathcal{G}}(\lambda,N_{s})&=\frac{\mathcal{G}(\lambda,N_{s})-\langle G\rangle}{\delta G}\,,\\ (\delta G)^{2}&=\langle G(n)^{2}\rangle-\langle G(n)\rangle^{2}\,.\end{split} \tag{21}\] These quantities measure the correlation of the eigenmodes with fluctuations in the gauge and Higgs fields, normalized by the average size of these fluctuations. Indeed, writing these quantities out explicitly, one has, e.g., \[\widehat{\mathcal{U}}(\lambda,N_{s})=\left\langle\sum_{n}\frac{\sum_{l}\delta(\lambda-\lambda_{l})\|\psi_{l}(n)\|^{2}}{N_{t}V\rho(\lambda)}\,\frac{U(n)-\langle U\rangle}{\delta U}\right\rangle\,. \tag{22}\] As a consequence, the observables in Eq. (21) vanish in the absence of correlation, and are strongly suppressed for delocalized modes. The normalization factor takes into account that for observables with a strongly peaked probability distribution even a correlation with small deviations from average is significant, indicating that eigenmodes are attracted by the corresponding type of fluctuations, and favor the locations where they show up in a field configuration. In particular, for localized modes this allows one to identify the most favorable type of fluctuations for localization.
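Schematically, the observables in Eqs. (20) and (21) can be computed as below; this is a minimal Python sketch assuming the local bosonic field [e.g., the local plaquette \(U(n)\)] and the eigenmode are stored with the same site ordering:

```python
import numpy as np

def mode_average(field, psi):
    """O_l = sum_n O(n) ||psi_l(n)||^2, Eq. (20).

    field: real array with one entry per lattice site (for the
    Polyakov loop, P(x) should first be replicated along the time
    direction); psi: eigenvector of shape (Ntot, 2), same ordering.
    """
    dens = np.sum(np.abs(psi) ** 2, axis=1)
    return np.dot(field.ravel(), dens / dens.sum())

def hat_observable(bin_values, field_mean, field_std):
    """Centered and rescaled bin average, Eq. (21).

    bin_values: values of Eq. (20) for all modes in a spectral bin,
    over all configurations; field_mean and field_std are the
    ensemble average <O> and the site-level width delta O.
    """
    return (np.mean(bin_values) - field_mean) / field_std
```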
_Sea/islands picture._ We also study the correlation between eigenmodes and the "islands" of the refined "sea/islands" picture of localization discussed in Ref. [16]. These are defined using the "Dirac-Anderson Hamiltonian" representation of the staggered Dirac operator [14], obtained by diagonalizing the temporal hopping term in \(D^{\rm stag}\) [i.e., the term with \(\mu=4\) in the sum in Eq. (8)] by means of a unitary transformation \(\Omega\) [16], \[H^{\rm DA}\equiv\Omega^{\dagger}(-iD^{\rm stag})\Omega={\cal E}{\bf 1}_{s}+\frac{1}{2i}\sum_{j=1}^{3}\eta_{j}({\cal V}_{j}{\rm T}_{j}-{\rm T}_{j}^{\dagger}{\cal V}_{j}^{\dagger})\,, \tag{23}\] where \({\bf 1}_{s}\) is the \(V\times V\) identity matrix \(({\bf 1}_{s})_{\vec{x},\vec{y}}=\delta_{\vec{x},\vec{y}}\), \({\rm T}_{j}\) are here the spatial translation operators \(({\rm T}_{j})_{\vec{x},\vec{y}}=\delta_{\vec{x}+\hat{\jmath},\vec{y}}\) (with periodic boundary conditions understood), \({\cal E}\) is an \(\vec{x}\)-dependent \(2N_{t}\times 2N_{t}\) diagonal matrix, \[{\cal E}(\vec{x})_{ka\,lb}=\delta_{kl}\delta_{ab}e_{ka}(\vec{x})\,,\qquad e_{ka}(\vec{x})=\eta_{4}(\vec{x})\sin\omega_{ka}(\vec{x})\,, \tag{24}\] and \({\cal V}_{j}\) are \(\vec{x}\)-dependent \(2N_{t}\times 2N_{t}\) unitary matrices, \[{\cal V}_{j}(\vec{x})_{ka\,lb}=\frac{1}{N_{t}}\sum_{t=0}^{N_{t}-1}e^{-i(\omega_{ka}(\vec{x})-\omega_{lb}(\vec{x}+\hat{\jmath}))t}\,U_{j}^{\rm tdg}(\vec{x},t)_{ab}\,, \tag{25}\] with \(k,l=0,\ldots,N_{t}-1\) and \(a,b=1,2\). Here "tdg" stands for "temporal diagonal gauge", i.e., \(U_{j}^{\rm tdg}\) are the spatial links in the temporal gauge where all Polyakov loops are diagonal [51], \[\begin{split} U_{j}^{\rm tdg}(\vec{x},t)&=u(\vec{x})^{\dagger}P(\vec{x},t)U_{j}(\vec{x},t)P(\vec{x}+\hat{\jmath},t)^{\dagger}u(\vec{x}+\hat{\jmath})\,,\\ P(\vec{x},t+1)&=P(\vec{x},t)U_{4}(\vec{x},t)\,,\end{split} \tag{26}\] with \(P(\vec{x},0)={\bf 1}\), and \(u(\vec{x})\) a suitable unitary matrix such that [notice \(P(\vec{x})=P(\vec{x},N_{t})\)] \[P(\vec{x})=u(\vec{x}){\rm diag}(e^{i\phi_{1}(\vec{x})},e^{i\phi_{2}(\vec{x})})u(\vec{x})^{\dagger}\,, \tag{27}\] with \(\phi_{1,2}(\vec{x})\in[-\pi,\pi)\) and \(e^{i(\phi_{1}(\vec{x})+\phi_{2}(\vec{x}))}=1\). Moreover, \(\omega_{ka}(\vec{x})\) are effective Matsubara frequencies, \[\omega_{ka}(\vec{x})=\frac{\phi_{a}(\vec{x})+(2n_{ka}+1)\pi}{N_{t}}\,, \tag{28}\] with \(n_{ka}\in\{0,\ldots,N_{t}-1\}\) chosen for each \(a\) so that the "energies" \(e_{ka}\) satisfy \(0\leq e_{0a}(\vec{x})\leq e_{1a}(\vec{x})\leq\ldots\leq e_{\frac{N_{t}}{2}-1\,a}(\vec{x})\), and \(e_{k+\frac{N_{t}}{2}\,a}(\vec{x})=-e_{ka}(\vec{x})\), for \(k=0,\ldots,\frac{N_{t}}{2}-1\). Notice that thanks to the simple relation between \(\phi_{1}\) and \(\phi_{2}\), one has \(e_{k1}=e_{k2}\). This double degeneracy is a consequence of the temporal hopping term being invariant under the time-reversal transformation \(T\) (see section II.2). With this choice for \(e_{ka}\), \(H^{\rm DA}\) has the general structure \[H^{\rm DA}=\begin{pmatrix}E&{\bf 0}\\ {\bf 0}&-E\end{pmatrix}+\frac{1}{2i}\sum_{j=1}^{3}\eta_{j}\left[\begin{pmatrix}A_{j}&B_{j}\\ B_{j}&A_{j}\end{pmatrix}{\rm T}_{j}-{\rm T}_{j}^{\dagger}\begin{pmatrix}A_{j}^{\dagger}&B_{j}^{\dagger}\\ B_{j}^{\dagger}&A_{j}^{\dagger}\end{pmatrix}\right]\,, \tag{29}\] where \(E,A_{j},B_{j}\) are \(N_{t}\times N_{t}\) matrices. It was argued in Ref. [16] that sites where the diagonal blocks \(A_{j}\) are larger are the most favorable for the localization of low modes in a phase where the Polyakov loops are ordered.
In general, spatial regions with larger \(A_{j}\), which correspond to lower correlation among spatial links on different time slices, are expected to be favored by low modes; in an ordered phase such regions are localized, and so lead to low-mode localization. One can check this by looking at the correlation between modes and the quantity \[A(\vec{x})=\frac{1}{6N_{t}}\sum_{j=1}^{3}\left[\operatorname{tr}\left(A_{j}(\vec{x})^{\dagger}A_{j}(\vec{x})\right)+\operatorname{tr}\left(A_{j}(\vec{x}-\hat{\jmath})^{\dagger}A_{j}(\vec{x}-\hat{\jmath})\right)\right]\,, \tag{30}\] i.e., using the observable \[\mathcal{A}_{l}=\sum_{\vec{x}}A(\vec{x})\sum_{t=0}^{N_{t}-1}\|\psi_{l}(\vec{x},t)\|^{2}\,, \tag{31}\] averaged according to Eq. (12) to get \(\mathcal{A}(\lambda,N_{s})\), and centered and rescaled as in Eq. (21) to get \(\widehat{\mathcal{A}}(\lambda,N_{s})\), i.e., \[\begin{split}\widehat{\mathcal{A}}(\lambda,N_{s})&=\frac{\mathcal{A}(\lambda,N_{s})-\langle A\rangle}{\delta A}\,,\\ (\delta A)^{2}&=\langle A(\vec{x})^{2}\rangle-\langle A(\vec{x})\rangle^{2}\,.\end{split} \tag{32}\]

## III Phase diagram at finite temperature

In this section we report our results on the phase diagram of the model. We worked at finite temperature, fixing the lattice temporal extension to \(N_{t}=4\), and performing numerical simulations with a standard heatbath algorithm. Theoretical arguments [36] and previous numerical studies [42] lead us to expect three phases: a confined phase at small \(\beta\) and small \(\kappa\); a deconfined phase at large \(\beta\) and small \(\kappa\); and a Higgs phase at large \(\kappa\). Based on the finite-temperature results of Ref. [42], and on the observed weakening of the transition for smaller temporal extensions reported there, we expect that the transitions between the three phases are analytic crossovers. A detailed study of this issue is beyond the scope of this paper, so we limited most of our simulations to a single lattice volume with \(N_{s}=20\), for 784 different \((\beta,\kappa)\) pairs, using 3000 configurations at each point. We took \(\kappa\in[0,1.35]\) in steps of \(\Delta\kappa=0.05\) and \(\beta\in[1.5,2.85]\) in steps of \(\Delta\beta=0.05\). A detailed volume-scaling study was done on a subset of these points: we discuss this below. We show our results for \(\langle G\rangle\), \(\langle U\rangle\), and \(\langle P\rangle\) in Fig. 1 as heatmap plots, obtained by cubic interpolation of the numerical results at the simulation points. These confirm our expectations, and allow us to characterize the confined phase at small \(\beta\) and \(\kappa\) by small \(\langle G\rangle\), \(\langle U\rangle\), and \(\langle P\rangle\); the deconfined phase at large \(\beta\) and small \(\kappa\) by small \(\langle G\rangle\) and large \(\langle U\rangle\) and \(\langle P\rangle\); and the Higgs phase at large \(\kappa\) by large \(\langle G\rangle\), \(\langle U\rangle\), and \(\langle P\rangle\). We estimated errors with a standard jackknife procedure: they are not shown, but relative errors are always within \(7\cdot 10^{-5}\) for \(\langle U\rangle\); \(2\cdot 10^{-3}\) for \(\langle G\rangle\); and within \(1\cdot 10^{-3}\) for \(\langle P\rangle\), except deep inside the confined phase where the average becomes very small and indistinguishable from zero within errors. More precisely, the expectation value of the gauge-Higgs coupling term (Fig.
1, top panel) divides the phase diagram into two pieces: the Higgs phase at large \(\kappa\), with large \(\langle G\rangle\), and the (undivided) confined and deconfined phases at small \(\kappa\), with similar and small values of \(\langle G\rangle\). The expectation value of the plaquette and of the Polyakov loop (Fig. 1, center and bottom panel) divide the phase diagram into two parts in a different way: the confined phase at low \(\beta\) and \(\kappa\), where both \(\langle U\rangle\) and \(\langle P\rangle\) are small, and the (undivided) Higgs and deconfined phases, where both \(\langle U\rangle\) and \(\langle P\rangle\) are large.

Figure 1: Heatmap plot of the expectation value of \(G\) (top panel), \(U\) (center panel), and \(P\) (bottom panel), see Eq. (5). Here \(N_{s}=20\) and \(N_{t}=4\).

We show our results for the corresponding susceptibilities as heatmap plots in Fig. 2. Also in this case we estimated errors (not shown in the figure) with a standard jackknife procedure, finding them to be always within 3%. In the top panel we show our results for \(\chi_{G}\). This quantity has a narrow ridge, visualized here as a bright line, providing a clear separation between the Higgs phase and the rest in most of the explored parameter space; a weakening of the transition is visible in the top left part of the phase diagram. In the center panel we show the plaquette susceptibility \(\chi_{U}\). This separates clearly the confined phase from the Higgs phase, while the ridge broadens at the transition between confined and deconfined phase (as well as in the top left part of the phase diagram). In the bottom panel we show the logarithm of the Polyakov-loop susceptibility. This plot shows a bright line of strong transitions separating the confined and deconfined phases. This line continues in the top left part of the plot, still clearly separating the confined and Higgs phases, but it is much dimmer there as the signal is two orders of magnitude weaker than at the transition from the confined to the deconfined phase (see Figs. 4, top and 5, top). At the transition between the deconfined and Higgs phase \(\chi_{P}\) shows an inflection point instead of a peak (see Fig. 6), with a sizeable decrease in susceptibility corresponding here to a noticeable darkening of the plot. A sketch of the resulting phase diagram is shown in Fig. 3, obtained by merging the various transition lines, defined by the peaks of the susceptibilities. The dashed line at low \(\beta\) and large \(\kappa\) signals a sizeable reduction in the strength of the transition there, as shown by all three observables. Except in this region, where they slightly deviate from each other, the transition lines between confined and Higgs phase obtained from the three different susceptibilities agree with each other, so we drew a single line. Similarly, the transition lines between confined and deconfined phase obtained from the plaquette and the Polyakov loop susceptibility agree with each other, so we drew a single line in this case as well. To verify the expected crossover nature of the transitions, we studied the volume dependence of the various susceptibilities in detail on three lines, one at constant \(\kappa=0.5\) and two at constant \(\beta=2.0\) and \(\beta=2.6\), using lattices with \(N_{s}=22,28,34,40\).
Figure 2: Heatmap plot of the susceptibility \(\chi_{G}\) of the gauge-Higgs coupling term \(G\) (top panel), the plaquette susceptibility \(\chi_{U}\) (center panel), and the logarithm of the Polyakov-loop susceptibility \(\chi_{P}\) (bottom panel), see Eqs. (5) and (6). Here \(N_{s}=20\) and \(N_{t}=4\). In the top panel, the black point shows where the mobility edge \(\lambda_{c}=\lambda_{c}(\kappa)\) has an inflection point along the line at constant \(\beta=2.6\), see Fig. 17. In the center panel, it shows where the mobility edge \(\lambda_{c}=\lambda_{c}(\beta)\) vanishes along the line at constant \(\kappa=1.0\), see Fig. 16. In the bottom panel, it shows where the mobility edge \(\lambda_{c}=\lambda_{c}(\beta)\) vanishes along the line at constant \(\kappa=0.3\), see Fig. 15.

Figure 3: Schematic drawing of the phase diagram, obtained combining the maxima of the susceptibilities shown in Fig. 2. A dashed line is used to indicate the weakening of the transition.

For each simulation point and each lattice volume we used 4500 configurations. We estimated errors by first averaging over configurations in blocks of size \(b_{\rm size}\) and computing the standard jackknife error on the blocked ensemble, and then increasing \(b_{\rm size}\) until the error stabilized. For our final estimates we used blocks of size \(b_{\rm size}=20\), except at \(\kappa=0.5\) where we used \(b_{\rm size}=50\), although this was really needed only around \(\beta=2.3\). We show our results in Figs. 4-6. In Fig. 4 we show \(\chi_{P}\), \(\chi_{U}\) and \(\chi_{G}\) along a line of constant \(\kappa=0.5\) across the transition from the confined to the deconfined phase. The signal is very strong in \(\chi_{P}\), and a small peak is visible also in \(\chi_{U}\). The location of these peaks is not far from the critical point \(\beta_{c}\approx 2.3\) of the pure gauge theory at \(\kappa=0\) [52, 53, 54, 55]. The relatively large error bars found for \(N_{s}=22\) between \(\beta=2.28\) and \(\beta=2.31\), especially at \(\beta=2.3\), are most likely a finite-size effect due to the vicinity of the critical point of the pure gauge theory, and are not observed on larger volumes. On the other hand, no peak is visible in \(\chi_{G}\), which is constant within errors across the transition. This makes the gauge-Higgs coupling term \(G\) unsuitable to detect this transition. In Fig. 5 we show \(\chi_{P}\), \(\chi_{U}\) and \(\chi_{G}\) along a line of constant \(\beta=2.0\) across the transition from the confined to the Higgs phase. A clear peak is visible in all three observables, with \(\chi_{P}\) two orders of magnitude smaller than in Fig. 4 (top), and \(\chi_{U}\) a factor of 2 larger than in Fig. 4 (center). Finally, in Fig. 6 we show \(\chi_{P}\), \(\chi_{U}\) and \(\chi_{G}\) along a line of constant \(\beta=2.6\) across the transition from the deconfined to the Higgs phase. We observe a peak in \(\chi_{G}\), of similar magnitude to the one in Fig. 5 (bottom) for the transition from the confined to the Higgs phase. Neither \(\chi_{U}\) nor \(\chi_{P}\) show any significant peak: \(\chi_{U}\) changes slope at the transition, while \(\chi_{P}\) shows an inflection point. This makes \(U\) and \(P\) not quite suitable observables to detect this transition.

Figure 4: Polyakov-loop (top), plaquette (center), and gauge-Higgs coupling term (bottom) susceptibility across the transition between the confined and the deconfined phase at \(\kappa=0.5\). Here \(N_{t}=4\). The volume scaling is consistent with an analytic crossover.

Figure 5: Polyakov-loop (top), plaquette (center), and gauge-Higgs coupling term (bottom) susceptibility near the transition between the confined and the Higgs phase at \(\beta=2.0\). Here \(N_{t}=4\). The volume scaling is consistent with an analytic crossover.

Figure 6: Polyakov-loop (top), plaquette (center), and gauge-Higgs coupling term (bottom) susceptibility near the transition between the deconfined and the Higgs phase at \(\beta=2.6\). Here \(N_{t}=4\). The volume scaling is consistent with an analytic crossover.

While these results do not logically exclude the possibility of genuine phase transitions at some points in the phase diagram, combined with the results of Ref. [42] they make it implausible.
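The blocked jackknife procedure used here can be sketched in a few lines of Python; this is a minimal illustration of the method as described above, not the actual analysis code:

```python
import numpy as np

def blocked_jackknife(samples, b_size, estimator=np.mean):
    """Jackknife error of `estimator` on block-averaged data.

    samples: 1d array of per-configuration measurements; consecutive
    configurations are first averaged in blocks of b_size (dropping
    any remainder), then one block at a time is left out. Increasing
    b_size until the error plateaus tames autocorrelations.
    """
    nb = len(samples) // b_size
    blocks = samples[: nb * b_size].reshape(nb, b_size).mean(axis=1)
    jk = np.array([estimator(np.delete(blocks, i)) for i in range(nb)])
    err = np.sqrt((nb - 1) * np.mean((jk - jk.mean()) ** 2))
    return estimator(blocks), err
```

For the susceptibilities the estimator passed in would be the (suitably normalized) variance of the summed observable rather than the mean.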
## IV Localization properties of Dirac eigenmodes

In this section we discuss the localization properties of the eigenmodes of the staggered operator and how these correlate with the gauge and Higgs fields, and we present a detailed test of the sea/islands mechanism. We obtained the lowest modes of \(D^{\rm stag}\) using the PRIMME package [56, 57] for sparse matrices, exploiting Chebyshev acceleration for faster convergence. The use of algorithms for sparse matrices allows us to reduce the scaling of computational time from \(N_{s}^{9}\), expected for full diagonalization, down to \(N_{s}^{6}\). We first analyzed the eigenmodes in detail at three points of the phase diagram, using several lattice volumes to study the scaling of eigenvector and eigenvalue observables with the system size. These points are \(\beta=1.9,\kappa=1.0\), in the confined phase, right below the transition to the Higgs phase at constant \(\kappa\) (\(\beta/\beta_{c}\approx 0.97\), with \(\beta_{c}\approx 1.95\) corresponding to the peak in the Polyakov-loop susceptibility); \(\beta=2.1,\kappa=1.0\), in the Higgs phase, not far above the transition between the two phases (\(\beta/\beta_{c}\approx 1.08\)); and \(\beta=2.6,\kappa=0.3\), deep in the deconfined phase. We looked at two lattice volumes in the confined phase, and at four lattice volumes in the deconfined and Higgs phases; see Tab. 1 for details about system size, configuration statistics, and number of eigenmodes. We then computed the relevant observables locally in the spectrum, approximating Eq. (12) by averaging over spectral bins of size \(\Delta\lambda=0.0025\) at \(\beta=1.9\), \(\kappa=1.0\) (confined phase), \(\Delta\lambda=0.01\) at \(\beta=2.6\), \(\kappa=0.3\) (deconfined phase), and \(\Delta\lambda=0.0075\) at \(\beta=2.1\), \(\kappa=1.0\) (Higgs phase). Our results, reported in sections IV.1 and IV.2, demonstrate low-mode localization in the deconfined and in the Higgs phase. This detailed study also allowed us to estimate the critical value of \(I_{s_{0}}\), which we could then use to efficiently determine the dependence of the mobility edge, \(\lambda_{c}\), on the parameters \(\beta\) and \(\kappa\). We did this on two lines at constant \(\kappa\): one in the deconfined phase with \(\kappa=0.3\), changing \(\beta\) in the interval \([2.35,2.60]\) with increments \(\Delta\beta=0.05\); and one in the Higgs phase with \(\kappa=1.0\), changing \(\beta\) in \([2.1,2.4]\) with increments \(\Delta\beta=0.05\). We also studied one line at constant \(\beta=2.6\), changing \(\kappa\) in \([0.35,1.0]\) in increments of \(\Delta\kappa=0.05\). Here we used a single volume (\(N_{s}=20,N_{t}=4\)) and \(3000\) configurations at each point (except for the three points already discussed above). In all these calculations we computed \(I_{s_{0}}\) locally in the spectrum averaging over bins of size \(\Delta\lambda=0.008\). Our results, reported in section IV.3, show that along both lines at constant \(\kappa\) the mobility edge disappears at a critical \(\beta\) near the crossover to the confined phase; and that along the line at constant \(\beta\) the mobility edge is always nonzero, but it changes behavior at the crossover between the deconfined and the Higgs phase. We then studied the correlation between localized modes and the fluctuations of the gauge and Higgs fields, and tested the refined sea/islands picture of Ref. [16]. Our results, reported in section IV.4, show a strong correlation with Polyakov-loop and plaquette fluctuations, and an even stronger correlation with the fluctuations identified in Ref. [16] as the most relevant to localization.
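While we use PRIMME in practice, the structure of the computation is easy to convey: build \(D^{\rm stag}\), Eqs. (8) and (9), as a sparse matrix and ask an iterative eigensolver for the eigenvalues closest to zero. The Python sketch below uses SciPy's eigsh in place of PRIMME and a hypothetical array layout for the links; it is meant only to illustrate the construction:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def staggered_operator(U, Ns, Nt):
    """Sparse staggered operator, Eqs. (8)-(9), for SU(2) links.

    U is assumed to have shape (4, Ns, Ns, Ns, Nt, 2, 2), with
    U[mu, x, y, z, t] the link in direction mu (hypothetical layout).
    Periodic b.c. in space, antiperiodic in time.
    """
    dims = (Ns, Ns, Ns, Nt)
    site = {n: i for i, n in enumerate(np.ndindex(*dims))}
    rows, cols, vals = [], [], []
    for n, i in site.items():
        for mu in range(4):
            eta = (-1) ** sum(n[:mu])                  # staggered phase, Eq. (9)
            m = list(n)
            m[mu] = (m[mu] + 1) % dims[mu]             # n + mu_hat
            sign = -1.0 if (mu == 3 and n[3] == Nt - 1) else 1.0
            hop = 0.5 * eta * sign * U[(mu,) + n]      # 2x2 color block
            j = site[tuple(m)]
            for a in range(2):
                for b in range(2):
                    rows.append(2 * i + a); cols.append(2 * j + b)
                    vals.append(hop[a, b])
                    # anti-Hermitian counterpart: D[j,i] = -D[i,j]^dagger
                    rows.append(2 * j + b); cols.append(2 * i + a)
                    vals.append(-np.conj(hop[a, b]))
    N = 2 * Ns ** 3 * Nt
    return sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

def low_modes(D, k=64):
    """Lowest-|lambda| eigenpairs of -iD (Hermitian), via shift-invert.

    sigma=0 assumes no exact zero mode; eigenvalues come in degenerate
    pairs (time-reversal), of which only one per pair is kept in the
    reduced spectrum discussed in the text.
    """
    return spla.eigsh((-1j) * D, k=k, sigma=0.0, which='LM')
```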
### Eigenvector observables

In the top panel of Fig. 7 we show the PR of the modes in the confined phase. The PR is slightly larger for \(N_{s}=16\) than for \(N_{s}=20\), signaling that the fractal dimension is smaller than 3. This is shown explicitly in the bottom panel, where we plot \(\alpha(\lambda)\), see Eq. (13). This is estimated numerically from a pair of volumes as \[\alpha_{\text{num}}(\lambda;N_{s1},N_{s2})=3+\frac{\log\frac{\text{PR}(\lambda,N_{s1})}{\text{PR}(\lambda,N_{s2})}}{\log\frac{N_{s1}}{N_{s2}}}\,. \tag{33}\] The fractal dimension of near-zero modes is slightly below 3, and approaches 3 as one moves up in the spectrum. Taken at face value, this means that these modes are only slightly short of being fully delocalized. Clearly, this effect could be just a finite-size artifact due to the small volumes employed here. However, it could also signal that a "geometric" transition is approaching, where a mobility edge and, correspondingly, critical modes appear at the origin. In the top panels of Figs. 8 and 9 we show the size \(N_{t}V\cdot\text{PR}=\text{IPR}^{-1}\) of the modes in the deconfined and in the Higgs phase, respectively. In both cases the size of the lowest modes does not change with the volume, showing that they are localized. Higher up towards the bulk of the spectrum the mode size shows a strong volume dependence. Above a certain point in the spectrum this is compatible with a linear scaling in the volume, indicating that these modes are delocalized. The point where this starts to happen is consistent with the mobility edge, determined below in section IV.2 using spectral statistics, and marked in these plots by a solid vertical line (with an error band shown with dashed lines). The localization properties of low and bulk modes in the deconfined and in the Higgs phase are made quantitative in the bottom panels of Figs. 8 and 9, where we show their fractal dimension. For low modes this is zero within errors. Near the mobility edge our estimates for \(\alpha\) increase towards 3, which they almost reach at the upper end of the available spectral range. The rise should become steeper when using pairs of larger volumes, leading to a jump from 0 to 3 at the mobility edge in the infinite-volume limit. Such a tendency is visible in the Higgs phase. Our results are also consistent with modes at the mobility edge displaying critical localization properties, with a fractal dimension between 1 and 2.
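The two-volume estimator of Eq. (33) used in Figs. 7-9 is straightforward to evaluate; a minimal sketch, assuming the bin-averaged \(\text{PR}(\lambda,N_{s})\) for the two lattices are aligned on the same spectral bins:

```python
import numpy as np

def alpha_num(pr1, pr2, Ns1, Ns2):
    """Two-volume estimate of the fractal dimension, Eq. (33).

    pr1, pr2: arrays of bin-averaged PR(lambda, Ns) on lattices of
    spatial size Ns1 and Ns2, one entry per spectral bin.
    """
    pr1, pr2 = np.asarray(pr1), np.asarray(pr2)
    return 3.0 + np.log(pr1 / pr2) / np.log(Ns1 / Ns2)
```

Errors on \(\alpha_{\text{num}}\) would in practice be propagated from the jackknife errors on the bin-averaged PRs.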
The nontrivial multifractal properties of the eigenmodes at the mobility edge are made evident in Fig. 10, where we show the ratio \[\frac{\text{IPR}_{2}(\lambda,N_{s})}{\sqrt{\text{IPR}_{3}(\lambda,N_{s})}}\sim N_{s}^{-(D_{2}(\lambda)-D_{3}(\lambda))}\,, \tag{34}\] where the generalized IPRs have been defined in Eq. (14). This quantity tends to a constant both in the localized (\(D_{q}=0\)) and in the delocalized regime (\(D_{q}=3\)), while it has a nontrivial volume scaling for modes displaying multifractality, i.e., with \(q\)-dependent \(D_{q}\). This is expected to be a feature of the critical modes found at the mobility edge. This point in the spectrum is indeed characterized by a nontrivial volume scaling of the ratio in Eq. (34), which also reaches its minimum in the vicinity of the mobility edge. Comparing results in the confined and in the Higgs phase, that lie on the same line at constant \(\kappa\) near the transition, one sees that the rapid change in the localization properties of the low modes takes place precisely in the crossover region. This issue is studied in more detail below in section IV.3.

\begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{\((\beta=1.9,\kappa=1.0)\)} \\ \hline \(N_{s}\) & \#configurations & \#eigenvalues \\ \hline 16 & 3000 & 33 \\ \hline 20 & 1500 & 63 \\ \hline \end{tabular} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{\((\beta=2.1,\kappa=1.0)\) and \((\beta=2.6,\kappa=0.3)\)} \\ \hline \(N_{s}\) & \#configurations & \#eigenvalues \\ \hline 20 & 8970 & 63 \\ \hline 24 & 8000 & 110 \\ \hline 28 & 3000 & 174 \\ \hline 32 & 1150 & 260 \\ \hline \end{tabular} \end{table} Table 1: Configuration statistics and number of (non-degenerate) eigenvalues used to study the volume scaling of the localization properties of staggered eigenmodes in the confined phase (top table) and in the deconfined and Higgs phases (bottom table).

Figure 7: Participation ratio, Eq. (11), of the low staggered eigenmodes at \(\beta=1.9\) and \(\kappa=1.0\) in the confined phase for two different spatial volumes (top panel) and corresponding fractal dimension estimated using Eq. (33) with \(N_{s_{1}}=16\), \(N_{s_{2}}=20\) (bottom panel). Here \(N_{t}=4\).

### Eigenvalue observables and mobility edge

We now discuss eigenvalue observables, starting from the spectral density, Eq. (17), shown in Fig. 11. In the confined phase (top panel) the spectral density is practically constant in the lowest bins (except for the very lowest, which is depleted due to the smallness of the lattice volume), and grows as one moves towards the bulk of the spectrum. If we were in the chiral limit of massless fermions, a nonzero spectral density near the origin would indicate the spontaneous breaking of chiral symmetry [11]. Being in the opposite limit of infinitely massive fermions, we can speak of spontaneous chiral symmetry breaking only in a loose sense.

Figure 8: The mode size \(N_{t}V\cdot\text{PR}=\text{IPR}^{-1}\), Eq. (11), of the low staggered eigenmodes for different volumes (top panel), and corresponding fractal dimension estimated using Eq. (33) with three different volume pairs (bottom panel), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase. Here \(N_{t}=4\). The vertical solid line shows the position of the mobility edge, vertical dashed lines indicate the corresponding error band. In the bottom panel, horizontal dotted lines mark the values \(\alpha=0\), corresponding to localized modes, and \(\alpha=3\), corresponding to totally delocalized modes.

Figure 9: As in Fig. 8, but at \(\beta=2.1\) and \(\kappa=1.0\) in the Higgs phase.

In the deconfined and
in the Higgs phase (bottom panels) we see instead that the spectral density is close to zero for near-zero modes, corresponding (again, loosely speaking) to the restoration of chiral symmetry. As we increase \(\lambda\) the spectral density increases, and does so faster as one approaches the mobility edge. However, no sign of critical behavior is visible along the spectrum. We now move on to discuss the spectral statistic \(I_{s_{0}}\), Eq. (19), for the low modes in the three different phases of the system. To estimate this quantity numerically we unfolded the spectrum, and then averaged \(I_{s_{0}}\) in small spectral bins and over gauge configurations. More precisely, we defined the unfolded spacings using Eq. (15), with \(\langle\lambda_{l+1}-\lambda_{l}\rangle_{\lambda}\) the average level spacing found in a given spectral bin, including all pairs of eigenvalues for which the smaller one fell in the bin. In Fig. 12 we show \(I_{s_{0}}\) in the confined phase. As expected, \(I_{s_{0}}\) is compatible with the value predicted by RMT in the whole available spectral range for both volumes, further confirming that these modes are delocalized. In Figs. 13 and 14 we show the value of \(I_{s_{0}}\) in the deconfined and in the Higgs phase.

Figure 10: Ratio of generalized IPRs, Eq. (34), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase (top panel), and at \(\beta=2.1\) and \(\kappa=1.0\) in the Higgs phase (bottom panel). The vertical solid line shows the position of the mobility edge, vertical dashed lines indicate the corresponding error band. A nontrivial volume scaling indicates nontrivial multifractal properties of the eigenmodes at the mobility edge.

Figure 11: The spectral density at \(\beta=1.9\) and \(\kappa=1.0\) in the confined phase (top panel; here \(N_{s}=20\)), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase (center panel; here \(N_{s}=32\)), and at \(\beta=2.1\) and \(\kappa=1.0\) in the Higgs phase (bottom panel; here \(N_{s}=32\)). In all plots \(N_{t}=4\).

For modes near the origin \(I_{s_{0}}\) approaches the value expected for Poisson statistics as we increase the volume, signaling that these are localized modes. For higher modes the value of \(I_{s_{0}}\) tends instead to the RMT prediction as the volume increases, showing that modes are delocalized in this spectral region. Between these two regimes, we can find the mobility edge \(\lambda_{c}\) as the point where \(I_{s_{0}}\) is scale-invariant and the curves cross each other. To find \(\lambda_{c}\) and the critical value \(I_{s_{0},c}\) of the spectral statistic we interpolated the numerical data with natural cubic splines, and determined the crossing point for the various pairs of system sizes using Cardano's formula. The statistical error on each determination of \(\lambda_{c}\) and \(I_{s_{0},c}\) originating in the numerical uncertainty on \(I_{s_{0}}\) in the various bins is estimated by obtaining the interpolating splines and their crossing point for a set of synthetic data, generating 100 data sets by drawing for each bin a number from a Gaussian distribution with mean equal to the average \(I_{s_{0}}\) in the bin and variance equal to the square of the corresponding error. The systematic errors on \(\lambda_{c}\) and \(I_{s_{0},c}\) due to finite-size effects are estimated as the variance of the set of values for the crossing point and corresponding value of \(I_{s_{0}}\) obtained from all the pairs of volumes.
We finally estimated the mobility edge and the critical \(I_{s_{0}}\) as those obtained from the crossing point of the biggest volume pair (\(N_{s}=28,32\)), as it should be the closest to the actual value in the infinite-volume limit, and the corresponding error by adding quadratically its statistical error with the finite-size systematic error discussed above. The total error is largely dominated by the finite-size contribution. We did this separately for the configurations in the deconfined and in the Higgs phase. The results for \(\lambda_{c}\) and \(I_{s_{0},c}\) are reported in Tab. 2, and shown in Figs. 13 and 14 as solid lines, with dashed lines marking the corresponding error bands. The two determinations of \(I_{s_{0},c}\), obtained in the deconfined and in the Higgs phase, agree within errors. Despite the uncertainty on \(I_{s_{0},c}\) being 10-15%, we could determine \(\lambda_{c}\) with a 1-2% uncertainty thanks to the steepness of \(I_{s_{0}}\) near the mobility edge.

\begin{table} \begin{tabular}{c c|c c c} \(\beta\) & \(\kappa\) & phase & \(\lambda_{c}\) & \(I_{s_{0},c}\) \\ \hline \hline 2.6 & 0.3 & deconfined & 0.2493(20) & 0.164(16) \\ 2.1 & 1.0 & Higgs & 0.1367(24) & 0.177(26) \\ \end{tabular} \end{table} Table 2: Mobility edge and critical value of \(I_{s_{0}}\) estimated at two points of the phase diagram, one in the deconfined and one in the Higgs phase.

Figure 12: The integrated unfolded level spacing distribution \(I_{s_{0}}\), Eq. (19), at \(\beta=1.9\) and \(\kappa=1.0\) in the confined phase. Here \(N_{t}=4\). The horizontal dotted line shows the value of \(I_{s_{0}}\) expected for RMT statistics.

Figure 13: The integrated unfolded level spacing distribution \(I_{s_{0}}\), Eq. (19), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase. Here \(N_{t}=4\). The upper and lower horizontal dotted lines show the value of \(I_{s_{0}}\) expected for Poisson statistics and for RMT statistics, respectively. The vertical solid and dashed lines indicate the position and the error band of the mobility edge. The horizontal solid and dashed lines correspond to the estimate for the critical value \(I_{s_{0},c}\) of \(I_{s_{0}}\) at the mobility edge and its error band.

### \(\beta\) and \(\kappa\) dependence of the mobility edge

Having obtained estimates of \(I_{s_{0},c}\) we can now use them to efficiently determine \(\lambda_{c}\) throughout the phase diagram using a single lattice volume at each point, and looking for the point in the spectrum where \(I_{s_{0}}\) takes the value \(I_{s_{0},c}\). We use again natural cubic splines to interpolate the numerical data, using the more precise determination of \(I_{s_{0},c}\) obtained in the deconfined phase and generating synthetic data as discussed above to estimate the statistical error. To estimate the magnitude of finite-size effects, we determined also the crossing points \(\lambda_{c,\pm}\) of \(I_{s_{0}}\) with \(I_{s_{0},c}\pm\delta I_{s_{0},c}\), with \(\delta I_{s_{0},c}\) the uncertainty on \(I_{s_{0},c}\). This is meant to determine just how much the crossing point of \(I_{s_{0}}\) may change with the volume, as the error band on \(I_{s_{0},c}\) is determined by the fluctuations of the crossing point of the various pairs of volumes used to find the mobility edge and the critical statistics in section IV.2, and has nothing to do with the fact that \(I_{s_{0},c}\) is not known exactly. As explained in section II.2, one could in fact use any value intermediate between the RMT and the Poisson expectations to give an estimate of the mobility edge in a finite volume, and this would converge to the correct value in the thermodynamic limit. We can then study how \(\lambda_{c}\) depends on \(\kappa\) and \(\beta\).
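The determination of \(\lambda_{c}\) from the crossing of \(I_{s_{0}}(\lambda)\) with \(I_{s_{0},c}\) can be illustrated with a short Python sketch; here a standard bracketing root finder is used on the interpolating spline in place of the explicit Cardano solution mentioned above, which is equivalent for this purpose:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def mobility_edge(lam_bins, I_vals, I_crit):
    """lambda_c from the crossing of a natural cubic spline of
    I_{s0}(lambda) with the critical value I_crit.

    lam_bins, I_vals: bin centers and bin-averaged I_{s0}; assumes
    I_{s0} passes through I_crit once in the fitted window.
    """
    spline = CubicSpline(lam_bins, I_vals, bc_type='natural')
    f = lambda lam: spline(lam) - I_crit
    for a, b in zip(lam_bins[:-1], lam_bins[1:]):
        if f(a) * f(b) < 0:              # bracket the sign change
            return brentq(f, a, b)
    raise ValueError("no crossing with I_crit in the fitted window")
```

The statistical error would then follow from repeating this on synthetic data sets drawn bin by bin from Gaussians, exactly as described above.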
In Fig. 15 we show how \(\lambda_{c}\) changes in the deconfined phase as one decreases \(\beta\) towards the confined phase at fixed \(\kappa\). We expect that the mobility edge disappears as we enter the confined phase and the Polyakov loop loses its strong ordering. To estimate the value \(\beta_{\text{loc}}(\kappa)\) of \(\beta\) where this happens we fitted our results with a power-law function, \[\lambda_{c}(\beta)=A\cdot(\beta-\beta_{\text{loc}})^{B}\,, \tag{35}\] using the MINUIT library [58] to minimize the \(\chi^{2}\), computed using only the statistical errors on \(\lambda_{c}\). We then repeated the fit using \(\lambda_{c\pm}(\beta)\) to find the corresponding \(\beta_{\text{loc}\pm}\) where they extrapolate to zero, and used these to estimate the systematic uncertainty due to finite-size effects as \(\frac{1}{2}|\beta_{\text{loc}+}-\beta_{\text{loc}-}|\). We obtained for the critical value \(\beta_{\text{loc}}(0.3)=2.2997(22)_{\text{stat}}(53)_{\text{syst}}=2.2997(57)\), where the total error is the sum in quadrature of the statistical error from the fit and of the systematic error. The other fit parameters and the \(\chi^{2}\) per degree of freedom, \(\chi^{2}/\text{dof}=\chi^{2}/(n_{\text{data}}-n_{\text{parameters}})\), are reported in Tab. 3. The critical point is shown also in Fig. 2 (bottom), where we see that the vanishing of the mobility edge matches well with the crossover between the phases.

\begin{table} \begin{tabular}{c|c c} & deconfined & Higgs \\ \hline \hline \(\beta_{\text{loc}}\) & 2.2997(22) & 2.0101(25) \\ \(A\) & 0.3836(17) & 0.3851(14) \\ \(B\) & 0.3592(54) & 0.4344(59) \\ \hline \(\chi^{2}/\text{dof}\) & 1.48 & 1.64 \\ \end{tabular} \end{table} Table 3: Parameters of a best fit of the \(\beta\) dependence of the mobility edge in the deconfined (\(\kappa=0.3\)) and Higgs (\(\kappa=1.0\)) phases, with the fitting function in Eq. (35). Only statistical errors are reported.

Figure 15: The dependence of the mobility edge on \(\beta\) in the deconfined phase on the line at constant \(\kappa=0.3\). The solid line is a power-law fit, Eq. (35), to the numerical data; the band corresponds to the finite-size systematic uncertainty discussed in the text. The point where the mobility edge vanishes is estimated at \(\beta_{\text{loc}}=2.2997(57)\), in the crossover region between the confined and deconfined phases, see Fig. 2 (bottom).
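A minimal Python sketch of the power-law fit of Eq. (35); scipy.optimize.curve_fit is used here as a stand-in least-squares minimizer in place of MINUIT, with the statistical errors on \(\lambda_{c}\) as weights, and starting values loosely inspired by Tab. 3 (the data must lie above \(\beta_{\rm loc}\) for the power law to be real):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(beta, A, beta_loc, B):
    """Eq. (35): lambda_c(beta) = A * (beta - beta_loc)**B."""
    return A * (beta - beta_loc) ** B

def fit_beta_loc(betas, lam_c, lam_c_err, p0=(0.4, 2.3, 0.4)):
    """Weighted fit of Eq. (35); returns parameters, their
    statistical errors, and chi^2 per degree of freedom."""
    popt, pcov = curve_fit(power_law, betas, lam_c, p0=p0,
                           sigma=lam_c_err, absolute_sigma=True)
    resid = (lam_c - power_law(betas, *popt)) / lam_c_err
    chi2_dof = np.sum(resid ** 2) / (len(betas) - len(popt))
    return popt, np.sqrt(np.diag(pcov)), chi2_dof
```

The systematic uncertainty is then obtained by repeating the fit on \(\lambda_{c\pm}(\beta)\), as described above.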
In Fig. 16 we show how \(\lambda_{c}\) changes in the Higgs phase as one decreases \(\beta\) towards the confined phase at fixed \(\kappa\). Again, we expect the mobility edge to disappear at the crossover. For the critical \(\beta_{\rm loc}(\kappa)\) we find \(\beta_{\rm loc}(1.0)=2.0101(25)_{\rm stat}(13)_{\rm syst}=2.0101(28)\), again from a fit with a power law, Eq. (35), using statistical errors only (see Tab. 3 for the other fit parameters), and estimating systematic effects by fitting \(\lambda_{c\pm}\), as discussed above. This is shown also in Fig. 2 (center), where one sees that the vanishing of \(\lambda_{c}\) takes place again in the crossover region.

Figure 16: The dependence of the mobility edge on \(\beta\) in the Higgs phase on the line at constant \(\kappa=1.0\). The solid line is a power-law fit, Eq. (35), to the numerical data; the band corresponds to the finite-size systematic uncertainty discussed in the text. The point where the mobility edge vanishes is estimated at \(\beta_{\text{loc}}=2.0101(28)\), in the crossover region between the confined and Higgs phases, see Fig. 2 (center).

The third case we examined is the transition from the Higgs phase to the deconfined phase as we decrease \(\kappa\) at fixed \(\beta\). This is shown in Fig. 17. One can see that at first \(\lambda_{c}\) decreases quickly with \(\kappa\), but below a critical value \(\kappa_{\rm loc}(\beta)\) it becomes practically constant. The critical value is defined here as the point where the behavior changes from approximately constant to approximately linear, as obtained by fitting with the following function, \[\begin{split}\lambda_{c}(\kappa)&=a\cdot\left(1-\sigma\left(d\cdot(\kappa-\kappa_{\rm loc})\right)\right)\\ &\quad+\left(b\kappa+c\right)\sigma\!\left(d\cdot(\kappa-\kappa_{\rm loc})\right),\end{split} \tag{36}\] where \(\sigma(x)=1/(1+e^{-x})\) is the sigmoid function. Following the same procedure discussed above to estimate errors, we found \(\kappa_{\rm loc}(2.6)=0.7303(57)_{\rm stat}(17)_{\rm syst}=0.7303(59)\) for the critical point (see Tab. 4 for the other fit parameters). As shown in Fig. 2 (top), also in this case the critical value matches well with the position of the crossover.

Figure 17: The dependence of the mobility edge on \(\kappa\) on the line at constant \(\beta=2.6\) in the deconfined and Higgs phases. The solid line is a fit to the data with Eq. (36); the band corresponds to the finite-size systematic uncertainty discussed in the text. A change of behavior is found at \(\kappa_{\rm loc}=0.7303(59)\) in the crossover region between the two phases, see Fig. 2 (top), marked here by a vertical solid line, with dashed lines giving the corresponding error band.

Notice that here the critical point is not as sharply defined as in the previous two cases, as it simply corresponds to a change in the \(\kappa\) dependence of the mobility edge, rather than its very appearance. However, it is possible that the change in the behavior of \(\lambda_{c}(\kappa)\) becomes singular in the infinite-volume limit, e.g., due to a discontinuity in its derivative. If so, one would find a sharply defined critical point for the geometric transition also in this case. At the present stage this is only speculation, and a more careful determination of the mobility edge is needed to test this possibility, either by a proper finite-size scaling analysis, or by checking the volume dependence of the crossing point of \(I_{s_{0}}\) with \(I_{s_{0},c}\).

It is interesting to compare the estimates of \(\beta_{\rm loc}\) and \(\kappa_{\rm loc}\) obtained from eigenvalue observables to similar estimates obtained from eigenvector observables. In particular, if \(\lambda_{c}\) vanishes continuously at \(\beta_{\rm loc}\), then in the thermodynamic limit the localization length of the low modes should correspondingly diverge. We have then looked at the size of the low modes averaged over the lowest half of the localized spectral region, \[\begin{split} N_{t}V\cdot\left\langle{\rm PR}\right\rangle_{\lambda<\frac{\lambda_{c}}{2}}&=\frac{1}{\mathcal{N}(\frac{\lambda_{c}}{2})}\int_{0}^{\frac{\lambda_{c}}{2}}d\lambda\,\rho(\lambda)\,N_{t}V\cdot{\rm PR}(\lambda,N_{s})\,,\\ \mathcal{N}(\lambda_{0})&=\int_{0}^{\lambda_{0}}d\lambda\,\rho(\lambda)\,.\end{split} \tag{37}\] In Fig. 18 we show this quantity as a function of \(\beta\) for constant \(\kappa=0.3\) in the deconfined phase (top panel) and \(\kappa=1.0\) in the Higgs phase (center panel). This quantity does indeed grow large as one approaches the confined phase.

\begin{table} \begin{tabular}{c|c c} & deconfined & Higgs \\ \hline \(\beta_{\rm loc}\) & 2.3318(15) & 2.0499(92) \\ \(a\) & 67.2(2.1) & 52.0(2.9) \\ \(b\) & 0.501(17) & 0.424(43) \\ \hline \(\chi^{2}/{\rm dof}\) & 1.96 & 0.34 \\ \end{tabular} \end{table} Table 5: Parameters of a best fit of the \(\beta\) dependence of the average size of the lowest modes, \(N_{t}V\cdot\left\langle{\rm PR}\right\rangle_{\lambda<\frac{\lambda_{c}}{2}}\), Eq. (37), with the fitting function in Eq. (38).

Fits with a power-law function, \[N_{t}V\cdot\left\langle{\rm PR}\right\rangle_{\lambda<\frac{\lambda_{c}}{2}}=a\cdot\left(\beta-\beta_{\rm loc}\right)^{-b}, \tag{38}\]
Fits with a power-law function, \[N_{t}V\cdot\left\langle{\rm PR}\right\rangle_{\lambda<\frac{\lambda_{c}}{2}} =a\cdot\left(\beta-\beta_{\rm loc}\right)^{-b}, \tag{38}\] yield \(\beta_{\rm loc}=2.3318(15)\) in the deconfined phase, and \(\beta_{\rm loc}=2.0499(92)\) in the Higgs phase, both in the crossover region, and in reasonable agreement with the determinations based on the extrapolation of the mobility edge. Here one should take into account that the functional form Eq. (38) is not fully justified, as the mode size cannot diverge in a finite volume, and there is no reason to assume that the mode size goes to zero at large \(\beta\). Nonetheless, one obtains decent fits (see the resulting fit parameters and \(\chi^{2}\) in Tab. 5); adding a constant term makes them worse. On top of this, the error estimates do not include any uncertainty due to finite-size effects, which are large near \(\beta_{\rm loc}\).

\begin{table} \begin{tabular}{c|c c} & deconfined & Higgs \\ \hline \(\beta_{\rm loc}\) & 2.3318(15) & 2.0499(92) \\ \(a\) & 67.2(2.1) & 52.0(2.9) \\ \(b\) & 0.501(17) & 0.424(43) \\ \hline \(\chi^{2}/{\rm dof}\) & 1.96 & 0.34 \\ \end{tabular} \end{table} Table 5: Parameters of a best fit of the \(\beta\) dependence of the average size of the lowest modes, \(\left\langle N_{t}V\cdot{\rm PR}\right\rangle_{\lambda<\frac{\lambda_{c}}{2}}\), Eq. (37), with the fitting function in Eq. (38).

Figure 17: The dependence of the mobility edge on \(\kappa\) on the line at constant \(\beta=2.6\) in the deconfined and Higgs phases. The solid line is a fit to the data with Eq. (36); the band corresponds to the finite-size systematic uncertainty discussed in the text. A change of behavior is found at \(\kappa_{\rm loc}=0.7303(59)\) in the crossover region between the two phases, see Fig. 2 (top), marked here by a vertical solid line, with dashed lines giving the corresponding error band.

For completeness, in the bottom panel of Fig. 18 we show \(N_{t}V\cdot\left<{\rm PR}\right>_{\lambda<\frac{\lambda_{c}}{2}}\) as a function of \(\kappa\) at constant \(\beta=2.6\) across the two phases. Here the data indicate a finite mode size at all \(\kappa\), with a change from a constant to a steadily decreasing trend taking place at the crossover between the deconfined and the Higgs phase, showing that localized modes shrink rapidly as one moves deeper in the Higgs phase and the Polyakov-loop expectation value increases (see Fig. 1). ### Correlation with bosonic observables and sea/island mechanism We now proceed to discuss our results on the correlation of staggered eigenmodes with the gauge and Higgs fields. To this end, the most informative quantities are the centered and normalized observables \(\widehat{\cal U}\), \(\widehat{\cal P}\) and \(\widehat{\cal G}\), defined in Eq. (21), that take into account the width of the distribution of the relevant bosonic observables. Our results for these quantities are shown in Figs. 19-21.
The statistical error on the numerical estimate of these quantities is obtained by first determining the jackknife error on \({\cal U}\), \({\cal P}\) and \({\cal G}\), and correspondingly on \(\langle U\rangle\), \(\langle P\rangle\), \(\langle G\rangle\) and on \((\delta U)^{2}\), \((\delta P)^{2}\), \((\delta G)^{2}\), followed by linear error propagation. Correlations with Polyakov-loop and plaquette fluctuations are always negative, showing that low modes prefer locations where these quantities fluctuate to values below their average. Correlations with gauge-Higgs coupling term fluctuations are again negative in the confined and in the Higgs phase, while they are essentially compatible with zero in the deconfined phase. The correlation of low modes with Polyakov-loop fluctuations is shown in Fig. 19. In the confined phase this is small but significant, and decreasing very little in magnitude as one goes up in the spectral region that we explored. The strength of this correlation is considerably larger in the Higgs phase, and even larger in the deconfined phase. Since Polyakov-loop fluctuations are typically localized in these phases, this increased correlation is possible only if the low modes tend to localize on the corresponding locations. In both the deconfined and the Higgs phase one sees also a more rapid decrease in the magnitude of the correlation as one moves up in the spectrum. This, however, remains stronger than for the lowest modes in the confined phase also above the mobility edge.

Figure 18: Mode size averaged up to \(\lambda_{c}/2\), Eq. (37), at \(\kappa=0.3\) in the deconfined phase (top panel) and at \(\kappa=1.0\) in the Higgs phase (center panel), as a function of \(\beta\), and at \(\beta=2.6\) across the transition from the deconfined to the Higgs phase (bottom panel), as a function of \(\kappa\). In all plots \(N_{s}=20\) and \(N_{t}=4\). The solid line in the top and center panels is a fit with a power-law function. The vertical and dashed lines in the bottom panel mark the critical value \(\kappa_{\rm loc}\) and the corresponding error band [see after Eq. (36)].

The correlation of low modes with plaquette fluctuations is shown in Fig. 20. Also in this case a significant correlation is found in all three phases, generally stronger (and comparable in size) in the deconfined and Higgs phases than in the confined phase. Compared to the correlation with Polyakov-loop fluctuations, one finds a similar magnitude in the deconfined phase, and a larger magnitude in the Higgs phase. Since also plaquette fluctuations are typically localized, this means that they are at least as relevant as Polyakov-loop fluctuations for the localization of low modes. A clear upturn is visible for the lowest modes in the deconfined phase and, to a much smaller extent, also in the Higgs phase. We do not have an explanation for this. Even though the density of near-zero modes is very small in both cases, leading to large fluctuations, this upturn might be significant, as the mode size displays a similar behavior (see Figs. 8 and 9), with an increase in size for the lowest modes. (The downturn seen in \(I_{s_{0}}\), Figs. 13 and 14, may also be related, but could also be a finite-size artifact caused by the low and rapidly changing density of modes, that makes our unfolding procedure not fully reliable in that spectral region.) The same upturn in the mode size is observed also in QCD [4], where it can be explained by the topological origin of the near-zero modes [59; 60]. Such modes are in fact expected to originate in the mixing of the localized zero modes associated with topological lumps in the gauge configuration at finite temperature, so extending over more than one such lump. While they fail to become delocalized due to the low density of lumps at high temperature, they nonetheless should display a larger size than localized modes not of topological origin. This picture is consistent with the strong correlation between localized near-zero modes and the local topology of the gauge configuration, demonstrated in Ref. [7], and with the lumpy nature of near-zero Dirac modes in high-temperature QCD, demonstrated in Ref. [61]. A similar mechanism could explain the larger size of the lowest modes observed here. Interestingly, no upturn in the size of the lowest modes is observed in 2+1 dimensional pure SU(3) gauge theory [24] or in 2+1 dimensional discrete gauge theories [27; 16], where the topology of gauge field configurations is trivial. Finally, the correlation of low modes with fluctuations of the gauge-Higgs coupling term is shown in Fig. 21. A very mild correlation is visible in the confined phase, no significant correlation is found in the deconfined phase, and a clear but small correlation is found in the Higgs phase, weaker than the correlation with Polyakov-loop and plaquette fluctuations. This leads us to conclude that these fluctuations are much less relevant to low-mode localization. We then studied the sea/island mechanism directly by looking at the correlation of the staggered eigenmodes with the local fluctuations of the hopping term in the Dirac-Anderson Hamiltonian, measured by the quantity \(A\) of Eq. (30). To this end we analyzed 450 configurations with \(N_{s}=16\) in the confined phase, and 1400 configurations with \(N_{s}=20\) in the deconfined and Higgs phases, with \(N_{t}=4\) in both cases.
The average value of \(A\) drops substantially as one moves from the confined to the deconfined or to the Higgs phase: for the given lattice sizes (but this quantity is not expected to show a strong volume dependence), \(\langle A\rangle=0.2761(11)\) at \(\beta=1.9,\kappa=1.0\) in the confined phase; \(\langle A\rangle=0.15828(64)\) at \(\beta=2.6,\kappa=0.3\) in the deconfined phase; and \(\langle A\rangle=0.20518(86)\) at \(\beta=2.1,\kappa=1.0\) in the Higgs phase. This is expected to happen, as a consequence of the ordering of the Polyakov loop and the resulting strong correlation in the temporal direction [16]. The centered and normalized quantity \(\widehat{\mathcal{A}}\) defined in Eq. (32) is shown in Fig. 22. This quantity correlates positively with the spatial density of low modes in all phases, in agreement with the refined sea/islands picture of Ref. [16]. In the confined phase the magnitude of the correlation with fluctuations in this quantity is comparable with the correlation with plaquette fluctuations, and independent of the position in the spectrum in the available region, within errors. In the Higgs and, especially, in the deconfined phase this correlation is much stronger than those with Polyakov-loop and with plaquette fluctuations. Although it remains strong also at the beginning of the bulk region, it reduces by about a third when going from the lowest modes to the first delocalized modes right above the mobility edge. Since fluctuations of \(A(\vec{x})\) are typically localized in the deconfined and Higgs phases, this result strongly suggests that they are the ones mainly responsible for trapping the eigenmodes in space.

Figure 21: Gauge-Higgs coupling term weighted by Dirac modes, centered to its average and rescaled by the square root of its susceptibility, Eq. (21), at \(\beta=1.9\) and \(\kappa=1.0\) in the confined phase (top panel; here \(N_{s}=20\)), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase (center panel; here \(N_{s}=32\)), and at \(\beta=2.1\) and \(\kappa=1.0\) in the Higgs phase (bottom panel; here \(N_{s}=32\)). In all plots \(N_{t}=4\). In the center and bottom panels the solid line shows the value of the mobility edge, and the dashed lines indicate the corresponding error band.

## V Conclusions A strong connection has emerged in recent years between the deconfinement phase transition in gauge theories with or without fermionic matter, and the change in the localization properties of low Dirac modes [3; 4; 5; 6; 7; 8; 9; 10; 12; 13; 14; 15; 16; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. In this paper we extended this line of research by studying the lattice SU(2)-Higgs model with a Higgs field of fixed length [36; 37; 38; 39; 40; 41; 42] at finite temperature, probed with external static fermions. The extension is twofold. On the one hand, this model has dynamical scalar rather than fermionic matter: while one still expects localized modes in the deconfined phase of the model, as the nature of the dynamical matter does not affect the general argument for localization [10; 12; 13; 14; 15; 16], it is nonetheless useful to verify this explicitly. On the other hand, and more interestingly, the two-parameter phase diagram of this model displays a third phase besides the confined and deconfined phases, i.e., the Higgs phase: one can then check whether or not modes are localized in this phase, and if so whether the onset of localization is related in any way to the thermodynamic transition.
A survey of the phase diagram shows the expected tripartition into a confined, a deconfined, and a Higgs phase, separated by analytic crossovers [42]. The deconfined and the Higgs phases are distinguished from the confined phase by a much larger expectation value of the Polyakov loop, and from each other by the expectation value of the Higgs-coupling term, much larger in the Higgs phase than in the deconfined and in the confined phases. Since the Polyakov loop is strongly ordered, one expects localization of low Dirac modes to take place in both phases [12; 13; 14; 15; 16]. By means of numerical simulations, we have demonstrated that localized modes are indeed present both in the deconfined and in the Higgs phase. In both cases, the mobility edge separating localized and delocalized modes in the spectrum decreases as one moves towards the confined phase, and disappears as one reaches the crossover region. At the transition between the deconfined and the Higgs phase, instead, the dependence of the mobility edge on the gauge-Higgs coupling constant changes from almost constant to steadily increasing. These findings provide further support to the universal nature of the sea/islands picture of localization [12; 13; 14; 15; 16] in a previously unexplored setup in the presence of dynamical scalar matter. We have then studied the sea/islands mechanism in more detail, measuring the correlation between localized modes and fluctuations of the gauge and Higgs fields. We found a strong correlation with Polyakov-loop and plaquette fluctuations both in the deconfined and in the Higgs phase, and a mild but significant correlation with fluctuations of the gauge-Higgs coupling term only in the Higgs phase. Moreover, we found in both phases a very strong correlation (stronger than that with Polyakov-loop or plaquette fluctuations) with the type of gauge-field fluctuations identified in Ref. [16] as the most relevant to localization. This provides further evidence for the validity of the refined sea/islands picture proposed in Ref. [16].

Figure 22: The quantity \(\widehat{\mathcal{A}}\), Eq. (32), measuring the correlation of staggered modes with fluctuations of \(A(\vec{x})\), Eq. (31), at \(\beta=1.9\) and \(\kappa=1.0\) in the confined phase (top panel; here \(N_{s}=16\)), at \(\beta=2.6\) and \(\kappa=0.3\) in the deconfined phase (center panel; here \(N_{s}=20\)), and at \(\beta=2.1\) and \(\kappa=1.0\) in the Higgs phase (bottom panel; here \(N_{s}=20\)). In all plots \(N_{t}=4\). In the center and bottom panels the solid line shows the value of the mobility edge, and the dashed lines indicate the corresponding error band.

A possible extension of this work would be a study of the low \(\beta\), large \(\kappa\) corner of the phase diagram, where the crossover becomes very weak, in order to check if the line of "geometric" transitions where the mobility edge in the Dirac spectrum vanishes extends all the way to \(\beta=0\), or if instead it has an endpoint. This is interesting also in connection with the "spin glass" approach of Ref. [44]: since in that region of parameter space this predicts a transition line clearly distinct from the one found with more traditional approaches based on gauge fixing, one would like to compare this line with the one defined by the vanishing of the mobility edge (if the latter exists).
A different direction would be the study of the localization properties of the eigenmodes of the covariant Laplacian, extending to finite temperature and dynamical scalar matter the work of Refs. [62; 63]. ###### Acknowledgements. We thank T.G. Kovacs for useful discussions and a careful reading of the manuscript. MG was partially supported by the NKFIH grant KKP-126769.
2303.14106
On the Susceptibility of QDI Circuits to Transient Faults
By design, quasi delay-insensitive (QDI) circuits exhibit higher resilience against timing variations as compared to their synchronous counterparts. Since computation in QDI circuits is event-based rather than clock-triggered, spurious events due to transient faults such as radiation-induced glitches, a priori are of higher concern in QDI circuits. In this work we propose a formal framework with the goal to gain a deeper understanding on how susceptible QDI circuits are to transient faults. We introduce a worst-case model for transients in circuits. We then prove an equivalence of faults within this framework and use this result to provably exhaustively check QDI circuits, a linear Muller pipeline and a cyclic Muller pipeline, for their susceptibility to produce non-stable output signals.
Raghda El Shehaby, Matthias Függer, Andreas Steininger
2023-03-24T16:15:49Z
http://arxiv.org/abs/2303.14106v2
# On the Susceptibility of QDI Circuits to Transient Faults ###### Abstract By design, quasi delay-insensitive (QDI) circuits exhibit higher resilience against timing variations as compared to their synchronous counterparts. Since computation in QDI circuits is event-based rather than clock-triggered, spurious events due to transient faults such as radiation-induced glitches, a priori are of higher concern in QDI circuits. In this work we propose a formal framework with the goal to gain a deeper understanding on how susceptible QDI circuits are to transient faults. We introduce a worst-case model for transients in circuits. We then prove an equivalence of faults within this framework and use this result to provably exhaustively check QDI circuits, a linear Muller pipeline and a cyclic Muller pipeline, for their susceptibility to produce non-stable output signals. transient faults, QDI circuits, automatic evaluation + Footnote †: This research was partially supported by the project ENROL (grant 1 3485-N31) of the Austrian Science Fund (FWF) as well as the Doctoral College on Resilient Embedded Systems (DC-RES) and the ANR project DREAMY (ANR-21-CE48-0003). ## I Introduction It is well known that synchronous circuits exhibit a natural resilience against transient faults through masking. Specifically, the relevant effects are electrical masking (short fault pulses are filtered by low-pass behavior of gates and interconnect), logical masking (depending on other input levels, the logic level of the faulty input may be irrelevant for the gate output) and temporal masking (the flip flop "samples" its data input at the active clock edges while ignoring faults that happen between these). However, synchronous circuits have little resilience against (fault) effects that impact the timing. In contrast, asynchronous, specifically QDI, circuits exhibit large, ideally unlimited, tolerance against timing variations. This is due to their event-driven operation principle. Unfortunately, this very event driven operation makes them prone to transient faults. Electrical masking and logical masking mitigate many of the fault effects, just like in the synchronous case. Whether temporal masking occurs, however, is not easy to answer. Previous works have shown that asynchronous pipelines, e.g., have data accepting windows during which they are susceptible to fault pulses. The size of these windows depends on several parameters, most notably the mode of pipeline operation (bubble-limited / balanced / token-limited). For unbalanced operation these windows may reach considerable size, making the circuit clearly more susceptible to faults than in the synchronous case with its "instantaneous sampling". That is why several mitigation methods [1] aim at minimizing the data accepting windows. In any case there is some effect equivalent to temporal masking, and most often it is constituted by Muller C-elements (MCEs): While in _combinational_ mode of operation (matching inputs), the MCE ignores fault pulses on any input, not even a pulse at the output can flip its state. In _storage_ mode (non-matching inputs), however, the MCE's state can be easily flipped by a fault pulse at one of the inputs or at the output (directly at the keeper). So apparently, the share of time during which an MCE is in combinational mode determines the masking provided by it. In a reasonably complex practical setting, however, this insight is hard to map to a general prediction of the whole circuit. 
**Contributions and organization.** Related studies have already explored the resilience of asynchronous circuits against transient (and permanent) faults and produced interesting results for specific cases, as well as some general insights. However, important answers are still missing. The data accepting windows, e.g., have been qualitatively described, and they have been experimentally determined and visualized for specific parameters by injecting faults over a regularly spaced time grid. However, to the best of our knowledge, no systematic exploration has been performed. In this paper we present an approach to efficiently and precisely identify the sensitive windows (position and size) over time for all nodes individually, with a guarantee to find all windows of vulnerability, no matter how small. To this end, we introduce our circuit model in Section III. In Section IV we start with basic consistency results of the model, followed by our main technical result: the definition of value regions in executions along with a proof of the equivalence of glitches within those regions (Theorem 2). Based on this result we then present our tool for sensitivity-window exploration (Section IV-D) and apply it to different QDI circuits for illustration, where the ability to break down the sensitivity analysis to the signal level proves beneficial. We conclude in Section V. ## II Related Work **Transient faults in asynchronous circuits.** Several studies have explored the effects of transient faults on asynchronous circuits. Ways for detection and mitigation techniques with some form of redundancy have been proposed alongside. The authors in [2] perform a thorough analysis of single-event transient (SET) effects, among other types of faults, in QDI circuits. The fault's impact is first presented at the gate level, then on communication channels, translating the fault to a deadlock. They elaborate other possible errors on a high level in terms of synchronization failure, token generation and token consumption. An efficient failure detection method for QDI circuits is presented in [3]. The method brings the circuit to a fail-safe state in the presence of hard and soft errors. The authors investigate the probability for a glitch to propagate through a state-holding element in asynchronous circuits. In [4], the authors propose a formal method to model the behavior of QDI circuits in the presence of transient faults. They use symbolic simulation to provide an exhaustive list of possible effects and analyze which of these cases are theoretically reachable. Their model, however, does not support delay parameters which could reduce the set of states that are physically reachable, further proving the resistance of a design against single-event upsets (SEUs). They also show in [5] the Muller C-element fault sensitivity and then specify a global sensitivity criterion to SETs for asynchronous circuits. They provide a behavioral analysis, with distinct classifications, of QDI circuits in the presence of faults. With the help of signal transition graphs (STGs), the authors in [1]_informally_ analyze SEUs due to glitches on QDI network-on-chip links. They propose several mitigation techniques with a focus on reducing the latch's sensitive window to a glitch. Some of these techniques are tested and compared against other proposed variations in [6], [7], and [8]. 
The assessment there is based on extensive fault injection simulations into different QDI buffer styles, in order to identify the main culprits of the circuit. They provide a quantitative analysis to determine the windows of vulnerability to SETs and the impact of certain parameter choices on the resilience of the circuit. However, the analysis is done based on a regular timing grid, which causes linear complexity in time and in resolution, and cannot exclude the potential of overlooking relevant windows between the grid points. **Hazards in PRSs.** QDI circuits can be modeled on different levels. A Production Rule Set (PRS), introduced by Martin [9], is the canonical representation of a QDI circuit from which one can easily reach an equivalent implementation in CMOS technology. PRSs do not normally support hazards, and by guaranteeing _stability_ and _non-interference_ characteristics [10], a PRS execution is assumed to be hazard-free. The authors consider an SEU as flipping of a variable's value and model it in so called transition graphs to identify deadlock or abnormal behavior. [11] extends the semantics of PRSs in order to be able to address hazards as circuit failures, but it is limited to checking the hazard-freedom property of a circuit. These papers are focused on the _possibility_ of failure and are restricted to precedence of events, without explicitly considering timing. Our work enables further propagation of what we define as a glitch in order to check whether it has reached the final outputs of a circuit and, based on actual timing information, _quantify_ this proportion of failure. ## III Model Following the work by Martin [9], we model a circuit as a set of production rules. We extend the model by delays and propagation of non-Boolean values. We start by definitions of signal values and production rules in our context. **Signal and signal values.** Signals are from a finite alphabet \(S\). Signals have values that may change over time. We extend the values a signal may attain from the classical Boolean values \(\mathbf{B}=\{0,1\}\) to the three-valued set \(\mathbf{B}_{\text{X}}=\{0,\text{X},1\}\), where X is a potentially non-binary value. Examples for non-binary values are glitches, oscillations, metastable values, etc. A signal that has value X may, however, be \(0\) or \(1\). We will make use of logical operations like \(\wedge\) and \(\neg\) on the extended domain \(\mathbf{B}_{\text{X}}\). If not stated otherwise, we resort to the semantics of the 3-valued Kleene logic, introduced by Goto for these operations; see [12]. In short, using the classical algebraic interpretation of Boolean formulas on \(\{0,1\}\subset\mathbf{R}\) where, \(\neg a\equiv 1-a\), \(a\wedge b\equiv\min(a,b)\), and \(a\lor b\equiv\max(a,b)\), one obtains the Kleene semantics by the correspondence \(\text{X}\equiv 1/2\). For example, one obtains, \(1\wedge\text{X}=\text{X}\) and \(1\lor\text{X}=1\). **Production rules.** A production rule is a guarded Boolean action with delay. It is of the form \[G\to s=1\ [d]\quad\text{ or }\quad G\to s=0\ [d]\enspace, \tag{1}\] where the guard \(G\) is a logical predicate on signals, \(s\) is a signal, and \(d\in(0,\infty)\) is the propagation delay. Intuitively, a production rule with guard \(G\), action \(s=b\), where \(b\in\{0,1\}\), and delay \(d\) sets signal \(s\)'s value to \(b\) upon predicate \(G\) being true for \(d\) time. 
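The Kleene semantics and the rule format lend themselves to a very direct encoding. The following is a minimal sketch (ours, not the authors' tool; all names are illustrative), with X encoded as \(0.5\) following the correspondence above, and a production rule \(G\to s=b\ [d]\) carried around as a plain tuple.

```python
# Three-valued Kleene logic on B_X = {0, X, 1}, with X encoded as 0.5.
X = 0.5

def k_not(a):
    return 1 - a            # ¬a ≡ 1 - a

def k_and(a, b):
    return min(a, b)        # a ∧ b ≡ min(a, b)

def k_or(a, b):
    return max(a, b)        # a ∨ b ≡ max(a, b)

# The examples from the text: 1 ∧ X = X and 1 ∨ X = 1.
assert k_and(1, X) == X and k_or(1, X) == 1

# A production rule "G -> s = b [d]" as a tuple (guard, s, b, d), where the
# guard maps a dict of current signal values to {0, X, 1}:
rule_up = (lambda v: k_not(v["i"]), "o", 1, 1.0)   # illustrative only
```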
**Circuit.** A circuit is specified by: * Finite, disjoint sets of input, local, and output signals, denoted by \(\mathcal{I}\), \(\mathcal{L}\), and \(\mathcal{O}\). * Initial values for all local and output signals. We write \(s(0)\) for the initial value of signal \(s\in\mathcal{L}\cup\mathcal{O}\). * A set of production rules \(R\) whose guards are predicates on the circuit's signals and whose actions involve only local and output signals. We require that (i) for each signal \(s\), there is at most one production rule that sets \(s\) to \(1\), and at most one that sets \(s\) to \(0\), and (ii) guards of production rules that set a signal \(s\) are mutually exclusive for all signal values from \(\mathbf{B}\). Similarly to Martin [9] we use production rules to model gates: actions that set a value to \(1\) correspond to the pull-up stack of a gate and actions that set a value to \(0\) to the pull-down stack. Any meaningful circuit will further have the properties that any local and output signal appears in a production rule that sets it to \(0\) and one that sets it to \(1\); if not the signal will remain at its initial value for all times. Further, as already demanded in the last bullet above, the guards of these opposing production rules will not both evaluate to true for any choice of signal values; if not, the pull-up and pull-down stack of this gate will drive the gate's output at the same time. **Signal trace.** A signal trace for signal \(s\in S\) is a function \(v_{s}:\mathbf{R}_{0}^{+}\rightarrow\mathbf{B}_{\text{X}}\) mapping the time \(t\) to the value of \(s\) at time \(t\). By slight abuse of notation we write \(s(t)\) for \(v_{s}(t)\). We restrict signal traces to contain only finitely many value-changes in each finite time interval. **Execution.** It remains to define how a circuit, with a given input, switches signal values. For that purpose fix a circuit, input signal traces for all its inputs \(I\), and a time \(T>0\) until which the execution is to be generated. Intuitively an _execution induced by the circuit and the input signal traces_ is inductively generated via applying the production rules to the current signal values. If a guard of a production rule is true, its action is scheduled to take place after the rule's delay. Care has to be taken to handle instability of guards. If a guard that results in a scheduled action on a signal, but whose action has not yet been applied, becomes false, we remove the scheduled action and instead set the signal to X after a small delay \(\varepsilon>0\). An \(\varepsilon\) smaller than the rule's delay accounts for the fact that non-binary outputs can propagate faster than full-swing transitions. The signal's value X is then propagated accordingly throughout the circuit. Indeed we will let \(\varepsilon\to 0\) in later sections to account for the worst case behavior of gates. Formally, the _execution prefix until time \(T\), induced by the circuit and the input signal traces_, is a signal trace prefix until time \(T\) for each local and output signal obtained as follows: 1. Initially, all signals are set to their initial values as specified by the circuit. Further, the current time \(t=0\), and the set of scheduled actions is empty. 2. Handle unstable guards: * For each production rule whose action \(s=b\), with \(b\in\mathbf{B}\), currently being scheduled: if the rule's guard evaluates to \(0\) or X, and \(s(t)\neq b\) (we say the guard is unstable), then remove the event from the scheduled events and set \(s=\text{X}\). 
_(generate-X)_ 3. Apply actions: * For each action \(s=v\), with \(v\in\mathbf{B}_{\text{X}}\), scheduled for time \(t\), set \(s(t)=v\) and remove the action from the scheduled actions. 4. Schedule actions: * For each production rule: if its guard evaluates to \(1\), schedule the rule's action \(s=b\) to take place after the rule's delay \(d\), i.e., at time \(t+d\) (unless \(s(t)=b\) already). * For each production rule: if its guard evaluates to X and the rule's action is \(s=b\) with \(s(t)\neq b\), schedule the action \(s=\text{X}\) for time \(t+\varepsilon\) (unless \(s(t)=\text{X}\) already). _(propagate-X)_ 5. Advance time \(t\) to the nearest future time at which an action is scheduled or an input signal switches value. If \(t\geq T\), return the local and output signal traces until time \(T\); otherwise continue with step 2. One observes that an execution prefix until time \(T^{\prime}>T\) is an extension of an execution prefix until time \(T\): for each local and output signal \(s\), the signal values in both prefixes are identical within \([0,T]\). We may thus speak of the execution as the limit of execution prefixes until times \(T\rightarrow\infty\). ### _Example_ As an example, let us consider the circuit with input signal \(i\), no local signals, and output signal \(o\). As initial value we choose \(o(0)=1\). The circuit comprises a single inverter with input \(i\), output \(o\), and delay \(1.0\), i.e., the circuit's production rules are: \[i\to o =0\ [1.0] \tag{2}\] \[\neg i \to o =1\ [1.0]\enspace. \tag{3}\] We consider three input traces: (a) Initially \(i(0)=0\), then \(i\) transitions to \(1\) at time \(1\) where it remains. (b) Prefix like (a), but the input transitions back to \(0\) at time \(1.5\). (c) Like (b), but with value X during times \([1,1.5)\). The execution prefixes until time \(T=4\) induced by the above circuit and the input signal traces (a), (b), and (c) are depicted in Figure 1. In the example, input traces (a) and (b) result in the guard of rule (2) becoming true at time \(1\). Accordingly, an action to set \(o=0\) is scheduled for time \(1+d=2\). While in input trace (a), the guard remains true until time \(2\), and thus \(o\) is set to 0 at time \(2\), in input trace (b), the guard is falsified at time \(1.5\), resulting in the action being canceled and \(o\) being set to X at time \(1.5\) (generate-X in the algorithm). For input trace (b), we have that the guard of rule (3) becomes true at time \(1.5\). Accordingly the action \(o=1\) is scheduled for time \(1.5+d=2.5\). Since the guard remains true until time \(2.5\), the action is applied resulting in \(o(2.5)=1\). Finally, input trace (c) demonstrates the algorithmic rule propagate-X in step 4: the X value at the input is propagated with propagation delay \(\varepsilon=0.1\) to the output. Resetting the output to \(1\) at time \(2.5\) occurs as for input trace (b). Fig. 1: Execution prefixes until time \(T=4\) of an inverter with input \(i\) and output \(o\). Signal value X is depicted as a value of \(0.5\) and marked red. The propagation delay \(\varepsilon\) for signal value X is set to \(0.1\). Left: input signal trace (a). Middle: input signal trace (b). Right: input signal trace (c).
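The execution algorithm is compact enough to sketch in full. The following Python rendering is ours, not the authors' tool; rules use the tuple convention of the earlier sketch, input traces are transition lists whose first entry gives the value at time \(0\), and the optional `external` list anticipates the fault-insertion hook used in the next section. It reproduces the inverter trace (b) of Figure 1.

```python
X = 0.5     # as in the Kleene sketch above
EPS = 0.1   # propagation delay of value X

def execute(rules, inputs, init, T, external=()):
    """Generate the execution prefix until time T (steps 1-5 of the text).
    rules   : list of tuples (guard, s, b, d)
    inputs  : dict signal -> sorted list of (time, value) transitions
    init    : dict signal -> initial value of local/output signals
    external: list of (time, signal, value) forced transitions (fault hook)
    """
    vals = {s: tr[0][1] for s, tr in inputs.items()} | dict(init)
    hist = {s: [(0.0, v)] for s, v in vals.items()}
    sched = {}  # signal -> (due time, value, index of scheduling rule)

    def set_val(s, v, t):
        if vals[s] != v:
            vals[s] = v
            hist[s].append((t, v))

    t = 0.0
    while t < T:
        for s, tr in inputs.items():            # input traces are given
            for (tt, v) in tr:
                if tt == t:
                    set_val(s, v, t)
        # step 2: a pending Boolean action whose guard is no longer 1 is
        # unstable: cancel it and set the signal to X right away (generate-X)
        for idx, (g, s, b, d) in enumerate(rules):
            if s in sched and sched[s][1] == b and sched[s][2] == idx \
                    and g(vals) != 1 and vals[s] != b:
                del sched[s]
                set_val(s, X, t)
        # step 3: apply actions that are due, then forced external events
        for s in [s for s, (due, v, _) in list(sched.items()) if due <= t]:
            _, v, _ = sched.pop(s)
            set_val(s, v, t)
        for (tt, s, v) in external:
            if tt == t:
                set_val(s, v, t)
        # step 4: schedule actions; an X guard propagates X after EPS
        for idx, (g, s, b, d) in enumerate(rules):
            if s in sched:                      # at most one pending action
                continue
            gv = g(vals)
            if gv == 1 and vals[s] != b:
                sched[s] = (t + d, b, idx)
            elif gv == X and vals[s] not in (b, X):
                sched[s] = (t + EPS, X, idx)    # (propagate-X)
        # step 5: advance to the next scheduled/input/external event
        pending = [due for (due, _, _) in sched.values()]
        pending += [tt for tr in inputs.values() for (tt, _) in tr if tt > t]
        pending += [tt for (tt, _, _) in external if tt > t]
        t = min(pending, default=T)
    return hist

# the inverter of Eqs. (2)-(3) with input trace (b):
rules = [(lambda v: v["i"], "o", 0, 1.0),
         (lambda v: 1 - v["i"], "o", 1, 1.0)]
hist = execute(rules, {"i": [(0, 0), (1, 1), (1.5, 0)]}, {"o": 1}, 4.0)
print(hist["o"])   # [(0.0, 1), (1.5, 0.5), (2.5, 1)]: X (=0.5) during
                   # [1.5, 2.5), as in Figure 1 (middle)
```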
## IV Results ### _Well-defined executions_ We start with a basic result, on the consistency of an execution as defined by the algorithm. **Lemma 1**.: _Any signal trace of an execution has at most finitely many value-changes within a finite time interval; it is thus a well-defined signal trace._ Proof.: Assume by contradiction that a signal trace has infinitely many value-changes within a finite interval \([t,t^{\prime}]\subset\mathbf{R}\). By consistency of prefixes of executions, this implies that the algorithm returns an execution with infinitely many value-changes when setting \(T=t^{\prime}\). In the algorithm, at any point in time \(\tau\) there is at most one action per non-input signal in the set of scheduled actions and at most bounded many actions per input signal until time \(T\). Observing that there is a minimum propagation delay \(d_{\min}>0\) for signal values \(0\), \(1\), and X, any newly scheduled action must occur at earliest at time \(\tau+d_{\min}\). Thus only bounded many actions occur within \([\tau,\tau+d_{\min}]\). The statement follows. ### _A transient-fault insertion tool_ To study the effect of short transient faults on the behavior of circuits we extend the algorithm from Section III to allow the insertion of external events: signal transitions from a set of _external events_ are applied at the end of step 3. Step 5 is changed to include external events when updating time \(t\) to the time of the next event. A transient fault then corresponds to two subsequent signal transitions of the same signal in the set of external events. We have implemented the algorithm in Python. **Linear pipeline.** To study the susceptibility of QDI circuits to transient faults, we used the tool to insert short pulses (glitches) at different times. As a prototypical QDI circuit, we used the linear 3-stage Muller pipeline shown in Figure 2. Delays have been uniformly set to 1 for the two pipeline inverters INV2 and INV3, to 5 for all Muller C-elements (MCE1 to MCE3), and 4 for the leftmost inverter INV1 and the rightmost inverter INV4, which model the source and sink of the pipeline, respectively. Figure 3 shows an execution prefix until time \(T=32\) in the absence of transient faults, generated by our tool. Figures 4 and 5 show execution prefixes of the same circuit until time \(T=32\) when a glitch is inserted at the same signal, c2, at different points in time: the intervals during which a signal has value X are marked in red. One observes that the behavior in presence of the glitch is different as detailed in the following. _Non-masked glitch._ In Figure 4 the glitch occurs at the input of the MCE while it is in storage mode, i.e., non-matching inputs. Since the other stable input, en3, is at a different logic level than the MCE output, c3, the X value is generated in the latter which later propagates through other signals of the circuit. _Masked glitch._ The glitch in Figure 5, however, occurs at the input of the MCE while in combinational mode, i.e., matching inputs. The glitch is masked at the output c3, but the X value appears for a short period of time at en1 (since an inverter will always propagate any value it is fed). During this time span, the X value appeared and disappeared while the other MCE was also in combinational mode, hence was prevented from propagating the unstable value further in the circuit. Fig. 2: Linear Muller pipeline with 3 stages. The delays are set to 1 (INV2, INV3), 5 (C gate), 4 (source delay = INV1), and 4 (sink delay = INV4). Fig. 3: Execution prefix of linear 3-stage pipeline until time \(T=32\). Fig. 4: Execution prefix of linear 3-stage pipeline until time \(T=32\) with glitch of width \(0.1\) inserted at time \(10\) at signal c2. Fig. 5: Execution prefix of linear 3-stage pipeline until time \(T=32\) with glitch of width \(0.1\) inserted at time \(22\) at signal c2.
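Building on the `execute` sketch above, the fault-insertion hook is a one-liner: a glitch of width \(w\) at signal \(s\) and time \(t\) is the pair of external transitions forcing \(s\) to X at \(t\) and back to its fault-free value at \(t+w\). A hypothetical susceptibility check (the notion is made precise in the next paragraph) then reduces to scanning the monitored traces for X:

```python
def value_before(trace, t):
    # value of a signal trace just before time t
    v = trace[0][1]
    for (tt, vv) in trace:
        if tt < t:
            v = vv
    return v

def is_susceptible(rules, inputs, init, T, monitored, s, t, w=0.1):
    """Sketch: insert an X-pulse of width w at signal s and time t, and check
    whether any monitored signal ever takes the value X. (Assumes the pulse
    does not straddle a regular transition of s.)"""
    base = execute(rules, inputs, init, T)         # fault-free reference run
    pre = value_before(base[s], t)                 # value to restore at t + w
    faulty = execute(rules, inputs, init, T,
                     external=[(t, s, X), (t + w, s, pre)])
    return any(v == X for m in monitored for (_, v) in faulty[m])
```

With production rules for the pipeline of Figure 2 filled in, the glitches of Figures 4 and 5 would correspond to `s="c2"` with `t=10` and `t=22`, respectively, and monitored signals c1 and c3.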
**Susceptibility to transient faults.** The two different behaviors raise the question of when a QDI circuit like the linear pipeline can mask glitches successfully, and when it is susceptible to them. To address that, we relate susceptibility to the occurrence of glitches at signals of particular interest (typically the output signals to the environment). We call these signals of interest the _monitored signals_. For example, in the linear pipeline, signals c1 and c3 are the outputs to the environment represented by the source on the left and the sink on the right. In general, let \(C\) be a circuit, and \(i\) input signal traces. Let \(M\subseteq\mathcal{L}\cup\mathcal{O}\) be the set of monitored signals. Then, \((C,i)\) is _susceptible to a glitch (of width \(w\)) at signal \(s\in\mathcal{I}\cup\mathcal{L}\cup\mathcal{O}\) at time \(t\)_, if there exists a signal \(m\in M\) and a time \(t^{\prime}\) such that in the execution, induced by the circuit \(C\) and the input signal traces \(i\) and with a glitch (of width \(w\)) at signal \(s\) and time \(t\), we have \(m(t^{\prime})=\text{X}\). Revisiting the example of the linear pipeline, and letting \(M=\{\texttt{c1},\texttt{c3}\}\) be the set of monitored signals, we have that the pipeline with its input is susceptible to a glitch (of width \(0.1\)) at signal c2 at time \(10\), but not at time \(22\) (see Figures 4 and 5). This directly leads to the question of the sensitivity windows, i.e., the times when a circuit with an input is susceptible and when not. Related, if combined with a probability measure on faults occurring at these times, one may ask how likely a transient fault is to cause a glitch at a monitored signal. We address both questions in the following. ### _Equivalence of transient faults_ While the previous tool allows one to sample the susceptibility at particular times, such an approach has several drawbacks: (i) it is time consuming to generate such sweeps and (ii) small susceptible windows may be overlooked. In the following we present an alternative approach that relies on showing the equivalence between certain transient faults. We begin the analysis with a definition. We say _signal \(s\) has a pulse at time \(t\) of width \(w>0\)_ if \(s\) changes value at time \(t\), remains constant within time \([t,t+w)\), and changes value at time \(t+w\). A \(v\)-pulse, with \(v\in\mathbf{B}_{\text{X}}\), is a pulse that has value \(v\) during \([t,t+w)\). We speak of a _transient fault_ as an X-pulse that has a width of at most some predefined small \(\gamma>0\). We are now ready to show a monotonicity property of the value X in executions: If transient faults are added to a circuit's execution, the resulting execution differs from the original one at most by turning Boolean values into X. For example, it cannot differ by a shifted 0-1 transition. **Theorem 1** (Monotonicity of X).: _Let \(C\) be a circuit and \(i\) be input traces. Let \(e\) be the execution induced by circuit \(C\) and input traces \(i\), and \(e^{\prime}\) the execution induced by circuit \(C\) and input \(i\) in presence of transient faults. 
Then for all signals \(s\) and times \(t\), if \(s(t)\in\mathbf{B}\) in \(e^{\prime}\), then \(s(t)\) is identical in \(e\) and \(e^{\prime}\)._ Proof.: Assume by means of contradiction that the statement does not hold and let \(t\) be the smallest time at which executions \(e\) and \(e^{\prime}\) do not fulfill the theorem's statement. Then there is a signal \(s\) such that \(s(t)=b\in\mathbf{B}\) in execution \(e^{\prime}\) and \(s(t)=v\neq b\) in execution \(e\). We distinguish between two cases for value \(v\): Case \(v=\text{X}\). If so, in execution \(e\), signal \(s\) was set to X at some time \(\tau\leq t\), and not set again within \([\tau,t]\). By minimality of \(t\), and the theorem's statement, \(s\) was also set to X in \(e^{\prime}\) at time \(\tau\) (or earlier). It follows that in execution \(e^{\prime}\), signal \(s\) was set to \(b\) within \((\tau,t]\). This implies that a rule with guard \(G\) and action \(s=b\) was triggered at a time before \(t\), and thus \(G\) was true in execution \(e^{\prime}\). By minimality of \(t\) and the theorem's statement, \(G\) must have been also true in \(e\), resulting in the same action being scheduled also in \(e\); a contradiction to the assumption that \(v=\text{X}\). Case \(v=\neg b\). If so, \(s\) was set via two different rules in \(e\) and \(e^{\prime}\) and not set to another value until time \(t\). This implies that mutually exclusive guards have evaluated to \(1\) in \(e\) and \(e^{\prime}\) before time \(t\); a contradiction to the minimality of \(t\) in combination with the theorem's statement. The theorem's statement follows in both cases. We next define time intervals that play a central role in a circuit's behavior in presence of transient faults. Given a circuit \(C\) and an execution \(e\) of \(C\), the _set of value switching times_, \(V_{C}(e)\), is the set of times \(\tau_{0}=0,\tau_{1},\ldots\) at which a signal in execution \(e\) switches value. A _value region of execution \(e\)_ is an interval \([t,t^{\prime})\subset\mathbf{R}\), where \(t,t^{\prime}\) are consecutive value switching times of execution \(e\). A _postfix of a value region_\([t,t^{\prime})\) is a (potentially empty) interval \([t^{\prime\prime},t^{\prime})\subseteq[t,t^{\prime})\). **Theorem 2**.: _Let \(C\) be a circuit, \(i\) be input traces, \(\gamma>0\) the width of a transient fault, and \(\varepsilon>0\) the propagation delay of value X. Let \(e\) be the execution induced by circuit \(C\) and input traces \(i\)._ _Then, for a signal \(s\) of the circuit, and a value region \(R\) of execution \(e\), the set \(\Sigma_{s}(R)\) of times \(t\in R\) such that \((C,i)\) is susceptible to a transient fault (of width \(\gamma\)) at signal \(s\) at time \(t\in R\) converges to a postfix of \(R\) as \(\varepsilon\to 0\) and \(\gamma\to 0\)._ Informally this means that every value region can be split into two intervals per signal: the left part of the region that contains the times at which the circuit is not susceptible, and the right part where it is susceptible to faults. Both parts/intervals can be empty. **Proof.** In the following fix \(C\), \(i\), \(\gamma\), \(\varepsilon\), execution \(e\), signal \(s\), and value region \(R\) of execution \(e\). We first show a monotonicity property within a value region. **Lemma 2**.: _Let \(R=[t,t^{\prime})\) be a value region of execution \(e\) and \(s\) a signal. 
Further, let \(e_{1}\) and \(e_{2}\) be executions of \(C\) with the same input traces as \(e\), but with \(e_{1}\) additionally having transient faults within \(R\) at \(s\) up to some time \(\tau_{1}\in R\) and \(e_{2}\) having at least one transient at a time \(\tau_{2}\in R\) at \(s\), where \(\tau_{1}\leq\tau_{2}\leq t^{\prime}-|C|\varepsilon-\gamma\)._ _Then for all value regions \(R^{\prime}\) of execution \(e\) and all signals \(s^{\prime}\), if \(s^{\prime}\) has value X at some time within \(R^{\prime}\) in execution \(e_{1}\), then it does so at some time within \(R^{\prime}\) in execution \(e_{2}\)._ Proof.: Within the same value region, both transient faults cause the same signals to become X, given that \(\tau_{1},\tau_{2}\) are sufficiently far from the value region's boundary \(t^{\prime}\) to allow for propagation of X with delay \(\varepsilon\) (at most \(|C|\varepsilon\) time): this follows from the fact that the circuit's signal values and set of scheduled actions are identical at the start of the first transient in \(e_{1}\) and in \(e_{2}\). Further, a signal with value X remains so unless it is set again to a Boolean value by a production rule. This can only happen by its guard becoming true right after a transient fault. Since \(\tau_{1}\leq\tau_{2}\), and both times are in the same value region, any event scheduled (and not canceled) after the transient fault at \(\tau_{2}\) must also be scheduled (and not canceled) after the transient faults that occur until time \(\tau_{1}\): signals have the same Boolean values and remain stable for a longer time in \(e_{1}\) than in \(e_{2}\). The argument is inductively repeated for each subsequent value region of execution \(e\). We are now in the position to show the section's main result. Proof of Theorem 2.: Letting \(\varepsilon\to 0\) and \(\gamma\to 0\), we have from Lemma 2 that if a transient fault at a signal \(s\) at a time \(\tau_{1}\in R\) causes X at a signal \(s^{\prime}\), then a transient fault at a signal \(s\) at a time \(\tau_{2}\in R\), where \(\tau_{1}\leq\tau_{2}\), also causes \(s^{\prime}\) to become X at some point in time. The theorem's statement then follows from the definition of a postfix of \(R\). ### _Automated computation of susceptible regions_ Theorem 2 directly leads to an algorithm that marks all sensitivity windows, i.e., susceptible times, within an execution prefix: for each non-output signal \(s\), and for each value region \(R\), it finds, by bisection (repeatedly inserting transient faults), the boundary between non-susceptible times (on the left within \(R\)) and susceptible times (on the right within \(R\)). We have implemented the algorithm in Python: given a circuit, input traces, the set of monitored signals, as well as a time \(T\) until which an execution prefix is to be generated, it outputs a figure with all susceptible windows highlighted in blue as well as the percentage of the length of the susceptible windows in the execution prefix (by default excluding the monitored signals, but with the possibility to include them). This value corresponds to the probability of a transient fault causing an X value at a monitored signal, i.e., _the probability to fail_\(P(\text{fail})\), given a uniform distribution of a single transient on all times in the execution prefix and on all signals that are not monitored signals (by default; alternatively on all signals). Clearly, though, the uniform distribution can be easily replaced by a more involved distribution. 
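A sketch of this window search in the spirit of Theorem 2 (our own illustrative code; `test(t)` would wrap `is_susceptible` from above for a fixed signal):

```python
def value_regions(hist, T):
    # value regions of an execution: intervals between consecutive times in
    # V_C(e), the set of times at which any signal switches value
    times = sorted({tt for tr in hist.values() for (tt, _) in tr} | {T})
    return list(zip(times[:-1], times[1:]))

def susceptible_window(region, test, tol=1e-3):
    """Theorem 2: within a value region [lo, hi) the susceptible times form a
    postfix, so a single bisection locates its left boundary up to tol."""
    lo, hi = region
    if test(lo):
        return (lo, hi)                 # the whole region is susceptible
    if not test(hi - tol):
        return None                     # the whole region is immune
    a, b = lo, hi - tol                 # invariant: not test(a), test(b)
    while b - a > tol:
        mid = (a + b) / 2
        if test(mid):
            b = mid
        else:
            a = mid
    return (b, hi)

def p_fail(windows, T, weights=None):
    # fraction of (signal, time) pairs at which a single uniformly placed
    # transient reaches a monitored signal; optional per-signal weights
    # (e.g., drive strength) replace the uniform distribution over signals
    w = weights or {s: 1 / len(windows) for s in windows}
    return sum(w[s] * sum(b - a for (a, b) in ws) / T
               for s, ws in windows.items())
```

Here `windows` would map each non-monitored signal to the susceptible windows collected over all of its value regions; a non-uniform fault distribution enters naturally through the `weights` argument.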
Towards this direction, the tool also outputs the probability per signal. This allows one to compute a weighted average, e.g., depending on driver strength or shielding at certain signals. Figure 6 shows the tool's output for the previous example of the 3-stage linear pipeline with sensitivity windows marked in blue. A fault occurring at any point of the blue sensitivity windows will drive one (or more) of the circuit's monitored signals to X. A fault hitting any other region (excluding the monitored signals) will be masked and will not reach the monitored signals of the circuit. ### _Comparison of fault-tolerance depending on speed_ We next illustrate the use of our tool in several example studies. To start with, let us investigate the fault masking potential of the example linear pipeline while varying the source and sink latencies. Inverter delays are symmetric and normalized to 1 and Muller C-element latencies are set to 5 inverter delays. The results are shown in Figure 7, with detail sweeps (cuts) in Figures 8 and 9. The length of the execution prefix has been chosen sufficiently high, to account for a sufficiently long time for \(P(\text{fail})\) to be dominated by the periodic operation of the circuit rather than the initial transient phase: \(T=500\) in the overview plot and \(T=1000\) in the detailed sweeps. Fig. 6: Execution prefix of linear 3-stage pipeline until time \(T=32\) with sensitivity windows marked in blue. Monitored signals are c1 and c3. Probability to fail \(P(\text{fail})=0.54375\). Fig. 7: Influence of source and sink speed on the probability to fail \(P(\text{fail})\). Linear 3-stage pipeline with delays as follows: 1 (INV), 5 (MCE), varying source and sink delays. \(T=500\). Figure 7 gives a view of the behavior of the circuit under a stable environment, be it fast or slow, and of how the circuit reacts when there is an imbalance between the speeds of source and sink. The \(z\)-axis displays the probability of an X value presence at any of the monitored signals. The \(x\)- and \(y\)-axes represent the speeds (latencies) of sink and source, respectively, in time units. Note that the pattern of the plot is best visualized when having the latter axis inverted. The diagonal of the frame from where both sink and source latencies are equal to 1 (fast) to where they are both 25 (slow) represents the stable/balanced environment, i.e., the source provides _tokens_ with the same speed as the sink provides the _acknowledgment_. The figure indicates that \(P(\text{fail})\) is high when the environment is stable and fast, and decreases as it gets stable and slow. When the environment is balanced, the MCEs in the circuit are not waiting for either the _data_ or the _ack_ signals; both are supplied within short intervals of time of each other. Recall that the waiting phases are those where the MCE operates in the vulnerable storage mode (inputs mismatching), so reducing the waiting period decreases \(P(\text{fail})\). The environment imbalance is divided further into two modes of operation. On the right side (for relatively low source delay) of the figure, the circuit is operating in _bubble-limited_ mode, where the sink's response to the source's new tokens is slow. On the left half of the figure, the sink's activity is faster than the source's, driving it in _token-limited_ mode. The vulnerability of the bubble-limited mode can be seen more clearly in Figure 9; this is where the system is most prone to failure. 
The probability \(P(\text{fail})\) varies from around 60-80%, where it reaches the maximum when the sink delay is equal to 22 while source delay is 1 (maximum imbalance). Similarly, the token-limited mode falls near the sink latency of 1 in Figure 8, varying from around 40-60%. The latter figures show several cross-sections of the 3D plot from Figure 7. In addition to mapping the token-limited and the bubble-limited areas to these two graphs, we can also spot the points belonging to the _balanced environment_ diagonal in the frame in Figure 7. These points are where the abrupt changes of behavior of each line occur, and consequently we can pinpoint where one region of the mode of operation ends and the other starts. Finally, Figure 10 shows the fault probabilities per signal of the linear pipeline as reported by our tool for varying source and sink delays (fast, normal, and slow). It allows us to give a more detailed interpretation of our observations in Figure 8. The probabilities for the monitored signals c1 and c3 are always \(1.0\), by definition of the fault probabilities. Interestingly, c2 has a high fault probability, too. For fast sink this can be explained as follows: MCE3 spends most of the time in the vulnerable storage mode, waiting for a transition on c2. As soon as one occurs, it triggers a transition on c3 which, after the short sink delay, puts MCE3 back to storage mode. This only leaves a very short time window where MCE3 is masking faults at c2. The enable signals, in turn, are only vulnerable during those short windows, thus showing low fault probability, especially when the source delay is high, see subplot "source:25, sink:1". Recall that we have chosen a relatively large switching delay for the MCE, and our pessimistic model assumes the MCE to be vulnerable during the whole switching duration. This explains why in general \(P(\text{fail})\) increases for faster operation speed: the proportion of the sensitive MCE switching phase increases. This can be most directly observed for the balanced cases. For the other imbalanced extreme with "source:1, sink:25" we observe high fault probability for the enable signals. This is not surprising, since now the MCEs spend most time waiting for transitions on these signals. A fault probability of \(1.0\) for c1 and c3 is also unsurprising, due to our definitions, as mentioned already. Quite unexpected, at first glance, is the fact that c2 again shows high fault probability, even though we can assume good masking by MCE3 for that input. The reason here is that, via INV2, faults from c2 directly propagate to en1 which is known to have low protection by masking. As a result, we see a generally high fault probability in this mode. Fig. 8: Influence of sink speed on the probability to fail \(P(\text{fail})\). Linear 3-stage pipeline with delays as follows: 1 (INV), 5 (MCE), 4 different source delays, varying sink delay. \(T=1000\). Fig. 9: Influence of source speed on the probability to fail \(P(\text{fail})\). Linear 3-stage pipeline with delays as follows: 1 (INV), 5 (MCE), 4 different sink delays, varying source delay. \(T=1000\). ### _Comparison of fault-tolerance for Muller pipeline rings_ Another very common asynchronous pipeline construct is rings. We interpret the pipeline operation to implement a 4-phase QDI protocol in the following. Shown in Figure 11 is a 3-stage Muller pipeline where one _data token_, one _spacer_, and one _bubble_ keep oscillating. Note that when using the term _token_ on its own, it encompasses a data token along with a spacer, so 1 token means one of each. It is possible to also interpret a token to follow the 2-phase communication protocol, and in this case we would double this count.
Linear 3-stage pipeline with delays as follows: 1 (INV), 5 (MCE), 4 different sink delays, varying source delay. \(T=1000\). interpret a token to follow the 2-phase communication protocol, and in this case we would double this count. In order to study the resilience of a Muller ring w.r.t. the activity inside the pipeline we need to vary its _occupancy_, i.e., the number of data items revolving in it [13]. We need at least one bubble in the ring, regardless of how many stages constitute it. When the number of data items in the ring is small, the other stages of the pipeline will be filled with _holes_. When there is more holes than data, the pipeline is said to be _data-limited_; when there is more data than holes, it is said to be _hole-limited_[13]. These operation modes correspond to the token-limited and bubble-limited modes, respectively, in the linear Muller pipeline. We use our tool to study the effect of varying the ring occupancy, by building the ring with a different number of stages and changing the token count. As this is a Muller pipeline, each C-element needs alternating input sequences to be able to transition from \(0\) to \(1\) every stage. In order to keep the pipeline running and avoid deadlock, the process of correctly initializing the stages of the ring is crucial. As previously mentioned, there must be at least one bubble in the pipeline. For each combination of tokens and stages, we calculate the number of bubbles needed and we fill the pipeline in the following manner: * If the number of bubbles is much larger than the number of tokens, we start by filling the pipeline with bubbles, and insert tokens equally paced from one another. * If the number of bubbles is much lower than the number of tokens, we start by inserting tokens, and spread the bubbles in between. * If there is only one bubble, it doesn't matter where it is inserted. Same if there is only one token. * A token is always inserted as a data token and a spacer that are not separated by a bubble. The results for these settings are shown in Figure 12. The first point of each line (from the left) represents the maximum number of tokens allowed for the corresponding number of stages (recall that this count represents, in fact, a data token and a spacer). The top left region represents the bubble-limited operation mode, where one can clearly see that \(P(\text{fail})\) gets higher. The increasing number of stages also seems to play a role in this trend. From what we have previously observed we can conjecture that this is because an idle (waiting) stage (MCE) has the highest fault probability, while one that processes a token/transition is more resilient. By adding stages while keeping the number of tokens constant, we add idle stages - consequently \(P(\text{fail})\) increases. As we move to the edges of the token-limited region where the number of bubbles largely exceeds the number of tokens, \(P(\text{fail})\) converges to a steady percentage of approximately 45%. Finally, we compare throughput and probability to fail as a function of the same ring pipeline with a varying number of (4-phase) tokens. It has been previously observed [13, 14] that the throughput as a function of tokens behaves as a canopy plot: it is low for few number of tokens (token-limited), high in the middle, and low for high numbers of tokens (bubble-limited). Figure 13 compares this behavior with the failure probability as determined by our tool for execution prefixes of length Fig. 11: Ring 3-stage pipeline. 
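The following sketch illustrates these filling rules. The per-stage encoding ('D' = data token, 'S' = spacer, 'B' = bubble) and the even-spacing heuristic are our own simplifications, not the tool's actual input format.

```python
def init_ring(n_stages, n_tokens):
    """Return an initial stage assignment for a Muller ring, following the
    filling rules above. A token occupies two stages (data token + spacer)
    that are never separated by a bubble."""
    n_bubbles = n_stages - 2 * n_tokens
    assert n_bubbles >= 1, "at least one bubble is needed to avoid deadlock"
    tokens  = [["D", "S"]] * n_tokens   # inseparable data-token/spacer pairs
    bubbles = [["B"]] * n_bubbles
    # Start from the majority kind and spread the minority evenly in between.
    major, minor = (bubbles, tokens) if n_bubbles >= n_tokens else (tokens, bubbles)
    ring, k, n = [], len(minor), len(major)
    for i, group in enumerate(major):
        ring.extend(group)
        if k and (i + 1) * k // n > i * k // n:   # equally paced insertion
            ring.extend(minor[(i + 1) * k // n - 1])
    return ring

print(init_ring(3, 1))  # ['B', 'D', 'S']: the ring of Figure 11
print(init_ring(8, 2))  # ['B', 'B', 'D', 'S', 'B', 'B', 'D', 'S']
```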
The results for these settings are shown in Figure 12. The first point of each line (from the left) represents the maximum number of tokens allowed for the corresponding number of stages (recall that this count represents, in fact, a data token and a spacer). The top left region represents the bubble-limited operation mode, where one can clearly see that \(P(\text{fail})\) gets higher. The increasing number of stages also seems to play a role in this trend. From what we have previously observed we can conjecture that this is because an idle (waiting) stage (MCE) has the highest fault probability, while one that processes a token/transition is more resilient. By adding stages while keeping the number of tokens constant, we add idle stages; consequently \(P(\text{fail})\) increases. As we move to the edges of the token-limited region, where the number of bubbles largely exceeds the number of tokens, \(P(\text{fail})\) converges to a steady percentage of approximately 45%.

Fig. 12: Influence of number of tokens and stages on the probability to fail \(P(\text{fail})\). Ring pipeline with varying number of stages and tokens. Delays as follows: 1 (INV), 5 (MCE), \(T=500\). A token encompasses a data token along with a spacer, so 1 token means one of each, following the 4-phase communication protocol. In the case of a 2-phase protocol, this token count is doubled.

Finally, we compare throughput and probability to fail as a function of the same ring pipeline with a varying number of (4-phase) tokens. It has been previously observed [13, 14] that the throughput as a function of tokens behaves as a canopy plot: it is low for a small number of tokens (token-limited), high in the middle, and low for high numbers of tokens (bubble-limited). Figure 13 compares this behavior with the failure probability as determined by our tool for execution prefixes of length \(200\). While the canopy diagram suggests that 4 and 5 tokens yield optimum throughput, the failure probability favors a lower token count. So for maximum performance the better choice would be 4 tokens. This result is not general and seems to depend on the design choices, but the general strategy should be to also consider \(P(\text{fail})\) in the system design.

### _Multi-bit QDI designs_

To demonstrate the ability of our tool to handle larger, multi-bit designs, we ran the algorithm to determine susceptible windows on execution prefixes of duration \(T=500\). As circuits under test we used 2-bit and 4-bit versions of the previously analyzed linear and ring pipelines. Our tool reported the following values for \(P(\text{fail})\): (i) the 3-stage linear pipeline with 2 bits resulted in \(0.22\) and with 4 bits in \(0.10\); (ii) the 3-stage ring pipeline with 2 bits resulted in \(0.22\) and with 4 bits in \(0.16\). The observed decrease of \(P(\text{fail})\) for higher pipeline width is as expected from the literature: while the last rail to switch is the most critical one, the faster bits are less critical and hence contribute to lowering the overall fault probability, with growing impact for increasing bit number. All results were obtained within minutes on a MacBook Pro (M2, 2022) with 24 GB RAM.

## V Conclusion & Future Work

By means of a formal proof we have established that the regular operation of circuits can be decomposed into time windows within which faults are equivalent in that their effect (as perceived at some selected monitoring signals) remains the same. These time windows are bounded by an arbitrary bound on the left and a regular signal transition on the right. Consequently, for determining the effect of transient faults on a circuit, a single bisection between each signal transition is sufficient to determine all sensitivity windows. The approach has two advantages over standard sweeping approaches to find sensitive regions: (i) it provably finds all sensitivity windows, no matter how small they are; sweeping, by contrast, always leaves open the possibility that a small window may exist between two samples. (ii) It outperforms sweeping in that a fine grid of samples is not necessary: many (large) windows require only a single sample via our method. Based on this result we have developed a Python-based tool that, starting from a production-rule based circuit description, systematically explores its resilient and its vulnerable windows (along with the respective fault effects). The relative size of the windows is then used to predict the proportion of (random) faults that will be effective, and thus, given a fault rate, the failure rate. Since our approach allows identifying the windows individually, it is possible to attach weights to the individual nodes to account for different susceptibility (e.g., drive strength) in the overall prediction.
We have illustrated the function of our tool on several examples of typical QDI circuits, which showed that the tool is efficient and allows for fast analysis, with good scaling towards complex circuits. While currently only relatively simple circuits were targeted, to keep the focus on the principle of our approach, a next step will be to extend the set of targets to larger and more complex circuits. Another extension of our approach will be to determine the constituent parameters for the window sizes. Since we determine all windows individually in our automated process, backtracking to the origins of the relevant signal transitions is possible. With that information we can determine in detail how individual parameters like circuit delays or pipeline load influence resilience, and hence elaborate targeted optimizations. Finally, work on improving the performance of the implementation is planned: the proposed algorithm is easily parallelizable, since windows can be determined independently and hence concurrently.
2308.09543
Latent State Models of Training Dynamics
The impact of randomness on model training is poorly understood. How do differences in data order and initialization actually manifest in the model, such that some training runs outperform others or converge faster? Furthermore, how can we interpret the resulting training dynamics and the phase transitions that characterize different trajectories? To understand the effect of randomness on the dynamics and outcomes of neural network training, we train models multiple times with different random seeds and compute a variety of metrics throughout training, such as the $L_2$ norm, mean, and variance of the neural network's weights. We then fit a hidden Markov model (HMM) over the resulting sequences of metrics. The HMM represents training as a stochastic process of transitions between latent states, providing an intuitive overview of significant changes during training. Using our method, we produce a low-dimensional, discrete representation of training dynamics on grokking tasks, image classification, and masked language modeling. We use the HMM representation to study phase transitions and identify latent "detour" states that slow down convergence.
Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho
2023-08-18T13:20:08Z
http://arxiv.org/abs/2308.09543v3
# Delays, Detours, and Forks in the Road: Latent State Models of Training Dynamics

###### Abstract

The impact of randomness on model training is poorly understood. How do differences in data order and initialization actually manifest in the model, such that some training runs outperform others or converge faster? Furthermore, how can we interpret the resulting training dynamics and the phase transitions that characterize different trajectories? To understand the effect of randomness on the dynamics and outcomes of neural network training, we train models multiple times with different random seeds and compute a variety of metrics throughout training, such as the \(L_{2}\) norm, mean, and variance of the neural network's weights. We then fit a hidden Markov model (HMM; Baum and Petrie, 1966) over the resulting sequences of metrics. The HMM represents training as a stochastic process of transitions between latent states, providing an intuitive overview of significant changes during training. Using our method, we produce a low-dimensional, discrete representation of training dynamics on grokking tasks, image classification, and masked language modeling. We use the HMM representation to study phase transitions and identify latent "detour" states that slow down convergence.

## 1 Introduction

We possess strong intuition for how various tuned hyperparameters, such as learning rate or weight decay, affect model training dynamics and outcomes (Galanti et al., 2023; Lyu et al., 2022). For example, a larger learning rate may lead to faster convergence at the cost of sub-optimal solutions (Hazan, 2019; Smith et al., 2021; Wu et al., 2019). However, we lack similar intuitions for the impact of randomness. Like other hyperparameters, random seeds also have a significant impact on training (Madhyastha and Jain, 2019; Sellam et al., 2022), but we have a limited understanding of how randomness in training actually manifests in the model. In this work, we study the impact of random seeds through a low-dimensional representation of training dynamics, which we use to visualize and cluster training trajectories with different parameter initializations and data orders. Specifically, we analyze training trajectories using a **hidden Markov model** (HMM) fitted on a set of generic metrics collected throughout training, such as the means and variances of the neural network's weights and biases. From the HMM, we derive a visual summary of how learning occurs for a task across different random seeds. This work is a first step towards a principled and automated framework for understanding variation in model training. By learning a low-dimensional representation of training trajectories, we analyze training at a higher level of abstraction than directly studying model weights. We use the HMM to infer a Markov chain over latent states in training and relate the resulting paths to training outcomes. Our contributions:

1. We propose to use the HMM as a principled, automated, and efficient method for analyzing variability in model training. We fit the HMM to a set of off-the-shelf metrics and allow the model to infer latent state transitions from the metrics. We then extract from the HMM a "training map," which describes the important metrics for each state and changes in these metrics during state transitions, helping to visualize how training evolves (Section 2). We train HMMs on training trajectories derived from grokking tasks, language modeling, and image classification across a variety of model architectures and sizes.
For these settings, we use the training map to characterize how different random seeds lead to different training trajectories. We analyze phase transitions in grokking by matching them to corresponding latent states in the training map, and thus the changes in metrics associated with the phase transitions (Section 3.1).

2. We discover **detour** states, which are latent states associated with slower convergence. We propose our method for finding detour states as a general way to assign semantics onto latent states in training maps (Sections 2.3, 3.4). We discover that we can induce detour states in image classification by destabilizing the optimization process and, conversely, remove detour states in grokking by stabilizing the optimization process. By making a few changes that are known to stabilize neural network training, such as adding normalization layers, we find that the gap between memorization and generalization in grokking is dramatically reduced. Our results, along with prior work from Liu et al. (2023), show that grokking can be avoided by changing the architecture or optimization of deep networks (Section 3.3).

## 2 Methods

In this work, we cluster training trajectories from different random seeds and then analyze these clusters to better understand their learning dynamics and how they compare to each other. To cluster trajectories, we assign each model checkpoint to a discrete latent state using an HMM. We choose the HMM because it is the simplest time series model with a discrete latent space. Let \(\mathbf{w}_{1:T}\in\mathbb{R}^{D\times T}\) be the sequence of neural network weights observed during training. Each \(\mathbf{w}_{t}\) is a model checkpoint. In this work, we use the Gaussian HMM to label each checkpoint in \(\mathbf{w}_{1:T}\) with its own latent state, \(s_{1:T}\). Fitting the HMM directly over the weights is computationally infeasible, because the sample complexity of an HMM with \(O(D^{2})\) parameters would be prohibitively high. Our solution to this problem is to compute a small number of metrics \(f_{1}(\mathbf{w}_{1:T}),\ldots,f_{d}(\mathbf{w}_{1:T})\) from \(\mathbf{w}_{1:T}\), where \(d\ll D\) and \(f:\mathbb{R}^{D}\to\mathbb{R}\).

Figure 1: From training runs we collect metrics, which are functions of the neural networks' weights. We then train a hidden Markov model using the sequences of metrics generated from the training runs. The hidden Markov model learns a discrete latent state over the sequence, which we use to cluster and analyze the training trajectory.

### Training an HMM over Metrics

In this work, we focus on capturing how the computation of the neural network changes during training by modeling the evolution of the neural network weights. To succinctly represent the weights, we compute various metrics such as the average layer-wise \(L_{1}\) and \(L_{2}\) norm, the mean and variances of the weights and biases in the network, and the means and variances of each weight matrix's singular values. A full list of the 14 metrics we use, along with formulae and rationales, is in Appendix B. To fit the HMM, we concatenate these metrics into an observation sequence \(z_{1:T}\). We then apply z-score normalization (also known as standardization), adjusting each feature to have a mean of zero and a standard deviation of one, as HMMs are sensitive to the scale of features. We thus obtain the normalized sequence \(\tilde{z}_{1:T}\).
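As an illustration of the kind of metric computation described above, the following sketch computes a few representative features from a PyTorch checkpoint. The framework, function names, and the particular subset of metrics are our assumptions; the paper's full list of 14 metrics, with formulae and rationales, is in its Appendix B.

```python
import torch

def checkpoint_metrics(model):
    """Map a model checkpoint w_t to a small feature vector z_t."""
    # The paper ignores embedding matrices and layer norms when computing
    # metrics; a faithful implementation would filter those out here too.
    weights = [p.detach() for n, p in model.named_parameters() if "weight" in n]
    biases  = [p.detach() for n, p in model.named_parameters() if "bias" in n]
    flat_w = torch.cat([w.flatten() for w in weights])
    z = [
        torch.stack([w.norm(p=1) for w in weights]).mean(),  # avg layer-wise L1
        torch.stack([w.norm(p=2) for w in weights]).mean(),  # avg layer-wise L2
        flat_w.mean(), flat_w.var(),                         # weight moments
    ]
    if biases:
        flat_b = torch.cat([b.flatten() for b in biases])
        z += [flat_b.mean(), flat_b.var()]                   # bias moments
    # Mean/variance of singular values, averaged over 2-D weight matrices.
    svs = [torch.linalg.svdvals(w) for w in weights if w.ndim == 2]
    if svs:
        z += [torch.stack([s.mean() for s in svs]).mean(),
              torch.stack([s.var() for s in svs]).mean()]
    return torch.stack(z)
```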
To bound the impact of training trajectory length, we compute z-scores using the estimated mean and variance of (up to) the first 1000 collected checkpoints. \[z_{t} =\begin{bmatrix}f_{1}(\mathbf{w}_{t})\\ \vdots\\ f_{d}(\mathbf{w}_{t})\end{bmatrix}, \tilde{z}_{t} =\begin{bmatrix}[f_{1}(\mathbf{w}_{t})-\mu(f_{1}(\mathbf{w}_{1:T }))]/\sigma(f_{1}(\mathbf{w}_{1:T}))\\ \vdots\\ [f_{d}(\mathbf{w}_{t})-\mu(f_{d}(\mathbf{w}_{1:T}))]/\sigma(f_{d}(\mathbf{w}_{ 1:T}))\end{bmatrix}\] We collect \(N\) sequences \(\{z_{1:T}\}_{1}^{N}\) from \(N\) different random seeds, normalize the distribution of each metric across training for a given seed, and train the HMM over the sequences \(\{\tilde{z}_{1:T}\}_{1}^{N}\) using the Baum-Welch algorithm (Baum et al., 1970). The main hyperparameter in the HMM is the number of hidden states, which is typically tuned using the log-likelihood, Akaike information criterion (AIC), and/or Bayesian information criterion (BIC) (Akaike, 1998; Schwarz, 1978) of validation sequences. Here, we hold out 20% of the \(N\) trajectories as validation sequences and choose the number of hidden states that minimizes the BIC. We use BIC because BIC imposes a stronger preference for simpler, and thus more interpretable, models. Model selection curves are in Appendix G. ### Extracting the Training Map Next, we use the HMM to describe what each hidden state means and how the hidden states relate to each other. We convert the HMM into a "training map," which represents hidden states as vertices in a graph and hidden state transitions as edges in the graph. First, we extract the graph's structure from the HMM. The learned HMM has two sets of parameters: the transition matrix \(p(s_{t}|s_{t-1})\) between hidden states, and the emission distribution \(p(\tilde{z}_{t}|s_{t}=k)\sim N(\mu_{k},\Sigma_{k})\), where \(\mu_{k}\) and \(\Sigma_{k}\) are the mean and covariance of the Gaussian conditioned on the hidden state \(k\), respectively. The transition matrix is a Markov chain that defines the graph's structure. It defines what hidden states exist and the possible transitions between hidden states _a priori_. We prune edges in the Markov chain if the edge is unused by the HMM for all training trajectories. We label the hidden states \(s_{1:T}\) (i.e., the graph's vertices) by ranking the features according to how much each feature \(\tilde{z}_{t}[i]\) changes the posterior probability \(p(s_{t}=k|\tilde{z}_{1:t})\). If a change \(\Delta\tilde{z}_{t}[i]\) along a feature \(\tilde{z}_{t}[i]\) leads to a large change in \(p(s_{t}=k|\tilde{z}_{1:t})\), then we consider \(\tilde{z}_{t}[i]\) to be an influential feature for the prediction that \(s_{t}=k\). Let \(\mathcal{L}\) be the likelihood \(p(\tilde{z}_{t}|s_{t})\). **Proposition 1**: _We can rank features \(\tilde{z}_{t}[i]\) according to how much they change the posterior probability \(p(s_{t}=k|\tilde{z}_{1:t})\) by computing the derivative:_ \[\frac{\partial\log\mathcal{L}}{\partial\Delta\tilde{z}_{t}[i]}=\Sigma_{k}^{-1 }[i,i] \tag{1}\] _Proof sketch: The posterior probability \(p(s_{t}|\tilde{z}_{1:t})\) is a monotonic transformation of the likelihood \(\mathcal{L}\) when holding \(\tilde{z}_{1:t-1}\) fixed. Thus, we can simply take the derivative \(\frac{\partial\log\mathcal{L}}{\partial\Delta\tilde{z}_{t}[i]}=\Sigma_{k}^{-1 }[i,i]\) to find the features \(\tilde{z}_{t}[i]\) that produce the largest changes in the log-likelihood. 
It follows that the most important feature \(\tilde{z}_{t}[i]\) for hidden state \(k\) has the largest \(\Sigma_{k}^{-1}[i,i]\). See Appendix A for the full derivation._ In the results to follow, we use Proposition 1 to compute the 3 most important features for each hidden state. Formally, the most important feature is \(\arg\max_{\tilde{z}_{t}[i]}\frac{\partial\log\mathcal{L}}{\partial\Delta\tilde{z}_{t}[i]}\). To characterize an edge (\(j\to k\)) in the graph, we can subtract the means between state \(j\) and \(k\). The difference vector \(\mu_{k}-\mu_{j}\) then describes the movement of features along the edge. In summary, we can obtain a training map from an HMM by extracting:

* The graph structure (vertices and edges) from a pruned transition matrix.
* Vertex labels from the learned covariance matrix of each hidden state, which describes the features that change the hidden state the most.
* Edge labels from the difference vectors between hidden states.

### Assigning Semantics to Latent States

From the HMM's transition matrix, we obtain a training map, or the Markov chain between learned latent states of training. We then label the nodes and edges in the training map using probabilistic reasoning over the HMM's learned means and covariances. But what do we learn from the path a training run takes through the map? In particular, what impact does a particular state have on training outcomes? In order to relate HMM states to training outcomes, we select a metric and predict it from the path a training run takes through the Markov chain. To do so, we must featurize the sequence of latent states, and in this work we use unigram featurization, or a "bag of states" model. Formally, let \(s_{1},s_{2},\dots,s_{T}\) be the latent states visited during a training run. The empirical distribution over states can be calculated as:

\[\hat{P}(s=k)=\frac{\sum_{j}\mathds{1}(s_{j}=k)}{T} \tag{2}\]

where \(k\) represents a particular state and \(T\) is the total number of checkpoints in the trajectory. This distribution can be written as a \(d\)-dimensional vector, which is equivalent to unigram featurization. In this work, we investigate how particular states impact convergence time, which we measure as the first timestep that evaluation accuracy crosses a threshold. We set the threshold to be a value slightly smaller than the maximum evaluation accuracy (see Section 3.4). We use linear regression to predict convergence time from \(\hat{P}\). Here, we are not forecasting when a model will converge from earlier timesteps; rather, we are simply using linear regression to learn a function between latent states and convergence time. After training the regression model, we examine the regression coefficients to see which states are correlated with slower or faster convergence times. If the regression coefficient for a state is positive when predicting convergence time, then a training run spending additional time in that state implies a longer convergence time. Additionally, if that same state is not visited by all trajectories, then we can consider it a **detour**, because the trajectories that visit the optional state are also delaying their convergence time.

**Definition: Detour state.** A learned latent state is a detour state if:

* Some training runs do not visit the state. This indicates that the state is "optional."
* Its linear regression coefficient is positive when predicting convergence time. This indicates that a training run spending more time in the state will have a longer convergence time.
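To make this procedure concrete, here is a minimal sketch of unigram featurization and detour-state detection, assuming NumPy and scikit-learn (which the paper does not necessarily use). `state_seqs` would hold each run's decoded state sequence from the fitted HMM (e.g., via Viterbi decoding); all function names are ours.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def unigram_features(state_seqs, n_states):
    """Empirical distribution over latent states for each training run
    (Equation 2), as a (n_runs, n_states) matrix."""
    X = np.zeros((len(state_seqs), n_states))
    for i, seq in enumerate(state_seqs):
        for s in seq:
            X[i, s] += 1
        X[i] /= len(seq)  # P_hat(s = k) for run i
    return X

def detour_states(state_seqs, conv_times, n_states):
    """States that are both optional (skipped by some runs) and have a
    positive regression coefficient when predicting convergence time."""
    X = unigram_features(state_seqs, n_states)
    reg = LinearRegression().fit(X, np.asarray(conv_times))
    visited_by_all = (X > 0).all(axis=0)
    return [k for k in range(n_states)
            if reg.coef_[k] > 0 and not visited_by_all[k]]
```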
Our method for assigning semantics to latent states can be extended to other metrics. For example, one might use regression to predict a measure of gender bias, which can vary widely across training runs (Sellam et al., 2022), from the empirical distribution over latent states. The training map then becomes a map of how gender bias manifests across training runs. We also recommend computing the \(p\)-value of the linear regression and only interpreting the coefficients when they are statistically significant. ## 3 Results Training maps help us understand how and when variations due to randomness manifest over the course of training. We perform experiments across five tasks: modular addition, sparse parities, masked language modeling, MNIST, and CIFAR-100. For all training hyperparameter details, see Appendix C. Modular arithmetic and sparse parities are tasks where models consistently exhibit **grokking**(Power et al., 2022), a phenomenon where the training and validation losses seem to be decoupled, and the validation loss drops sharply after a period of little to no improvement. The model first memorizes the training data and then generalizes to the validation set. We call these sharp changes "phase transitions," which are periods in training which contain an inflection in the loss (i.e., the concavity of the loss changes) that is then sustained (no return to chance performance). We study modular arithmetic and sparse parities to see how phase transitions are represented by the HMM's discrete latent space. We complement these tasks with masked language modeling (Appendix D) and image classification. In this work, we ignore embedding matrices and layer norms when computing metrics, as we are primarily interested in how the function represented by the neural network changes. ### Algorithmic Data: Modular Arithmetic and Sparse Parities Modular Arithmetic: Figure 2.In modular addition, we train a one-layer autoregressive transformer to predict \(z=(x+y)\mod 113\) from inputs \(x\) and \(y\). We collect trajectories using 40 random seeds and train and validate the HMM on a random 80-20 validation split, a split that we use for all settings. This is a replication of the experiments in Nanda et al. (2023). In modular arithmetic, the number of epochs that different training runs take to converge differ by thousands of epochs. Examining the modular addition training map, we find that there exist paths of different lengths: some training runs take the shortest path through the map to convergence, while others do not. We feature three such paths in Figure 2. All runs initialize in state 1 and achieve low loss in state 3, but there are several paths from 1 to 3. The longest path (\(1\to 5\to 2\to 3\)) coincides with the longest time to convergence of the three featured runs, and the shortest path (\(1\to 3\)) with the shortest. Using the HMM, we can further dissect this variability by relating the edges exiting state 1 to how fast or slow generalizing runs differ with respect to model internals. The results of this examination are in the table of Figure 2. Here, we take the top 3 features of states 2, 5, and 3 via the learned covariance matrices, and quantify the feature movements of the top 3 features by subtracting the learned means (recall \(\tilde{z}\)) between these states and state 1. We find that the fast-generalizing path (\(1\to 3\)) is characterized by a "just-right" drop in the \(L_{2}\) norm (\(\downarrow\)1.68, see table). 
The slower-generalizing runs (\(1\to 2\to 3\)) and (\(1\to 5\to 2\to 3\)) are characterized by either smaller (\(\downarrow\)0.59) or larger (\(\downarrow\)2.08) drops in \(L_{2}\) norm. State 1 encapsulates the memorization phase transition: the training loss drops to near zero in state 1, while validation loss increases. Thus, according to the training map, the epoch in which the generalization phase transition happens is affected by how fast the \(L_{2}\) norm drops immediately after the memorization phase transition. A "just-right" drop in the \(L_{2}\) norm is correlated with the quickest onset of generalization.

Figure 2: One-layer transformer trained on modular addition. The first edge that a training run takes to exit the initialization state 1 significantly impacts the number of epochs the run takes to generalize. We sort features from most to least important by inverting the learned covariance matrices of each state, and we define edges by subtracting the learned means between states, as discussed in Section 2.2. See Appendix B for a glossary of metrics. The changes in the chart are the top three differences in learned means, sorted by importance: for example, state 2 has a learned \(L_{2}\) norm that is 0.59 standard deviations lower than state 1, and the \(L_{2}\) norm is the most important feature for state 2.

Sparse Parities: Figure 8 in Appendix E. Sparse parities is a similar rule-based task to modular addition, where a multilayer perceptron must learn to apply an \(AND\) operation to 3 bits within a 40-length bit vector; the crux of the task is learning which 3 of the 40 bits are relevant. We again collect 40 training runs. Similar to modular arithmetic, path variability through the training map also appears at the beginning of training in sparse parities. Slow-generalizing runs take the path (\(2\to 0\to 5\)), while fast-generalizing runs take the more direct path (\(2\to 5\)). The \(L_{2}\) norm remains important here, with the edge (\(2\to 0\)) characterized by an increase in the \(L_{2}\) norm and the edge (\(2\to 5\)) characterized by a decrease. Once again, the speed at which the generalization phase transition occurs is associated with a specific change in the \(L_{2}\) norm immediately after the memorization phase transition.

### Image classification: CIFAR-100 and MNIST

CIFAR-100: Figure 3. As a counterpoint to grokking, consider image classification, a well-studied task in computer vision and machine learning. We collect 40 runs of ResNet18 (He et al., 2016) trained on CIFAR-100 (Krizhevsky, 2009), and find that the learning dynamics are smooth and insensitive to random seed. The training map is a linear graph, and the state transitions all tend to feature increasing dispersion in the weights. We show the top 3 features for each state transition in the table of Figure 3. The \(L_{1}\), \(L_{2}\) and average singular value are increasing monotonically across all state transitions.

MNIST: Figure 9 in Appendix F. The dynamics of CIFAR-100 seem to be shared by MNIST. We collect 40 training runs of a two-layer MLP learning image classification on MNIST, with hyperparameters based on Simard et al. (2003). The training runs of MNIST follow a single trajectory through the training map. We examine several state transitions throughout training and find that the transitions are also characterized by similar changes between features.
### Destabilizing Image Classification, Stabilizing Grokking

So far, we have observed that the training dynamics of neural networks learning algorithmic data (modular addition and sparse parities) are highly sensitive to random seed, while the dynamics of networks trained on image classification are relatively unaffected by random seed. One possible explanation is that sensitivity to random seed is a property of the data, and grokking occurs because the data induces it. In this section, we will show that this explanation is incomplete. Rather, grokking is also affected by model architecture and optimization hyperparameters, and small changes to training can both close the gap between memorization and generalization in grokking and make training robust to changes in random seed.

Figure 3: ResNet18 trained on CIFAR-100. All 40 training runs we collected from CIFAR-100 follow the same path, although individual runs can spend slightly different amounts of time in each state. As shown by the training map and accompanying annotations in the table, the training dynamics of CIFAR-100 are similar between states.

First, we examine the training dynamics of ResNets without batch normalization (Ioffe and Szegedy, 2015) and residual connections. Residual connections help ResNets avoid vanishing gradients (He et al., 2016) and smooth the loss landscape (Li et al., 2018). Batch norm has similarly been shown to add smoothness to the loss landscape (Santurkar et al., 2018) and also contributes to automatic learning rate tuning (Arora et al., 2019). We remove batch norm and residual connections from ResNet18 and train the ablated networks from scratch on CIFAR-100 over 40 random seeds. All hyperparameters are in Appendix C. Without batch norm and residual connections, ResNet18's training dynamics become significantly more sensitive to randomness. See Figure 4. Depending on the random seed, the model may stagnate for many updates before generalizing. This increase in random variation is visible in the learned training map, which now forks when exiting state 3, the initialization state. There now exists a slow-generalizing path (\(3\to 1\)) and a fast-generalizing path (\(3\to 2\)), characterized by feature movements in opposite directions. In the slow-generalizing path, norms and average singular value are increasing, while in the fast-generalizing path these features are slightly decreasing.

If removing batch normalization destabilizes ResNet training in CIFAR-100, then adding layer normalization (which was removed by Nanda et al. (2023)) should stabilize training in modular addition. Thus, we add layer normalization back in and train over 40 random seeds. We also decrease the batch size, which leads SGD to flatter minima (Keskar et al., 2017). These modifications to training help the transformer converge around 30 times faster on modular addition data. Furthermore, sensitivity to random seed disappears: the training map in Figure 5 becomes a linear graph. From this section, we draw two conclusions. First, that grokking is caused by both the data and model training choices, and changes to model training can minimize the grokking effect. Second, that different hyperparameters or architectures can result in different training maps for the same task. In training setups sensitive to random seed, the HMM associates differences in training dynamics with different latent states. We formalize the connection between latent states and metrics such as convergence time in the next section.
Figure 4: Without residual connections and batch normalization, ResNet training becomes unstable, causing convergence times to differ significantly. Slow-generalizing runs take the state transition (\(3\to 1\)), while fast-generalizing runs take the state transition (\(3\to 2\)). (Runs can take the path (\(3\to 1\to 3\to 2\)), so transition frequencies do not sum to 40). The variability induced by removing residual connections and batch norm occurs at the beginning of training. ### Predicting Convergence Time We now use these state models as features in a linear regression to identify convergence time, as described in Section 2.3. We define convergence time as the iteration where validation accuracy is greater than some threshold, and we take this threshold to be 0.9 in modular addition and sparse parities, 0.6 for the stable version of CIFAR-100, 0.4 for destabilized CIFAR-100, and 0.97 for MNIST. We set these values to be slightly less than the maximum evaluation accuracy for each task, respectively. To visualize the variance in convergence times, see Appendix H. In Table 1, we find that linear regression predicts convergence time from a given training run's distribution over latent states very accurately, as long as the training map contains forked paths. If the training map is instead linear, training follows similar paths through the HMM across different random seeds. We formalize this intuition of **trajectory dissimilarity** by measuring the expected Wasserstein distance \(W(\cdot,\cdot)\)(Kantorovich, 1939; Vaserstein, 1969) between empirical distributions for any two random seeds \(p,q\) over latent states, sampled uniformly at random. \[\text{Trajectory dissimilarity}:=\mathbb{E}[W(p,q)]=\frac{2}{N(N-1)}\sum_{i=1} ^{N}\sum_{j=1}^{i}W(p_{i},q_{j}) \tag{3}\] \begin{table} \begin{tabular}{|c|c|c||c|c|} \hline **Dataset** & \(R^{2}\) & \(p\)**-value** & **Dissimilarity** & **Forking** \\ \hline Modular addition & 0.977 & \(<\)0.001 & 0.496 & ✓ \\ \hline Modular addition, stabilized & 0.514 & \(<\)0.001 & 0.038 & \\ \hline \hline CIFAR-100 & 0.094 & 0.469 & 0.028 & \\ \hline CIFAR-100, destabilized & 0.905 & \(<\)0.001 & 0.806 & ✓ \\ \hline \hline Sparse parities & 0.961 & \(<\)0.001 & 0.183 & ✓ \\ \hline \hline MNIST & 0.049 & 0.611 & 0.063 & \\ \hline \end{tabular} \end{table} Table 1: Predictability of convergence epoch using a unigram model of states. Dissimilarity is provided per Equation 3 and the training maps are marked as forking unless they are linear. Figure 5: With layer normalization and a lower learning rate, the one-layer transformer quickly learns the modular arithmetic task, with a convergence time stable across random seed. This stability is captured by the linear training map. Critically, the map still reflects the grokking phase transitions: memorization, which occurs in state 0, and generalization, which occurs in state 2. With statistically significant (\(p<0.001\)) regression models for modular addition, sparse parities, and destabilized CIFAR-100, we can use the learned regression coefficients to find detour states. In Table 2, we highlight these detour states, defined as any state with a positive regression coefficient that is only visited by a strict subset of training trajectories. In our tasks with linear graphs, there are no detour states, because every training run visits every latent state. 
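For reference, a direct transcription of Equation 3 might look as follows, treating each run's unigram state distribution as a one-dimensional distribution over state indices; the choice of ground metric is our assumption, since the text does not specify one.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def trajectory_dissimilarity(P):
    """Expected Wasserstein distance between the empirical state
    distributions of any two random seeds (Equation 3).
    P: (N, n_states) array, one unigram distribution per training run."""
    N, n_states = P.shape
    support = np.arange(n_states)  # assumed ground metric: |i - j| on indices
    total = 0.0
    for i in range(N):
        for j in range(i):
            total += wasserstein_distance(support, support, P[i], P[j])
    return 2.0 * total / (N * (N - 1))
```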
Our regression analysis largely confirms observations drawn from looking at the training maps and trajectories in sections prior: states 2 and 5 are detour states in modular addition, state 0 is a detour state in sparse parities, and state 1 is a detour state in destabilized CIFAR-100. Detour states signal that the outcome of training is unstable: they appear in training setups that are sensitive to randomness, and they disappear in setups that are robust to randomness. By adding layer norm and decreasing batch size, we decreased both the mean and variance of convergence time in modular addition (see Table 1). Under these stabilized regimes, detour states disappear, as the training map becomes a linear graph. Conversely, removing batch norm and residual connections destabilizes the training of ResNets, thereby inducing forks in the training map that lead to detour states.

Table 2: Learned linear regression coefficients. If the value is positive, then the time spent in the state is correlated with increased convergence time, and vice versa. Detour states are bolded.

Figure 6: Training maps express variability in training dynamics as a more densely connected graph. For stable training setups, the HMM learns a linear graph as the training map. Training dynamics can be stabilized or destabilized by changing hyperparameters (batch size) or architecture (normalization layers, residual connections).

## 4 Related Work

Our work is not the first to relate state machines to the internals of a neural network. Weiss et al. (2018, 2019) extract deterministic finite automata (DFA) from neural networks, which bears some similarity to the annotated Markov chain we extract from training runs. Williams (1992) use an extended Kalman filter (EKF) to train a recurrent neural network and note the similarity between EKF and the real-time recurrent learning algorithm (Marschall et al., 2020). In contrast to the existing literature, we use state machines to understand the training process rather than the inference process. Measuring the state of a neural network using various metrics was also done in Frankle et al. (2020). Analyzing time series data using a probabilistic framework has been successfully applied to many other tasks in machine learning (Kim et al., 2017; Hughey and Krogh, 1996; Bartolucci et al., 2014). In a similar spirit to our work, Batty et al. (2019) use an autoregressive HMM (ARHMM) to segment behavioral videos into semantically similar chunks. The ARHMM can capture both discrete and continuous latent dynamics, making it an interesting model to try for future work. These modeling decisions (discrete vs. continuous latent space, dimensionality reduction) all impact the interpretation of the trained model, so we invite readers to consider them carefully.

Our work is substantively inspired by the progress measures literature, which aims to find metrics that can predict discontinuous improvement or convergence in neural networks. Barak et al. (2022) first hypothesized the existence of hidden progress measures. Olsson et al. (2022) found a progress measure for induction heads in Transformer-based language models, and Nanda et al. (2023) found a progress measure for grokking in the modular arithmetic task. The \(L_{2}\) norm is also known to be both important to and predictive of grokking, thereby motivating the use of weight decay to accelerate convergence in grokking settings (Nanda et al., 2023; Power et al., 2022; Thilak et al., 2022). Liu et al.
(2023) highlight the importance of the \(L_{2}\) norm by correcting for grokking via projected gradient descent within a fixed-size \(L_{2}\) ball; conversely, they also induce grokking on new datasets by choosing a disadvantageous \(L_{2}\) norm. Our results mirror their work while showing that grokking has other available remedies, beyond ones that directly manipulate the \(L_{2}\) norm. Finally, this work relates broadly to the empirical study of training dynamics. Much of the literature treats learning as a process where increases in training data lead to predictable increases in test performance (Kaplan et al., 2020; Razeghi et al., 2022) and in model complexity (Choshen et al., 2022; Mangalam and Prabhu, 2019; Nakkiran et al., 2019). However, this treatment of training ignores how heterogeneous the factors of training can be. Different capabilities are learned at different rates (Srivastava et al., 2022), different layers converge at different rates (Raghu et al., 2017), and different latent dimensions emerge at different rates (Jarvis et al., 2023; Saxe et al., 2019). While early stages in training can be modeled nearly exactly through simple methods (Hu et al., 2020; Jacot et al., 2018), these early stages are notably distinct from later stages. Early stages exhibit unique phenomena such as critical learning periods (Achille et al., 2019) and break-even points (Jastrzebski et al., 2020). Consequently, methods like ours which treat training as a heterogeneous process are crucial in understanding realistic training trajectories. ## 5 Discussion The training maps derived from HMMs are interpretable descriptions of training dynamics that summarize similarities and differences between training runs. Our results show that there exists a low-dimensional, _discrete_ representation of training dynamics. Via the HMM, this representation is generally predictive of the next set of metrics in the training trajectory, given the previous metrics. Furthermore, in some cases this low-dimensional, discrete representation can even be used to predict the iteration in which models converge. ### Grokking and the Optimization Landscape We conjecture that grokking is the consequence of a sharp optimization landscape. Consider the edits we performed to significantly decrease the grokking effect: adding layer normalization and decreasing batch size. Normalization layers and decreasing batch size have been documented in the literature as increasing smoothness in the loss landscape (Santurkar et al., 2018; Arora et al., 2019; Keskar et al., 2017). Image classification is a well-studied task with many tricks for improving the efficiency of training; perhaps learning algorithmic data will become just as efficient in the future, such that grokking is no longer a concern. ### Progress Measures and Phase Transitions By modeling convergence time in grokking settings, we analyze phase transitions. We find that the generalization phase transition can be sped up by avoiding detour states. These detour states are generally characterized by specific requirements in metrics such as the \(L_{2}\) norm. For example, in the modular arithmetic setting, avoiding detour states requires a "just-right" decrease in the \(L_{2}\) norm-not too little, and not too much. Liu et al. 
(2023) posited that grokking occurs because the weight norm is slow to reach a shell of particular \(L_{2}\) norm in weight space, previously called the "Goldilocks zone" (Fort and Scherlis, 2018); our results suggest that the rate of change is also crucial, and not only the momentary value of the norm. ### The Impact of Random Seed We recommend that researchers studying training dynamics experiment with a large number of training seeds. When claims are based on a small number of runs, anomalous training phenomena might be missed, simply due to sampling. These anomalous phenomena can be the most elucidating, as in grokking experiments, where a small number of runs converge faster than the rest. The role of random variation has been highlighted for the performance and generalization of trained models (McCoy et al., 2020; Sellam et al., 2022; Juneja et al., 2023), but there are fewer studies on variation in training dynamics. We recommend studying training across many runs, and possibly relying on state diagrams like ours to distinguish typical and anomalous training phenomena. ### Limitations and Future Work Our work assumes that training dynamics can be represented by a linear, discrete, and Markovian model. Despite the successes of our approach, a higher-powered model might capture even more information about training dynamics. Relaxing the assumptions of the HMM is likely a fruitful area for future work. Additionally, in this work we perform dimensionality reduction via hand-picked statistics. We use these statistics as interpretable features for our training maps, but a fully unsupervised approach also deserves exploration. Finally, our findings are suggestive for future work on hyperparameter search. We demonstrate that 1) training instability to random seed is highly dependent on hyperparameters, and 2) instability manifests early in training. Thus, it may be more efficient to measure early variation across a few seeds to quickly evaluate a hyperparameter setting, rather than waiting to measure accuracy on the trained model. ## 6 Conclusion We make several main contributions. First, we propose directly modeling training dynamics as a new avenue for interpretability and training dynamics research. We show that even with a simple model like the HMM, we can learn representations of training dynamics that are predictive of key metrics like convergence time. Second, we discover detour states of learning, and show that detour states are related to both how quickly models converge and how sensitive the overall training process is to random seed. Finally, we show that stability across random seeds is empirically linked to generalization, providing a possible criterion for model tuning and selection. #### Acknowledgements We would like to thank William Merrill for his insightful comments. MYH is supported by an NSF Graduate Research Fellowship. This work was supported by Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling), the Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI), and the National Science Foundation (under NSF Award 1922658).
2310.13751
Nebular C IV 1550 Imaging of the Metal-Poor Starburst Mrk 71: Direct Evidence of Catastrophic Cooling
We use the Hubble Space Telescope ACS camera to obtain the first spatially resolved, nebular imaging in the light of C IV 1548,1551 by using the F150LP and F165LP filters. These observations of the local starburst Mrk 71 in NGC 2366 show emission apparently originating within the interior cavity around the dominant super star cluster (SSC), Knot A. Together with imaging in He II 4686 and supporting STIS FUV spectroscopy, the morphology and intensity of the C IV nebular surface brightness and the C IV / He II ratio map provide direct evidence that the mechanical feedback is likely dominated by catastrophic radiative cooling, which strongly disrupts adiabatic superbubble evolution. The implied extreme mass loading and low kinetic efficiency of the cluster wind are reasonably consistent with the wind energy budget, which is probably enhanced by radiation pressure. In contrast, the Knot B SSC lies within a well-defined superbubble with associated soft X-rays and He II 1640 emission, which are signatures of adiabatic, energy-driven feedback from a supernova-driven outflow. This system lacks clear evidence of C IV from the limb-brightened shell, as expected for this model, but the observations may not be deep enough to confirm its presence. We also detect a small C IV-emitting object that is likely an embedded compact H II region. Its C IV emission may indicate the presence of very massive stars (> 100 M_sun) or strongly pressure-confined stellar feedback.
M. S. Oey, Amit N. Sawant, Ashkbiz Danehkar, Sergiy Silich, Linda J. Smith, Jens Melinder, Claus Leitherer, Matthew Hayes, Anne E. Jaskot, Daniela Calzetti, You-Hua Chu, Bethan L. James, Goeran Oestlin
2023-10-20T18:25:08Z
http://arxiv.org/abs/2310.13751v1
# Nebular C iv \(\lambda 1550\) Imaging of the Metal-Poor Starburst Mrk 71: Direct Evidence of Catastrophic Cooling

###### Abstract

We use the Hubble Space Telescope ACS camera to obtain the first spatially resolved, nebular imaging in the light of C iv \(\lambda\lambda 1548,1551\) by using the F150LP and F165LP filters. These observations of the local starburst Mrk 71 in NGC 2366 show emission apparently originating within the interior cavity around the dominant super star cluster (SSC), Knot A. Together with imaging in He ii \(\lambda 4686\) and supporting STIS FUV spectroscopy, the morphology and intensity of the C iv nebular surface brightness and the C iv/He ii ratio map provide direct evidence that the mechanical feedback is likely dominated by catastrophic radiative cooling, which strongly disrupts adiabatic superbubble evolution. The implied extreme mass loading and low kinetic efficiency of the cluster wind are reasonably consistent with the wind energy budget, which is probably enhanced by radiation pressure. In contrast, the Knot B SSC lies within a well-defined superbubble with associated soft X-rays and He ii \(\lambda 1640\) emission, which are signatures of adiabatic, energy-driven feedback from a supernova-driven outflow. This system lacks clear evidence of C iv from the limb-brightened shell, as expected for this model, but the observations may not be deep enough to confirm its presence. We also detect a small C iv-emitting object that is likely an embedded compact H ii region. Its C iv emission may indicate the presence of very massive stars (\(>100\) M\({}_{\odot}\)) or strongly pressure-confined stellar feedback.

starburst galaxies -- galaxy winds -- galaxy evolution -- emission-line galaxies -- stellar feedback -- young massive clusters -- superbubbles -- H ii regions -- ultraviolet photometry -- direct imaging

## 1 Introduction

Massive star feedback encompasses energetic processes that heat gas to temperatures above \(10^{4}\) K. OB stars and their hot, blue descendants, as well as high-mass X-ray binaries, photoionize gas into this regime, and shock-heating by supernovae and stellar winds drives temperatures up to \(10^{6}\) to \(10^{8}\) K. C iv \(\lambda\lambda 1548,1551\) (hereafter "C iv \(\lambda 1550\)") is ubiquitous in the interstellar medium of star-forming galaxies (e.g., Savage, 1984; Savage et al., 2001; Wang & Yao, 2005), where it is believed to originate in conductive interfaces between hot (\(>10^{6}\) K) gas and cooler ISM phases (e.g., McCray & Snow, 1979); this species is also a prominent P-Cygni emission line arising in hot star winds. C iv \(\lambda\)1550 traces ionization energies above 47.9 eV, and for recombination, above 64.5 eV. Nebular C iv is therefore only rarely seen, and is associated more with planetary nebulae than with ordinary H ii regions (e.g., Aller et al., 1981; Harrington et al., 1982). However, C iv \(\lambda\)1550 emission does appear in a number of extreme starbursts, both locally (Mingozzi et al., 2022; Berg et al., 2019; Senchyna et al., 2019) and at high redshift, where it can be prominent (Stark et al., 2015; Amorin et al., 2017; Senchyna et al., 2022). In these objects, it is generally thought to be nebular, although its origin is not well understood.
It could be a signature of photoionization by unusually hot stars like Wolf-Rayet (WR) stars or rapidly rotating stars, or by high-mass X-ray binaries (HMXBs); or it could be due to mechanical feedback, whether from direct collisional ionization and conductive interfaces to adiabatic heating zones (e.g., Chu et al., 1994), or from radiative, catastrophic cooling flows (Gray et al., 2019) that disrupt adiabatic conditions. The origin of C iv emission is of particular cosmological interest (Senchyna et al., 2022) when linked to low metallicities. This nebular line implies higher ionization parameters than are normally seen in H ii regions, and different mechanisms have been proposed to explain its presence in intense metal-poor starbursts. Under these conditions stars are more compact, with faster stellar rotation, both of which increase the effective temperatures. Low metallicity is also linked to stronger interaction in close binaries, which promotes the formation of WR stars and fast rotators by binary mass transfer, as well as the creation of HMXBs. Nebular C iv imaging of resolved, local objects would therefore provide an important and revealing diagnostic for these different scenarios. But since these \(\lambda\lambda 1548,1551\) resonance lines are in the far ultraviolet, they have generally been inaccessible for such targets. However, the Solar Blind Channel (SBC) of the Advanced Camera for Surveys (ACS) aboard the Hubble Space Telescope (HST) offers a long-pass filter set, F150LP and F165LP, that is capable of imaging in the light of C iv \(\lambda\)1550. The net transmission in these filters is shown by Hayes et al. (2016), who used a similar filter pair to successfully carry out imaging of the starburst galaxy SDSS J115630.63+500822.1 in O vi \(\lambda\lambda 1032,1038\). In this Letter, we report the first spatially resolved imaging of C iv nebular emission, which was carried out with the ACS/SBC. Our target is the local starburst complex Mrk 71 in the nearby Magellanic irregular galaxy NGC 2366. This system is of intense interest since it is a remarkable analog of extreme Green Pea galaxies (Micheva et al., 2017), which are the only known class of local Lyman-continuum emitting galaxies (e.g., Izotov et al., 2018; Flury et al., 2022). At a distance of only 3.4 Mpc (Tolstoy et al., 1995), Mrk 71 is close enough to resolve individual stars. It is also a metal-poor system, having \(12+\log(\mathrm{O/H})=7.89\) (Izotov et al., 1997; Chen et al., 2023), or about \(0.16\,\mathrm{Z}_{\odot}\). The nature of feedback changes dramatically at this low metallicity. In addition to the hotter stellar photoionizing sources described above, mechanical feedback is much weaker (Jecmen and Oey, 2023) since at low metallicity, supernovae occur mostly at the lower-mass range of core collapse progenitors (Patton and Sukhbold, 2020; O'Connor and Ott, 2011; Heger et al., 2003) and stellar winds are dramatically weaker (e.g., Vink, 2022; Ramachandran et al., 2019; Bjorklund et al., 2023). Mrk 71 provides an outstanding template for metal-poor feedback processes because it hosts both a young super star cluster (SSC) driving an extreme ionization parameter (Knot A), and a second, more evolved, SSC (Knot B) that has generated a mature superbubble system (Figure 1). Our C iv imaging yields critical, diagnostic insight on both of these subsystems.
## 2 C iv Imaging Observations

We obtained Cycle 28 ACS/SBC imaging observations of Mrk 71 (GO-16261; PI Oey) in F150LP and F165LP during 2020 Oct 28 - Nov 01, using the LODARK aperture. The target was observed in F150LP for 4 \(\times\)1480 s plus 4 \(\times\)1497 s, yielding a total of 11,908 s. In F165LP, the total exposure was 8\(\times\)1480 s plus 8\(\times\)1497 s plus 3054 s, or 26,870 s total. We combined the frames in each of the two bands using the STScI DrizzlePac software. The world coordinate systems of the images were first aligned to an accuracy of 0.15 - 0.20 pixels, as limited by the non-gaussian point-spread function (PSF; see Avila and Chiaberge, 2016). We then drizzled the 8 F150LP frames together, obtaining a pixel scale of 0\({}^{\prime\prime}\).025 pixel\({}^{-1}\). The 17 F165LP images were drizzled as a separate set, using the combined, drizzled F150LP image as the reference. PSF matching was carried out using the Photutils package, convolving both drizzled images with a combined PSF. The final combined F150LP image is shown in the bottom panel of Figure 1. Extensive, diffuse emission is seen throughout the region; however, most of this extended emission corresponds to scattered starlight as noted by Drissen et al. (2000), and we show below that it is removed by continuum subtraction.

Figure 1: Top: Three-color HST/WFC3 image of Mrk 71 in F373N ([O ii] \(\lambda\)3727), F502N ([O iii] \(\lambda\)5007), and F469N (He ii \(\lambda\)4686) corresponding to red, green, and blue, respectively. These archive data were obtained by James et al. (2016) (GO-13041; colors not to scale). Bottom: Our new, total combined F150LP image, in units of \(10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), not corrected for reddening. Most of the diffuse emission seen in this image is due to scattered starlight. Knots A and B are separated by 5.0\({}^{\prime\prime}\) (83 pc).

Our HST program also obtained a deep STIS long-slit spectrum across Knots A and B, using the G140L grating which includes the C iv spectral region (Figure 2). These data are reduced and calibrated in our study of the stellar population in Knot A (Smith et al., 2023). We use this spectrum to determine the scale factor for the F165LP image to match the depth in F150LP at C iv, thereby flux-calibrating the continuum-subtracted C iv image. This is carried out by matching the observed diffuse, **nebular** C iv flux in two regions with \(0.^{\prime\prime}5\) length along the \(0.^{\prime\prime}2\)-wide slit on both sides adjacent to Knot A. The resulting scale factor is consistent with the results obtained using the method developed by Sawant et al. (2021) based on maximizing the modal bin fraction. Shallow and deep versions of the final continuum-subtracted image are shown in Figure 3. We note that the He ii \(\lambda 1640\) emission line falls within the lower throughput regimes of both the F150LP and F165LP filters; the transmission curves and F165LP scale factor imply that the level of remaining He ii \(\lambda 1640\) emission in the continuum-subtracted image is on the order of 2% of the signal. Below, we specifically compare to archive He ii \(\lambda 4686\) emission. Many stars are seen in the deep panel of Figure 3. Since, as described above, the continuum subtraction is applied to the C iv emission and is independent of the stellar data, these stars are mostly strong FUV emitters, and are not necessarily C iv emitters. This is discussed further in Section 4.
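Schematically, the continuum subtraction described above amounts to scaling the PSF-matched F165LP image and subtracting it from F150LP. A minimal sketch follows; the file names and the scale-factor value are placeholders, since the actual factor is set by matching the diffuse nebular C iv flux in the STIS slit.

```python
import numpy as np
from astropy.io import fits

# PSF-matched, drizzled images (placeholder file names).
f150 = fits.getdata("mrk71_f150lp_drz.fits")
f165 = fits.getdata("mrk71_f165lp_drz.fits")

# Placeholder: in practice the factor is determined by matching the diffuse
# nebular C IV flux in two 0.5"-long regions of the STIS slit beside Knot A.
scale = 1.0

civ = f150 - scale * f165  # continuum-subtracted C IV map
fits.writeto("mrk71_civ_contsub.fits", civ.astype(np.float32), overwrite=True)
```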
The detection of stars is also enhanced by our deep continuum imaging, which generates r.m.s. noise in the background of only \(\sim 0.1\%\). Diffuse emission is also apparently seen around both Knots A and B in Figure 3. However, inspection of the STIS spectrum in Figure 2 shows that only the diffuse emission around Knot A is real. We believe that the spurious emission around Knot B is caused by the SBC PSF. While its FWHM is \(0.02^{\prime\prime}\) and its 50% encircled energy radius is \(0.075^{\prime\prime}\) (Avila and Chiaberge, 2016), it has broad, faint wings that appear to be imperfectly continuum subtracted and thus may cause residual emission around the brightest stars. The effect extends out to \(\sim 0^{\prime\prime}.25\), as can be seen around, e.g., LBV-V1 (Figure 3). Knot B is \(4.6\times\) brighter than Knot A within an aperture of \(0.35^{\prime\prime}\) radius, and it also has many more UV-bright stars in a more spatially extended configuration, whereas Knot A does not have many such stars in its immediate vicinity.

## 3 Knot A: Catastrophic Cooling

The Knot A SSC strongly dominates the luminosity (e.g., Gonzalez-Delgado et al., 1994; Drissen et al., 2000) and Green Pea-like ionization parameter (\(\log U=-2\); James et al., 2016) of the Mrk 71 complex. Its estimated mass is \(\sim 1.4\times 10^{5}\) M\({}_{\odot}\) based on the H\(\alpha\) luminosity (Micheva et al., 2017) and its age is \(1\pm 1\) Myr based on stellar population synthesis (Smith et al., 2023). The SSC is still substantially enshrouded, but stellar features implying the presence of very massive stars (VMS) are detected (Smith et al., 2023). Nebular C iv\(\lambda 1550\) emission can originate from systems with weak mechanical feedback via strong radiative cooling and/or high-energy photoionization of dense gas retained near the parent SSC. Or, C iv can be emitted from a conductive interface between a cool, dense shell and interior hot gas generated by strong, energy-driven mechanical feedback. Thus, these two scenarios produce contrasting morphologies in nebular C iv: interior emission for weak mechanical feedback versus shell emission for strong mechanical feedback (e.g., Danehkar et al., 2022, 2021; Gray et al., 2019), as demonstrated further below. Figure 3 shows diffuse C iv emission within a \(\sim\)15-pc radius of the Knot A SSC, assuming a distance of 3.4 Mpc (Tolstoy et al., 1995). This region is coincident with the boundary of the dense gas to the west and south (Figure 1) that has been identified as a cavity or shell created by mechanical feedback from the SSC (Komarova et al., 2021; Oey et al., 2017). Observations of the nebular and molecular gas kinematics for this region show that the shell expansion velocity is only \(\sim 5-10\) km s\({}^{-1}\) (Komarova et al., 2021; Micheva et al., 2019; Oey et al., 2017). This localized offset in the systemic velocity is a separate component from the faint, broad emission-line wings that are a much more spatially extended feature discussed in detail by Komarova et al. (2021). For the observed parameters associated with Knot A, this local shell expansion velocity is consistent with momentum-conserving, non-adiabatic expansion, and thus this system has been suggested (Komarova et al., 2021; Oey et al., 2017) to be an example of metal-poor feedback where superwinds are suppressed (e.g., Jecmen and Oey, 2023).
Figure 2: STIS FUV long-slit spectrum across Knots A and B. The nebular emission features of C iv\(\lambda 1550\), He ii\(\lambda 1640\), and O iii]\(\lambda\lambda 1661,1666\) are marked.

Figure 3: The top and bottom panels show shallow and deep logarithmic contrasts, respectively, of the continuum-subtracted C iv \(\lambda\lambda 1548,1551\) image. WR stars, LBV-V1, and an IR-bright "hot spot" (Drissen et al., 1997; Drissen et al., 2000) are identified. The circle around Knot A indicates the 0.8′′-radius aperture used to measure the C iv nebular flux.

Figure 4: Outcomes for the modeled parameter space following Danehkar et al. (2021, see text), where AB = adiabatic bubble, AP = pressure-confined adiabatic bubble, AW = adiabatic wind only, NW = no wind, CB = partially cooling bubble, CC = full catastrophic cooling, CP = cooling pressure-confined, MC = momentum conserving only. The code fails for the NW and MC cases since these do not generate energy-driven models. The color scale shows the temperature of the hot bubble region (Figures 5 and 6) relative to the value expected for simple adiabatic expansion. See Danehkar et al. (2021) for a full explanation and discussion of these categories.

Weak, dense superwinds can experience strong, radiative cooling that quenches the energy-driven, adiabatic outflow (e.g., Silich et al., 2004; Krumholz and Matzner, 2009; Lochhaas et al., 2021), and furthermore, weak winds may be suppressed within the cluster itself, such that the individual stellar wind bubbles fail to merge into a coherent outflow (Silich and Tenorio-Tagle, 2018; Yadav et al., 2017). In the latter case, an expanding cavity is formed by photoionization and radiation pressure from the SSC. We refer to both of these scenarios that disrupt adiabatic evolution as "catastrophic" cooling. Following Danehkar et al. (2021), we adopt as a criterion for catastrophic cooling that the driving superwind has dropped to a temperature \(<75\%\) of the adiabatic value. Another possibility is that energy-driven feedback may exist but may be pressure-confined (Oey and Garcia-Segura, 2004; Silich et al., 2007). Thus, we note that while the shell velocity and parameter space for this object are more suggestive of momentum-conserving evolution (e.g., Komarova et al., 2021), the kinematics alone are insufficient to distinguish between an energy-driven, adiabatic regime and a non-adiabatic, catastrophic cooling one. This underscores the importance of ions like C iv as a diagnostic of the interior and shell temperature structure. Following Danehkar et al. (2021), we calculate a grid of models for energy-driven feedback using the Maihem non-equilibrium ionization code (Gray et al., 2019), covering a range of parameters similar to those inferred for the Knot A SSC. We assume a cluster radius of 1 pc (Micheva et al., 2017), mass \(1\times 10^{5}\) M\({}_{\odot}\) with a Salpeter IMF having stellar mass range \(0.5-150\) M\({}_{\odot}\), age 1 Myr, and metallicity 0.1 Z\({}_{\odot}\). The effective wind velocity is modeled in the range \(V=250-2000\) km s\({}^{-1}\), its effective mass-loss rate is in the range \(\log\dot{M}=-2\) to \(-4\) / M\({}_{\odot}\) yr\({}^{-1}\), and the ambient density is \(n=100-1000\) cm\({}^{-3}\). The modeled ranges for \(V\) and \(\dot{M}\) are based on those expected for the combined stellar winds from the SSC and extend to values that generate suppressed superwind conditions similar to those in Danehkar et al. (2021). Figure 4 shows the distribution of feedback outcomes, ranging from fully adiabatic bubbles to fully momentum-conserving outflows.
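To make the grid construction concrete, here is a minimal sketch (ours, not the actual Maihem setup; the sampling steps are assumed, since the text specifies only the parameter ranges) of how such a model grid and the adopted cooling criterion can be enumerated:

```python
from itertools import product

# Parameter ranges quoted above; the grid spacing below is our assumption.
V_kms = [250, 500, 1000, 2000]        # effective wind velocity, km/s
logMdot = [-4.0, -3.0, -2.0]          # log mass-loss rate / (Msun/yr)
n_cm3 = [100, 300, 1000]              # ambient density, cm^-3

grid = list(product(V_kms, logMdot, n_cm3))
print(len(grid), "models; first:", grid[0])

def is_catastrophic(T_wind, T_adiabatic):
    """Cooling criterion adopted above: a wind temperature below 75% of the
    adiabatic expectation flags the model as catastrophically cooling."""
    return T_wind < 0.75 * T_adiabatic
```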
In Figures 5 and 6, we show radial emissivities calculated following Danehkar et al. (2022) using Cloudy (Ferland et al., 2017) for two of the grid models that are broadly similar to the observed shell size, and which offer contrasting interpretations of adiabatic and catastrophic cooling, respectively. The model generated with \(V=1000\) km s\({}^{-1}\), \(\log\dot{M}=-3\) / M\({}_{\odot}\) yr\({}^{-1}\), and \(n=1000\) cm\({}^{-3}\) produces a conventional adiabatic bubble (AB; Figure 5); and the other is a strongly cooling model, with \(V=500\) km s\({}^{-1}\), \(\log\dot{M}=-2\) / M\({}_{\odot}\) yr\({}^{-1}\), and \(n=500\) cm\({}^{-3}\) (CB; Figure 6). The density and temperature profiles for these models are also shown, and the radial zones corresponding to the freely expanding SSC wind with the given \(\dot{M}\) and \(V\), the shock-heated hot bubble, and the dense outer shell are indicated on the density profile in Figure 5. Figure 6 demonstrates that the CB model's temperature is below the adiabatic prediction in the expanding wind region. Its hot bubble temperature, on the order of a few \(\times 10^{6}\) K, is therefore several times lower than that of the AB model, which has \(T>10^{7}\) K. We also see that the CB hot bubble region occupies a much lower fractional volume interior to the shell. For stronger cooling, the hot region is cooler and shrinks further. The emissivity calculations in the bottom panels include both collisional excitation and photoionization in the evolving outflow. We see that for the fully adiabatic model, essentially all the C iv\(\lambda 1550\) emission comes from the shell, and virtually none from the interior (Figure 5). In contrast, for the strongly cooling model, the interior contributes substantially to the C iv emission over a large volume (Figure 6; Gray et al., 2019; Danehkar et al., 2022). The top panels in Figures 5 and 6 show emissivities of only photoionization for the same density distribution. This model is largely isothermal at \(T\sim 10^{4}\) K, and demonstrates the difference when excluding kinetic heating. These are useful for inferring the emissivity profiles for pure momentum-conserving evolution (MC), where there is no shock heating. In that case, the line emission should generally follow the \(r^{-2}\) profiles in the inner radial zone, which corresponds to the driving wind's density profile; for complete catastrophic cooling, i.e., pure MC evolution, this zone would extend directly to the shell, with no hot bubble zone. To measure the nebular C iv flux around Knot A, we spatially interpolate over the stars within the circular emitting region of radius \(0.^{\prime\prime}8\) (13 pc), obtaining a total observed flux of \((3.33\pm 0.50)\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\). We apply the foreground \(E(B-V)=0.033\) using the Milky Way extinction law (Cardelli et al., 1989) and local \(E(B-V)=0.084\) for Knot A following Smith et al. (2023), using the SMC reddening from Gordon et al. (2003). This yields a dereddened flux of \((6.84\pm 1.03)\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) and surface brightness of \((3.41\pm 0.51)\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\). The uncertainties account for measurement errors only; we estimate that systematic uncertainties due to continuum subtraction, stellar interpolation, and reddening correction are on the order of 50%, 10%, and 30%, respectively.
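The quoted numbers are internally consistent, as the following short check (ours) shows: the observed and dereddened fluxes imply a total 1550 Å extinction of about 0.78 mag, and the dereddened flux spread over the \(0.^{\prime\prime}8\)-radius aperture reproduces the quoted surface brightness:

```python
import math

f_obs = 3.33e-14            # observed C IV flux, erg/s/cm^2
f_cor = 6.84e-14            # dereddened flux, erg/s/cm^2

A_1550 = 2.5 * math.log10(f_cor / f_obs)   # ~0.78 mag total (foreground MW + local SMC)
area_arcsec2 = math.pi * 0.8 ** 2          # aperture area, ~2.01 arcsec^2
sb = f_cor / area_arcsec2                  # ~3.4e-14 erg/s/cm^2/arcsec^2, as quoted

print(f"A(1550) ~ {A_1550:.2f} mag; SB ~ {sb:.2e}")
```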
We also caution that there is uncertainty due to the imperfect PSF subtraction described above, but it is difficult to quantify, and Knot A does not have many UV-bright stars in its immediate vicinity relative to the \(0.25\arcsec\) limit at which the PSF effect is seen.

Figure 5: Models of volume emissivity (erg s\({}^{-1}\) cm\({}^{-3}\)) for the shown emission lines as a function of radius, for a conventional adiabatic model (AB) generated with \(V=1000\) km s\({}^{-1}\), \(\log\dot{M}=-3\) / M\({}_{\odot}\) yr\({}^{-1}\), and \(n=1000\) cm\({}^{-3}\) from the grid in Figure 4. We also show the radial profiles for the temperature and density of this model in the left panels. The dotted lines correspond to the analytic relations \(n\sim r^{-2}\) and \(T\sim r^{-4/3}\) for an adiabatic, freely expanding wind. The bottom emissivity panels show calculations for combined collisional and photoionization (CPI) that are predicted for this model. As a comparison, the top panels show models for pure photoionization (PI) of the same density distribution, which would be largely isothermal on the order of \(10^{4}\) K (Danehkar et al., 2021).

Figure 6: The same as Figure 5, but for a catastrophic cooling model (CB) with \(V=500\) km s\({}^{-1}\), \(\log\dot{M}=-2\) / M\({}_{\odot}\) yr\({}^{-1}\), and \(n=500\) cm\({}^{-3}\). The hot bubble interior is now much cooler than for the AB model in Figure 5 and occupies a much smaller fractional volume interior to the shell.

Resolved surface brightness can be written as \(3.74\times 10^{-12}\epsilon_{i}\ dr\) (Ferland, 2013), where \(\epsilon_{i}\) is the C iv\(\lambda 1550\) emissivity and \(dr\) is the line-of-sight path length in cm. A value of \(\epsilon_{i}\sim 1\times 10^{-22}\) erg s\({}^{-1}\) cm\({}^{-3}\) (see Figure 6) integrated through 20 pc yields a surface brightness \(\sim 2\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), which agrees well with the observed values. Given that the predicted values are based on relatively crude approximations taken from our model grid, this serves to demonstrate good general consistency between expectations and observations. Figure 3 shows that the C iv nebular emission around Knot A appears more consistent with an internally emitting morphology than a limb-brightened, dense shell. We can also compare with the He ii\(\lambda 4686\) morphology, which, similar to C iv, has only shell emission for strong feedback and interior emission for weak, radiatively cooling feedback (bottom panels of Figures 5 and 6, respectively). Figure 7 shows the ratio map of continuum-subtracted, nebular C iv\(\lambda 1550\) / He ii\(\lambda 4686\) using the archive image in F469N from James et al. (2016). These data show that C iv/He ii is lower in the center around Knot A and higher at larger radii. The centrally concentrated morphology of He ii was noted by James et al. (2016), and it is not apparent in C iv (Figure 3). The region with reduced C iv/He ii extends to a radius \(>0.5\arcsec\), which is much larger than the barely resolved \(\sim 0.1\arcsec\) Knot A. The He ii imaging was obtained with the WFC3 camera, which does not have the PSF wing issue described above for the ACS SBC MAMA detector, and so the flux cannot be attributed to the He ii-emitting stars (Smith et al., 2023) in the core. As seen in Figure 8, the pattern in the ratio map is consistent with that expected for the strongly cooling CB model.
The radial morphology of both C iv and He ii in the CB model is based on the original \(r^{-2}\) wind density profile seen in the upper panels of Figures 5 and 6 for pure photoionization, which is then modified by kinetic heating such that the central zones become more highly ionized (Gray et al., 2019), depressing the emissivities for these species in the center, as seen in the bottom panels of the figures. They also show enhanced emission at larger radii. However, the two ions have slightly differing radial profiles, resulting in the radial dependence of their ratio shown in Figure 8 for the CB model (blue), and which is seen in our data (Figure 7). Smith et al. (2023) determine the likely presence of VMS from our STIS spectrum. These may have masses up to \(600\,\mathrm{M}_{\odot}\), and thus may contribute to the high ionization implied by C iv and He ii emission (e.g., Berg et al., 2019). In contrast, for the classical AB model, the emissivity ratio for C iv/He ii is essentially zero for most of the bubble volume (Figure 8), and the interior emissivities, including in the central wind region, are orders of magnitude lower and undetectable (Figure 5). For the CB model in Figure 6, He ii\(\lambda 1640\) is brighter than C iv\(\lambda 1550\) at the smallest radii. However, the STIS spectrum (Figure 2) shows no detection of He ii\(\lambda 1640\), even near the SSC itself. This does not rule out catastrophic cooling, since the central He ii\(\lambda 1640\) is not always stronger than C iv\(\lambda 1550\) for such models (Danehkar et al., 2022; Gray et al., 2019). On the other hand, the pure photoionization models (top panels of Figures 5 and 6) have C iv\(\lambda 1550\) / He ii\(\lambda 1640>1\) in the entirety of the central wind zone. This may suggest that pure photoionization dominates here for Knot A. This is supported by the fact that O iii]\(\lambda\lambda 1661,1666\) is also seen in this region. While O iii] is seen across the entire Knot A environment, much of this flux is likely to be foreground and background emission since the entire object is enveloped in a large halo of more diffuse, extended [O iii] (Figure 1); however, the central enhancement is coincident with the strongest C iv emission around the SSC (Figure 2) and suggests local O iii] emission from Knot A itself. It is important to note that the models are idealized, and remnant high-density gas is known to be in the immediate vicinity of the SSC, which remains largely enshrouded, at least along the line of sight (e.g., Smith et al., 2023; Micheva et al., 2017). Pure photoionization is consistent with complete cooling of any energy-driven, mechanical feedback.

Figure 7: Ratio map of continuum-subtracted C iv\(\lambda 1550\) / He ii\(\lambda 4686\), with the locations of Knots A and B marked. The central enhancement around Knot B is an artifact (see Figure 2 and Section 4).

Overall, the detection of significant diffuse, nebular C iv emitted from gas within an SSC-generated cavity must be linked to the absence of pure energy-driven feedback. In such a scenario, the C iv emission originates either from catastrophic radiative cooling that disrupts adiabatic feedback, or from pure photoionization; both scenarios are incompatible with the shock-heated temperatures (\(\gtrsim 10^{6}\) K) required for the classical, adiabatic model, which would ionize C to higher levels. These conclusions are therefore robust to parameters like shell geometry and metallicity variations.
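As a back-of-envelope check (ours) of the surface-brightness estimate used above, the Ferland (2013) relation with the CB-model emissivity and an assumed 20 pc path reproduces the quoted \(\sim 2\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\):

```python
PC_CM = 3.086e18                   # 1 parsec in cm

eps = 1e-22                        # CB-model C IV emissivity, erg/s/cm^3 (Figure 6)
dr = 20 * PC_CM                    # assumed line-of-sight path length of 20 pc
sb = 3.74e-12 * eps * dr           # ~2.3e-14 erg/s/cm^2/arcsec^2

print(f"predicted SB ~ {sb:.1e}")  # compares well with the observed ~3.4e-14
```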
The Knot A system has parameters that are quantitatively very similar to those of the embedded SSC and surrounding molecular gas of NGC 5253 D-1 (e.g., Turner et al., 2017). In particular, the molecular velocity dispersion of \(\sim 10\) km s\({}^{-1}\) and gas mass \(\sim 10^{5}\) M\({}_{\odot}\) observed to be within a few pc of Knot A (Oey et al., 2017) are similar to values for NGC 5253 D-1, which Silich et al. (2023) show are consistent with radiative, catastrophic cooling conditions. In our grid, the catastrophic cooling models develop for \(\log\dot{M}=-2\) / M\({}_{\odot}\) yr\({}^{-1}\) and \(V\leq 500\) km s\({}^{-1}\). This is quantitatively consistent with the models for NGC 5253-D1 by Silich et al. (2020), who find that a mass deposition rate on the order of \(\dot{M}\sim 10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\) prevents individual stellar winds from merging and developing a global outflow. This would also be linked to a strongly cooling environment. This effective \(\dot{M}\) is roughly 2 orders of magnitude higher than what is expected for a \(10^{5}\) M\({}_{\odot}\) SSC, and \(V\) may be up to an order of magnitude lower than expected for stellar wind velocities on the order of 2000 km s\({}^{-1}\). These factors are quantitatively consistent with conservation of the SSC wind's kinetic energy: since the kinetic power scales as \(\dot{M}V^{2}\), a factor of 100 increase in mass balances a factor of 10 reduction in wind velocity. This suggests that extreme mass-loading of the wind is directly linked to its deceleration or kinetic inefficiency. The most straightforward process would be by ablation and shredding of material from molecular gas clouds and clumps, which are known to be in close proximity to the SSC and interior to the shell, at least in the line of sight (Oey et al., 2017). Disks of pre-main sequence stars are also suggested as a source of material for mass loading (Silich et al., 2020). However, metal-poor stellar winds may have lower velocities, on the order of 1000 km s\({}^{-1}\) (e.g., Garcia et al., 2014). Thus, \(V\) may be as little as a factor of 2, instead of an order of magnitude, below that expected for the SSC. It is therefore likely that mass-loading is enhanced by another process, in particular, by photoevaporation, which is expected in this radiation-dominated environment that includes large quantities of molecular gas. Moreover, radiation is dynamically important, as indicated by the extreme ionization parameter (Dopita et al., 2002; Yeh & Matzner, 2012) associated with Knot A, and by the fact that radiation is also believed to concurrently drive a separate, very low-density, fast superwind through openings in the shell walls (Komarova et al., 2021). Radiation therefore likely contributes to maintaining the outward momentum of the slow, mass-loaded flow associated with the shell around Knot A.

## 4 Knot B and Other Features

Although its mass is up to \(10\times\) lower than that of Knot A (Micheva et al., 2017; Drissen et al., 2000), the Knot B SSC is by far the brightest source in Figure 3. The emission from this cluster is dominated by 3 known WR stars identified by Drissen et al. (2000), including WR3, an unusually luminous (\(M_{V}\sim-7\)) WC star. We caution that although many stars are clearly detected in the continuum-subtracted image, they are not necessarily C iv-emitting stars. As described in Section 3, the continuum subtraction is calibrated to the diffuse, nebular C iv emission.
This continuum normalization is not optimized to identify stellar C iv emission, since most such stars have extremely blue continuum slopes and the long-pass filter bandpasses are very broad.

Figure 8: Predicted emissivity ratio of C iv to He ii as a function of radius for the AB (red) and CB (blue) models shown in Figures 5 and 6, respectively. Pure photoionization models are shown in the top panel and combined collisional + photoionization are shown in the bottom panels. We caution that the emissivities of these lines are orders of magnitude lower in the AB model, as seen in Figure 5.

Thus, the point sources in Figure 3 are mostly very FUV-bright stars. Some of the brightest objects in the figure may be C iv-emitters, but without spectroscopic confirmation, they should only be considered candidates. These include the luminous blue variable LBV-V1 (Drissen et al., 1997; Petit et al., 2006); P-Cygni C iv emission has sometimes been observed in other LBVs (e.g., HD 5980; Koenigsberger et al., 1995). We will carry out detailed analysis of the stellar population, including its FUV properties, in a follow-up paper. Figures 1, 3, and 7 appear to suggest the existence of substantial diffuse C iv emission near the central SSC in Knot B. As noted earlier, this emission is not seen in the STIS spectrum (Figure 2) and is therefore a spurious effect likely linked to the large, faint PSF wings of the MAMA detector and the more complex, extended morphology of this SSC, which has numerous FUV-bright stars in the vicinity of the central core, consistent with its older age. Thus, the appearance of a high C iv/He ii ratio around Knot B in Figure 7 is an artifact. The F469N image obtained with WFC3 does not have broad PSF wings and therefore does not suffer from this effect. The large, dense shell around Knot B (Figure 1) is likely due to the action of multiple supernovae from this somewhat older (\(\gtrsim 4\) Myr; Micheva et al., 2017) SSC. This is supported by the existence of diffuse, soft X-ray emission associated with the southern region of this shell (Thuan et al., 2014), which is a clear signature of adiabatic, energy-driven feedback. The lower panels of Figure 5 show the typical relative emissivities for AB models. While the radial ranges of the different wind and hot bubble zones may vary depending on the specific parameters, the radial morphologies with the three shown zones are all similar for strong, adiabatic feedback. In particular, C iv is ordinarily suppressed near the bubble center due to the prevalence of C v (e.g., Gray et al., 2019). The lack of central, diffuse C iv around Knot B is therefore fully consistent with adiabatic feedback. Furthermore, the presence of central He ii\(\lambda 1640\) seen in the STIS spectrum (Figure 2) is also consistent with the AB model in Figure 5. As noted earlier, the diffuse O iii] emission in the spectrum is likely foreground or background emission. However, there is no apparent detection of limb-brightened C iv from the shell wall, as expected from the cooling interface with the hot, X-ray emitting gas (e.g., Chu et al., 1994; Danehkar et al., 2022). Figure 5 shows that for an AB model, the shell's predicted emissivity is on the order of \(10^{-21}\) erg s\({}^{-1}\) cm\({}^{-3}\); thus for a \(\sim 3\)-pc line of sight through the limb, the surface brightness might be around \(3\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) arcsec\({}^{-2}\), which is similar to that in the cavity around Knot A.
It may be that the shell emission is slightly fainter than expected, and the observations are simply not deep enough to detect it. A non-stellar, dusty "hot spot" that is bright in \(K\)-band (Drissen et al., 2000) also appears to be detected in nebular C iv emission (Figure 3). It is compact, with a FWHM \(\lesssim 0.15\arcsec\), or 2.5 pc, and the colors reported by Drissen et al. (2000) of \(J-H=0.4\) and \(H-K=1.5\) are consistent with those of compact H ii regions (e.g., Kastner et al., 2008). The total C iv flux of the object is \(>(2.81\pm 0.08)\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\), adopting the extinction correction used for Knot A. The quoted uncertainty includes only measurement error, while systematic uncertainty due to continuum subtraction is on the order of 10%. However, this is a conservative lower limit, since the extinction for such a dense object is unknown and expected to be much higher than for Knot A. As noted earlier, C iv emission from H ii regions is unusual and implies the presence of higher-energy photons. Assuming the object is a compact H ii region, it may host one or more extremely hot, early O stars, perhaps VMS; alternatively, the C iv emission may be generated in the conductive interface of a very compact shell enclosing a hot bubble that is strongly pressure-confined (e.g., Silich et al., 2007; Danehkar et al., 2021). Such a system may also generate C iv by photoionization from the hot gas (Oskinova and Schaerer, 2022). The location of this object suggests that its formation may be triggered by the shell expansion due to Knot B.

## 5 Conclusion

Our successful use of the F150LP and F165LP filters to obtain imaging in C iv\(\lambda 1550\) demonstrates the viability of the ACS SBC channel for obtaining spatially resolved nebular observations in this critical diagnostic emission line. Extended diffuse emission is seen in the Mrk 71 system, as is a highly excited, compact H ii region. FUV-bright stars, including some candidate stellar C iv sources like WR stars, early O stars, and LBV-V1, are clearly detected in our observations, although further analysis is needed to determine the extent to which C iv-emitting stars can be distinguished from stars with steep FUV continua. For the Knot A system, the observed diffuse C iv emission, confirmed to be real by the STIS spectrum, has a center-filled spatial distribution and lacks a limb-brightened morphology, implying that mechanical feedback is non-adiabatic, as previously suggested by the surrounding shell kinematics. Moreover, its morphology and observed flux are consistent with model expectations for strong radiative cooling. The observed lower C iv/He ii ratio in the center is also characteristic of such cooling. The system may not be quite a completely cooled, purely momentum-conserving one, and/or there may be total cooling and pure photoionization, especially within Knot A itself. These observations therefore provide _direct evidence_ that Knot A is likely driving a momentum-conserving shell due to catastrophic cooling that disrupts adiabatic, energy-driven kinematics. The Knot B SSC has generated a well-defined superbubble associated with diffuse X-rays. There is no perceptible C iv emission from within the superbubble, while central He ii\(\lambda\)1640 is detected, as expected from the SSC wind. These factors all point to a system dominated by adiabatic, supernova-driven feedback.
However, there is no immediate evidence of the limb-brightened C iv emission from the superbubble shell that is predicted for adiabatic models. Deeper observations are needed to establish the extent to which this represents a significant discrepancy with predictions.

We thank Will Gray, Genoveva Micheva, and Megan Reiter for useful discussions, and Roberto Avila of the STScI Astrodrizzle team for advice on the drizzle procedure. We also thank the anonymous referee for useful comments and questions. This work was supported by NASA HST-GO-16261. S.S. is supported by CONAHCYT, Mexico, research grant A1-S-28458. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program 16261. The data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via DOI: 10.17909/90na-ch15.

Facilities: HST (ACS, STIS). Software: DrizzlePac (Hoffmann et al., 2021).
2308.06670
Graphs with degree sequence $\{m^{m-1},n^{n-1}\}$ and $\{m^n,n^m\}$
In this paper we study the class of graphs $G_{m,n}$ that have the same degree sequence as two disjoint cliques $K_m$ and $K_n$, as well as the class $\overline G_{m,n}$ of the complements of such graphs. We establish various properties of $G_{m,n}$ and $\overline G_{m,n}$ related to recognition, connectivity, diameter, bipartiteness, Hamiltonicity, and pancyclicity. We also show that several classical optimization problems on these graphs are NP-hard.
Boris Brimkov, Valentin Brimkov
2023-08-13T03:06:48Z
http://arxiv.org/abs/2308.06670v1
# Graphs with degree sequence \(\{m^{m-1},n^{n-1}\}\) and \(\{m^{n},n^{m}\}\)

###### Abstract

In this paper we study the class of graphs \(G_{m,n}\) that have the same degree sequence as two disjoint cliques \(K_{m}\) and \(K_{n}\), as well as the class \(\overline{G}_{m,n}\) of the complements of such graphs. We establish various properties of \(G_{m,n}\) and \(\overline{G}_{m,n}\) related to recognition, connectivity, diameter, bipartiteness, Hamiltonicity, and pancyclicity. We also show that several classical optimization problems on these graphs are NP-hard.

**Keywords:** Degree-equivalent graphs, Hamiltonian graph, pancyclic graph, bipartite graph, maximum clique

## 1 Introduction

Two graphs \(G_{1}\) and \(G_{2}\) are called _degree-equivalent_ if they have the same degree sequence. In this paper we present results about simple graphs that are different from and degree-equivalent to a disjoint union of cliques \(K_{p_{1}},\ldots,K_{p_{k}}\), as well as the complements of such graphs. We will denote this class of graphs by \(G_{p_{1},\ldots,p_{k}}\) and the class of their complements by \(\overline{G}_{p_{1},\ldots,p_{k}}\). Predominantly, we will study graphs that are degree-equivalent to two disjoint cliques and their complements, i.e., the graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\). Graphs in these families have degree sequence \(\{m^{m-1},n^{n-1}\}\) and \(\{m^{n},n^{m}\}\), respectively, where \(a^{x}\) means that \(a\) vertices have degree \(x\). The graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\) can model peer-to-peer file sharing networks where, for example, each of \(n\) downloaders must be connected to \(m\) peers and each of \(m\) uploaders must be connected to \(n\) peers. They can also model networks which start out as disjoint cliques or as complete bipartite graphs, and then evolve through a sequence of 2-switches (see Section 2 for definitions). We identify properties that are shared by all networks in such a sequence of transformations. In particular, we show that graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\) can have a rather varied structure, yet possess desirable qualities such as Hamiltonicity, traceability, bounded diameter, and efficient recognizability. On the other hand, various optimization problems such as maximum clique, maximum independent set, and minimum vertex cover are NP-hard on these classes of graphs. The graphs we examine in this paper are somewhat reminiscent of _biregular_, _semiregular_, _almost regular_, and _nearly regular_ graphs, which have been studied in a number of works [2, 3, 15, 17, 20, 21]. Usually, an _almost regular_ graph is defined as a graph whose vertex degrees differ by at most one. Some variations or generalizations of this definition have been considered as well [9, 10]. A special example of a graph that belongs to the class \(\overline{G}_{m,n}\) is the well-known _Mantel graph_ [18]; it is a graph on \(n\) vertices, which is the complement of a graph that consists of two cliques of size \(\lfloor n/2\rfloor\) and \(\lceil n/2\rceil\). The graphs of two of the Platonic solids, the cube and the icosahedron, are in \(G_{4,4}\) and \(G_{6,6}\) respectively. The graph of the dodecahedron, which has 20 vertices, is in \(G_{4,4,4,4,4}\). When \(m=n\), graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\) are regular. However, besides such special cases, not much is known about graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\). In the next section we introduce some definitions and previous results to be used in the paper.
In Section 3 we obtain results related to recognition, connectivity, Hamiltonicity, pancyclicity, bipartiteness, and diameter of graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\). In Section 4 we explore the complexity of some optimization problems on these graphs. We conclude with final remarks and directions for future work in Section 5.

## 2 Preliminaries

Let \(G=(V,E)\) be a simple graph (i.e., with no loops or parallel edges) with vertex set \(V\) and edge set \(E\). The _neighborhood_ of a vertex \(v\), denoted \(N(v)\), is the set of vertices adjacent to \(v\); the _degree_ of \(v\), denoted \(deg(v)\), is equal to \(|N(v)|\), where by \(|A|\) we denote the cardinality of a set \(A\). The _degree sequence_ of \(G\) is the sequence of its vertex degrees. Graphs \(G_{1}\) and \(G_{2}\) are called _degree-equivalent_, denoted \(G_{1}\simeq G_{2}\), if they have the same degree sequence. Let \(u,v,x,y\) be four vertices in a graph \(G\) such that \(uv\) and \(xy\) are edges of \(G\) and \(ux\) and \(vy\) are not edges of \(G\). A _2-switch_ applied to \(G\) is an operation that replaces the edges \(uv\) and \(xy\) with the edges \(ux\) and \(vy\). It is well known that the resulting graph has the same degree sequence as \(G\), and that two graphs \(G\) and \(H\) have the same degree sequence if and only if there is a sequence of 2-switches that transforms \(G\) into \(H\) [11]. \(G\) is _connected_ if there is a path that connects any two vertices of \(G\); otherwise \(G\) is _disconnected_. A _(connected) component_ of \(G\) is a maximal connected subgraph of \(G\). \(G\) is _separable_ if it is disconnected or can be disconnected by removing a vertex, called a _cut-vertex_. A _bridge_ (or _cut-edge_) of \(G\) is an edge whose removal increases the number of components of \(G\). A _block_ of \(G\) is a maximal nonseparable subgraph of \(G\). If \(G\) is not separable, then it is 2-_connected_ (or _biconnected_). Given \(S\subset V\), the _induced subgraph_ \(G[S]\) is the subgraph of \(G\) whose vertex set is \(S\) and whose edge set consists of all edges of \(G\) which have both ends in \(S\). The _complement_ of \(G\) is a graph \(\overline{G}\) on the same set of vertices such that two vertices of \(\overline{G}\) are adjacent if and only if they are not adjacent in \(G\). The _distance_ \(d(u,v)\) between vertices \(u\) and \(v\) in \(G\) is the number of edges in a shortest path between \(u\) and \(v\) in \(G\). The _diameter_ of \(G\) is defined as \(diam(G)=\max_{u,v\in V}d(u,v)\). A _clique_ in graph \(G\) is a complete subgraph of \(G\) (i.e., a subgraph in which any two vertices are adjacent). A clique on \(n\) vertices is denoted by \(K_{n}\). A _triangle_ is a clique of size \(3\). The _clique number_ of \(G\), denoted \(\omega(G)\), is the cardinality of a largest clique in \(G\). An _independent_ (or _stable_) _set_ of \(G\) is a set of vertices no two of which are adjacent; the _independence_ (or _stability_) _number_ of \(G\), denoted \(\alpha(G)\), is the cardinality of a largest independent set in \(G\). A _vertex cover_ of \(G\) is a set of vertices \(S\) such that for every edge of \(G\), at least one of its endpoints is in \(S\). Max Clique and Max Independent Set will respectively denote the optimization problems for finding a clique and independent set of maximum cardinality, and Min Vertex Cover is the problem of finding a vertex cover of minimum cardinality. These are classical NP-hard problems [12].
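To make these definitions concrete, here is a small, self-contained Python sketch (ours, not from the paper) that applies a 2-switch to \(K_{3}\cup K_{3}\) and confirms that the degree sequence is preserved, yielding a member of \(G_{3,3}\); the complement's degrees then match \(\{m^{n},n^{m}\}\):

```python
from collections import Counter

def two_switch(edges, u, v, x, y):
    """Replace edges uv, xy with ux, vy (uv, xy must be edges; ux, vy must not be)."""
    e = lambda a, b: frozenset((a, b))
    assert e(u, v) in edges and e(x, y) in edges
    assert e(u, x) not in edges and e(v, y) not in edges
    return (edges - {e(u, v), e(x, y)}) | {e(u, x), e(v, y)}

def degrees(edges, n):
    c = Counter(v for edge in edges for v in edge)
    return sorted(c[v] for v in range(n))

# K_3 U K_3 on vertices 0..5, then one 2-switch across the two triangles.
K3K3 = {frozenset(p) for p in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]}
G = two_switch(K3K3, 0, 1, 3, 4)          # a connected graph in G_{3,3} (in fact C_6)
assert degrees(G, 6) == degrees(K3K3, 6)  # the 2-switch preserves the degree sequence

# Complement degrees: every vertex gets 5 - 2 = 3, i.e. six vertices of degree 3,
# which is {m^n, n^m} for m = n = 3.
comp_degs = [5 - d for d in degrees(G, 6)]
print(degrees(G, 6), comp_degs)
```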
A _bipartite_ graph with _parts_ \(V_{1}\), \(V_{2}\), denoted \(G=(V_{1},V_{2},E)\), is a graph where the sets \(V_{1}\) and \(V_{2}\) are nonempty independent sets of \(G\) and partition the set of vertices of \(G\). \(K_{r_{1},r_{2}}\) denotes the complete bipartite graph with parts of sizes \(r_{1}\) and \(r_{2}\). The _chromatic number_ of \(G\), denoted \(\chi(G)\), is the smallest number of colors needed to color the vertices of \(G\) so that no two adjacent vertices have the same color. A graph \(G\) is _perfect_ if the chromatic number of every induced subgraph of \(G\) equals the clique number of that subgraph. A _co-graph_ is a graph which does not contain as an induced subgraph the graph \(P_{4}\) (a path on \(4\) vertices). A _Hamiltonian cycle_ in a graph \(G\) is a cycle which visits every vertex of \(G\) exactly once, and a _Hamiltonian path_ is a path which visits every vertex of \(G\) exactly once. A graph that contains a Hamiltonian cycle is called _Hamiltonian_ and a graph that contains a Hamiltonian path is called _traceable_. A graph on \(n\) vertices is called _pancyclic_ if it contains cycles of every length from \(3\) to \(n\). Thus, a pancyclic graph is also Hamiltonian. For other graph theoretic definitions and notations, see [6]. Below we recall a number of theorems from the literature which we will use in the sequel.

Theorem 2.1: _([4, 16]) A graph is bipartite if and only if all its cycles are of even length._

Theorem 2.2: _(Dirac [8]) A simple graph \(G\) with \(n\geq 3\) vertices is Hamiltonian if the degree of every vertex of \(G\) is greater than or equal to \(n/2\)._

Theorem 2.3: _(Ore [22]) A simple graph \(G\) with \(n\geq 3\) vertices is Hamiltonian if for every pair of non-adjacent vertices of \(G\), the sum of their degrees is greater than or equal to \(n\)._

Theorem 2.4: _(Holton and Sheehan [14]) If \(G\) is a \(2\)-connected \(r\)-regular graph with at most \(3r+1\) vertices, then \(G\) is Hamiltonian or \(G\) is the Petersen graph._

Theorem 2.5: _(Rahman and Kaykobad [23]) A simple graph with \(n\) vertices has a Hamiltonian path if for every two non-adjacent vertices the sum of their degrees and the distance between them is greater than \(n\)._

Theorem 2.6: _(Bondy [5]) Any Hamiltonian graph with \(n\) vertices and at least \(n^{2}/4\) edges is either pancyclic or is the graph \(K_{n/2,n/2}\)._

Theorem 2.7: _(Moon and Moser [19]) If the bipartite graph \(G(U,V,E)\), \(|U|=|V|=n\), is such that for every \(k\), where \(1<k<\frac{n}{2}\), the number of vertices \(p\in U\) such that \(deg(p)<k\) is less than \(k\), and similarly with \(p\) replaced by \(q\in V\), then \(G(U,V,E)\) is Hamiltonian._

Theorem 2.8: _(Caro and Wei [7]) A simple graph with \(n\) vertices and degree sequence \(d_{1},d_{2},\ldots,d_{n}\) has independence number \(\alpha(G)\geq\sum_{i=1}^{n}\frac{1}{d_{i}+1}\)._

## 3 Properties of \(G_{m,n}\) and \(\overline{G}_{m,n}\)

### Recognition

Mantel and Turan graphs are complete bipartite and complete \(r\)-partite graphs, respectively. They are co-graphs and as such are perfect. The graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\) are neither of these, in general. For example, the graph from \(G_{3,4}\) in Figure 1, left, contains \(C_{5}\) and \(P_{4}\) as induced subgraphs and therefore is neither a co-graph nor perfect. The graph from the same family \(G_{3,4}\) in Figure 1, right, contains \(P_{4}\) as an induced subgraph and therefore is not a co-graph, but it is perfect.
These observations suggest that solving some optimization problems on graphs in \(G_{m,n}\) may be hard, as we will show in Section 4. It is well-known that bipartite graphs (such as Mantel graphs) as well as complete \(r\)-partite graphs (such as Turan graphs) can be recognized in polynomial time, while recognizing incomplete \(r\)-partite graphs is NP-complete for \(r>2\). The following statement shows that \(G_{p_{1},\ldots,p_{k}}\) graphs and their complements are easily recognizable, even though they could be incomplete \(r\)-partite for arbitrarily large \(r\), and do not belong to any of the known efficiently recognizable classes of graphs.

Proposition 1: _It can be checked in linear time whether \(G\in G_{p_{1},\ldots,p_{k}}\)._

Proof: For a graph \(G=(V,E)\) with \(|V|=n\) and \(|E|=m\), the degree sequence of \(G\) can be found in \(O(m)\) time, and the cardinality of each number in the degree sequence can be found in \(O(n)\) time using Counting Sort. Then, it can be checked in \(O(n)\) time whether, for each degree value \(p\), the number of vertices with degree \(p\) is an integer multiple of \(p+1\). This is true if and only if \(G\) is degree-equivalent to \(K_{p_{1}}\cup\ldots\cup K_{p_{k}}\); checking additionally that not every component of \(G\) is a clique (which also takes linear time) ensures that \(G\) is different from this disjoint union, and hence \(G\in G_{p_{1},\ldots,p_{k}}\).

Figure 1: Both graphs belong to the class \(G_{3,4}\). _Left:_ The graph is neither a co-graph nor perfect. _Right:_ The graph is not a co-graph, but it is perfect.

By a similar reasoning as above, it can be checked in linear time whether \(G\in\overline{G}_{p_{1},\ldots,p_{k}}\).

Proof: Note that there are no graphs in \(G_{m,n}\) for \(m=1\) or \(n=1\), nor for \(m=n=2\), since any graph \(G\) that is degree-equivalent to \(K_{m}\cup K_{n}\) for those values of \(m\) and \(n\) is isomorphic to \(K_{m}\cup K_{n}\). The only graph in \(G_{2,3}\) and \(G_{3,2}\) is \(P_{5}\), whose cut-vertices and bridges are described by Case 5 of the proposition (and \(P_{5}\) is traceable). The only graph \(G\) in \(G_{2,n}\), \(n\geq 4\), or in \(G_{m,2}\), \(m\geq 4\), is a graph consisting of a clique of size \(n\) or \(m\), respectively, with one edge deleted and a leaf attached to each of the endpoints of that edge. The cut-vertices and bridges of such a graph are described by Case 4 of the proposition, and such a graph is traceable. There are also clearly graphs \(G\in G_{m,n}\) that satisfy Case 1 of the proposition (for example, the ones shown in Figure 1). Moreover, the only way for a graph to have bridges and no cut-vertices is if it has at least one \(K_{2}\) component, which by Proposition 2 is impossible for any graph in \(G_{m,n}\). We will now show that if \(G=(V,E)\) is a graph in \(G_{m,n}\) with \(m\geq 3\) and \(n\geq 3\) and if \(G\) has at least one cut-vertex, then \(G\) satisfies either Case 2 or Case 3 of the proposition, and is traceable. It is well-known (see, e.g., [1]) that a separable graph has at least two blocks that each contain exactly one cut-vertex. Let \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\) be two such blocks and let \(x\) be the cut-vertex of \(G_{1}\). Suppose without loss of generality that \(n\geq m\). If \(n=m\), then all vertices in \(V_{1}\) have degree \(n-1\), and so \(|V_{1}|\geq n\), since some non-cut vertex of \(G_{1}\) will have at least \(n-1\) neighbors in \(G_{1}\). For the same reason, \(|V_{2}|\geq n\). Then, at least one of \(|V_{1}|\) and \(|V_{2}|\) must equal \(n\), since otherwise there will be more than \(m+n\) vertices in \(G\). Suppose without loss of generality that \(|V_{1}|=n\).
Then, \(G_{1}\) is a clique of size \(n\), so the cut-vertex of \(G_{1}\) must have degree greater than \(n-1\), a contradiction. Thus, \(n>m\). Suppose all vertices in \(V_{1}\) have the same degree. Without loss of generality, suppose the degree is \(n-1\). Then, \(|V_{1}|\geq n\). Since there are \(n\) total vertices in \(G\) with degree \(n-1\), it follows that \(|V_{1}|=n\). Thus, \(G_{1}\) is a clique of size \(n\), so the cut-vertex of \(G_{1}\) must have degree greater than \(n-1\), which contradicts the assumption that all vertices in \(V_{1}\) have degree \(n-1\). Thus, some vertices in \(V_{1}\) have degree \(m-1\) and some have degree \(n-1\). \(G_{1}\) must have at least \(m\) vertices, since some non-cut vertex of \(G_{1}\) will have at least \(m-1\) neighbors in \(G_{1}\). If \(x\) has degree \(m-1\), then since \(|V_{1}|\geq m\), it follows by a similar argument as above that \(G_{1}\) must be a clique \(K_{m}\), in which case \(x\) cannot be a cut-vertex. Thus, \(x\) must have degree \(n-1\).

Figure 3: _Left:_ A graph in \(G_{4,6}\) with one cut-vertex. _Right:_ A graph in \(G_{4,5}\) with two cut-vertices and a bridge between them.

Suppose first that only \(x\) has degree \(n-1\), and all other vertices in \(G_{1}\) have degree \(m-1\). Then, \(|V_{1}|\geq m\) and similarly as above it follows that \(G_{1}\) is a clique \(K_{m}\). If \(G_{2}\) is the only other block of \(G\), then \(G\) satisfies Case 2 of the proposition. If \(G_{2}\) is not the only other block of \(G\), then since \(G_{1}\) only has one vertex of degree \(n-1\), \(G_{2}\) must have at least one non-cut vertex of degree \(n-1\) and therefore must have \(n\) total vertices. Then, since all vertices of \(G\) must either be in \(G_{1}\) or \(G_{2}\), the only other block of \(G\) must be a bridge between \(G_{1}\) and \(G_{2}\). This satisfies Case 3 of the proposition. Next, suppose that at least one non-cut vertex of \(G_{1}\) has degree \(n-1\). Then, \(|V_{1}|\geq n\), and \(|V_{1}|\leq n+1\) (since if \(|V_{1}|>n+1\) then \(|V_{2}|<m\), and so the vertices in \(V_{2}\) could not have degree at least \(m-1\)). If \(|V_{1}|=n+1\), then \(|V_{2}|=m\) with \(V_{1}\) and \(V_{2}\) sharing a vertex, and so \(G_{1}\) and \(G_{2}\) must be the only blocks of \(G\). This satisfies Case 2 of the proposition. If \(|V_{1}|=n\) and \(|V_{2}|=m\), then \(G_{1}\) and \(G_{2}\) are two blocks separated by a bridge, which satisfies Case 3 of the proposition. If \(|V_{1}|=n\) and \(|V_{2}|=m+1\), then \(G_{1}\) and \(G_{2}\) are the only blocks of \(G\), and \(G\) satisfies Case 2 of the proposition. Finally, for all types of separable graphs described above, the blocks \(G_{1}\) and \(G_{2}\) are Hamiltonian by Theorem 2.2. Thus, \(G\) is traceable, since a Hamiltonian path of \(G_{1}\) ending at the cut-vertex of \(G_{1}\) can be combined with a Hamiltonian path of \(G_{2}\) ending at the cut-vertex of \(G_{2}\), through the cut-vertex or bridge between the two blocks.

Proposition 3 shows that a graph in \(G_{m,n}\) with a cut-vertex must be traceable (and cannot be Hamiltonian). In \(G_{m,n}\) there are also graphs without a cut-vertex that are traceable but not Hamiltonian, as well as graphs without a cut-vertex that are Hamiltonian. Figure 4 shows two such graphs. In fact, we make the following conjecture about the traceability of graphs in \(G_{m,n}\).

Conjecture 1: _Every graph \(G\in G_{m,n}\) is traceable._

In the remainder of this section we provide various evidence for this conjecture.
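As one concrete, machine-checkable data point for the conjecture, the following brute-force sketch (ours; the particular graph is an illustrative member of \(G_{3,4}\)) verifies the degree-sequence condition of Proposition 1 and finds a Hamiltonian path by exhaustive search:

```python
from collections import Counter
from itertools import permutations

# Vertices 0..3 carry K_4 minus the two non-incident edges (0,1) and (2,3);
# vertices 4..6 form K_3; edges (0,4) and (2,5) join the two pieces.
edges = {(0, 2), (0, 3), (1, 2), (1, 3), (4, 5), (4, 6), (5, 6), (0, 4), (2, 5)}
adj = {v: set() for v in range(7)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree-equivalent to K_3 U K_4: four vertices of degree 3, three of degree 2,
# and the graph is connected, so it lies in G_{3,4}.
assert Counter(len(adj[v]) for v in adj) == Counter({3: 4, 2: 3})

# Exhaustive search over all 7! = 5040 vertex orderings for a Hamiltonian path.
def traceable():
    return any(all(p[i + 1] in adj[p[i]] for i in range(6))
               for p in permutations(range(7)))

print("traceable:", traceable())   # True, consistent with Conjecture 1
```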
We begin with a more general construction of infinite families of 2-connected Hamiltonian graphs in \(G_{m,n}\). A _twin bridge_ joining graphs \(G_{1}\) and \(G_{2}\) consists of two edges which connect two vertices of \(G_{1}\) with two vertices of \(G_{2}\).

Figure 4: Illustration to the proof of Proposition 4 showing two 2-connected graphs from \(G_{3,4}\). _Left:_ The graph is Hamiltonian. _Right:_ The graph is traceable but not Hamiltonian.

Proposition 4: _Let \(G\) be a graph in \(G_{m,n}\), \(n>m\geq 3\), which can be obtained by joining two 2-connected graphs by a twin bridge. Then \(G\) is Hamiltonian, except for the graph in Figure 4, right, which is traceable._

Proof: The proof is similar to the proof of Proposition 3, therefore some details are omitted. By arguments that parallel those used in the proof of Proposition 3 it can be shown that \(G\) must have one of the following two structures.

_Case 1:_ \(G\) can be obtained by removing an edge \(e=uv\) from a clique \(K_{m}\) and an edge \(e^{\prime}=u^{\prime}v^{\prime}\) from a clique \(K_{n}\) and adding the edges \(uu^{\prime}\) and \(vv^{\prime}\). See Figure 5, left, for an example. We will refer to \(K_{m}-e\) as \(G_{1}\) and \(K_{n}-e^{\prime}\) as \(G_{2}\).

_Case 2:_ \(G\) can be obtained by removing two non-incident edges \(e=uv\) and \(e^{\prime}=u^{\prime}v^{\prime}\) from a clique \(K_{n}\) and joining two vertices among \(\{u,v,u^{\prime},v^{\prime}\}\) with two vertices \(a\) and \(b\) of a clique \(K_{m}\), where \(m=n-1\), by two edges. See Figure 5, right, for an example. We will refer to \(K_{m}\) as \(G_{1}\) and \(K_{n}-e-e^{\prime}\) as \(G_{2}\).

We will first consider the situation when \(m\geq 4\) (and hence \(n\geq 5\)). Then, in both cases, by Theorem 2.2 both \(G_{1}\) and \(G_{2}\) are Hamiltonian. If \(G\) satisfies Case 1, then a Hamiltonian cycle of \(G\) can be obtained by merging a Hamiltonian path of \(G_{1}\) ending at \(u\) and \(v\) with a Hamiltonian path of \(G_{2}\) ending at \(u^{\prime}\) and \(v^{\prime}\) using the edges \(uu^{\prime}\) and \(vv^{\prime}\). If \(G\) satisfies Case 2, let \(x_{1},\ldots,x_{k}\) be the vertices of \(G_{2}\) different from \(u\), \(v\), \(u^{\prime}\), and \(v^{\prime}\). Note that since \(n\geq 5\), \(k\) is at least 1. If the two vertices among \(\{u,v,u^{\prime},v^{\prime}\}\) which are connected to \(a\) and \(b\) have an edge between them, suppose without loss of generality that they are \(u\) and \(u^{\prime}\). Then, a Hamiltonian cycle of \(G\) can be obtained by merging a Hamiltonian path of \(G_{1}\) ending at \(a\) and \(b\) with the Hamiltonian path \(u,v^{\prime},x_{1},\ldots,x_{k},v,u^{\prime}\) of \(G_{2}\) using the two edges between \(G_{1}\) and \(G_{2}\). If the two vertices among \(\{u,v,u^{\prime},v^{\prime}\}\) which are connected to \(a\) and \(b\) do not have an edge between them, suppose without loss of generality that they are \(u\) and \(v\). Then, a Hamiltonian cycle of \(G\) can be obtained by merging a Hamiltonian path of \(G_{1}\) ending at \(a\) and \(b\) with the Hamiltonian path \(u,u^{\prime},x_{1},\ldots,x_{k},v^{\prime},v\) of \(G_{2}\) using the two edges between \(G_{1}\) and \(G_{2}\). Finally, consider the case when \(m=3\). Then the structure of \(G\) must satisfy Case 2, because in Case 1, \(G_{1}\) would not be 2-connected. Then, since \(m=n-1\), \(G\) is a graph in \(G_{3,4}\). There are only two non-isomorphic graphs in \(G_{3,4}\) that satisfy Case 2, and they are depicted in Figure 4.
One of them is Hamiltonian; the other one is traceable and not Hamiltonian, and it is the unique graph with this property.

Figure 5: Illustration to the proof of Proposition 4: Case 1 (_Left_) and Case 2 (_Right_).

For the special case of Conjecture 1 where \(m=n\) we have the following characterization.

Proposition 5: _Any graph \(G\in G_{n,n}\) is 2-connected and Hamiltonian._

Proof: In the proof of Proposition 3, it was shown that when \(m,n\geq 3\) and \(n=m\), a graph in \(G_{m,n}\) cannot have a cut-vertex. It was also shown that \(G_{1,1}\) and \(G_{2,2}\) are empty. Thus, any graph \(G\in G_{n,n}\) is 2-connected. Next, note that the Petersen graph is not in \(G_{n,n}\) for any \(n\), since if it was, it would have to belong to \(G_{5,5}\) as it has 10 vertices, but the Petersen graph is 3-regular while the graphs in \(G_{5,5}\) are 4-regular. Thus, since a graph \(G\in G_{n,n}\) is a 2-connected \((n-1)\)-regular graph on \(2n\) vertices different from the Petersen graph, and \(2n\leq 3(n-1)+1=3n-2\) for any \(n\geq 2\), by Theorem 2.4 it follows that \(G\) is Hamiltonian.

We now explore the connectivity and Hamiltonicity of graphs in \(\overline{G}_{m,n}\). We first show that all graphs in \(\overline{G}_{m,n}\) with \(m\neq n\) are 2-connected. We then show that when \(m=n\), the graphs in \(\overline{G}_{m,n}\) are not only 2-connected but also pancyclic.

Proposition 6: _Any graph \(G\in\overline{G}_{m,n}\) with \(m\neq n\) is 2-connected._

Proof: Suppose without loss of generality that \(m<n\). Note that there are no graphs in \(\overline{G}_{m,n}\) for \(m=1\) or \(n=1\), nor for \(m=n=2\), since any graph \(G\) that is degree-equivalent to \(K_{m}\cup K_{n}\) for those values of \(m\) and \(n\) is isomorphic to \(K_{m}\cup K_{n}\), and therefore \(G_{m,n}\) and \(\overline{G}_{m,n}\) are empty for those values of \(m\) and \(n\). If \(G\in\overline{G}_{m,n}\), then \(G\) has \(m\) vertices of degree \(n\) and \(n\) vertices of degree \(m\). We will first show that \(G\) is connected. Suppose for contradiction that this is not the case, and let \(G_{1}\) be an arbitrary component of \(G\). If \(G_{1}\) has only vertices of degree \(n\), then \(G_{1}\) must have at least \(n+1>m\) vertices, which contradicts the fact that there are \(m\) vertices of degree \(n\). If \(G_{1}\) has only vertices of degree \(m\), then \(G_{1}\) has at least \(m+1\) vertices. Then the other components of \(G\) would collectively have fewer than \(n\) vertices, so those vertices could not have degree \(n\). Thus, \(G\) could not have any degree \(n\) vertices, a contradiction. If \(G_{1}\) has vertices of degree \(m\) and \(n\), then \(G_{1}\) must have at least \(n+1\) vertices. Then the other components of \(G\) would collectively have fewer than \(m\) vertices, so they could not have degree \(m\) nor degree \(n\), a contradiction. We will now show that \(G\) is 2-connected. Suppose for contradiction that this is not the case, and let \(G_{1}\) and \(G_{2}\) be two blocks of \(G\), each of which contains exactly one cut-vertex. If all vertices of \(G_{1}\) (possibly except the cut-vertex) have degree \(n\), then \(G_{1}\) must have at least \(n+1\) vertices; since \(G\) has a total of \(m<n\) vertices of degree \(n\), this is a contradiction. If all vertices of \(G_{1}\) (possibly except the cut-vertex) have degree \(m\), then \(G_{1}\) must have at least \(m+1\) vertices.
Then the other blocks of \(G\) would collectively have at most \(n\) vertices, so they could not have any vertices of degree \(n\). Then \(G\) would not have any vertices of degree \(n\) (possibly except the cut-vertex of \(G_{1}\)), a contradiction. The above can be repeated identically with \(G_{2}\) in place of \(G_{1}\). Thus, it follows that both \(G_{1}\) and \(G_{2}\) must have both degree \(m\) and degree \(n\) vertices. Suppose at least one of \(G_{1}\) and \(G_{2}\), say \(G_{1}\), has a vertex of degree \(n\) which is not a cut-vertex. Then \(G_{1}\) has at least \(n+1\) vertices and \(G_{2}\) has at most \(m\) vertices. Then the non-cut vertices of \(G_{2}\) must have degree less than \(m\), which is a contradiction. Thus, it follows that the only vertices of degree \(n\) of \(G_{1}\) and \(G_{2}\) are their cut-vertices. Then each of \(G_{1}\) and \(G_{2}\) as an induced subgraph must have at least \(m+1\) vertices. We will consider two possibilities.

_Case 1:_ \(G_{1}\) and \(G_{2}\) share a cut-vertex \(x\); there may or may not be other blocks which also share that cut-vertex. Then, the graph obtained by removing \(G_{1}\) and \(G_{2}\) from \(G\) would have at most \(m+n-(m+1)-m=n-m-1<n\) vertices. Thus, \(G\) cannot have vertices of degree \(n\), possibly except the vertex \(x\). This contradicts the assumption that there are \(m\geq 2\) vertices of degree \(n\).

_Case 2:_ \(G_{1}\) and \(G_{2}\) do not share a cut-vertex; then both \(G_{1}\) and \(G_{2}\) share their respective cut-vertex, say \(x\) and \(y\), with at least one other block. Then, the graph obtained by removing \(G_{1}\) and \(G_{2}\) from \(G\) would have at most \(m+n-(m+1)-(m+1)=n-m-2<n\) vertices. Thus, \(G\) cannot have vertices of degree \(n\), possibly except the vertices \(x\) and \(y\). If \(m>2\), this contradicts the assumption that there are \(m\) vertices of degree \(n\). Thus, suppose \(m=2\). Then \(x\) and \(y\) have degree \(n\) and there are \(n\) other vertices with degree \(2\). It follows that \(G\) is a graph consisting of the two vertices \(x\) and \(y\), one or more paths of length at least \(1\) between \(x\) and \(y\), and one or more cycle blocks attached to each of \(x\) and \(y\); see Figure 6 for an illustration. It is easy to see that for a graph with this structure, the number of vertices of degree \(2\) exceeds the degree of \(x\) and \(y\) by at least \(1\). Thus, there are more than \(n\) vertices of degree \(2\), which is a contradiction.

Figure 6: Structure of graphs with two cut-vertices of degree \(n>2\) and all other vertices having degree \(2\). Curves between \(x\) and \(y\) represent paths of length at least \(1\), ovals attached to \(x\) and \(y\) represent cycle blocks of size at least \(3\). Dashes mean the structure is possibly nonexistent.

Proposition 7: _Every graph \(G\in\overline{G}_{n,n}\) is pancyclic._

Proof: A graph \(G=(V,E)\in\overline{G}_{n,n}\) has \(2n\) vertices, all of degree \(n\). Thus, \(G\) satisfies the conditions of Dirac's and Ore's theorems (Theorems 2.2 and 2.3) and hence is Hamiltonian. Moreover, \(|E|=\frac{1}{2}\sum_{v\in V}deg(v)=\frac{1}{2}(2n)n=n^{2}=\frac{|V|^{2}}{4}\). Since graphs in \(\overline{G}_{n,n}\) are complements of graphs in \(G_{n,n}\), and graphs in \(G_{n,n}\) are different from a disjoint union \(K_{n}\cup K_{n}\), it follows that \(G\neq K_{n,n}\). Then, Bondy's theorem (Theorem 2.6) implies that \(G\) is pancyclic.
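Proposition 7 is easy to confirm numerically on the smallest case; the following brute-force sketch (ours) takes \(C_{6}\in G_{3,3}\), forms its complement in \(\overline{G}_{3,3}\), and verifies both the edge count \(|V|^{2}/4\) used above and the existence of cycles of every length from 3 to 6:

```python
from itertools import permutations

n = 6
c6 = {frozenset((i, (i + 1) % n)) for i in range(n)}       # C_6, a graph in G_{3,3}
comp = {frozenset((i, j)) for i in range(n)
        for j in range(i + 1, n)} - c6                     # its complement

assert len(comp) == n * n // 4                             # |E| = |V|^2/4 = 9, as in the proof

def cycle_lengths(edges):
    found = set()
    for k in range(3, n + 1):
        for p in permutations(range(n), k):                # brute force is fine for n = 6
            if all(frozenset((p[i], p[(i + 1) % k])) in edges for i in range(k)):
                found.add(k)
                break
    return found

assert cycle_lengths(comp) == {3, 4, 5, 6}                 # pancyclic, as Proposition 7 predicts
```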
Finally, we consider graphs in \(\overline{G}_{m,n}\) for the special case when \(m=n-1\) and show that such graphs are traceable. Proposition 8: _Every graph \(G\in\overline{G}_{n,n-1}\) is traceable._ Proof: A graph \(G\in\overline{G}_{n,n-1}\) has \(2n-1\) vertices, \(n\) of which have degree \(n-1\) and \(n-1\) of which have degree \(n\). Then, for any two non-adjacent vertices \(u\) and \(v\), \(\deg(u)+\deg(v)+d(u,v)\geq(n-1)+(n-1)+2=2n\), and \(2n\) is greater than the number of vertices of \(G\). Thus, by Theorem 3.1, \(G\) is traceable. ### Bipartiteness In this section we characterize the graphs from \(G_{m,n}\) and \(\overline{G}_{m,n}\) that are bipartite. We also show that bipartiteness of a graph in \(G_{m,n}\) implies traceability, which provides more evidence for Conjecture 1. Theorem 3.1: _If a graph \(G\in G_{m,n}\) is bipartite with parts \(V_{1}\) and \(V_{2}\), \(|V_{1}|\leq|V_{2}|\), then one of the following conditions holds:_ 1. \(|V_{1}|=|V_{2}|\) _and_ \(n=m+2\) _and_ \(m,n\) _are even_ 2. \(|V_{1}|=m\) _and_ \(|V_{2}|=n\) _and_ \(n=m+1\)_ 3. \(|V_{1}|=|V_{2}|\) _and_ \(m=n\)_. Moreover, in the first case \(G\) is Hamiltonian, in the second case \(G\) is traceable, and in the third case \(G\) is Hamiltonian._ Proof: Let \(G=(V_{1},V_{2},E)\in G_{m,n}\) be a bipartite graph. If \(n=m\), then \(G\) is regular and must therefore be balanced. Thus, in this case, Condition 3) is satisfied. Moreover, by Proposition 5, \(G\) is Hamiltonian. If \(m<3\) or \(n<3\), then \(G\) is \(P_{5}\), in which case Condition 2) is satisfied. Note that any other graph in \(G_{m,n}\) with \(m<3\) or \(n<3\) is a graph consisting of a clique of size at least \(4\) with one edge deleted and two leaves attached, one to each of the endpoints of that edge, in which case \(G\) is not bipartite. Thus, suppose hereafter that \(n>m\geq 3\). Suppose \(|V_{1}|=|V_{2}|\). Then the number of degree \(m-1\) vertices in both \(V_{1}\) and \(V_{2}\) must be \(\frac{m}{2}\), and the number of degree \(n-1\) vertices in both \(V_{1}\) and \(V_{2}\) must be \(\frac{n}{2}\), since otherwise the part with the larger number of degree \(n-1\) vertices will have a higher total degree than the other part. Then, \(m\) and \(n\) must both be even. Moreover, since both parts contain a degree \(n-1\) vertex, and both parts contain \(\frac{m}{2}+\frac{n}{2}\) vertices, it follows that \(\frac{m}{2}+\frac{n}{2}\geq n-1\). Thus, \(m\geq n-2\), but also by assumption \(m<n\), so \(n-1\geq m\geq n-2\). Finally, \(m\) cannot equal \(n-1\), since both \(m\) and \(n\) are even, so \(m=n-2\). Thus, in this case Condition 1) is satisfied. Next, let \(G=(V_{1},V_{2},E)\in G_{m,n}\), \(n>m\geq 3\), and suppose \(|V_{1}|\neq|V_{2}|\). Note that both parts of \(G\) must contain some vertices of degree \(m-1\) and some vertices of degree \(n-1\), since otherwise the part with all the degree \(n-1\) vertices will have a higher total degree than the other part. Thus, \(|V_{2}|>|V_{1}|\geq n-1>m-1\), so \(|V_{1}|\geq m\) and \(|V_{2}|\geq m+1\). Let \(k=n-m\). Since \(k\) is a nonnegative integer, \(k=\lceil k/2\rceil+\lfloor k/2\rfloor\). Thus, \(|V_{1}|+|V_{2}|=m+n=2m+k=2m+\lceil k/2\rceil+\lfloor k/2\rfloor\). Note that \(|V_{1}|\leq m+\lfloor k/2\rfloor\), since otherwise \(|V_{2}|\) would not be bigger than \(|V_{1}|\). Thus, we can let \(|V_{1}|=m+\lfloor k/2\rfloor-r\) for some \(r\geq 0\), which means \(|V_{2}|=m+\lceil k/2\rceil+r\).
Since \(V_{2}\) contains vertices of degree \(n-1\), it follows that \(|V_{1}|=m+\lfloor k/2\rfloor-r\geq n-1=m+k-1\). Thus, \(\lfloor k/2\rfloor-r\geq k-1\), which implies \(k=2,r=0\) or \(k=1,r=0\). In the former case \(|V_{1}|=m+1=|V_{2}|\), which contradicts the assumption that \(|V_{1}|\neq|V_{2}|\). In the latter case, \(|V_{1}|=m\) and \(|V_{2}|=m+1\), and since \(k=1\), \(n=m+1\). Thus, in this case Condition 2) is satisfied. Now suppose that \(G=(V_{1},V_{2},E)\in G_{m,n}\), \(n>m\geq 3\), is a bipartite graph with \(|V_{1}|=|V_{2}|\) and \(n=m+2\). We will show that \(G\) is Hamiltonian. Since \(n=m+2\), it follows that \[n-1\geq\frac{n-1}{2}=\frac{n+n-2}{4}=\frac{n+m}{4}=\frac{|V_{1}|+|V_{2}|}{4}=\frac{2|V_{1}|}{4}=\frac{|V_{1}|}{2}.\] Moreover, since \(m\geq 3\), it follows that \[m-1\geq\frac{m+1}{2}=\frac{m+m+2}{4}=\frac{m+n}{4}=\frac{|V_{1}|+|V_{2}|}{4}=\frac{2|V_{1}|}{4}=\frac{|V_{1}|}{2}.\] Since the only degrees of vertices in \(G\) are \(n-1\) and \(m-1\), it follows that there are no vertices in \(G\) of degree less than \(\frac{|V_{1}|}{2}\). Thus, for every \(k\) where \(1<k<\frac{|V_{1}|}{2}\), the number of vertices \(v\in V_{1}\) with \(\deg(v)<k\) is \(0\) and therefore is less than \(k\). The same holds if \(V_{1}\) is replaced by \(V_{2}\). Thus, by Theorem 4.1, \(G\) is Hamiltonian. Finally, suppose that \(G=(V_{1},V_{2},E)\in G_{m,n}\), \(n>m\geq 3\), is a bipartite graph with \(|V_{1}|=m\) and \(|V_{2}|=n\) and \(n=m+1\). We will show that \(G\) is traceable. Let \(V_{2}\) have \(a\) vertices of degree \(m\) and \(b\) vertices of degree \(m-1\), and let \(V_{1}\) have \(c\) vertices of degree \(m\) and \(d\) vertices of degree \(m-1\). Since \(G\) is bipartite, we have \[am+b(m-1)=cm+d(m-1). \tag{1}\] Since \(|V_{1}|=m\) and \(|V_{2}|=n=m+1\), we also have \(a=m+1-b\) and \(c=m-d\). Substituting \(a\) and \(c\) in (1) we obtain \(b=m+d\). On the other hand, the number of degree \(m-1\) vertices in \(G\) is \(b+d=m\). Thus, it follows that \(d=0\), and hence \(b=m\), \(a=1\), and \(c=m\). This means all \(m\) vertices in \(V_{1}\) have degree \(m\), while \(V_{2}\) has \(m\) vertices of degree \(m-1\) and one vertex of degree \(m\). Let \(v\) be the vertex of degree \(m\) in \(V_{2}\), and let \(G^{\prime}=G-v\). \(G^{\prime}\) is a balanced \((m-1)\)-regular bipartite graph. Since \(m\geq 3\), there are no vertices in either part of \(G^{\prime}\) with degree less than or equal to \(k\) for \(1<k<\frac{m}{2}\), so by Theorem 4.1, \(G^{\prime}\) is Hamiltonian. Then \(G\) is traceable. Proposition 9: _Let \(G\in\overline{G}_{m,n}\). Then, \(G\) is not bipartite._ Proof: Suppose first that \(m=n\). Then, the complement of \(G\), \(\overline{G}\), is a graph on \(2n\) vertices each with degree \(n-1\), and \(\overline{G}\) is different from the disjoint union \(K_{n}\cup K_{n}\). By Caro-Wei's theorem, given a graph \(H\) with a degree sequence \(d_{1},d_{2},\ldots,d_{r}\), \(\alpha(H)\geq\sum_{i=1}^{r}\frac{1}{d_{i}+1}\), with equality holding only when \(H\) is a disjoint union of cliques, in which case \(\alpha(H)\) equals the number of cliques. Applying Caro-Wei's theorem to \(\overline{G}\), we have \(\alpha(\overline{G})\geq\sum_{i=1}^{2n}\frac{1}{n}=2\); however, since \(\overline{G}\) is not a disjoint union of two cliques, it follows that \(\alpha(\overline{G})\neq 2\), so \(\alpha(\overline{G})\geq 3\). Hence, \(\omega(G)\geq 3\), which means that \(G\) contains a triangle and therefore is not bipartite.
Now suppose that \(m\neq n\), and without loss of generality, suppose \(n>m\). Then \(G\) has \(n\) vertices of degree \(m\) and \(m\) vertices of degree \(n\). Suppose for contradiction that \(G\) is bipartite with parts \(V_{1}\) and \(V_{2}\). Let \(V_{1}\) have a vertex of degree \(n\). Then \(|V_{2}|\geq n\) and \(|V_{1}|\leq m<n\). Then \(V_{2}\) has no vertices of degree \(n\), i.e., all vertices of \(V_{2}\) are of degree \(m\). Hence \(G=K_{m,n}\), but this contradicts the assumption that \(G\) is the complement of a graph which is different from the disjoint union of \(K_{m}\) and \(K_{n}\). ### Diameter In this section we show that the diameters of graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\) are bounded by small constants. Proposition 10: _If \(G\in G_{m,n}\), then \(diam(G)\leq 4\)._ Proof: Let \(u\) and \(v\) be vertices of \(G=(V,E)\in G_{m,n}\), and suppose without loss of generality that \(n\geq m\). We will show that \(d(u,v)\leq 4\). If \(u\) and \(v\) are adjacent, then \(d(u,v)=1\). Otherwise, we consider two cases. _Case 1:_ [\(m=n\)] or [\(n>m\) and at least one of \(u\) and \(v\) has degree \(n-1\)]. If \(N(u)\cap N(v)\neq\emptyset\), then \(d(u,v)=2\). If \(N(u)\cap N(v)=\emptyset\), then \(N(u)\cup\{u\}\cup N(v)\cup\{v\}=V\), and since \(G\) is connected, there must be adjacent vertices \(x\in N(u)\) and \(y\in N(v)\). Then, \(d(u,v)=3\). _Case 2:_ [\(n>m\) and both \(u\) and \(v\) have degree \(m-1\)]. If \(N(u)\cap N(v)\neq\emptyset\), then \(d(u,v)=2\). If \(N(u)\cap N(v)=\emptyset\), then since \(u\) and \(v\) are not adjacent, each of them has no more than \(m-2\) neighbors of degree \(m-1\). Hence, both \(u\) and \(v\) have at least one neighbor of degree \(n-1\). Let \(x\) and \(y\) be such neighbors of \(u\) and \(v\), respectively. If \(x\) and \(y\) are adjacent, then \(d(u,v)=3\). If they are not adjacent, then since \(n>m\), \(|N(x)|+|N(y)|=2n-2>m+n-2=|V\backslash\{x,y\}|\geq|N(x)\cup N(y)|\), so \(|N(x)\cap N(y)|\neq\emptyset\). Then, since \(x\) and \(y\) have a common neighbor, \(d(u,v)\leq 4\). Proposition 11: _If \(G\in\overline{G}_{m,n}\), then \(diam(G)\leq 4\)._ Proof: Let \(u\) and \(v\) be vertices of \(G=(V,E)\in\overline{G}_{m,n}\), and suppose without loss of generality that \(n\geq m\). We will show that \(d(u,v)\leq 4\). If \(u\) and \(v\) are adjacent, then \(d(u,v)=1\). Otherwise, we consider two cases. _Case 1:_ [\(m=n\)] or [\(n>m\) and at least one of \(u\) and \(v\) has degree \(n\)]. Since \(|N(u)|+|N(v)|\geq m+n>|V\backslash\{u,v\}|\geq|N(u)\cup N(v)|\), it follows that \(|N(u)\cap N(v)|\neq\emptyset\). Thus, \(u\) and \(v\) have a common neighbor, so \(d(u,v)=2\). _Case 2:_ [\(n>m\) and both \(u\) and \(v\) have degree \(m\)]. Let \(x\) be any vertex of degree \(n\). Then, since there are a total of \(m+n\) vertices in \(G\) and since \(G\) is connected, \(x\) must be adjacent to a vertex from \(N(u)\cup\{u\}\) and to a vertex from \(N(v)\cup\{v\}\). Thus, \(d(u,v)\leq 4\). Note that there are graphs in \(G_{m,n}\) with diameters 2, 3, and 4 (for example, those shown in Figures 7, 1, and 3, respectively). However, we have not found any graphs in \(\overline{G}_{m,n}\) with diameter 4. We leave it as an open question to determine whether there are such graphs, or whether \(diam(G)\leq 3\) for all \(G\in\overline{G}_{m,n}\).
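The random search mentioned in the concluding section for this open question can be sketched in a few lines; this assumes networkx and the hypothetical `in_G_mn` helper from the earlier sketch, and is an illustration rather than the authors' actual search code.

```python
# Sample graphs with the degree sequence of K_m u K_n, keep those in G_{m,n},
# and record the diameters of their complements (members of G-bar_{m,n}).
import networkx as nx

def complement_diameters(m, n, trials=200):
    seq = [m - 1] * m + [n - 1] * n
    seen = set()
    for t in range(trials):
        try:
            H = nx.random_degree_sequence_graph(seq, seed=t)
        except nx.NetworkXError:  # the sampler occasionally exceeds its tries
            continue
        if in_G_mn(H, m, n):
            Hc = nx.complement(H)
            if nx.is_connected(Hc):
                seen.add(nx.diameter(Hc))
    return seen  # by Proposition 11, a subset of {1, 2, 3, 4}

print(complement_diameters(3, 6))
```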
Figure 7: A graph in \(G_{3,4}\) with diameter 2. ## 4 NP-hardness of problems on \(\boldsymbol{G_{m,n}}\) and \(\boldsymbol{\overline{G}_{m,n}}\) The properties discussed in the previous section are beneficial regarding possible practical applications as they make certain optimization problems easier to solve. On the other hand, some classical optimization problems are hard to solve on graphs in \(G_{m,n}\) and \(\overline{G}_{m,n}\). The next theorem provides an example of this. Theorem 4.1: Max Independent Set _is NP-hard on \(G_{n,n}\)._ Proof: Let \(G=(V,E)\) with \(|V|=n\) be a simple connected graph. We construct a graph \(G^{\prime}=(V^{\prime},E^{\prime})\) as follows (a code sketch of this construction is given after the concluding section below). 1. \(V^{\prime}=V_{1}^{\prime}\cup V_{2}^{\prime}\), where \(V_{1}^{\prime}=\{(x,1):x\in V\}\), \(V_{2}^{\prime}=\{(x,2):x\in V\}\). 2. \((x,1)\) is adjacent to \((y,1)\) and \((x,2)\) is adjacent to \((y,2)\) in \(G^{\prime}\) if and only if \(x\) is adjacent to \(y\) in \(G\). 3. \((x,1)\) is adjacent to \((y,2)\) and \((x,2)\) is adjacent to \((y,1)\) in \(G^{\prime}\) if and only if \(x\) is not adjacent to \(y\) in \(G\) for all \(x,y\), \(x\neq y\). \(G^{\prime}\) consists of two copies of \(G\) with some vertices of the first copy connected to some vertices of the second copy so that all vertices have degree \(n-1\). \(G^{\prime}\) can be constructed in polynomial time, and \(G^{\prime}\in G_{n,n}\). (If \(G\) is complete, then Max Independent Set on \(G\) is trivial, so we may assume \(G\) has a pair of non-adjacent vertices; the resulting cross edge, together with the connectivity of \(G\), makes \(G^{\prime}\) connected, so \(G^{\prime}\) is not isomorphic to \(K_{n}\cup K_{n}\).) Let \(I\) be a maximum independent set of \(G^{\prime}\). The vertices of \(I\) can be partitioned into \(I_{1}\) and \(I_{2}\), where \(I_{1}=I\cap V_{1}^{\prime}\) and \(I_{2}=I\cap V_{2}^{\prime}\). Let \[S=\begin{cases}\{x\in V:(x,1)\in I_{1}\},\text{ if }|I_{1}|\geq|I_{2}|\\ \{x\in V:(x,2)\in I_{2}\},\text{ if }|I_{2}|\geq|I_{1}|.\end{cases}\] Let \(J\) be a maximum independent set of \(G\). We will now show that \(|S|\) is a 2-approximation for \(|J|\). Let \(J_{1}=\{(x,1):x\in J\}\) be the copy of \(J\) contained in \(V_{1}^{\prime}\). Then, since \(J_{1}\) is an independent set in \(G^{\prime}\) and \(I\) is a maximum independent set in \(G^{\prime}\), we have \(|J|=|J_{1}|\leq|I|=|I_{1}|+|I_{2}|\). If \(|I_{1}|\geq|I_{2}|\), then \(|J|\leq 2|I_{1}|=2|S|\). Similarly, if \(|I_{1}|\leq|I_{2}|\), then \(|J|\leq 2|I_{2}|=2|S|\). In either case, \(|S|\) is a 2-approximation for \(|J|\). Thus, a polynomial-time algorithm for Max Independent Set on \(G_{n,n}\) would yield a polynomial-time 2-approximation for Max Independent Set on general graphs. Since the maximum independent set problem cannot be approximated to within a constant factor in polynomial time unless \(P=NP\) [13], it follows that Max Independent Set is NP-hard on \(G_{n,n}\). Given a simple graph \(G\), it is well-known that \(Q\) is a clique in \(G\), if and only if \(Q\) is an independent set in \(\overline{G}\), if and only if \(V-Q\) is a vertex cover in \(\overline{G}\). Thus, we have the following corollary. Corollary 1: _The Max Clique and Min Vertex Cover problems are NP-hard on \(\overline{G}_{n,n}\)._ ## 5 Conclusion In this paper we studied various properties of graphs that are degree-equivalent to complete bipartite graphs or to disjoint unions of cliques. We showed that such graphs are connected and have a bounded diameter, and are therefore small-world networks. We characterized when these graphs are biconnected and bipartite, and showed that many of them have desirable qualities such as traceability, Hamiltonicity, and even pancyclicity. We also showed that some optimization problems are hard on these graphs. Below are several open questions and directions for future research. 1. Is every graph in \(G_{m,n}\) traceable? 2. Is there a graph in \(\overline{G}_{m,n}\) with diameter 4? 3.
Can the obtained results be extended to multigraphs, and graphs that are degree-equivalent to more than two cliques or to complete multipartite graphs? Regarding question 1), we have shown that all non-biconnected graphs and all bipartite graphs in \(G_{m,n}\) are traceable. Moreover, all graphs in \(G_{n,n}\) are traceable, as are all graphs in \(G_{m,n}\) that can be obtained by joining two biconnected graphs by a twin bridge. Thus, we believe the answer to this question may be positive. Regarding question 2), we have searched many randomly generated graphs from \(\overline{G}_{m,n}\) with a computer and have not found such a graph. Thus, we believe the answer to this question may be negative. Regarding question 3), we note that many of the properties that hold for simple graphs in \(G_{m,n}\) do not hold for multigraphs or for graphs in \(G_{p_{1},\ldots,p_{k}}\) for \(k>2\). For example, multigraphs that are degree-equivalent to two disjoint cliques are not always connected, and therefore are not always Hamiltonian or traceable. The same holds for simple graphs that are degree equivalent to three or more disjoint cliques. However, it would be interesting to obtain sufficient conditions that guarantee these properties.
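As promised in the proof of Theorem 4.1 above, here is a minimal sketch of the reduction graph \(G^{\prime}\), assuming the networkx library; the function name `reduction_graph` is ours, for illustration only.

```python
# Build G' from a simple connected graph G: two copies of G (rule 2), plus
# cross edges (x,1)-(y,2) exactly when x != y are non-adjacent in G (rule 3).
import networkx as nx

def reduction_graph(G):
    Gp = nx.Graph()
    Gp.add_nodes_from((x, c) for x in G for c in (1, 2))
    for c in (1, 2):
        Gp.add_edges_from(((x, c), (y, c)) for x, y in G.edges())
    for x in G:
        for y in G:
            if x != y and not G.has_edge(x, y):
                Gp.add_edge((x, 1), (y, 2))
    return Gp

G = nx.cycle_graph(5)          # C_5: connected and not complete
Gp = reduction_graph(G)
n = G.number_of_nodes()
# every vertex of G' has degree n - 1, so G' is degree-equivalent to K_n u K_n
assert all(d == n - 1 for _, d in Gp.degree())
```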
2307.00484
Quantum Force Sensing by Digital Twinning of Atomic Bose-Einstein Condensates
High sensitivity detection plays a vital role in science discoveries and technological applications. While intriguing methods utilizing collective many-body correlations and quantum entanglements have been developed in physics to enhance sensitivity, their practical implementation remains challenging due to rigorous technological requirements. Here, we propose an entirely data-driven approach that harnesses the capabilities of machine learning, to significantly augment weak-signal detection sensitivity. In an atomic force sensor, our method combines a digital replica of force-free data with anomaly detection technique, devoid of any prior knowledge about the physical system or assumptions regarding the sensing process. Our findings demonstrate a significant advancement in sensitivity, achieving an order of magnitude improvement over conventional protocols in detecting a weak force of approximately $10^{-25}~\mathrm{N}$. The resulting sensitivity reaches $1.7(4) \times 10^{-25}~\mathrm{N}/\sqrt{\mathrm{Hz}}$. Our machine learning-based signal processing approach does not rely on system-specific details or processed signals, rendering it highly applicable to sensing technologies across various domains.
Tangyou Huang, Zhongcheng Yu, Zhongyi Ni, Xiaoji Zhou, Xiaopeng Li
2023-07-02T06:10:00Z
http://arxiv.org/abs/2307.00484v2
# Enhanced Quantum Force Sensing by Digital Twinning of Atomic Bose-Einstein Condensates ###### Abstract High sensitivity detection plays a vital role in science discoveries and technological applications. The advancement of sensitivity has been pivotal in expanding their boundaries. While intriguing methods utilizing collective many-body correlations and quantum entanglements have been developed in physics to enhance sensitivity, their practical implementation remains challenging due to rigorous technological requirements. Here, we propose an innovative approach that harnesses the capabilities of machine learning to significantly augment weak-signal detection sensitivity. By training a generative machine learning model on time-of-flight measurements from atomic Bose-Einstein condensates (BEC), we create a digital twin of the experimental system, accurately matching its probability distributions. The digital replica is capable of generating both typical and atypical configurations, mirroring the fluctuations observed in experimental measurements caused by quantum shot-noise and technical noise. An anomaly score, quantifying the level of configuration atypicality, is obtained through the machine learning model. When an external force is applied, it perturbs the measurement outcomes of the physical system. Remarkably, even a weakly affected physical system can be detected by the machine learning model through examination of the anomaly score, enabling anomaly detection. This unconventional approach to force sensing is entirely data-driven, devoid of prior knowledge about the physical system or assumptions regarding the sensing process. Our findings demonstrate a significant advancement in sensitivity, achieving an order of magnitude improvement over conventional protocols in detecting a weak force of approximately \(\mathbf{10^{-25}}\) \(\mathbf{N}\). The resulting sensitivity reaches \(\mathbf{1.7(4)\times 10^{-25}}\) \(\mathbf{N}/\sqrt{\mathbf{Hz}}\). Notably, our machine learning-based signal processing approach does not rely on system-specific details or processed signals, rendering it highly applicable to sensing technologies across various domains. ## 1 Introduction In recent decades, quantum technologies have made remarkable strides, culminating in the emergence of quantum sensing techniques that enable high-precision detection at the microscopic level. Quantum sensors leverage quantum resources to detect subtle changes in physical quantities such as time, force, and electromagnetic fields, providing extreme precision at the atomic scale [1; 2; 3]. These implementations have been successfully realized across diverse platforms, including cold atoms [4; 5], superconducting circuits [6], and solid-state spin systems [7]. Remarkably, recent progress in quantum sensing highlights its capability for real-world applications such as precision navigation [8], gravity detection [9], dark matter searches [10], and more [11]. However, these sensors often face challenges associated with measurement uncertainty caused by quantum decoherence and back-action effects, which limit their practical applications [12; 13]. Surpassing this limit typically requires strong correlations and high entanglement, or the use of sophisticated measurement protocols [14; 15; 16; 17]. While hardware upgrades can enhance sensing performance, computational approaches based on signal processing and data analysis offer a more economical way to enhance high-precision detection.
Earlier computational approaches have employed statistical learning of signal acquisition from sensing observables, taking into account the inherent noise and variations [18; 19; 20]. In recent years, machine learning approaches have been used for sensing-related analysis and feature selection [9; 21; 22]. Nevertheless, these methods often demand a substantial amount of sensing data or rely on prior knowledge of signal and noise properties, thereby still limiting their applicability. More recently, the concept of digital twinning has gained significant attention as a powerful tool for simulating and understanding complex physical systems. Digital twinning involves creating a virtual replica or model that mirrors the behavior and properties of a physical system, allowing for real-time analysis, optimization, and prediction. By effectively bridging the physical and virtual realms, digital twinning provides a unique opportunity to enhance the performance and capabilities of quantum force sensing experiments. In this article, we propose a novel application of digital twinning for quantum force sensing in atomic BECs. Leveraging the advancements in generative machine learning models, we create a digital twin that faithfully represents the atomic BEC system under investigation. The digital twin captures the intricate correlations and non-linear dynamics of the physical system, enabling us to devise a novel approach for quantum force sensing based on anomaly detection. Conventional quantum force sensing techniques often rely on extracting basic statistical moments from high-dimensional experimental data, neglecting the valuable information encoded in complex correlations. In contrast, our digital twinning approach incorporates a generative machine learning model to construct a nonlinear function, which takes advantage of the complex correlation effects in the high-dimensional data. This innovative approach allows for an improved signal-to-noise ratio and enhanced sensitivity, without compromising the long-term stability of the sensing system. We anticipate that our anomaly detection technique, facilitated by digital twinning, can be broadly applied to sensing experiments involving high-dimensional data acquisition cycles. By maximizing the utilization of high-dimensional data, our approach surpasses conventional techniques that rely solely on basic statistical features, unlocking new avenues for quantum force sensing and precision measurements. ## 2 Results _The Bose-Einstein condensate system._ We create a bosonic quantum system by trapping about \(2\times 10^{5}\) \({}^{87}\)Rb atoms. This system forms a BEC at low temperature, about 50 nK in our experiment (Fig. 1). The atomic BEC is loaded in a triangular optical lattice to suppress unwanted real-space dynamics [23]. After preparation of the lattice BEC system, the optical lattice and the trapping potential are shut off, which then lets the atoms expand ballistically. Performing the time-of-flight (TOF) experiment, we measure the momentum distribution \(n(\mathbf{k})\). It takes about \(T_{0}=38\) s to complete one experimental cycle. In the time-of-flight measurements, we have shot-to-shot noise. There is quantum shot-noise, which arises from quantum superposition of different momentum eigenstates due to atomic interactions, trapping potentials, and optical lattice confinement. There is thermal noise: although the atomic BEC is cooled down to 50 nK, we still have thermally activated atoms.
These atoms induce stochastic fluctuations in the measurement outcomes. There are also technological noises: the control of the atom numbers, the trapping potential, and the optical lattice depth are not perfect in the experiment, and may have noticeable or unnoticeable drifts in different experimental runs. As a result, the TOF measurement outcomes in consecutive experimental runs would unavoidably fluctuate from shot to shot. We thus denote the measurement outcomes as \(n_{\alpha}(\mathbf{k})\), with \(\alpha\) indexing different experimental runs. In detecting an external force acting on the BEC, a standard approach is to examine the response in the averaged center-of-mass (COM) momentum, i.e., \[\overline{\mathbf{k}}_{\mathrm{COM}}=\frac{\sum_{\alpha}\int d^{2}\mathbf{k}\,n_{\alpha}(\mathbf{k})\mathbf{k}}{\sum_{\alpha}\int d^{2}\mathbf{k}\,n_{\alpha}(\mathbf{k})}. \tag{1}\] Although the measurement outcome \(n_{\alpha}(\mathbf{k})\) is a two-dimensional image, which potentially involves very rich information, the conventional approach of data processing following Eq. (1) only consumes the zeroth and first order moments of the two-dimensional data, leaving behind higher-order correlations. _The digital replica._ In order to incorporate the full information in the measurement outcomes, \(n_{\alpha}(\mathbf{k})\), one plausible way is to perform digital twinning of the physical system by matching the probability distributions. It then automatically takes into account high-order correlation effects. Since the atomic BEC system in the experiment has noises of various origins, it is impractical to simulate the experimental measurement outcomes using conventional modeling approaches, for example by simulating Gross-Pitaevskii equations [24, 25]. In this study, we create a digital twin of the experimental system by implementing a generative machine learning model, which incorporates quantum, thermal, and technical noise channels simultaneously and on an equal footing in a purely data-driven approach. We implement a generative adversarial network (GAN) [26] for digital twinning of the atomic BEC. The GAN consists of a generator \(G(\cdot)\) that attempts to map a latent vector \(\mathbf{z}\) to realistic momentum-distribution data, \(G(\cdot)\): \(\mathbf{z}\mapsto\tilde{n}(\mathbf{k})\), and a discriminator \(D(\cdot)\) that tries to differentiate the real data, \(n(\mathbf{k})\), from the fake data produced by the generator, \(\tilde{n}(\mathbf{k})=G(\mathbf{z})\). The two networks are trained simultaneously, with the generator attempting to produce data that can fool the discriminator, and the discriminator learning to correctly identify synthetic data (Methods). The generator and discriminator are realized by two parameterized deep neural networks, denoted as \(G(\mathbf{z};\theta_{G})\) and \(D(n(\mathbf{k});\theta_{D})\). We collect 3.6k independent measurements of momentum distributions and feed them to the GAN. This amount of data takes about forty hours to collect in experiments, which is reasonably affordable. Figure 1: **The machine-learning assisted atomic force sensing.** **a**. The schematic diagram of the experimental setup for an atomic BEC based force sensor. The atomic BEC is confined by a triangular optical lattice spread in the x-y plane. The time-of-flight image is probed by an imaging laser beam along the \(z\) direction and recorded on the CCD camera. With an external force applied on the BEC, the time-of-flight image would develop systematic shifts. **b**. The workflow of an _anomaly detection_ method for force sensing. The generator and the discriminator form a generative machine learning model for digitally replicating the experimental time-of-flight images. The anomaly detection approach is enabled by introducing an additional encoder. In Fig. 2.a, we present fake data produced by the generator during the training procedure, where the generated data is visually similar to real data, indicating that the model is capable of capturing the underlying data distribution without mode collapse. The trained generator produces a digital replica of the experimental measurements (Fig. 2.b,c), which could generate fluctuating configurations involving all noise channels automatically. Remarkably, even Ph.D. students cannot tell the difference between the experimental data and the data generated by the digital replica. We create two groups of data, each containing 64 TOF measurement outcomes. The first group contains real experimental data, and the second group is a mixture of real and fake data with a probability \(50\%:50\%\). We randomly select 30 Ph.D. students and let them know how these groups are formed. They are asked to identify which ones of the second group are real experimental data. We find that the averaged accuracy is 48% [27]. This confirms that the digital replica indeed captures all the features in the TOF measurements of the experimental atomic BEC. Just as the experimental data contain typical and atypical configurations due to noise, the digital replica also generates typical and atypical configurations. Within the framework of the GAN, the degree of atypicality is quantified by an anomaly score, \[\mathcal{A}(n(\mathbf{k}))=\mathcal{A}_{R}(n(\mathbf{k}))+\lambda\cdot\mathcal{A}_{D}(n(\mathbf{k})). \tag{2}\] Here, the discrimination loss (DL) \(\mathcal{A}_{D}\) is directly given by the discriminator. The residual loss (RL) is produced by adding an encoder in front of the generator (Fig. 1), with \(\mathcal{A}_{R}=\|n(\mathbf{k})-\tilde{n}(\mathbf{k})\|_{2}\). The encoder is trained by minimizing the residual loss. The weighting coefficient \(\lambda\in\mathbb{R}\) is a hyper-parameter that balances RL and DL. These two components evaluate the discrepancy between fake and real data in terms of image distance and feature discrimination [27]. We choose \(\lambda=-0.76\) in this work. We find that the anomaly score has an approximately normal distribution (Fig. 3), which indeed reflects the typicality of the momentum distribution data. _Sensing by anomaly detection._ When an external force is applied, the physical BEC system produces TOF data \(n(\mathbf{k})\) of different distributions. Conventional sensing schemes examine the response in the COM momentum (Eq. (1)). The distributions of the COM momentum with and without an external force are different. Force sensing requires differentiating such distributions. When a weak force (\(F_{0}=7.81\times 10^{-26}\) N in our experiment) is applied, the distribution of COM momentum is only very weakly affected, with a barely noticeable difference from the force-free distribution. With the digital replica, namely the generative machine learning model, we compute the anomaly score \(\mathcal{A}(n(\mathbf{k}))\), a highly nonlinear function of \(n(\mathbf{k})\) that could incorporate higher order correlations of the data. This provides a systematic nonlinear data processing approach, quite different from the linear data processing used in analyzing the simple COM momentum.
Remarkably, the anomaly score is much more sensitive to the force. The resultant distribution of the anomaly score caused by the weak external force is significantly different from the force-free distribution, in sharp contrast to the COM momentum (Fig. 3). Despite the nonlinearity in the data processing, the response of the anomaly score remains linear in the external force (Fig. 3.c). Figure 2: **The digital replica of TOF measurements.** **a**. The momentum distributions constructed by the generative machine learning model at different training epochs. We present several images generated from a fixed generator, namely the digital replica, in **b**. **c** shows eight representative real TOF images with anomaly scores from 0.12 to 0.26. By comparing the distributions of the COM momentum and the anomaly score, it is evident that analyzing the anomaly score, known as _anomaly detection_ in the context of machine learning, is more efficient for detecting the external force applied to the BEC system. We further investigate the primary characteristics that contribute to the anomaly score in the presence of an external force. Specifically, we assess the momentum dependence of the residual loss, \(\mathcal{A}_{R}\), namely, \[n_{R}(\mathbf{k})=|n(\mathbf{k})-G^{*}(E^{*}(n(\mathbf{k})))|, \tag{3}\] with the generator \(G^{*}(\cdot)\) and the encoder \(E^{*}(\cdot)\) being fixed. In Fig. 3.d, we provide six representative atomic images from real experimental datasets labeled with anomaly scores. In Fig. 3.e, we observe bright speckles in \(n_{R}(\mathbf{k})\), with these speckles becoming increasingly prominent as the applied force intensifies. This observation suggests that the signals contributing to the anomaly scores exhibit localization in momentum space, commonly referred to as anomaly localization in the context of anomaly detection. The phenomenon of anomaly localization indicates that the machine learning model indeed captures the relevant features of the experimental data. Further discussion on model interpretability from the perspective of feature representation [28, 29, 30] is provided in the Supplementary material [27]. Our findings regarding anomaly localization, exemplified by the presence of a hexagonal peak structure in the rightmost portion of Figure 3.e, imply that the dominant signal for force sensing originates primarily from high-momentum peaks observed in time-of-flight (TOF) measurements. This could be attributed to the reduced impact of various sources of noise, such as atomic scattering, trapping potential, and thermal activation, on the high-momentum components of the Bose-Einstein condensate (BEC), owing to energy separation. _Sensitivity and Stability._ In order to quantitatively characterize the advantage of the anomaly detection over conventional approaches, we compute the corresponding sensitivity. A general force sensing process involves a force \(F\) to be detected, and a signal \(q\) that is directly or indirectly measured in the experiment. In the force sensing using the COM momentum, \(q\) corresponds to one component of \(\overline{\mathbf{k}}_{\text{COM}}\). It corresponds to the anomaly score in the anomaly detection. The measured signals \(q\) would fluctuate in consecutive experimental measurements due to noises of various channels. We assume the induced fluctuations on \(q\) are white noise, so the fluctuations in different measurements are completely independent.
Figure 3: **Sensing with anomaly score.** **a-b**. The probability distribution of averaged COM momentum \(\overline{\mathbf{k}}_{\text{COM}}\) (in the unit of pixels) and anomaly score \(\mathcal{A}\) in the absence of force (blue) and in the presence of force (red), respectively. Noting that the pixel displacement refers to the distance between the COM and the y-axis center of the pixel-wise images. **c**. The dimensionless anomaly score as a function of impulse \(I=F_{0}\cdot\Delta T\) for an identical optical force \(F_{0}\). The normalization factor in **c** (\(\mathcal{A}_{0}\)) is the averaged anomaly score in the absence of the external force. The anomaly score \(\mathcal{A}_{t}(\Delta T)\) corresponds to signal-involved experiments with a force acting on the BEC for a time duration \(\Delta T\). **d** and **e** show the real TOF images and their corresponding anomaly localization (_see the main text_), respectively. Noting that the left (right) three instances in **d** are sampled from datasets with force-free (force-involved) environments. The strength of the fluctuations is quantified by the standard deviation of \(q\), to be referred to as \(\sigma_{0}\). A single measurement has a fixed time cost of \(T_{0}\). The minimum force we can resolve with one single measurement is given by \(V_{\text{min}}\sim\sigma_{0}/|\partial_{V}q|\). Performing \(N\) experimental measurements, the signal to noise ratio (SNR) is given by \(\text{SNR}=\sqrt{N}\times|\partial_{V}q|\,V/\sigma_{0}\). The one-sigma sensitivity is defined as [27] \[\mathcal{S}=\sqrt{T_{0}}\times\frac{\sigma_{0}}{|\partial_{V}q|}. \tag{4}\] This definition applies to both conventional approaches and the anomaly detection. Under a linear transformation of \(q\), the sensitivity remains the same according to Eq. (4). However, different signals that are related by nonlinear transformations do not necessarily have the same sensitivity. We compare the sensitivities of the COM momentum and the anomaly detection approaches, taking exactly the same set of experimental data. For the COM momentum approach, we obtain a sensitivity \(\mathcal{S}^{\text{COM}}=6.8(9)\times 10^{-24}\) N\(/\sqrt{\text{Hz}}\) (Fig. 4.a). For the anomaly detection, we have \(\mathcal{S}^{\text{AS}}=1.7(4)\times 10^{-25}\) N\(/\sqrt{\text{Hz}}\). This means the anomaly detection is about 40 times more sensitive than the COM momentum approach. We emphasize that in the above comparison we use the raw experimental data without invoking any prior knowledge of the physical process. The anomaly detection approach is thus entirely data-driven. We further examine how much the conventional COM momentum approach can be improved by machine-learning-based noise reduction. We perform Gaussian processing prior to extracting the COM momentum [27]. The resultant sensitivity can be improved to \(\mathcal{S}_{r}^{\text{COM}}=1.6(4)\times 10^{-24}\) N\(/\sqrt{\text{Hz}}\) (Fig. 4.a). But this is still one order of magnitude worse than our anomaly detection. In Figure 4.c, we show the comparison of the achieved sensitivity of the digital-twinning atomic BEC with previous experiments, including phase-coherent velocimetry [33], cold atoms in a cavity [2], and trapped ions [20]. Our achieved force sensitivity shows orders-of-magnitude improvement over other experiments. There are two remarks we would like to mention here.
First, the standard quantum limit of our atomic BEC force sensor is \(5.45\times 10^{-29}\) N [23], which indicates there is still quite some room to improve the sensitivity here, either by reducing the technical noise or by more advanced digital twinning techniques that effectively suppress it. Second, the digital twinning and the anomaly detection techniques developed here are quite generic. These techniques are readily applicable to improve the sensitivity of other experimental setups as well. For quantum force sensing, it is also important to have long-term stability besides the high sensitivity. This is captured by the Allan Deviation [35], which is widely used to examine long-term drifts (a short estimator sketch is given at the end of this paper). We confirm that the Allan Deviation of the anomaly detection falls off with the integration time (\(\tau\)) as \(1/\sqrt{\tau}\), having the same scaling as the Allan Deviation of the COM momentum (Fig. 4.b). It is thus evident that no long-term drifts are induced by the nonlinear data processing in anomaly detection. The \(1/\sqrt{\tau}\) decay also implies the fluctuations of the anomaly score are mainly white noise, which justifies the above definition of sensitivity. It is worth noting here that the sensitivity can also be enhanced by choosing the high data-quality region of the TOF measurements according to the impulse theorem, as used in a previous study [23]. Such analysis requires certain prior information of the force, and gives a sensitivity comparable to the present anomaly detection approach. Nonetheless, we emphasize that the anomaly detection approach is purely data driven, and is consequently more robust against long-term drifts in experiments. The improvement in the long-term stability is evident when comparing the Allan Deviation of the anomaly detection to the previous study [23]. Figure 4: **Sensitivity and stability of anomaly detection.** **a**. The sensitivity distribution of using the anomaly score (blue), COM momentum with raw data (green) and reduced datasets (yellow), respectively. In **b**, the Allan Deviations corresponding to the anomaly score and the COM momentum. Both of them decay with the integration time \(\tau\) as \(1/\sqrt{\tau}\). Related works are compared in **c**, including measurements based on a nano-tube [31], nano-particles [32], trapped ions with phase-coherent velocimetry [33], a single trapped ion with optical clock Doppler velocimetry [34], a 3D-trapped single ion [20], and ultracold atoms in a cavity [2]. Note that error bars indicate the one-sigma statistical uncertainty. In the previous study, the Allan Deviation bends up at a time scale of \(\tau=10^{4}\) s, whereas here it keeps decreasing following \(1/\sqrt{\tau}\) even at the time scale of \(4\times 10^{4}\) s. ## 3 Discussion In this study, we present a novel method for quantum force sensing using digital twinning of an atomic BEC and anomaly detection facilitated by a generative machine learning model. By incorporating complex correlation effects present in the experimental data through nonlinear processing, we achieve a significant enhancement in sensitivity while maintaining long-term stability. Unlike conventional approaches that rely on extracting basic statistical moments from high-dimensional data, such as time-of-flight (TOF) measurements in BEC, our anomaly detection approach employs a neural network-based nonlinear function, denoted as \(f_{\text{NN}}(\mathbf{x})\), to fully exploit the information within the high-dimensional data.
Through extensive training and iterative refinement, this methodology effectively amplifies the signal-to-noise ratio, as confirmed by the convergence in sensitivity with increasing training data [27]. Notably, our findings reveal an intriguing aspect: the sensitivity of a physical sensor is intimately tied to the data processing strategy, denoted as \(\mathcal{S}[f_{\text{NN}}]\). This implies the existence of an upper bound, \[\mathcal{S}_{\text{opt}}=\max_{f_{\text{NN}}}\left\{\mathcal{S}[f_{\text{NN}}]\right\}, \tag{5}\] which represents the maximum sensitivity attainable by a given sensor configuration. The determination of this upper bound warrants further investigation in future research endeavors. The other important direction is to investigate the fundamental quantum limits of the anomaly detection approach. How the optimal sensitivity \(\mathcal{S}_{\text{opt}}\) is fundamentally limited by the quantum shot-noise, and how it scales with the number of atoms in the BEC or the imaging resolution, are worth further theoretical studies. **Acknowledgments.** We acknowledge helpful discussions with W. Vincent Liu. This work is supported by the National Program on Key Basic Research Project of China (Grant No. 2021YFA1400900), the National Natural Science Foundation of China (Grants No. 11934002, 12075128, T2225008), the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01), and the Shanghai Science Foundation (Grant No. 21QA1400500). ## 4 Methods **Generative Adversarial Networks** Generative Adversarial Networks (GANs) are a class of deep learning models used for unsupervised learning tasks, such as artificial image generation and information processing [26]. Our goal is to utilize GANs as the generative model to construct the digital replica of our experimental data. We start with a set of raw observables \(X=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},...,\mathbf{x}_{N}\}\) measured from \(N\) independent experiments. Next, we train the GAN on the finite dataset \(X\). Generally, a GAN consists of a _generator_ \(G(\cdot)\) that maps the latent vector \(\mathbf{z}\) to realistic data, \(G(\cdot)\colon\mathbf{z}\mapsto\tilde{\mathbf{x}}\), and a _discriminator_ \(D(\cdot)\) that identifies the real \(\mathbf{x}\in X\) rather than the fake data \(\tilde{\mathbf{x}}=G(\mathbf{z})\) from the generator. The two networks are trained simultaneously, with the generator attempting to produce data that can fool the discriminator, and the discriminator learning to correctly identify synthetic data. In general, the generator and discriminator are realized by two parameterized deep neural networks, i.e., \(G(\mathbf{z};\theta_{G})\) and \(D(\mathbf{x};\theta_{D})\). In this sense, the parameters \(\theta_{D},\theta_{G}\) are simultaneously optimized for a standard min-max loss function \(V(D,G)\) [36]: \[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{\mathbf{x}\sim\mathcal{P}_{0}}[\log D(\mathbf{x})]+\mathbb{E}_{\mathbf{z}\sim\mathcal{P}_{\mathbf{z}}}[\log(1-D(G(\mathbf{z})))]. \tag{6}\] \(\theta_{D}\) and \(\theta_{G}\) are respectively adjusted to maximize the expectation value \(\mathbb{E}_{\mathbf{x}\sim\mathcal{P}_{0}}\) and to minimize \(\mathbb{E}_{\mathbf{z}\sim\mathcal{P}_{\mathbf{z}}}\). GANs work by training a generator network to produce synthetic data that resemble the healthy distribution \(\mathbf{x}\sim\mathcal{P}_{0}(\mathbf{x})\), while a discriminator network is trained to distinguish data that fall outside the healthy distribution.
In this vein, the convergence criterion is that the generator is able to fool the discriminator, resulting in a high-quality digital replica of the training data. **Anomaly detection** GANs are widely used in anomaly detection due to their ability to learn complex data distributions, making them effective in identifying anomalies that deviate from the healthy distribution. Hereby, an anomaly score quantifies how far a query data point deviates from the healthy distribution \(\mathcal{P}_{0}(\mathbf{x})\). To identify anomalies via the GAN model, the authors in [28] proposed an _AnoGAN_ structure to detect the degree of anomaly quantitatively. More specifically, for a query data \(\mathbf{x}\): first, one finds the optimal representation \(\mathbf{z}_{\gamma}\) from the latent space \(\mathcal{Z}\) to guarantee the maximum degree of similarity between the original data \(\mathbf{x}\) and the generated data \(\tilde{\mathbf{x}}=G(\mathbf{z}_{\gamma})\), and computes the _residual loss_ (RL); second, one calculates the _discrimination loss_ (DL) from the perspective of feature layers; third, one forms the overall loss as the weighted sum of RL and DL, resulting in the anomaly score (AS). For this purpose, we define the RL \(\mathcal{A}_{R}(\mathbf{x})=\frac{1}{n_{x}}\sum|\mathbf{x}-G^{*}(\mathbf{z}_{\gamma})|\), and the DL \(\mathcal{A}_{D}(\mathbf{x})=\frac{1}{n_{f}}\sum|h(\mathbf{x})-h(G^{*}(\mathbf{z}_{\gamma}))|\), where \(h(\cdot)\) refers to the feature map of an intermediate layer of the discriminator, and \(n_{x}\), \(n_{f}\) are the corresponding numbers of pixels and features. Therefore, the anomaly score is defined as in Eq. (2), i.e., \(\mathcal{A}(\mathbf{x})=\mathcal{A}_{R}(\mathbf{x})+\lambda\cdot\mathcal{A}_{D}(\mathbf{x})\), where the hyperparameter \(\lambda\) is a weighting coefficient. Note that the generator \(G^{*}(\theta_{G}^{*})\) and discriminator \(D^{*}(\theta_{D}^{*})\) are fixed from the previous adversarial training, and only the input vector \(\mathbf{z}\) is adapted via backpropagation for a query data \(\mathbf{x}\) [28]. Remarkably, a more precise reconstruction can be realized by introducing the inverse mapping \(\mu(\tilde{\mathbf{x}})\rightarrow\mathbf{z}\). Instead of direct backpropagation [28], we here apply an adaptive method [29] called _f-AnoGAN_ by introducing an extra DNN _encoder_ \(E(\cdot)\), which can be trained only on \(X\) to automatically produce a latent vector for any input data, i.e., \(\mathbf{z}_{\gamma}=E(\mathbf{x})\); see details in the supplementary material [27]. Accordingly, the anomaly detection via _f-AnoGAN_ is summarized as follows: the GAN is initially trained on the dataset \(X\) (signal-free) sampled from the healthy distribution \(\mathcal{P}_{0}(\mathbf{x})\), and the anomaly detection subsequently proceeds by evaluating the anomaly score \(\mathcal{A}(\mathbf{x})\) (Eq. (2)) for arbitrary data \(\mathbf{x}\in\mathcal{X}\). The model yields a large AS for data from a destructive distribution \(\mathcal{P}(\mathbf{x})\), whereas a small AS indicates a high probability that the query data come from the healthy distribution \(\mathcal{P}_{0}(\mathbf{x})\). We refer the reader to the supplementary materials [27] for details about model training and evaluation. As a result, we obtain a fixed GAN-based generative model that automatically outputs the anomaly score \(\mathcal{A}=f^{*}_{\mathrm{NN}}(\mathbf{x}|\{\theta_{G}^{*},\theta_{D}^{*}\})\) for data \(\mathbf{x}\in\mathcal{X}\).
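To make the scoring pipeline concrete, the following is a hedged sketch of the evaluation step only (Eq. (2) with the mean-absolute losses defined above); `generator_`, `encoder_` and `feature_` are hypothetical stand-ins for the trained \(G^{*}\), \(E^{*}\) and the discriminator feature map \(h(\cdot)\), not the paper's actual networks.

```python
# Anomaly score A = A_R + lambda * A_D for a flattened TOF image x,
# with A_R and A_D as mean absolute errors in pixel and feature space.
import numpy as np

def anomaly_score(x, generator_, encoder_, feature_, lam=-0.76):
    z = encoder_(x)                        # z_gamma = E*(x)
    x_rec = generator_(z)                  # reconstruction G*(E*(x))
    a_res = np.mean(np.abs(x - x_rec))     # residual loss A_R
    a_dis = np.mean(np.abs(feature_(x) - feature_(x_rec)))  # discrimination loss A_D
    return a_res + lam * a_dis

# Toy stand-ins (random linear maps), only to make the sketch executable:
rng = np.random.default_rng(0)
W_e, W_g, W_f = rng.normal(size=(8, 64)), rng.normal(size=(64, 8)), rng.normal(size=(16, 64))
print(anomaly_score(rng.normal(size=64),
                    lambda z: W_g @ z, lambda x: W_e @ x, lambda x: W_f @ x))
```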
In a sensing application, the anomaly score assigned to signal-free experimental outcomes follows the healthy probability distribution \(\mathcal{P}_{0}(\mathcal{A}(\mathbf{x}))\). When it comes to an unknown perturbation \(V\) (i.e., an external force in our setup), a destructive probability distribution \(\mathcal{P}(\mathcal{A}(\tilde{\mathbf{x}}))\) can likewise be formed according to our fixed model; see the workflow in Fig. 1. The former signal-free anomaly scores are statistically smaller than the latter, indicating that a sensitive signal (the anomaly score) can be constructed by the anomaly detection method. **DATA AVAILABILITY** Data are available from the authors upon reasonable request. **CODE AVAILABILITY** The computation code for producing the results in this work is available upon reasonable request. **AUTHOR CONTRIBUTION** X.L. conceived the main idea in discussion with T.H. and X.J.Z. T.H. designed the machine learning framework and carried out the tests. Z.C.Y. contributed to the experimental data analysis. All authors contributed to writing the paper. **COMPETING INTERESTS** The authors declare no competing interests.
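For reference, the stability metric of Fig. 4.b can be estimated with a short routine. A minimal sketch of the standard non-overlapping Allan deviation, assuming an evenly spaced series of anomaly scores with one sample per cycle time \(T_{0}\); this is our illustration, not the paper's analysis code.

```python
# Allan deviation at integration time tau = m * T0, from a series q of
# per-cycle signals; for white noise it should scale as 1/sqrt(tau).
import numpy as np

def allan_deviation(q, m):
    n_bins = len(q) // m
    means = np.reshape(q[: n_bins * m], (n_bins, m)).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
q = rng.normal(size=4096)          # synthetic white-noise scores
for m in (1, 4, 16, 64):
    print(m, allan_deviation(q, m))
```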
2302.05322
Numerical Methods For PDEs Over Manifolds Using Spectral Physics Informed Neural Networks
We introduce an approach for solving PDEs over manifolds using physics informed neural networks whose architecture aligns with spectral methods. The networks are trained to take in as input samples of an initial condition, a time stamp and point(s) on the manifold and then output the solution's value at the given time and point(s). We provide proofs of our method for the heat equation on the interval and examples of unique network architectures that are adapted to nonlinear equations on the sphere and the torus. We also show that our spectral-inspired neural network architectures outperform the standard physics informed architectures. Our extensive experimental results include generalization studies where the testing dataset of initial conditions is randomly sampled from a significantly larger space than the training set.
Yuval Zelig, Shai Dekel
2023-02-10T15:33:32Z
http://arxiv.org/abs/2302.05322v3
# Numerical Methods For PDEs Over Manifolds Using Spectral Physics Informed Neural Networks ###### Abstract We introduce an approach for solving PDEs over manifolds using physics informed neural networks whose architecture aligns with spectral methods. The networks are trained to take in as input samples of an initial condition, a time stamp and point(s) on the manifold and then output the solution's value at the given time and point(s). We provide proofs of our method for the heat equation on the interval and examples of unique network architectures that are adapted to nonlinear equations on the sphere and the torus. We also show that our spectral-inspired neural network architectures outperform the standard physics informed architectures. Our extensive experimental results include generalization studies where the testing dataset of initial conditions is randomly sampled from a significantly larger space than the training set. ## 1 Introduction Time-dependent differential equations are a basic tool for understanding many processes in physics, chemistry, biology, economics, and any field that requires the analysis of time-dependent dynamical processes. Therefore, solving these equations is an active area of research [1, 2]. For many of these equations, an analytical solution does not exist and a numerical method must be used. Numerical methods such as finite difference and finite element methods are applied successfully in many scenarios; however, when the PDEs are given on manifolds, discretization processes and the application of time steps can become challenging. In recent years, machine learning methods have emerged, most notably Physics Informed (PI) deep learning models [4],[20]. Physics Informed Neural Networks (PINN) are designed to solve partial differential equations and inverse problems by enforcing the networks to approximately obey the given governing equations through corresponding loss functions during the training phase. This technique allows one to obtain relatively high-quality approximations with relatively small datasets. Various neural network architectures have been developed for this purpose, with different settings and strategies such as automatic differentiation [18], numerical schemes [5], grid-free [3, 4] or grid-dependent approaches [5], and the ability to handle different geometries [19]. In this paper, we present a generalization of spectral-based deep learning methods for PDEs [23],[24],[25],[26]: 1. We provide a physics informed deep learning approach that can handle the general case of differential equations over compact Riemannian manifolds. 2. The design of the networks relies on the paradigm of spectral approximation, where on each manifold we use the corresponding eigenfunction basis of the Laplace-Beltrami operator. Previous works discussing the connection between harmonic analysis and deep learning over manifolds are described in [16, 21]. As we shall see, this allows us to construct neural networks that provide higher accuracy using fewer parameters when benchmarked against standard PINN architectures. 3. Typically, PINNs need to be re-trained for each given initial condition. Our approach allows us to train a single network that can take as input any initial condition from a given subspace of functions over the manifold. The outline for the remainder of this paper is as follows. Section 2 reviews some preliminaries about PINNs and spectral approximation over manifolds. Section 3 describes the key aspects of our approach.
In Section 4 we provide, as a pedagogical example, the theory and details of the method for the simple case of the heat equation over the unit interval. In Sections 5 and 6 we show how our approach is applied to nonlinear equations over the sphere and the torus. Our extensive experimental results include generalization studies where the testing dataset is sampled from a significantly larger space than the training set. We also verify the stability of our models by injecting random noise into the input and validating that the errors increase in a controlled manner. Concluding remarks are found in Section 7. ## 2 Preliminaries ### Physics informed neural networks In this section, we describe the basic approach to PINNs presented in [4]. Generally, the goal is to approximate the solution of a differential equation over a domain \(\Omega\) of the form: \[u_{t}+\mathcal{N}[u]=0,\quad t\in[0,T],\] with some pre-defined initial and/or boundary conditions. Typically, a PINN \(\tilde{u}(x,t)\) is realized using a Multi Layer Perceptron (MLP) architecture. This is a feed-forward network where each \(j\)-th layer takes as input the vector \(v_{j-1}\) which is the output of the previous layer, applies to it an affine transformation \(y=M_{j}v+b_{j}\) and then a coordinate-wise nonlinearity \(\sigma\) to produce the layer's output \(v_{j}\) \[v_{j}=\sigma\circ(M_{j}v_{j-1}+b_{j}). \tag{1}\] In some architectures, the bias vector \(b_{j}\) and/or the coordinate-wise nonlinearity \(\sigma\) are not applied in certain layers. In a standard PINN architecture, the input to the network \(\tilde{u}\) is \(v_{0}=(x,t)\). The unknown parameters of the network are the collection of weights \(\{M_{j},b_{j}\}_{j}\) and the network is trained to minimize the following loss function: \[MSE_{B}+MSE_{D},\] with the boundary/initial value loss \[MSE_{B}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}|\tilde{u}(x_{i}^{b},t_{i}^{b})-u(x_{i}^{b},t_{i}^{b})|^{2},\] and the differential loss \[MSE_{D}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}|(\tilde{u}_{t}+\mathcal{N}[\tilde{u}])(x_{i}^{d},t_{i}^{d})|^{2}.\] In the above, \(\{(x_{i}^{b},t_{i}^{b})\}_{i=1}^{N_{b}}\) is a discretized set of time and space points, where each \(u(x_{i}^{b},t_{i}^{b})\) is the true given initial or boundary value at \((x_{i}^{b},t_{i}^{b})\). The set \(\{(x_{i}^{d},t_{i}^{d})\}_{i=1}^{N_{d}}\) typically contains randomly distributed internal domain collocation points. Since the architecture of the neural network is given analytically (as in (1) for the case of an MLP), the value \((\tilde{u}_{t}+\mathcal{N}[\tilde{u}])|_{(x_{i}^{d},t_{i}^{d})}\) at a data-point \((x_{i}^{d},t_{i}^{d})\) can be computed using the automatic differentiation feature of software packages such as TensorFlow and PyTorch [6, 7] (in our work we used TensorFlow). Thus, the aggregated loss function enforces the approximating function \(\tilde{u}\) to satisfy the required initial and boundary conditions as well as the differential equation. ### Spectral decompositions over manifolds We recall a fundamental result in the spectral theory over manifolds regarding the spectrum of the Laplace-Beltrami operator \(\Delta\) [8, Theorem 10.13] **Theorem 1**: _Let \(\Omega\) be a non-empty compact relatively open subset of a Riemannian manifold \(\mathcal{M}\) with metric \(g\) and measure \(\mu\)._
The spectrum of \(\mathcal{L}:=-\Delta\) on \(\Omega\) is discrete and consists of an increasing sequence \(\{\lambda_{k}\}_{k=1}^{\infty}\) of non-negative eigenvalues (with multiplicity) such that \(\lim_{k\to\infty}\lambda_{k}=\infty\). There is an orthonormal basis \(\{\phi_{k}\}_{k=1}^{\infty}\) in \(L_{2}(\Omega)\) such that each function \(\phi_{k}\) is an eigenfunction of \(-\Delta\) with eigenvalue \(\lambda_{k}\). Moreover, if we wish to solve the heat equation \(u_{t}=\Delta u\) on \(\Omega\) with initial condition \(u(x,0)=f(x),\ f\in L_{2}(\Omega)\), the solution is given by:_ \[u(x,t)=\sum_{k=1}^{\infty}e^{-\lambda_{k}t}\langle f,\phi_{k}\rangle\phi_{k}(x).\] This well-established result motivates the following spectral paradigm. To solve the heat equation with some initial condition, one should first decompose the initial condition function into a linear combination of the eigenfunction basis and then apply a time-dependent exponential decay on the initial value coefficients. An approximation entails working with the subspace spanned by \(\{\phi_{k}\}_{k=1}^{K}\), for some sufficiently large \(K\) (see e.g. Theorem 4 below). For a general manifold \(\mathcal{M}\), the eigenfunctions do not necessarily have an analytic form and need to be approximated numerically. As we will show, we also follow the spectral paradigm for more challenging cases of nonlinear equations over manifolds, where the time dependent processing of the initial value coefficients is not obvious. Nevertheless, a carefully crafted architecture can provide superior results over standard PINNs. ## 3 The architecture of spectral PINNs Let \(\mathcal{M}\subset\mathbb{R}^{n}\) be a Riemannian manifold, \(\Omega\subset\mathcal{M}\) a non-empty compact relatively open subset and \(\mathcal{N}\) a differential operator over this manifold, which can possibly be nonlinear. We assume our family of initial conditions comes from a finite-dimensional space \(W\subset L_{2}(\Omega)\), which can be selected to be sufficiently large. Given a vector of samples \(\vec{f}\) of \(f\in W\) over a fixed discrete subset of \(\Omega\), a point \(x\in\mathcal{M}\) and \(t\in[0,T]\), we would like to find an approximation \(\tilde{u}(\vec{f},x,t)\), given by a trained neural network, to the solution \[u_{t}+\mathcal{N}[u]=0,\] \[u(x,t=0)=f(x),\ \forall x\in\Omega.\] Recall that typically PI networks are trained to approximate a solution for a single specific initial condition (such as in [4]). However, we emphasize that our neural network model is trained only once for the family of initial conditions from the subspace \(W\) and that once trained, it can be used to solve the equation with any initial condition from \(W\). Moreover, as we demonstrate in our experimental results, the trained network has the 'generalization' property, since it is able to approximate well the solutions when the initial value functions are randomly sampled from a larger space containing \(W\). Our method takes inspiration from spectral methods for solving PDEs. It is composed of 3 steps implemented by 3 blocks, as depicted in Figure 1: 1. **Transformation Block -** The role of this block is to compute from the samples \(\vec{f}\) of the initial value condition a 'projection' onto \(W_{K}=span\{\phi_{k}\}_{k=1}^{K}\), for some given \(K\), where \(\{\phi_{k}\}_{k=1}^{\infty}\) are the eigenfunctions of the Laplace-Beltrami operator on the manifold. 
We denote this block as \(\tilde{\mathcal{C}}:\vec{W}\to\mathbb{R}^{K}\), where \(\vec{W}\) is a subset of \(\mathbb{R}^{L}\) which contains sampling vectors of functions from \(W\) over a fixed discrete subset of \(\Omega\). The desired output of the block is an estimation \(\{\tilde{f}_{k}\}_{k=1}^{K}\) of the coefficients \(\{\langle f,\phi_{k}\rangle\}_{k=1}^{K}\). However, in cases where it is difficult to work with the spectral basis, one can train an encoder to transform the input samples to a compressed representation space of dimension \(K\). Also, although the network is trained on point samples from \(W\), it is able to receive as input a sample vector \(\vec{f}\) of a function \(f\) which is from a larger space containing \(W\) and approximate the solution. In most cases, it is advantageous to have the choice of the sampling set and the quantities \(L\) and \(K\) determined by 'Nyquist-Shannon'-type theorems for the given subspaces \(W\), \(W_{K}\) and the manifold. In the scenario where \(W=W_{K}\) and the sampling set of size \(L\) is selected to provide perfect 'Shannon'-type reconstruction, the transformation block may take the form of a simple linear transformation. In complex cases, where we have no prior knowledge about the required sampling rate or we do not have perfect reconstruction from the samples, we train a transformation block \(\tilde{\mathcal{C}}\) that is optimized to perform a nonlinear 'projection' based on a carefully selected training set. 2. **Time Stepping Block -** In this block we apply a neural network that takes as input the output of the transformation block \(\tilde{\mathcal{C}}(\vec{f})\), which may be the approximation of the spectral basis coefficients \(\{\tilde{f}_{k}\}_{k=1}^{K}\), and a time stamp \(t\), to compute a time dependent representation. We denote this block as \(\tilde{\mathcal{D}}:\mathbb{R}^{K}\times[0,T]\rightarrow\mathbb{R}^{K}\). 3. **Reconstruction Block -** In this block we apply an additional neural network on the output of the time stepping block \(\tilde{\mathcal{D}}\), together with the given input point \(x\in\Omega\), to provide an estimate \(\tilde{u}(x,t)\) of the solution \(u(x,t)\). We denote this block as \(\tilde{\mathcal{R}}:\mathbb{R}^{K}\times\Omega\rightarrow\mathbb{R}\). Thus, our method is in fact a composition of the 3 blocks \(\tilde{u}:\vec{W}\times\Omega\times[0,T]\rightarrow\mathbb{R}\) \[\tilde{u}(\vec{f},x,t)=\tilde{\mathcal{R}}(\tilde{\mathcal{D}}(t,\tilde{\mathcal{C}}(\vec{f})),x).\] Observe that in scenarios where one requires multiple evaluations at different locations \(\{\tilde{u}(\vec{f},x_{i},t)\}_{i}\), \(x_{i}\in\Omega\), at a given time step \(t\in[0,T]\), one may compute the output of the time stepping block \(\tilde{\mathcal{D}}\) once and use it multiple times for all \(\{x_{i}\}_{i}\), thereby reducing the total computation time. Figure 1: General description of our method ## 4 Introduction of the spectral PINN for the heat equation over \(\Omega=[0,1]\) We first review the prototype case of the heat equation on the unit interval, where we can provide rigorous proofs for our method as well as showcase simple realization versions of our spectral network construction. 
Recall the heat equation: \[u_{t}=\alpha u_{xx},\quad x\in[0,1],t\in[0,0.5],\] with initial time condition: \[u(x,t=0)=f(x),\ x\in[0,1].\] ### Architecture and theory for the heat equation over \(\Omega=[0,1]\) The analytic solution to this equation can be computed in 3 steps that are aligned with the 3 blocks of our architecture. Assume the initial condition \(f:[0,1]\rightarrow\mathbb{R}\) has the following spectral representation \[f(x)=\sum_{k=1}^{\infty}c_{k}\sin(2\pi kx).\] Next, apply the following transformation on the coefficients for a given time step \(t\) \[\mathcal{D}(t,c_{1},c_{2},...):=(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2^{2}\alpha t}c_{2},...).\] Finally, evaluate the time dependent representation at the point \(x\): \[u(x,t)=\mathcal{R}(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2^{2}\alpha t}c_{2},...,x):=\sum_{k=1}^{\infty}e^{-4\pi^{2}k^{2}\alpha t}c_{k}\sin(2\pi kx).\] We now proceed to provide the details of the numerical spectral PINN approach in this scenario. First, we select as an example \(K=20\) and \(W=W_{20}\), where \[W_{20}:=\left\{\sum_{k=1}^{20}c_{k}\sin(2\pi kx),\quad c_{1},...,c_{20}\in[-1,1],\sqrt{c_{1}^{2}+...+c_{20}^{2}}=1\right\}.\] We sample each \(f\in W_{20}\) using \(L=101\) equispaced points in the segment \([0,1]\) to compute a vector \(\vec{f}\). For the training of the networks we use a loss function which is a sum of two loss terms \(L_{0}+L_{D}\). The loss \(L_{0}\) enforces the solution to satisfy the initial time condition \[L_{0}(\theta)=\frac{1}{101N}\sum_{i=1}^{N}\sum_{j=0}^{100}|\tilde{u}_{\theta}(\vec{f}_{i},X_{j},0)-(\vec{f_{i}})_{j}|^{2} \tag{2}\] where \(\tilde{u}_{\theta}\) is the model with weights \(\theta\) and \((\vec{f}_{i})_{j}\) is the value of \(f_{i}\) at \(X_{j}=\frac{1}{100}j\). For the second loss term we randomly generate \(N=5,000\) triples \((\vec{f}_{i},x_{i},t_{i})_{i=1}^{N}\) and enforce the model to obey the differential condition \[L_{D}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\partial\tilde{u}_{\theta}(\vec{f}_{i},x_{i},t_{i})}{\partial t}-\alpha\frac{\partial^{2}\tilde{u}_{\theta}(\vec{f}_{i},x_{i},t_{i})}{\partial x^{2}}\right|^{2}. \tag{3}\] The derivatives of the given neural network approximation in (3) are calculated using the automatic differentiation capabilities of deep learning frameworks. In this work we use TensorFlow [6]. We compare two PINN architectures that provide an approximation to the solution \(u\): 1. **The naive model -** We benchmark our method with a deep learning model which is a standard MLP neural network that takes in as input \((\vec{f},t,x)\in\mathbb{R}^{103}\) and outputs an approximation. This model is trained to be PI using the loss function \(L_{0}+L_{D}\), where the two terms are defined in (2) and (3). The network is composed of \(5\) dense layers \(\mathbb{R}^{103}\rightarrow\mathbb{R}^{103}\) and finally a dense layer \(\mathbb{R}^{103}\rightarrow\mathbb{R}\). Each of the first five dense layers is followed by a non-linear activation function. Typically, the Rectified Linear Unit (ReLU) \(\sigma(x)=(x)_{+}\) is a popular choice as the nonlinear activation for MLP networks [12]. However, it is not suitable in this case, since its second derivative is almost everywhere zero. Therefore we use \(\tanh\) as the nonlinear activation function. 2. **The spectral model -** In some sense, our spectral model \(\tilde{u}\) is 'strongly' physics informed. 
Exactly like the naive model, it is also trained using the loss functions (2) and (3) to provide solutions to the heat equation. However, its architecture is different from the naive architecture, in that it is modeled to match the spectral method. The spectral model \(\tilde{u}\) approximates \(u\) using the \(3\) blocks of the spectral paradigm approximation presented in the previous section. We now provide the details of the architecture and support our choice of design with rigorous proofs. 1. **Sine transformation block** This block receives as input a sampling vector \(\vec{f}\) and returns the sine transformation coefficients. Due to the high sampling rate \(L=101\), compared with the maximal frequency \(K=20\), the sampled function \(f\) can be fully reconstructed from \(\vec{f}\) and this operation can be realized perfectly using the Nyquist-Shannon sampling formula. However, so as to simulate a scenario on a manifold where the sampling formula cannot be applied, we train a network to apply the transformation. To this end, we created \(1,000\) initial value conditions using trigonometric polynomials of degree \(20\), and trained this block to extract the coefficients of those polynomials. In other words, we pre-trained \(\tilde{\mathcal{C}}:\mathbb{R}^{101}\rightarrow\mathbb{R}^{20}\) for the following task: \[\tilde{\mathcal{C}}(\vec{f})=(c_{1},...,c_{20}),\] where \(\vec{f}\) is the sampling vector of the function \[f(x)=\sum_{k=1}^{20}c_{k}\sin(2\pi kx).\] In this simple case where \(\Omega=[0,1]\), the network can simply be composed of one dense layer with no nonlinear activation, which amounts to learning a transformation matrix from samples to coefficients. In more difficult cases, the architecture of the network will be more complex. 2. **Time stepping block** The time stepping block should approximate the function: \[\mathcal{D}(t,c_{1},...,c_{20})=(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2^{2}\alpha t}c_{2},...,e^{-4\pi^{2}20^{2}\alpha t}c_{20}).\] (4) We consider \(2\) architectures for this block: **Realization time stepping block:** In the case of the heat equation we know exactly how the time stepping block should operate and so we can design a true realization. The first layer computes \[t\rightarrow(-4\pi^{2}\alpha t,-4\pi^{2}2^{2}\alpha t,...,-4\pi^{2}20^{2}\alpha t).\] The second layer applies the exponential nonlinearity \[(-4\pi^{2}\alpha t,-4\pi^{2}\cdot 2^{2}\alpha t,...,-4\pi^{2}20^{2}\alpha t)\rightarrow(e^{-4\pi^{2}\alpha t},e^{-4\pi^{2}\cdot 2^{2}\alpha t},...,e^{-4\pi^{2}20^{2}\alpha t}).\] Finally, we element-wise multiply the output of the second layer with \((c_{1},...,c_{20})\) to output the time dependent spectral representation (4). **Approximate time stepping block:** In the case of general manifolds we may not be able to fully realize the time stepping block. 
Therefore, we examine the consequences of using an MLP network that approximates, for a given \(K\geq 1\), \[\mathcal{D}(t,c_{1},...,c_{K}):=(e^{-4\pi^{2}\alpha t}c_{1},...,e^{-4\pi^{2}K^{2}\alpha t}c_{K}).\] We prove the following theorem (see the appendix for proofs): **Theorem 2**.: _For any \(\epsilon>0\) and \(K\geq 1\) there exists an MLP network \(\tilde{\mathcal{D}}\), consisting of dense layers and \(\tanh\) as an activation function, with \(O(K^{3}+K\log^{2}(\epsilon^{-1}))\) weights such that_ \[\|\tilde{\mathcal{D}}(t,c_{1},...,c_{K})-\mathcal{D}(t,c_{1},...,c_{K})\|_{\infty}\leq\epsilon,\] _for all inputs \(c_{1},...,c_{K}\in[-1,1],t\in[0,1]\)._ We remark that it is possible to approximate \(\mathcal{D}\) using ReLU as the nonlinear activation, as shown in [13]. However, recall that ReLU is not suitable for our second order differential loss function (3). In the experiments below, the approximating MLP time stepping block is composed of 5 layers. 3. **Reconstruction Block** The reconstruction block should operate as follows: \[\mathcal{R}(a_{1},....,a_{K},x)=\sum_{k=1}^{K}a_{k}\sin(2\pi kx).\] In the case of the heat equation, for given \(t\in[0,1]\), the coefficients \(\{a_{k}\}_{k=1}^{K}\) are \(\{e^{-4\pi^{2}k^{2}\alpha t}c_{k}\}_{k=1}^{K}\) or an approximation to these coefficients. Here too, one can design a realization block which uses the sine function as a nonlinearity. To support the general case we have the following result **Theorem 3**.: _For fixed \(A>0\), \(K\geq 1\) and any \(\epsilon>0\), there exists an MLP network \(\tilde{\mathcal{R}}\), consisting of dense layers and \(\tanh\) as an activation function, with \(O(K^{2}+K\log^{2}(K\epsilon^{-1}))\) weights for which_ \[|\tilde{\mathcal{R}}(a_{1},....,a_{K},x)-\mathcal{R}(a_{1},....,a_{K},x)|\leq\epsilon,\] _where \(a_{1},...,a_{K}\in[-A,A],x\in[0,1]\)._ In the experiments below, the approximating MLP reconstruction block is composed of 5 layers. Using Theorems 2 and 3, we can prove a general theorem that provides an estimate for the approximation error of an MLP network. We first give the definition of Sobolev spaces [22]: **Definition 1**.: Let \(\Omega\subset\mathbb{R}^{n}\) and let \(C_{0}^{r}(\Omega)\) be the space of \(r\)-times continuously differentiable functions with compact support. For \(1\leq p<\infty\), the Sobolev space \(W_{p}^{r}(\Omega)\) is the completion of \(C_{0}^{r}(\Omega)\) with respect to the norm \[\|f\|_{W_{p}^{r}(\Omega)}=\sum_{|\alpha|\leq r}\|\partial^{\alpha}f\|_{L_{p}(\Omega)},\] where \(\partial^{\alpha}f=\frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{n}^{\alpha_{n}}},\ |\alpha|=\sum_{i=1}^{n}\alpha_{i}\). With this definition at hand we are ready to state a result on the approximation capabilities of our spectral architecture when MLP networks are used to approximate the spectral realization. **Theorem 4**.: _Let \(r\in\mathbb{N}\). 
For any \(\epsilon>0\) there exists an MLP neural network \(\tilde{u}\), with \(\tanh\) non-linearities and \(O(\epsilon^{-3/r}+\epsilon^{-1/r}\log^{2}(\epsilon^{-(1+1/r)}))\) weights (the constant depends on \(r\)) for which the following holds: For any \(f\in W_{2}^{r}([0,1])\), \(f=\sum_{k=1}^{\infty}c_{k}\sin(2\pi kx)\), \(\|f^{(r)}\|_{2}\leq 1\) and \(u\), the solution to the heat equation on \(\Omega=[0,1]\) with the initial condition \(f\), the network \(\tilde{u}\) takes the input \(\{c_{k}\}_{k=1}^{K}\), \(K\leq c\epsilon^{-1/r}\) and provides the estimate_ \[\|u(f,\cdot,t)-\tilde{u}(f,\cdot,t)\|_{L_{2}[0,1]}\leq\epsilon,\qquad\forall t\in[0,1].\] ### Experimental Results We benchmark 4 models: the naive PINN model with a vanilla MLP architecture consisting of 6 layers, and 3 variations of the spectral model with the various blocks realized or approximated. The training was performed using \(5,000\) and \(25,000\) samples of the form \((\vec{f},x,t)\), where \(\vec{f}\) is a sampling vector of a trigonometric polynomial of degree 20 on 101 equispaced points in the segment \([0,1]\), with \(t\in[0,0.5]\). To ensure that the solution decays slowly over time, we used \(\alpha=0.01\). The comparison of the averaged Mean Squared Error (MSE) of the 4 models over 20 randomly sampled test initial conditions is presented in Table 1. In Figure 2 we plot the norm of the error at different time steps \[Error(t)=\sum_{i=1}^{20}\frac{1}{101}\sqrt{\sum_{k=0}^{100}\bigg{|}\tilde{u}_{model}(\vec{f}_{i},\frac{1}{100}k,t)-u(f_{i},\frac{1}{100}k,t)\bigg{|}^{2}}.\] We show some examples of the exact solution \(u\) and the output of the different neural network solutions at different times and with several initial conditions in Figure 3. In addition, we performed generalization and stability analysis for the different architectures. To evaluate the ability of our networks to generalize beyond the training space of polynomials of degree 20, we tested the different networks using initial conditions from a space of polynomials of degree 30. Namely, \[W_{30}=\left\{\sum_{k=1}^{30}c_{k}\sin(2\pi kx),\quad c_{1},...,c_{30}\in[-1,1],\sqrt{c_{1}^{2}+...+c_{30}^{2}}=1\right\}.\] To evaluate the stability of our networks, we add normal random noise with mean 0 and variance 0.001 to the initial condition sample vectors and evaluate at different time stamps using the following normalized metric \[\frac{\|\tilde{u}_{model}(\vec{f}+\vec{\delta},\cdot,t)-\tilde{u}_{model}(\vec{f},\cdot,t)\|_{2}}{\|\delta\|_{2}} \tag{5}\] where \(\delta_{i}\sim N(0,0.001)\). The results of the generalization test can be found in Table 2, and the results (averaged over 20 random initial conditions) for the stability test can be found in Table 3. 
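To make the physics informed losses concrete, the following is a minimal TensorFlow sketch (ours, for illustration; not the authors' released code) of the differential loss \(L_{D}\) of Equation (3), computed with nested gradient tapes. The stand-in network and batch shapes are illustrative assumptions.

```python
import tensorflow as tf

ALPHA = 0.01  # diffusion coefficient, as in the experiments above

# A stand-in network: any model mapping (f_vec, x, t) -> u_tilde will do.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(103, activation="tanh") for _ in range(5)]
    + [tf.keras.layers.Dense(1)]
)

def differential_loss(f_vec, x, t):
    """Monte Carlo estimate of L_D in Equation (3).

    f_vec: (N, 101) sampled initial conditions; x, t: (N, 1) collocation points.
    """
    with tf.GradientTape() as outer:          # records u_x for the second derivative
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([x, t])
            u = model(tf.concat([f_vec, x, t], axis=1))
        u_t = inner.gradient(u, t)            # du/dt
        u_x = inner.gradient(u, x)            # du/dx
    u_xx = outer.gradient(u_x, x)             # d^2u/dx^2
    return tf.reduce_mean(tf.square(u_t - ALPHA * u_xx))

# Example call on 8 random collocation triples.
f_vec = tf.random.uniform((8, 101))
x = tf.random.uniform((8, 1))
t = tf.random.uniform((8, 1), maxval=0.5)
print(differential_loss(f_vec, x, t))
```

The same nested-tape pattern extends to the Laplace-Beltrami operators used on the sphere and torus in the later sections.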
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model & Model Architecture & \#Model weights & Testing MSE: & Testing MSE: \\ number in & & & 5,000 training & 25,000 training \\ plots & & & samples & samples \\ \hline \hline 1 & Naive Model & 53,664 & 1.3e-4 & 1.19e-4 \\ \hline 2 & Spectral model - full & **2,960** & **9.0e-6** & **8.3e-6** \\ & realization (time stepping and reconstruction & & & \\ & blocks) & & & \\ \hline 3 & Spectral model - MLP & 11,980 & 5.7e-5 & 4.9e-5 \\ & approximation of time & & & \\ & stepping block, realization of & & & \\ & reconstruction block & & & \\ \hline 4 & Spectral model - & 10,401 & 2.9e-5 & 2.87e-5 \\ & realization of time & & & \\ & stepping block, MLP & & & \\ & approximation of & & & \\ & reconstruction block & & & \\ \hline \end{tabular} \end{table} Table 1: Heat equation over \(\Omega=[0,1]\) - Comparison of the standard naive PINN model with 3 variants of our spectral PINN model \begin{table} \begin{tabular}{|c|c|c|} \hline Model number in & Model Architecture & MSE \\ plots & & \\ \hline \hline 1 & Naive Model & 1.0e-3 \\ \hline 2 & Spectral model - full & **7.1e-4** \\ & realization (time stepping & \\ & and reconstruction & \\ & blocks) & \\ \hline 3 & Spectral model - MLP & 8.1e-4 \\ & approximation of time & \\ & stepping block, & \\ & realization of & \\ & reconstruction block & \\ \hline 4 & Spectral model - & 7.4e-4 \\ & realization of time & \\ & stepping block, MLP & \\ & approximation of & \\ & reconstruction block & \\ \hline \end{tabular} \end{table} Table 2: Heat equation on \([0,1]\) - generalization results Figure 2: Heat equation on \([0,1]\) - Error versus time Figure 3: Heat equation over \([0,1]\) - comparisons of the ground truth solution and the different neural network solutions with different initial conditions and at different times In both tests, we can observe that all spectral model variants outperform the naive model. The theoretical and empirical results for the simple case of the heat equation over \(\Omega=[0,1]\) motivate us to establish guidelines for designing spectral PINN networks in much more complicated scenarios. Namely, we should try to realize the various blocks, approximate them, or at least design their sub-components to include elements of the realization, such as nonlinear activations relating to the spectral basis. ## 5 The sphere \(\mathbb{S}^{2}\) In this section, we demonstrate our method in a more challenging setup, a nonlinear equation on a curved manifold. The Allen-Cahn equation over the sphere \(\mathbb{S}^{2}\) is defined as follows [14]: \[u_{t}=0.1\Delta u+u-u^{3},\;t>0, \tag{6}\] where the Laplace-Beltrami operator is \[\Delta=\frac{\partial^{2}}{\partial\theta^{2}}+\frac{\cos\theta}{\sin\theta}\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}},\] with \(\phi\) the azimuth angle and \(\theta\) the polar angle. ### Theory and spectral PINN architecture for the Allen-Cahn equation on \(\mathbb{S}^{2}\) On \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\) the spectral basis consists of the spherical harmonic functions [9]: **Definition 2**.: The spherical harmonic function of degree \(l\) and order \(m\) is given by: \[Y_{l}^{m}(\theta,\phi)=(-1)^{m}\sqrt{\frac{(2l+1)}{4\pi}\frac{(l-m)!}{(l+m)!}}P_{l}^{m}(\cos\theta)e^{im\phi},\] where \(\theta\in[0,\pi]\) is the polar angle, \(\phi\in[0,2\pi)\) is the azimuth angle and \(P_{l}^{m}:[-1,1]\rightarrow\mathbb{R}\) is the associated Legendre polynomial. 
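For readers who wish to reproduce the spherical harmonic computations, the following sketch (ours) evaluates the real harmonics \(Y_{lm}\) defined below using SciPy. Note that `scipy.special.sph_harm` takes the azimuth angle first, and we assume its complex harmonics match the convention of the definition above; depending on the Condon-Shortley phase convention, signs may differ by a factor of \((-1)^{m}\).

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_{lm}.

    theta: polar angle in [0, pi]; phi: azimuth angle in [0, 2*pi).
    scipy's sph_harm(m, l, az, polar) takes the azimuth FIRST, hence
    the swapped arguments below.
    """
    Y = sph_harm(abs(m), l, phi, theta)  # complex Y_l^{|m|}
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * Y.imag
    if m == 0:
        return Y.real
    return np.sqrt(2.0) * (-1) ** m * Y.real

# Sanity check: approximate L2(S^2) orthonormality on a quadrature grid.
th = np.linspace(0, np.pi, 200)
ph = np.linspace(0, 2 * np.pi, 400, endpoint=False)
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0])  # surface measure weights
Y21 = real_sph_harm(2, 1, TH, PH)
Y22 = real_sph_harm(2, 2, TH, PH)
print(np.sum(Y21 * Y21 * w), np.sum(Y21 * Y22 * w))  # ~1 and ~0
```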
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model number in & Model Architecture & \(T=0.2\) & \(T=0.4\) & \(T=0.5\) \\ \hline \hline 1 & Naive Model & 3.53 & 3.55 & 3.66 \\ \hline 2 & Spectral model - full & 0.95 & 0.73 & 0.67 \\ & realization (time stepping and reconstruction & & & \\ & blocks) & & & \\ \hline 3 & Spectral model - MLP & 0.97 & 0.79 & 0.75 \\ & approximation of time & & & \\ & stepping block, & & & \\ & realization of reconstruction block & & & \\ \hline 4 & Spectral model - & 0.95 & 0.75 & 0.68 \\ & realization of time & & & \\ & stepping block, MLP & & & \\ & approximation of & & & \\ & reconstruction block & & & \\ \hline \end{tabular} \end{table} Table 3: Heat equation on \([0,1]\) - stability test results using the normalized metric (5) Each spherical harmonic function is an eigenfunction of the Laplace-Beltrami operator satisfying \[\Delta Y_{l}^{m}=-l(l+1)Y_{l}^{m}.\] In our work, for simplicity, we use the real version of the spherical harmonics, defined by: \[Y_{lm}=\begin{cases}\sqrt{2}(-1)^{m}Im(Y_{l}^{|m|}),&-l\leq m<0,\\ Y_{l}^{0},&m=0,\\ \sqrt{2}(-1)^{m}Re(Y_{l}^{m}),&0<m\leq l.\end{cases}\] The inputs to our networks are of the form \((F,(\theta,\phi),t)\), where \(F\in\mathbb{R}^{20\times 20}\) is a sampling matrix of the initial condition on an equispaced azimuth-polar grid of a spherical function, \(\theta\in[0,\pi],\phi\in[0,2\pi)\) are the coordinates of a point on the sphere and \(t\in[0,1]\). The loss functions are similar to the loss functions used in Section 4, with the required modifications, such as for the differential loss term \[L_{D}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\partial\tilde{u}_{\theta}(F_{i},x_{i},t_{i})}{\partial t}-(0.1\Delta\tilde{u}_{\theta}+\tilde{u}_{\theta}-\tilde{u}_{\theta}^{3})(F_{i},x_{i},t_{i})\right|^{2}. \tag{7}\] Our goal is to construct a spectral PINN architecture that will outperform the naive PINN architecture. Here are the details of the 3 blocks of the spectral model: 1. **Transformation Block** This block receives as input a flattened sampling matrix \(\vec{F}\in\mathbb{R}^{400}\) of the initial condition and returns the 100 spherical harmonic coefficients up to degree 9. By [17, Theorem 3], under these conditions spherical harmonics up to degree 9 can be perfectly reconstructed. Thus, training one dense linear layer with sufficient samples recovers the perfect reconstruction formula. In other words, we pre-trained \(\tilde{\mathcal{C}}:\mathbb{R}^{400}\rightarrow\mathbb{R}^{100}\) for the following task: \[\tilde{\mathcal{C}}(\vec{F})=(c_{0,0},c_{1,-1},c_{1,0},c_{1,1},...,c_{9,-9},...,c_{9,0},...,c_{9,9}),\] where \(\vec{F}\) is a flattened sampling matrix of the function \[\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{l,m}Y_{lm}(\theta,\phi).\] 2. **Time Stepping Block** Unlike the heat equation on the unit interval, the Allen-Cahn equation (6) on the sphere does not admit an analytic spectral solution. Nevertheless, we design an architecture that follows the spectral paradigm and compare it with a standard PINN MLP architecture. We test our hypothesis by conducting an ablation study using three candidate architectures for the time stepping block: 1. **Input of Allen-Cahn Nonlinear Part** In this architecture, we further adapt the design to the nature of the equation, specifically to the non-linear part of the Allen-Cahn equation. 
Thus, in this variant, the input to the time stepping block is composed of: the transformation of the initial condition, the transformation of the nonlinear part of the initial condition and the time variable \((\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t)\). Therefore the time stepping block is defined as \[\tilde{\mathcal{D}}:\mathbb{R}^{100}\times\mathbb{R}^{100}\times[0,T]\rightarrow\mathbb{R}^{100},\] where \[\tilde{\mathcal{D}}(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t)=(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t)).\] With the additional input of the non-linear part, this variant of the time stepping block is a sum of two sub-blocks \(\tilde{\mathcal{D}}=\tilde{\mathcal{D}}_{1}+\tilde{\mathcal{D}}_{2}\). The component \(\tilde{\mathcal{D}}_{1}\) is a sub-block designed to capture an exponential dynamic of the solution across time. The sub-block \(\tilde{\mathcal{D}}_{2}\) is a standard PINN sub-block. The exponential sub-block \(\tilde{\mathcal{D}}_{1}\) is defined by \[\tilde{\mathcal{D}}_{1}(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t)=e^{\tilde{\mathcal{D}}_{1,1}(t)}\odot\tilde{\mathcal{D}}_{1,2}(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3})),\] where \(\odot\) is element-wise vector multiplication. The component \(\tilde{\mathcal{D}}_{1,1}:\mathbb{R}\rightarrow\mathbb{R}^{100}\) is a simple dense layer with no bias, i.e. \(\tilde{\mathcal{D}}_{1,1}(t)=V\cdot t\) where \(V\in\mathbb{R}^{100}\) is a learnable vector. The component \(\tilde{\mathcal{D}}_{1,2}:\mathbb{R}^{100}\times\mathbb{R}^{100}\rightarrow\mathbb{R}^{100}\) is an MLP subnetwork with 6 layers with \(\tanh\) activations. Finally, the sub-block \(\tilde{\mathcal{D}}_{2}\) is also an MLP subnetwork with 6 layers and \(\tanh\) activations. The full architecture with this time stepping variant is depicted in Figure 4. 2. **Standard Exponential Block** This variant of the time stepping block is similar to the one described in (a), but without the non-linear input \(\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3})\). Thus, the subnetwork capturing the exponential behavior takes the form \[\tilde{\mathcal{D}}_{1}(\tilde{\mathcal{C}}(\vec{F}),t)=e^{\tilde{\mathcal{D}}_{1,1}(t)}\odot\tilde{\mathcal{D}}_{1,2}(\tilde{\mathcal{C}}(\vec{F})).\] In this variant we add 5 more dense layers to this block, as each layer requires fewer weights. 3. **Naive MLP Time Stepping Block** In this variant of the time stepping block, the input is \((\tilde{\mathcal{C}}(\vec{F}),t)\) and the architecture is a simple MLP block of 12 layers with \(\tanh\) activation functions. 3. **Reconstruction Block** The heuristic of our spectral approach is that the output of the time stepping block should be (once trained) a representation space resembling the coefficients of the spectral basis at the given time. Therefore, we design the reconstruction block to be composed of dense layers, but we use activation functions of the form \(\sin^{l},\cos^{l}\), \(0\leq l\leq 9\), on the input data point \((\theta,\phi)\), since these activation functions are the building blocks of the spherical harmonic functions. Figure 4: Full architecture for the spherical setting - the red arrows are used only in variant (a) of the time stepping block 
To this end, we first apply two subnetworks on the data point \((\theta,\phi)\) \[\mathcal{R}_{l,\sin,0}(\theta,\phi),\mathcal{R}_{l,\cos,0}(\theta,\phi):\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}.\] We then apply on their output, component-wise, the spectral activation functions \[\sin^{l}\circ\mathcal{R}_{l,\sin,0}(\theta,\phi),\quad\cos^{l}\circ\mathcal{R}_{l,\cos,0}(\theta,\phi),\quad 0\leq l\leq 9.\] Next we apply dense layers on the output of the activation functions \[\mathcal{R}_{l,\sin,1},\mathcal{R}_{l,\cos,1}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{100},\quad 0\leq l\leq 9.\] We assemble these pieces to produce a subnetwork \(\mathcal{R}_{loc}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{100}\) \[\mathcal{R}_{loc}(\theta,\phi)=\sum_{l=0}^{9}\mathcal{R}_{l,\sin,1}(\sin^{l}\circ\mathcal{R}_{l,\sin,0}(\theta,\phi))\odot\mathcal{R}_{l,\cos,1}(\cos^{l}\circ\mathcal{R}_{l,\cos,0}(\theta,\phi)),\] where \(\odot\) is element-wise vector multiplication. Separately, we apply a subnetwork \(\mathcal{R}_{d}:\mathbb{R}^{100}\rightarrow\mathbb{R}^{100}\) on the output of the time stepping block. Finally, our reconstruction network \(\tilde{\mathcal{R}}\) is a dot-product between the outputs of \(\mathcal{R}_{d}\) and \(\mathcal{R}_{loc}\) \[\tilde{\mathcal{R}}(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t),\theta,\phi)\\ =\langle\mathcal{R}_{d}(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t)),\mathcal{R}_{loc}(\theta,\phi)\rangle.\] ### Experimental Results We generated training data consisting of \(N=5,000\) randomly chosen samples of the form \((\vec{F},(\theta,\phi),t)\), where \(\vec{F}\) is a flattened sampling matrix of initial conditions randomly sampled from \[W=\left\{\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\theta,\phi),\qquad c_{lm}\in[-1,1],\sqrt{\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{lm}^{2}}=1\right\},\] on the equispaced grid \[\theta_{j}=\frac{\pi}{19}j,\;j\in\{0,...,19\},\qquad\phi_{k}=\frac{2\pi}{20}k,\;k\in\{0,...,19\}.\] During the training of the spectral model we used several techniques to improve the results: 1. Pre-training the transformation block and the reconstruction block separately before training the full model, using the PI loss function \[\frac{1}{N}\sum_{i=1}^{N}\left|F(\theta_{i},\phi_{i})-\tilde{\mathcal{R}}(\tilde{\mathcal{C}}(\vec{F}_{i}),(\theta_{i},\phi_{i}))\right|^{2}.\] 2. When training the full model, we started the first 20 epochs by freezing the weights of the transformation and reconstruction blocks that were pre-trained separately in (1) and training only the time stepping block. We observed that this technique, where the transformation and reconstruction blocks are pre-trained and then kept constant for the first epochs, provides better initialization of the time stepping block and overall better results. In this stage of the training, we used a loss function containing three terms. In addition to the standard initial condition loss and the differential loss, we added a new loss term to enforce that the time stepping block does not change the spherical harmonic coefficients at time zero (a minimal sketch of this loss term is given after this list). Formally, the new loss term over the training set is \[\frac{1}{100N}\sum_{i=1}^{N}\|\tilde{\mathcal{D}}(\tilde{\mathcal{C}}(\vec{F}_{i}),0)-\tilde{\mathcal{C}}(\vec{F}_{i})\|_{2}^{2}.\] (8) 3. Finally, we trained the full model with all 3 loss terms for 25 more epochs. 
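The following is a minimal sketch (ours) of the time-zero consistency loss (8); `transform` and `time_step` stand in for the trained blocks \(\tilde{\mathcal{C}}\) and \(\tilde{\mathcal{D}}\), and the calling conventions are our own assumptions.

```python
import tensorflow as tf

def coefficient_consistency_loss(transform, time_step, F_batch):
    """Loss term (8): at t = 0 the time stepping block should leave the
    spherical harmonic coefficients unchanged."""
    c = transform(F_batch)                      # (N, 100) coefficients
    t0 = tf.zeros((tf.shape(F_batch)[0], 1))    # time stamps t = 0
    c0 = time_step(tf.concat([c, t0], axis=1))  # coefficients at t = 0
    n = tf.cast(tf.shape(F_batch)[0], tf.float32)
    # (1 / (100 N)) * sum_i || D(C(F_i), 0) - C(F_i) ||_2^2
    return tf.reduce_sum(tf.square(c0 - c)) / (100.0 * n)
```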
Since there is no analytical solution for the Allen-Cahn equation over \(\mathbb{S}^{2}\), we used the numerical scheme IMEX-BDF4 [14] as ground truth for testing our models. Unlike [14], we used the spherical harmonic function basis and not the double spherical Fourier method which was used in [14], due to performance considerations. We tested our models using 20 random initial conditions and predicted the solutions for all grid points: \[\theta_{j}=\frac{\pi}{19}j,\ j\in\{0,...,19\},\quad\phi_{k}=\frac{2\pi}{20}k,\ k\in\{0,...,19\},\quad t_{n}=\frac{1}{500}n,\ n\in\{0,...,500\}.\] We benchmarked the 3 spectral PINN variants against the naive PINN model, which has an MLP architecture consisting of 26 layers with \(\tanh\) activations. The comparison of the 4 models is described in Table 4. We can see that our model achieves better accuracy than the naive model, with significantly fewer parameters. We can also see that there is a benefit to the special processing of the non-linear part of the Allen-Cahn equation by feeding the time stepping block with the non-linear part of the initial condition. In Figure 5 we show the norm of the error at different time steps. As in the previous example, we performed generalization and stability tests. For the generalization test we used random initial conditions from the larger set of spherical harmonics of degree 14: \[W_{gen}=\left\{\sum_{l=0}^{14}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\theta,\phi),\qquad c_{lm}\in[-1,1],\sqrt{\sum_{l=0}^{14}\sum_{m=-l}^{l}c_{lm}^{2}}=1\right\}.\] For the stability test we used the same technique as in the previous section with noise \(\delta\sim N(0,0.001)\). The results of the generalization and stability tests can be found in Tables 5 and 6, respectively (averaged over 20 random initial conditions). Again, we can see that all spectral model variants outperform the naive model. \begin{table} \begin{tabular}{|c|c|c|} \hline Model number in & Model Architecture & MSE \\ plots & & \\ \hline \hline 1 & Naive Model & 3.6e-4 \\ \hline 2 & Spectral model, time stepping variant (a) - & **1.1e-4** \\ & Input of Allen-Cahn & \\ & nonlinear part & \\ \hline 3 & Spectral model, time stepping variant (b) - & 1.3e-4 \\ & Standard time stepping & \\ & exponential block & \\ \hline 4 & Spectral model, time stepping variant (c) - & 1.2e-4 \\ & Naive time stepping & \\ & dense block & \\ \hline \end{tabular} \end{table} Table 5: Allen-Cahn equation over \(\mathbb{S}^{2}\) - generalization test results Figure 5: Allen-Cahn equation over \(\mathbb{S}^{2}\) - Error over time of the naive and spectral variant PINN models on the testing dataset ## 6 The torus \(\mathbb{T}\subset\mathbb{R}^{3}\) We next consider the two-dimensional torus \(\mathbb{T}\subset\mathbb{R}^{3}\), parameterized by the angles \((\theta,\phi)\in[0,2\pi)^{2}\), with major radius \(R\) and minor radius \(r\). In this setting, the Laplace-Beltrami operator is [10] \[\Delta_{\mathbb{T}}=\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}-\frac{\sin\theta}{r(R+r\cos\theta)}\frac{\partial}{\partial\theta}+\frac{1}{(R+r\cos\theta)^{2}}\frac{\partial^{2}}{\partial\phi^{2}}.\] On this manifold we demonstrate our spectral PINN method again using the Allen-Cahn equation (6). As the class of initial conditions we use the set \[W_{5}=\left\{\sum_{k=1}^{5}\sum_{l=1}^{5}c_{k,l}\sin(k\theta)\sin(l\phi),\quad\sqrt{\sum_{k,l=1}^{5}c_{k,l}^{2}}=1\right\}.\] We sample functions from this set on an equispaced grid with \(N_{\theta}=N_{\phi}=15\). On the embedded torus one is required to use a numerical approximation of the spectral basis, and we used the finite-elements method implemented in the Python package SPHARAPY [11]. 
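As an illustration of how the differential loss adapts to this geometry, the following sketch (ours, with assumed radii \(R\), \(r\) and an assumed model signature) evaluates the Allen-Cahn residual \(u_{t}-0.1\Delta_{\mathbb{T}}u-u+u^{3}\), with \(\Delta_{\mathbb{T}}\) as above, using TensorFlow automatic differentiation.

```python
import tensorflow as tf

R, r = 2.0, 1.0  # illustrative major/minor torus radii (our assumption)

def allen_cahn_residual(model, F_vec, theta, phi, t):
    """PI residual u_t - 0.1*Delta_T(u) - u + u^3 on the torus.

    `model` maps (F_vec, theta, phi, t) -> u; the calling convention is
    illustrative. theta, phi, t have shape (N, 1).
    """
    with tf.GradientTape(persistent=True) as outer:
        outer.watch([theta, phi])
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([theta, phi, t])
            u = model(tf.concat([F_vec, theta, phi, t], axis=1))
        u_t = inner.gradient(u, t)
        u_th = inner.gradient(u, theta)
        u_ph = inner.gradient(u, phi)
    u_thth = outer.gradient(u_th, theta)   # second derivative in theta
    u_phph = outer.gradient(u_ph, phi)     # second derivative in phi
    lap = (u_thth / r**2
           - tf.sin(theta) * u_th / (r * (R + r * tf.cos(theta)))
           + u_phph / (R + r * tf.cos(theta))**2)
    return u_t - (0.1 * lap + u - u**3)
```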
The choice of spectral basis implementation impacts the design of the architecture of the transformation and reconstruction blocks. We test several options for each block. For the transformation and reconstruction blocks we consider two options: 1. **Numerical spectral basis blocks -** In this option, we first create a dataset of 5,000 triples, each composed of a sampling matrix of a function \(f(\theta,\phi)=\sum_{k=1}^{5}\sum_{l=1}^{5}c_{k,l}\sin(k\theta)\sin(l\phi)\) on the equispaced grid with \(N_{\theta}=N_{\phi}=15\) and two random coordinates \((\theta,\phi)\in[0,2\pi)^{2}\) of a point on the torus. We then train the transformation and reconstruction blocks separately as follows. For the training of the transformation block \(\tilde{\mathcal{C}}\) we further compute for each function in the training set, using its sampling matrix, the (numerical) spectral transformation using SPHARAPY. We use the coefficients computed by SPHARAPY as ground truth to train our transformation block. The block's architecture is composed of 3 convolution layers followed by one dense layer. The reconstruction block \(\tilde{\mathcal{R}}\) in this variant is trained to take as input the coefficients of the spectral representation and the coordinate \((\theta,\phi)\in[0,2\pi)^{2}\) and approximate the ground truth function value at this coordinate. The block architecture is an MLP subnet with 15 layers. 2. **Auto-Encoder-Decoder blocks -** In this variant, we train the transformation block \(\tilde{\mathcal{C}}\) as the encoder together with the reconstruction block \(\tilde{\mathcal{R}}\) as a decoder, without explicitly using the spectral representation on the torus. However, this approach is certainly inspired by the spectral method, as we are ultimately optimizing some compressed representation space. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model number in & Model Architecture & \(T=0.4\) & \(T=0.7\) & \(T=1.0\) \\ \hline \hline 1 & Naive Model & 3.83 & 3.19 & 2.8 \\ \hline 2 & Spectral model, time stepping variant (a) - & 0.88 & 0.81 & 0.81 \\ & Input of Allen-Cahn & & & \\ & nonlinear part & & & \\ \hline 3 & Spectral model, time stepping variant (b) - & 0.70 & 0.66 & 0.65 \\ & Standard time stepping & & & \\ & exponential block & & & \\ \hline 4 & Spectral model, time stepping variant (c) - & 2.85 & 2.86 & 2.86 \\ & Naive time stepping & & & \\ & dense block & & & \\ \hline \end{tabular} \end{table} Table 6: Allen-Cahn equation over \(\mathbb{S}^{2}\) - stability test results using the normalized metric (5) The transformation encoder block simply learns to create a compressed latent representation of dimension 150 from the 225 function samples in a representation space. Its architecture is 5 convolution layers followed by one dense layer. Then the decoder takes the compressed representation together with the coordinate \((\theta,\phi)\in[0,2\pi)^{2}\) and tries to recover the ground truth function value at this coordinate. Its architecture is 17 dense layers. The loss function is then \[\frac{1}{N}\sum_{i=1}^{N}\left|\tilde{\mathcal{R}}(\tilde{\mathcal{C}}(\vec{F}_{i}),(\theta_{i},\phi_{i}))-f_{i}(\theta_{i},\phi_{i})\right|^{2}.\] For the time stepping block we test two options: 1. 
A custom-made time stepping block that receives as input the coefficients of the initial condition as well as the coefficients of the nonlinear part and a time step \[(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t),\] similarly to variant (a) of the time stepping block in the spherical case from the previous section. Recall that such an architecture aims to be 'more' physics aware and adapted to the nature of the equation. For this variant of the time stepping block we use 9 dense layers. 2. A network that takes as input \[(\tilde{\mathcal{C}}(\vec{F}),t),\] without the nonlinear part. Here we used 15 dense layers. As earlier, we denote this block by \(\tilde{\mathcal{D}}\). For validation of our models, we used the IMEX-BDF4 numeric solver [14] to obtain approximate solutions to the equations, which we treated as ground truth. In Table 7 we summarize the benchmarks of the various architectures and also compare them to a naive PINN architecture, with 26 layers, that simply takes in the samples of the initial condition as well as the time step and location on the torus and outputs an approximation of the value of the solution. We can observe that the best result, in terms of both accuracy and smaller network size, is obtained using the numerical spectral basis blocks as transformation and reconstruction blocks, combined with the non-linear input time stepping block. Also, even the encoder-decoder variant, which 'follows' the spectral paradigm to some extent without actually using the numerical spectral basis, provides a better result than the naive PINN model. In Figure 6 we show time plots of the errors of the different PINN models averaged over 20 random initial conditions. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model number & Transformation and & Time stepping block & \#Weights & MSE \\ in plots & Reconstruction blocks & & & \\ \hline \hline 1 & Numerical spectral basis & Spectral model, time & **2,564,105** & **2.5e-5** \\ & blocks & stepping variant (a) - & & \\ & & Input of Allen-Cahn & & \\ & & nonlinear part & & \\ \hline 2 & Auto-encoder-decoder & Spectral model, time & 3,800,555 & 8.3e-5 \\ & blocks & stepping variant (a) - & & \\ & & Input of Allen-Cahn & & \\ & & nonlinear part & & \\ \hline 3 & Numerical spectral basis & Spectral model, time & 3,129,976 & 1.7e-4 \\ & blocks & stepping variant (b) & & \\ \hline 4 & & Naive Model & 4,130,001 & 2.7e-4 \\ \hline \end{tabular} \end{table} Table 7: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - Comparison of the standard naive PINN model with 3 variants of our spectral PINN model In the generalization test presented in Table 8, the network that was trained on samples from \(W_{5}\) was tested on random initial conditions from the larger set \[W_{10}=\left\{\sum_{k=1}^{10}\sum_{l=1}^{10}c_{k,l}\sin(k\theta)\sin(l\phi),\quad\sqrt{\sum_{k,l=1}^{10}c_{k,l}^{2}}=1\right\}.\] The stability tests listed in Table 9 are averaged over 20 random initial conditions. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Model number & Transformation and & Time stepping block & MSE \\ in plots & Reconstruction blocks & & \\ \hline \hline 1 & Numerical spectral basis & Spectral model, time & **9.7e-5** \\ & blocks & stepping variant (a) - & \\ & & Input of Allen-Cahn & \\ & & nonlinear part & \\ \hline 2 & Auto-encoder-decoder & Spectral model, time & 1.1e-4 \\ & blocks & stepping variant (a) - & \\ & & Input of Allen-Cahn & \\ & & nonlinear part & \\ \hline 3 & Numerical spectral basis & Spectral model, time & 2.0e-4 \\ & blocks & stepping variant (b) & \\ \hline 4 & \multicolumn{2}{c|}{Naive Model} & 2.1e-4 \\ \hline \end{tabular} \end{table} Table 8: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - generalization test results Figure 6: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - Error over time of the naive and spectral variant PINN models on the testing dataset ## 7 Conclusion In this work we presented a physics informed deep learning strategy for building PDE solvers over manifolds, which is aligned with the method of spectral approximation. Our method allows one to train a model that can take as input initial conditions from a pre-determined subset or subspace and is grid-free. We showed empirically that our approach provides better approximations with far fewer weights compared with standard PINN architectures. For the case of the heat equation over the unit interval we provided a rigorous proof for the degree of approximation of a spectral PINN based on MLP components.
2307.01942
Minimax rates for latent position estimation in the generalized random dot product graph
Latent space models play an important role in the modeling and analysis of network data. Under these models, each node has an associated latent point in some (typically low-dimensional) geometric space, and network formation is driven by this unobserved geometric structure. The random dot product graph (RDPG) and its generalization (GRDPG) are latent space models under which this latent geometry is taken to be Euclidean. These latent vectors can be efficiently and accurately estimated using well-studied spectral embeddings. In this paper, we develop a minimax lower bound for estimating the latent positions in the RDPG and the GRDPG models under the two-to-infinity norm, and show that a particular spectral embedding method achieves this lower bound. We also derive a minimax lower bound for the related task of subspace estimation under the two-to-infinity norm that holds in general for low-rank plus noise network models, of which the RDPG and GRDPG are special cases. The lower bounds are achieved by a novel construction based on Hadamard matrices.
Hao Yan, Keith Levin
2023-07-04T22:18:27Z
http://arxiv.org/abs/2307.01942v1
# Minimax rates for latent position estimation in the generalized random dot product graph ###### Abstract Latent space models play an important role in the modeling and analysis of network data. Under these models, each node has an associated latent point in some (typically low-dimensional) geometric space, and network formation is driven by this unobserved geometric structure. The random dot product graph (RDPG) and its generalization (GRDPG) are latent space models under which this latent geometry is taken to be Euclidean. These latent vectors can be efficiently and accurately estimated using well-studied spectral embeddings. In this paper, we develop a minimax lower bound for estimating the latent positions in the RDPG and the GRDPG models under the two-to-infinity norm, and show that a particular spectral embedding method achieves this lower bound. We also derive a minimax lower bound for the related task of subspace estimation under the two-to-infinity norm that holds in general for low-rank plus noise network models, of which the RDPG and GRDPG are special cases. The lower bounds are achieved by a novel construction based on Hadamard matrices. ## 1 Introduction Networks encoding relations among entities are a common form of data in a broad range of scientific disciplines. In neuroscience, networks encode the strength of connections among brain regions (Bullmore and Sporns, 2009). In biology, networks encode which pairs of genes or proteins are co-expressed or are involved in the same pathways (Kovacs et al., 2019). In the social sciences, networks arise naturally in the form of social network data (Granovetter, 1973; Traud et al., 2012; Legramanti et al., 2022). Network embeddings are a broadly popular tool for exploring and analyzing network data. These methods seek to represent the vertices of a network in a lower-dimensional (typically Euclidean) space, in such a way that the geometry of these embeddings reflects some network structure of interest. Most commonly, these embeddings arise either via spectral methods (Rohe et al., 2011; Sussman et al., 2012; Tang and Priebe, 2018), which construct embeddings from the leading eigenvalues and eigenvectors of the adjacency matrix, or via representation learning methods (Grover and Leskovec, 2016; Lin et al., 2021). Embeddings are especially appropriate in settings where we believe that the data is well-approximated by a latent space network model (Hoff et al., 2002). Under these models, each vertex has an associated latent variable (often a point in Euclidean space), and network formation is driven by these latent variables, with pairs of vertices more likely to form edges if their latent variables are "similar" according to some measure (e.g., proximity in space). Examples of such models include Hoff models (Hoff et al., 2002; Ma et al., 2020), random geometric graphs (Penrose, 2003), graph root distributions (Lei, 2021) and graphons (Lovasz, 2012), to name just a few. Among these latent space models are the random dot product graph (RDPG; Young and Scheinerman, 2007; Athreya et al., 2018) and its generalization (GRDPG; Rubin-Delanchy et al., 2022). Under this model, each node \(v\) has an associated low-dimensional vector \(\mathbf{x}_{v}\in\mathbb{R}^{d}\), called its _latent position_. Conditional on these latent positions, the probability of two nodes \(u\) and \(v\) sharing an edge is given by the inner product of the associated vectors \(\mathbf{x}_{u}^{T}\mathbf{x}_{v}\). 
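As a concrete illustration (ours, not from the paper), the following NumPy sketch samples a conditional RDPG: latent positions with valid inner products are drawn, and edges are generated as independent Bernoulli variables.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 2
# Rows of a Dirichlet sample (truncated to d coordinates) give latent
# positions with 0 <= x_i^T x_j <= 1, as the model requires.
X = rng.dirichlet(np.ones(d + 1), size=n)[:, :d]
P = X @ X.T                          # conditional edge probabilities
U = rng.uniform(size=(n, n))
A = np.triu((U < P).astype(int), 1)  # independent Bern(P_ij) above diagonal
A = A + A.T                          # symmetric, hollow adjacency matrix
```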
Although the RDPG is simple and widely applicable, one limitation of the model is that it can only produce graphs whose expected adjacency matrices are positive semidefinite. To overcome this drawback, Rubin-Delanchy et al. (2022) introduced the generalized random dot product graph (GRDPG), which allows this expected adjacency matrix to be indefinite. This model includes many classical models as special cases, including the stochastic block model (Holland et al., 1983), the degree corrected stochastic block model (Karrer and Newman, 2011) and the mixed membership stochastic block model (Airoldi et al., 2008). Under the RDPG and GRDPG, the most basic inferential problem involves estimation of the latent positions based on an observed network. Once estimates of the latent positions are obtained, they can be used in many downstream tasks such as clustering (Sussman et al., 2012; Lyzinski et al., 2014), graph hypothesis testing (Tang et al., 2017a,b), and bootstrapping (Levin and Levina, 2019). A widely-used approach to estimating the latent positions in the RDPG is the adjacency spectral embedding (ASE; Sussman et al., 2012). The consistency of the ASE has been established previously under both the spectral (Sussman et al., 2014) and two-to-infinity (Lyzinski et al., 2014) norms, and the asymptotic distributional behavior of this estimate was further explored in Athreya et al. (2016); Levin et al. (2017). For other related approaches to estimating the latent positions under the RDPG, see Tang and Priebe (2018); Xie and Xu (2020); Wu and Xie (2022); Xie and Xu (2023). The latent positions of the GRDPG can also be estimated consistently using a slight modification of the ASE (Rubin-Delanchy et al., 2022), with similar asymptotic distributional behavior to that established in previous work for the RDPG (Athreya et al., 2016; Tang and Priebe, 2018; Levin et al., 2017). These previous results suggest that the estimation rate, as measured in two-to-infinity norm, obtained by the ASE and related methods should be optimal, perhaps up to logarithmic factors. In this paper, we show that this is indeed the case (see Theorem 1), establishing minimax lower bounds for estimation of the latent positions in a class of low-rank network models that includes both the RDPG and GRDPG. This matches estimation upper bounds previously established in the literature (Lyzinski et al., 2014; Rubin-Delanchy et al., 2022), up to logarithmic factors, and is in accord with previous work by Xie and Xu (2020) establishing the minimax rate under the Frobenius norm for the RDPG model. Our proof is based on a novel construction using Hadamard matrices, which may be of interest to researchers in subspace estimation. Indeed, as a corollary of our main result, we obtain minimax bounds for the closely related problem of singular subspace estimation in low-rank network models. Previous results along these lines include Cai and Zhang (2018), who established a lower bound under Gaussian noise, and Zhou et al. (2021), who provided a lower bound for random bipartite graphs under the spectral norm and Frobenius norm. Notation. For a vector \(\mathbf{x}\), we use \(\|\mathbf{x}\|_{2}\) to denote its Euclidean norm. For a matrix \(\mathbf{A}\), \(\|\mathbf{A}\|,\|\mathbf{A}\|_{F}\) and \(\|\mathbf{A}\|_{2,\infty}\) denote the spectral, Frobenius, and two-to-infinity (see Equation (4)) norms, respectively. We use \(\mathbf{A}_{ij}\) to denote the element in the \(i\)-th row and \(j\)-th column of the matrix \(\mathbf{A}\). 
For a sequence of matrices, we use subscripts \(\mathbf{A}_{1},\mathbf{A}_{2},\ldots,\mathbf{A}_{n}\) to index them if we do not need to specify an element of them. To specify the \((i,j)\) entry of a sequence of matrices, we use the notation \(\mathbf{A}_{ij}^{(1)},\mathbf{A}_{ij}^{(2)},\ldots,\mathbf{A}_{ij}^{(n)}\). Similarly, we use subscripts \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\) to index a sequence of vectors. We use the letters \(C\) and \(c\) to denote constants, not depending on the problem size \(n\), whose specific values may change from line to line. \(\mathbb{O}_{d}\) denotes the set of all \(d\times d\) orthogonal matrices. \(\mathbf{I}_{d}\) denotes the \(d\times d\) identity matrix. \(\mathbf{0}\) denotes a matrix of all zeros. For a positive integer \(n\), we let \([n]=\{1,2,\ldots,n\}\). We denote the standard basis in \(\mathbb{R}^{n}\) by \(\mathbf{e}_{1},\mathbf{e}_{2},\ldots,\mathbf{e}_{n}\), where the components of \(\mathbf{e}_{i}\) are all zero, save for the \(i\)-th component, which is equal to \(1\). We make standard use of Landau notation. Thus, for positive sequences \((a_{n})\) and \((b_{n})\), if there exists a constant \(C\) such that \(a_{n}\leq Cb_{n}\) for all suitably large \(n\), then we write \(a_{n}=O\left(b_{n}\right)\) or \(a_{n}\lesssim b_{n}\), and we write \(b_{n}=\Omega(a_{n})\). We write \(a_{n}=\Theta\left(b_{n}\right)\) to denote that \(a_{n}=O\left(b_{n}\right)\) and \(b_{n}=O\left(a_{n}\right)\). If \(a_{n}/b_{n}\to 0\) as \(n\to\infty\), then we write \(a_{n}=o\left(b_{n}\right)\) and \(b_{n}=\omega(a_{n})\). ## 2 Low-rank Models and Embeddings We are concerned in this paper with low-rank network models, in which the expected value of the adjacency matrix, perhaps conditional on latent variables, is of low rank. These models are exemplified by the RDPG, where conditional on the latent positions, the adjacency matrix has expectation given by the Gram matrix of the latent positions. **Definition 1** (RDPG; Young and Scheinerman (2007); Athreya et al. (2018)).: _Let \(F\) be a distribution on \(\mathbb{R}^{d}\) such that for all \(\mathbf{x},\mathbf{y}\in\operatorname{supp}F\), \(0\leq\mathbf{x}^{T}\mathbf{y}\leq 1\). Let \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\) be drawn i.i.d. according to \(F\), and collect them in the rows of \(\mathbf{X}\in\mathbb{R}^{n\times d}\). Conditional on \(\mathbf{X}\), generate a symmetric adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) according to \(\mathbf{A}_{ij}\sim\operatorname{Bern}(\mathbf{x}_{i}^{T}\mathbf{x}_{j})\) independently over all \(1\leq i<j\leq n\). Then we say that \(\mathbf{A}\) is the adjacency matrix of a random dot product graph (RDPG), and write \((\mathbf{A},\mathbf{X})\sim\operatorname{RDPG}(F,n)\). For a fixed choice of \(\mathbf{X}\), we write \(\mathbf{A}\sim\operatorname{RDPG}(\mathbf{X})\) and say that the resulting network is distributed as a conditional RDPG with latent positions \(\mathbf{X}\)._ As defined, the (conditional) expected adjacency matrix \(\mathbb{E}[\mathbf{A}\mid\mathbf{X}]\) is always positive semidefinite under the RDPG, restricting the range of network structures it can express. The generalized RDPG (GRDPG) resolves this issue. **Definition 2** (GRDPG; Rubin-Delanchy et al. (2022)).: _Let \(d=p+q\) where \(p,q\geq 0\) are integers, and define the matrix_ \[\mathbf{I}_{p,q}=\operatorname{diag}(\underbrace{1,1,\ldots,1}_{p},\underbrace{-1,\ldots,-1}_{q}). 
\tag{1}\] _Suppose that \(F\) is a distribution on \(\mathbb{R}^{d}\) such that \(0\leq\mathbf{x}^{T}\mathbf{I}_{p,q}\mathbf{y}\leq 1\) for all \(\mathbf{x},\mathbf{y}\in\operatorname{supp}F\). Draw \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\) i.i.d. according to \(F\), and collect them in the rows of \(\mathbf{X}\in\mathbb{R}^{n\times d}\). Conditional on \(\mathbf{X}\), generate a symmetric adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) according to \(\mathbf{A}_{ij}\sim\operatorname{Bern}(\mathbf{x}_{i}^{T}\mathbf{I}_{p,q}\mathbf{x}_{j})\) independently over all \(1\leq i<j\leq n\). We say that \(\mathbf{A}\) is the adjacency matrix of a _generalized random dot product graph (GRDPG)_ with signature \((p,q)\), and write \((\mathbf{A},\mathbf{X})\sim\operatorname{GRDPG}(F,p,q,n)\). For a fixed \(\mathbf{X}\) and signature \((p,q)\), we write \(\mathbf{A}\sim\operatorname{GRDPG}(\mathbf{X},p,q)\) and say that the resulting network is distributed as a _conditional GRDPG_ with latent positions \(\mathbf{X}\) and signature \((p,q)\)._ We can naturally extend the conditional versions of these models to a generic "low-rank plus noise" network model, in which the expected adjacency matrix is low-rank. **Definition 3** (Low rank network model).: _Let \(d=p+q\) for non-negative integers \(p\) and \(q\), and let \(\mathbf{I}_{p,q}\) be as defined in Equation (1). Let \(\mathbf{X}\in\mathbb{R}^{n\times d}\) be such that \(\mathbf{P}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\) has all its entries between \(0\) and \(1\). Given \(\mathbf{P}\), generate a symmetric binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) according to \(\mathbf{A}_{ij}\sim\operatorname{Bern}(\mathbf{P}_{ij})\), independently over all \(1\leq i<j\leq n\). We say that the resulting network is distributed according to a low-rank plus noise model with expectation \(\mathbf{P}\)._ Under both the RDPG and GRDPG as well as under their generalization in Definition 3, we have \[\mathbb{E}[\mathbf{A}\mid\mathbf{X}]=\mathbf{P}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\] for \(\mathbf{I}_{p,q}\) as in Equation (1). Note that we recover the RDPG by taking \(q=0\). Under these models, the matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is a natural inferential target. The aim of this paper is to establish the limits on estimating this low-rank part \(\mathbf{X}\) under network models like those in Definitions 1, 2 and 3. For non-negative integers \(p,q\), define the set \[\mathcal{X}_{n}^{(p,q)}=\{\mathbf{X}\in\mathbb{R}^{n\times d}:0\leq\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\leq 1\}, \tag{2}\] where the inequality is meant entry-wise, so that for each \(1\leq i<j\leq n\), the element \((\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T})_{i,j}\) is a probability. That is, the set \(\mathcal{X}_{n}^{(p,q)}\) corresponds to the collection of all possible configurations of \(n\) latent positions whose indefinite inner products under a signature \((p,q)\) are valid probabilities. In other words, any \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\) is a potential collection of latent positions under Definition 2 or 3. When \(p=d\), the GRDPG model recovers the random dot product graph (RDPG) model as a special case. As such, we define \[\mathcal{X}_{n}^{d}=\{\mathbf{X}\in\mathbb{R}^{n\times d}:0\leq\mathbf{X}\mathbf{X}^{T}\leq 1\}. 
\tag{3}\]

To establish estimation rates for network latent positions (i.e., elements of the set \(\mathcal{X}_{n}^{(p,q)}\) or \(\mathcal{X}_{n}^{d}\)), we must endow the set with a distance. One such distance, arguably the most studied in the context of network modeling, derives from the \((2,\infty)\)-norm. Given two matrices \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{n\times d}\), this norm is defined according to \[\|\mathbf{X}-\mathbf{Y}\|_{2,\infty}=\max_{i\in[n]}\|\mathbf{X}_{i}-\mathbf{Y}_{i}\|_{2}, \tag{4}\] where \(\|\cdot\|_{2}\) is the standard Euclidean norm in \(\mathbb{R}^{d}\) and \(\mathbf{X}_{i}\in\mathbb{R}^{d}\) denotes the \(i\)-th row of \(\mathbf{X}\in\mathbb{R}^{n\times d}\), viewed as a column vector. We will use this norm to construct a distance on the set \(\mathcal{X}_{n}^{d}\), once we account for a non-identifiability inherent to latent space models (Shalizi and Asta 2017). Observe that for any orthogonal transformation \(\mathbf{W}\in\mathbb{O}_{d}\), we have \(\mathbf{X}\mathbf{X}^{T}=\mathbf{X}\mathbf{W}(\mathbf{X}\mathbf{W})^{T}\). As a result, given an adjacency matrix \(\mathbf{A}\) generated from an RDPG, we can only hope to estimate a particular \(\mathbf{X}\in\mathcal{X}_{n}^{d}\) up to such an orthogonal transformation. We thus endow \(\mathcal{X}_{n}^{d}\) with an equivalence relation \(\sim\), writing \(\mathbf{X}\sim\mathbf{Y}\) if \(\mathbf{Y}=\mathbf{X}\mathbf{W}\) for some \(\mathbf{W}\in\mathbb{O}_{d}\). Our notion of recovering the rows of the true \(\mathbf{X}\) up to orthogonal rotation yields a natural notion of distance on these equivalence classes.

**Definition 4**.: _Let \(\tilde{\mathcal{X}}_{n}^{d}\) denote the quotient set of \(\mathcal{X}_{n}^{d}\) by \(\sim\). Denoting elements of \(\tilde{\mathcal{X}}_{n}^{d}\) by \([\mathbf{X}]\) for any class representative \(\mathbf{X}\in\mathcal{X}_{n}^{d}\), define a distance on \(\tilde{\mathcal{X}}_{n}^{d}\) by_ \[\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right)=\min_{\mathbf{W}\in\mathbb{O}_{d}}\|\mathbf{X}-\mathbf{Y}\mathbf{W}\|_{2,\infty}.\]

**Observation 1**.: \(\tilde{d}_{2,\infty}\) _is a distance on \(\tilde{\mathcal{X}}_{n}^{d}\)._

Proof.: Symmetry and non-negativity of \(\tilde{d}_{2,\infty}\) are immediate from the definition and invariance of the \((2,\infty)\)-norm under right-multiplication by elements of \(\mathbb{O}_{d}\). Similarly, it follows by definition that \(\tilde{d}_{2,\infty}([\mathbf{X}],[\mathbf{Y}])=0\) if and only if \([\mathbf{X}]=[\mathbf{Y}]\).
To establish the triangle inequality, note that for \([\mathbf{X}],[\mathbf{Y}],[\mathbf{Z}]\in\tilde{\mathcal{X}}_{n}^{d}\), we have \[\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right) =\min_{\mathbf{W}\in\mathbb{O}_{d}}\|\mathbf{X}-\mathbf{Y}\mathbf{W}\|_{2,\infty}=\min_{\mathbf{W}\in\mathbb{O}_{d},\mathbf{W}^{\prime}\in\mathbb{O}_{d}}\|\mathbf{X}-\mathbf{Z}\mathbf{W}^{\prime}+\mathbf{Z}\mathbf{W}^{\prime}-\mathbf{Y}\mathbf{W}\|_{2,\infty}\] \[\leq\min_{\mathbf{W}\in\mathbb{O}_{d},\mathbf{W}^{\prime}\in\mathbb{O}_{d}}\|\mathbf{X}-\mathbf{Z}\mathbf{W}^{\prime}\|_{2,\infty}+\|\mathbf{Z}\mathbf{W}^{\prime}-\mathbf{Y}\mathbf{W}\|_{2,\infty}\] \[=\min_{\mathbf{W}\in\mathbb{O}_{d}}\|\mathbf{X}-\mathbf{Z}\mathbf{W}\|_{2,\infty}+\min_{\mathbf{W}\in\mathbb{O}_{d}}\|\mathbf{Z}-\mathbf{Y}\mathbf{W}\|_{2,\infty}\] \[=\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Z}]\right)+\tilde{d}_{2,\infty}\left([\mathbf{Z}],[\mathbf{Y}]\right),\] where we have used the fact that the \((2,\infty)\)-norm is invariant under right-multiplication by an orthogonal matrix.

Under the GRDPG and other low-rank network models (i.e., Definitions 2 and 3), a similar non-identifiability occurs, but its structure is complicated by the presence of the matrix \(\mathbf{I}_{p,q}\). Analogous to the orthogonal group \(\mathbb{O}_{d}\), we denote the indefinite orthogonal group by \[\mathbb{O}_{p,q}=\{\mathbf{Q}\in\mathbb{R}^{d\times d}:\mathbf{Q}\mathbf{I}_{p,q}\mathbf{Q}^{T}=\mathbf{I}_{p,q}\}.\] For any matrix \(\mathbf{Q}\in\mathbb{O}_{p,q}\) and any \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\), we have \(\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}=\mathbf{X}\mathbf{Q}\mathbf{I}_{p,q}(\mathbf{X}\mathbf{Q})^{T}\). As a result, under the GRDPG, the conditional distribution of \(\mathbf{A}\) remains unchanged if we replace \(\mathbf{X}\) with \(\mathbf{X}\mathbf{Q}\) for any \(\mathbf{Q}\in\mathbb{O}_{p,q}\). Thus, we also consider an equivalence relation \(\sim\) on \(\mathcal{X}_{n}^{(p,q)}\), whereby for \(\mathbf{X},\mathbf{Y}\in\mathcal{X}_{n}^{(p,q)}\), we write \(\mathbf{X}\sim\mathbf{Y}\) if and only if \(\mathbf{Y}=\mathbf{X}\mathbf{Q}\) for some \(\mathbf{Q}\in\mathbb{O}_{p,q}\). Lemma 1 shows that the equivalence classes under this relation correspond precisely to the matrices \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\) that give rise to the same distribution over networks. A proof can be found in Appendix A.

**Lemma 1**.: _For \(\mathbf{X},\mathbf{Y}\in\mathcal{X}_{n}^{(p,q)}\), define respective probability matrices \(\mathbf{P}_{\mathbf{X}}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\) and \(\mathbf{P}_{\mathbf{Y}}=\mathbf{Y}\mathbf{I}_{p,q}\mathbf{Y}^{T}\). Then \(\mathbf{X}\sim\mathbf{Y}\) if and only if \(\mathbf{P}_{\mathbf{X}}=\mathbf{P}_{\mathbf{Y}}\)._

In light of Lemma 1, our equivalence relation can also be understood as \(\mathbf{X}\sim\mathbf{Y}\) if and only if \(\mathbf{P}_{\mathbf{X}}=\mathbf{P}_{\mathbf{Y}}\). Under this equivalence relation, we denote by \(\tilde{\mathcal{X}}_{n}^{(p,q)}\) the set of equivalence classes of \(\mathcal{X}_{n}^{(p,q)}\) under \(\sim\). When it is clear from the context, we also use \([\mathbf{X}]\) to denote the element of \(\tilde{\mathcal{X}}_{n}^{(p,q)}\) corresponding to the equivalence class of \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\).
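To make the invariance behind Lemma 1 concrete, the following minimal numpy sketch (our own illustration, not part of the original analysis; the signature \((p,q)=(2,1)\), the hyperbolic-rotation construction of \(\mathbf{Q}\), and the coordinate ranges for \(\mathbf{X}\) are assumptions chosen so that \(\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\) has entries in \([0,1]\)) verifies numerically that a non-orthogonal \(\mathbf{Q}\in\mathbb{O}_{p,q}\) leaves the probability matrix, and hence the distribution over networks, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 6, 2, 1
d = p + q
I_pq = np.diag([1.0] * p + [-1.0] * q)

# A hyperbolic rotation mixing one positive-signature coordinate with the
# negative-signature coordinate; it satisfies Q I_pq Q^T = I_pq but is not
# orthogonal, so it lies in O_{p,q} without lying in O_d.
t = 0.7
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cosh(t), np.sinh(t)],
              [0.0, np.sinh(t), np.cosh(t)]])
assert np.allclose(Q @ I_pq @ Q.T, I_pq)

# Latent positions with first p coordinates in [0.4, 0.6] and the last q in
# [0, 0.3], which keeps every entry of X I_pq X^T inside [0, 1].
X = np.hstack([rng.uniform(0.4, 0.6, (n, p)), rng.uniform(0.0, 0.3, (n, q))])
P_X = X @ I_pq @ X.T
assert P_X.min() >= 0 and P_X.max() <= 1

# The "if" direction of Lemma 1: X and XQ yield the same probability matrix.
P_XQ = (X @ Q) @ I_pq @ (X @ Q).T
print(np.max(np.abs(P_X - P_XQ)))  # numerically zero
```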
In order to show minimax results for estimation of the latent positions in the GRDPG model and related low-rank network models, we first need to fix a notion of distance over the parameter set \(\tilde{\mathcal{X}}_{n}^{(p,q)}\). To account for non-identifiability in the GRDPG, it is natural to follow Definition 4 and define the distance between \([\mathbf{X}]\) and \([\mathbf{Y}]\) according to \[\inf_{\mathbf{Q}_{1},\mathbf{Q}_{2}\in\mathbb{O}_{p,q}}\left\|\mathbf{X}\mathbf{Q}_{1}-\mathbf{Y}\mathbf{Q}_{2}\right\|_{2,\infty}. \tag{5}\] Unfortunately, this definition is not necessarily a valid distance. To see a simple example, consider the case when \(n=1\), \(p=1\) and \(q=1\). For any \(\mathbf{x}_{0}=(x_{0,1},x_{0,2})\in\mathbb{R}^{2}\) such that \(x_{0,1}^{2}-x_{0,2}^{2}=r\), we observe that \(\mathbf{Q}\mathbf{x}_{0}\) moves \(\mathbf{x}_{0}\) along the curve \(C_{r}:x_{1}^{2}-x_{2}^{2}=r\) as \(\mathbf{Q}\) ranges over \(\mathbb{O}_{1,1}\). Notice that for all \(r\in\mathbb{R}\), the curves \(C_{r}\) share the common asymptote \(l:x_{1}-x_{2}=0\). Therefore, it follows that for any \(\mathbf{x}\) and \(\mathbf{y}\in\mathbb{R}^{2}\), \[\inf_{\mathbf{Q}_{1},\mathbf{Q}_{2}\in\mathbb{O}_{1,1}}\left\|\mathbf{Q}_{1}\mathbf{x}-\mathbf{Q}_{2}\mathbf{y}\right\|_{2}=0.\] Furthermore, the quantity defined in Equation (5) may not satisfy the triangle inequality. We include an example for \(n=2\), \(p=1\) and \(q=1\) in Appendix B.

Instead, we must take a slightly more careful route to define a distance on \(\tilde{\mathcal{X}}_{n}^{(p,q)}\). We begin by noting that for any \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\), Sylvester's law of inertia implies that \(\mathbf{P}_{\mathbf{X}}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\) has \(p\) positive eigenvalues, \(q\) negative eigenvalues and \(n-p-q\) zero eigenvalues. Thus, we can always decompose \(\mathbf{P}_{\mathbf{X}}\) as \[\mathbf{P}_{\mathbf{X}}=\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{U}_{\mathbf{X}}^{T},\] where \(\mathbf{U}_{\mathbf{X}}\in\mathbb{R}^{n\times d}\) is a matrix with orthonormal columns and \(\mathbf{\Lambda}_{\mathbf{X}}\in\mathbb{R}^{d\times d}\) is a diagonal matrix with positive diagonal entries. By Lemma 1, we have \(\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\in[\mathbf{X}]\), since \(\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\) and \(\mathbf{X}\) both produce the same probability matrix \(\mathbf{P}_{\mathbf{X}}\). In light of this, we can define a distance \(\tilde{d}_{2,\infty}\) on \(\tilde{\mathcal{X}}_{n}^{(p,q)}\) according to \[\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right)=\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\left\|\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}-\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}\right\|_{2,\infty}. \tag{6}\] The reader may notice that we have used the same notation \(\tilde{d}_{2,\infty}\) as in Definition 4. This can be done without risk of confusion: when \(p=d\) and \(q=0\), since \(\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\in[\mathbf{X}]\) and \(\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\in[\mathbf{Y}]\), there exist \(\mathbf{W}_{\mathbf{X}},\mathbf{W}_{\mathbf{Y}}\in\mathbb{O}_{d}\) such that \(\mathbf{X}\mathbf{W}_{\mathbf{X}}=\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\) and \(\mathbf{Y}\mathbf{W}_{\mathbf{Y}}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\).
As a result, since \(\mathbb{O}_{p,q}=\mathbb{O}_{d}\) when \(p=d\), we have \[\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right) =\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}-\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}\right\|_{2,\infty}\] \[=\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\mathbf{X}\mathbf{W}_{\mathbf{X}}-\mathbf{Y}\mathbf{W}_{\mathbf{Y}}\mathbf{W}\right\|_{2,\infty}\] \[=\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\mathbf{X}-\mathbf{Y}\mathbf{W}\right\|_{2,\infty},\] which is precisely our definition of \(\tilde{d}_{2,\infty}\) for the RDPG as given in Definition 4.

**Observation 2**.: \(\tilde{d}_{2,\infty}\) _is a distance on \(\tilde{\mathcal{X}}_{n}^{(p,q)}\)._

Proof.: Symmetry of \(\tilde{d}_{2,\infty}\) follows from the fact that \(\mathbf{W}^{T}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) whenever \(\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\), and non-negativity is immediate from the fact that \(\|\cdot\|_{2,\infty}\) is a norm. The triangle inequality follows from the same argument as given in Observation 1. It remains to show that \[\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right)=0\quad\text{ if and only if }\quad[\mathbf{X}]=[\mathbf{Y}]. \tag{7}\] Toward this end, suppose that \(\tilde{d}_{2,\infty}\left([\mathbf{X}],[\mathbf{Y}]\right)=0\). Since \(\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) is compact, there exists \(\mathbf{W}^{\star}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) such that \[\left\|\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}-\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}^{\star}\right\|_{2,\infty}=0,\] that is to say, \(\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}^{\star}\). We therefore have \[\mathbf{P}_{\mathbf{X}} =\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}=\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{U}_{\mathbf{X}}^{T}\] \[=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}^{\star}\mathbf{I}_{p,q}\mathbf{W}^{\star T}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{U}_{\mathbf{Y}}^{T}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{U}_{\mathbf{Y}}^{T}\] \[=\mathbf{Y}\mathbf{I}_{p,q}\mathbf{Y}^{T}=\mathbf{P}_{\mathbf{Y}},\] and Lemma 1 implies that \(\mathbf{X}\sim\mathbf{Y}\). To show the other direction of the equivalence in Equation (7), let \(\mathbf{X},\mathbf{Y}\in\mathcal{X}_{n}^{(p,q)}\) be representatives of \([\mathbf{X}],[\mathbf{Y}]\in\tilde{\mathcal{X}}_{n}^{(p,q)}\), respectively, and suppose that \([\mathbf{X}]=[\mathbf{Y}]\). We will show there exists a matrix \(\mathbf{W}^{\star}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) such that \(\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{W}^{\star}\), whence it will follow that \(\tilde{d}_{2,\infty}([\mathbf{X}],[\mathbf{Y}])=0\).
Recall that we associate to \(\mathbf{X}\) and \(\mathbf{Y}\) the probability matrices \[\mathbf{P}_{\mathbf{X}} =\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}=\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{U}_{\mathbf{X}}^{T}\quad\text{ and }\] \[\mathbf{P}_{\mathbf{Y}} =\mathbf{Y}\mathbf{I}_{p,q}\mathbf{Y}^{T}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{U}_{\mathbf{Y}}^{T},\] where \(\mathbf{U}_{\mathbf{X}},\mathbf{U}_{\mathbf{Y}}\in\mathbb{R}^{n\times d}\) both have orthonormal columns and \(\mathbf{\Lambda}_{\mathbf{X}},\mathbf{\Lambda}_{\mathbf{Y}}\in\mathbb{R}^{d\times d}\) are diagonal and positive definite. Since \([\mathbf{X}]=[\mathbf{Y}]\), by Lemma 1 there exists \(\mathbf{Q}\in\mathbb{O}_{p,q}\) such that \[\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}=\mathbf{U}_{\mathbf{Y}}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{Q}. \tag{8}\] There also exists a \(\mathbf{W}\in\mathbb{O}_{d}\) such that \(\mathbf{U}_{\mathbf{X}}=\mathbf{U}_{\mathbf{Y}}\mathbf{W}\), since \(\mathbf{U}_{\mathbf{X}}\) and \(\mathbf{U}_{\mathbf{Y}}\) correspond to the same singular subspace. Moreover, since \(\mathbf{P}_{\mathbf{X}}=\mathbf{P}_{\mathbf{Y}}\), the diagonal matrices \(\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\) and \(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\) carry the same nonzero eigenvalues of the common probability matrix, so their entries agree up to order, and there exists a permutation matrix \(\mathbf{\Pi}\) such that \(\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}=\mathbf{\Pi}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{\Pi}^{T}\). The presence of \(\mathbf{I}_{p,q}\) forces \(\mathbf{\Pi}\) to be of the form \[\mathbf{\Pi}=\begin{bmatrix}\mathbf{\Pi}_{p}&0\\ 0&\mathbf{\Pi}_{q}\end{bmatrix},\] where \(\mathbf{\Pi}_{p}\in\mathbb{R}^{p\times p}\) and \(\mathbf{\Pi}_{q}\in\mathbb{R}^{q\times q}\) are permutation matrices. Hence, \(\mathbf{\Pi}\in\mathbb{O}_{p,q}\cap\mathbb{O}_{d}\) and we also have that \(\mathbf{\Lambda}_{\mathbf{X}}^{1/2}=\mathbf{\Pi}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{\Pi}^{T}\). It follows from Equation (8) that \[\mathbf{Q}\mathbf{\Pi}=\mathbf{\Lambda}_{\mathbf{Y}}^{-1/2}\mathbf{W}\mathbf{\Pi}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}.\] Denote \(\mathbf{V}=\mathbf{W}\mathbf{\Pi}\) for ease of notation. Since \(\mathbf{Q}\mathbf{\Pi}\in\mathbb{O}_{p,q}\), we have \[\mathbf{\Lambda}_{\mathbf{Y}}^{-1/2}\mathbf{V}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{V}^{T}\mathbf{\Lambda}_{\mathbf{Y}}^{-1/2}=\mathbf{I}_{p,q}.\] Rearranging and using the fact that diagonal matrices commute, \[\mathbf{V}\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q}=\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q}\mathbf{V}.\] Therefore, for any \(i,j\in[d]\), we have \(\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q})_{jj}=\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q})_{ii}.\) If \(\mathbf{V}_{ij}\neq 0\), we have \((\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q})_{jj}=(\mathbf{\Lambda}_{\mathbf{Y}}\mathbf{I}_{p,q})_{ii}\) and thus \((\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{jj}=(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{ii}\).
Otherwise, we have \[\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{jj}=\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{ii}=0.\] Hence, \(\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{jj}=\mathbf{V}_{ij}(\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q})_{ii}\) always holds, and it follows that \[\mathbf{V}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}=\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{V}.\] Thus, we have \[\mathbf{Q}\mathbf{\Pi}\,\mathbf{I}_{p,q}=\mathbf{\Lambda}_{\mathbf{Y}}^{-1/2}\mathbf{V}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}=\mathbf{\Lambda}_{\mathbf{Y}}^{-1/2}\mathbf{\Lambda}_{\mathbf{Y}}^{1/2}\mathbf{I}_{p,q}\mathbf{V}=\mathbf{I}_{p,q}\mathbf{V}.\] Moving \(\mathbf{\Pi}\,\mathbf{I}_{p,q}\) to the right-hand side, we have \(\mathbf{Q}=\mathbf{I}_{p,q}\mathbf{V}\,\mathbf{I}_{p,q}\mathbf{\Pi}^{T}\), implying that \(\mathbf{Q}\) is an orthogonal matrix, whence \(\mathbf{Q}\in\mathbb{O}_{p,q}\cap\mathbb{O}_{d}\). Taking \(\mathbf{W}^{\star}=\mathbf{Q}\) completes the proof.

The minimax risk for estimating \(\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}\) under the \((2,\infty)\)-norm after accounting for the equivalence structure encoded in \(\tilde{\mathcal{X}}_{n}^{(p,q)}\) is given by (Tsybakov 2009) \[\inf_{\hat{\mathbf{X}}}\sup_{\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}}\mathbb{E}\tilde{d}_{2,\infty}\left([\hat{\mathbf{X}}],[\mathbf{X}]\right)=\inf_{\hat{\mathbf{X}}}\sup_{\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}}\mathbb{E}\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\left\|\hat{\mathbf{U}}_{\hat{\mathbf{X}}}\hat{\mathbf{\Lambda}}_{\hat{\mathbf{X}}}^{1/2}-\mathbf{U}_{\mathbf{X}}\mathbf{\Lambda}_{\mathbf{X}}^{1/2}\mathbf{W}\right\|_{2,\infty},\] where the infimum is over all estimators \(\hat{\mathbf{X}}\). Our goal in the remainder of this paper is to lower-bound this minimax risk.

## 3 Main Results

We consider estimation (up to orthogonal non-identifiability) of a low-rank matrix \(\mathbf{X}=\mathbf{U}\mathbf{\Lambda}^{1/2}\), where \(\mathbf{U}\) is an element of the Stiefel manifold of all \(d\)-frames in \(\mathbb{R}^{n}\), \[\mathcal{S}_{d}(\mathbb{R}^{n})=\left\{\mathbf{U}\in\mathbb{R}^{n\times d}:\mathbf{U}^{T}\mathbf{U}=\mathbf{I}_{d}\right\}.\] The structure of \(\mathbf{\Lambda}\) plays a crucial role in the estimation of \(\mathbf{X}\). When the smallest eigenvalues of \(\mathbb{E}[\mathbf{A}\mid\mathbf{X}]\) are especially close to zero, it is hard to distinguish the \(d\) "signal" eigenvalues of \(\mathbf{A}\) from the "noise" associated with the remaining \(n-d\) eigenvalues. As such, we consider a particular structure on \(\mathbf{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{d})\). Assuming without loss of generality that \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{d}\) and defining the condition number \(\kappa=\kappa(\mathbf{\Lambda})=\lambda_{1}/\lambda_{d}\), this spectral structure is captured by membership in the set \[\mathcal{C}(\kappa_{\star},\lambda_{\star})=\Big{\{}\mathbf{\Lambda}=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{d})\in\mathbb{R}^{d\times d}:\kappa(\mathbf{\Lambda})\leq\kappa_{\star},\lambda_{d}\geq\lambda_{\star}>0\Big{\}}.\] With this notation in hand, we can state our main result.
**Theorem 1**.: _With the sets \(\mathcal{S}_{d}(\mathbb{R}^{n})\) and \(\mathcal{C}(\kappa_{\star},\lambda_{\star})\) as defined above, define the set_ \[\mathcal{P}(\kappa_{\star},\lambda_{\star},p,q)=\left\{\left(\mathbf{U},\mathbf{\Lambda}\right):\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n}),\mathbf{\Lambda}\in\mathcal{C}(\kappa_{\star},\lambda_{\star}),\mathbf{U}\mathbf{\Lambda}^{1/2}\in\mathcal{X}_{n}^{(p,q)}\right\}.\] _If \(\kappa_{\star}=o\left(n\right)\), \(\kappa_{\star}\geq 3d\) and \(3\kappa_{\star}\lambda_{\star}\leq n\), then_ \[\inf_{\left(\hat{\mathbf{U}},\hat{\mathbf{\Lambda}}\right)}\ \sup_{\left(\mathbf{U},\mathbf{\Lambda}\right)\in\mathcal{P}(\kappa_{\star},\lambda_{\star},p,q)}\ \mathbb{E}\ \tilde{d}_{2,\infty}\left(\left[\hat{\mathbf{U}}\hat{\mathbf{\Lambda}}^{1/2}\right],\left[\mathbf{U}\mathbf{\Lambda}^{1/2}\right]\right)\ \gtrsim\ \sqrt{\frac{\kappa_{\star}(\lambda_{\star}\wedge\log n)}{n}}. \tag{9}\]

Proof.: Our main tool is a standard packing argument (see Theorem 2.7 in Tsybakov 2009). The main technical hurdle is constructing a collection of elements of \(\mathcal{S}_{d}(\mathbb{R}^{n})\), all of which produce valid elements of \(\mathcal{P}(\kappa_{\star},\lambda_{\star},p,q)\) when paired with a particular choice of \(\mathbf{\Lambda}\). Our construction is based on stacking Hadamard matrices to form \(\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})\). In particular, we require very different constructions depending on the growth rate of the condition number \(\kappa_{\star}\), and we divide our proof of Theorem 1 into two cases accordingly. Details are given in the Appendix.

As a remark, we note that the factor \(3\) in the conditions \(\kappa_{\star}\geq 3d\) and \(3\kappa_{\star}\lambda_{\star}\leq n\) can each be relaxed to \((1+\epsilon)\) and \((2+\epsilon)\), respectively, for any constant \(\epsilon>0\). Details are provided in the Appendix.

### Illustrative Examples and Applications

We now apply our main result to some well-studied special cases from the network modeling literature, starting with the GRDPG. The assumption in Theorem 1 that \(\kappa_{\star}=\Omega(d)\) is a natural one for the RDPG and GRDPG setting. To see this, we first state Lemma 2.

**Lemma 2**.: _Assume that \(\mathbf{P}=\mathbf{X}\mathbf{X}^{T}\), where the row vectors \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\) of \(\mathbf{X}\) are independent identically distributed random vectors and let \(\mathbf{\Delta}=\mathbb{E}\left[\mathbf{x}_{1}\mathbf{x}_{1}^{T}\right]\).
For \(n\) sufficiently large, it holds with probability at least \(1-2n^{-1}\) that_ \[\frac{\lambda_{1}(\mathbf{\Delta})-\delta}{\lambda_{d}(\mathbf{\Delta})+\delta}\leq\kappa(\mathbf{P})\leq\frac{\lambda_{1}(\mathbf{\Delta})+\delta}{\lambda_{d}(\mathbf{\Delta})-\delta},\] _where \(\delta=4\sqrt{\frac{\log d}{n}}+\frac{8\log d}{3n}\)._

Proof.: Applying the definition of \(\kappa\) and using basic properties of eigenvalues, \[\kappa(\mathbf{P})=\kappa(\mathbf{X}\mathbf{X}^{T})=\frac{\lambda_{1}\left(\mathbf{X}^{T}\mathbf{X}/n\right)}{\lambda_{d}\left(\mathbf{X}^{T}\mathbf{X}/n\right)}.\] Since \(\mathbf{P}\) is a probability matrix, for any \(i\in[n]\), we have \(0\leq\mathbf{x}_{i}^{T}\mathbf{x}_{i}\leq 1\), and \[\left\|\mathbf{x}_{i}\mathbf{x}_{i}^{T}-\mathbf{\Delta}\right\|\leq\left\|\mathbf{x}_{i}\mathbf{x}_{i}^{T}\right\|+\left\|\mathbf{\Delta}\right\|\leq\left\|\mathbf{x}_{i}\right\|_{2}^{2}+\mathbb{E}\|\mathbf{x}_{i}\|_{2}^{2}\leq 2.\] Similarly, we also have \(\left\|\mathbb{E}\left[\left(\mathbf{x}_{i}\mathbf{x}_{i}^{T}-\mathbf{\Delta}\right)\left(\mathbf{x}_{i}\mathbf{x}_{i}^{T}-\mathbf{\Delta}\right)\right]\right\|\leq 4.\) Therefore, by a matrix version of Bernstein's inequality (see Corollary 3.3 in Chen et al. 2021), with probability at least \(1-2n^{-1}\), we have \[\left\|\frac{1}{n}\mathbf{X}^{T}\mathbf{X}-\mathbf{\Delta}\right\|=\left\|\frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{x}_{i}\mathbf{x}_{i}^{T}-\mathbf{\Delta}\right)\right\|\leq 4\sqrt{\frac{\log d}{n}}+\frac{8\log d}{3n}.\] Hence, by Weyl's inequality, it follows that with probability at least \(1-2n^{-1}\), \[\left|\lambda_{1}(\mathbf{\Delta})-\lambda_{1}\left(\frac{1}{n}\mathbf{X}^{T}\mathbf{X}\right)\right|\leq\delta\quad\text{ and }\quad\left|\lambda_{d}(\mathbf{\Delta})-\lambda_{d}\left(\frac{1}{n}\mathbf{X}^{T}\mathbf{X}\right)\right|\leq\delta,\] where we set \(\delta=4\sqrt{\frac{\log d}{n}}+\frac{8\log d}{3n}\). Rearranging the inequalities completes the proof.

Put simply, Lemma 2 implies that under the RDPG, when \(n\) is sufficiently large, we have \(\kappa(\mathbf{P})\approx\kappa(\mathbf{\Delta})\). Without loss of generality, we assume that \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\) are sampled from a distribution whose support is a subset of \(\mathbb{B}_{d}(1)\cap\mathbb{R}_{+}^{d}\), where \(\mathbb{B}_{d}(1)\) is the unit ball in \(\mathbb{R}^{d}\). Denote the covariance matrix as \(\mathbf{\Sigma}=\mathbb{E}\left(\mathbf{x}_{1}-\mu\right)\left(\mathbf{x}_{1}-\mu\right)^{T}\). Notice that for any \(\ell\in[d]\), \(\mathbf{x}_{1,\ell}^{2}\leq\|\mathbf{x}_{1}\|_{2}^{2}\leq 1\), so that \(\mathbf{x}_{1,\ell}\in[0,1]\) and hence \(\mathbf{x}_{1,\ell}^{2}\leq\mathbf{x}_{1,\ell}\). We thus have \[\mu_{\ell}=\mathbb{E}\mathbf{x}_{1,\ell}\geq\mathbb{E}\mathbf{x}_{1,\ell}^{2}=\mu_{\ell}^{2}+\mathbf{\Sigma}_{\ell\ell}.\] This implies that \(\mu_{\ell}\geq\mathbf{\Sigma}_{\ell\ell}\). If \(\mathbf{\Sigma}=\gamma\mathbf{I}_{d}\) for some \(\gamma>0\), then \(\kappa(\mathbf{\Delta})=\gamma^{-1}\mu^{T}\mu+1\geq\gamma d+1\), and hence \(\kappa(\mathbf{P})=\Omega_{\mathbb{P}}(d)\). One sufficient condition for \(\mathbf{\Sigma}\) to be proportional to the identity is that the entries of \(\mathbf{x}_{1}\) be drawn i.i.d. For example, if the entries of \(\mathbf{x}_{1}\) are generated i.i.d. from the uniform distribution over \([0,1/\sqrt{d}]\), then \(\kappa(\mathbf{\Delta})=3d+1\).
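As a sanity check on this last claim, the following short Monte Carlo sketch (ours; the choices \(d=5\) and the sample size are arbitrary) compares the empirical condition number of \(\mathbf{\Delta}\) with the closed-form value \(3d+1\).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 200_000

# Entries i.i.d. Uniform[0, 1/sqrt(d)], so mu = (1 / (2 sqrt(d))) 1 and
# Sigma = (1 / (12 d)) I_d, giving Delta = Sigma + mu mu^T.
X = rng.uniform(0.0, 1.0 / np.sqrt(d), size=(n, d))
eigs = np.linalg.eigvalsh(X.T @ X / n)   # ascending order
print(eigs[-1] / eigs[0])                # approx 3d + 1 = 16

# Exact eigenvalues of Delta = (1/(12 d)) I_d + (1/(4 d)) 1 1^T:
# top eigenvalue 1/(12 d) + 1/4 (eigenvector 1), the rest equal 1/(12 d).
print((1 / (12 * d) + 1 / 4) / (1 / (12 * d)))   # exactly 3d + 1 = 16.0
```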
As another example, if we sample \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\) uniformly from \(\mathbb{B}_{d}(1)\cap\mathbb{R}_{+}^{d}\), then one can show that \(\kappa(\mathbf{\Delta})=(2d+\pi-2)/(\pi-2)>d\). The case for the GRDPG is more complicated, owing to replacing the RDPG's inner product with an indefinite inner product. We first state Lemma 3, which allows us to relate the spectrum of the indefinite matrix \(\mathbf{P}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\) to the spectrum of the positive semidefinite \(\mathbf{\Delta}=\mathbb{E}\mathbf{x}_{1}\mathbf{x}_{1}^{T}\).

**Lemma 3**.: _Assume that \(\mathbf{P}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\), where the row vectors \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\) of \(\mathbf{X}\) are i.i.d. random vectors with second moment matrix \(\mathbf{\Delta}=\mathbb{E}\mathbf{x}_{1}\mathbf{x}_{1}^{T}\). If there exists \(0<\delta<1\) such that_ \[\left\|\mathbf{X}^{T}\mathbf{X}/n-\mathbf{\Delta}\right\|\leq\delta\|\mathbf{\Delta}\|, \tag{10}\] _then for a suitably chosen constant \(C>0\), we have_ \[\frac{\lambda_{1}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)-C\sqrt{\delta}\|\mathbf{\Delta}\|}{\lambda_{d}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)+C\sqrt{\delta}\|\mathbf{\Delta}\|}\leq\kappa(\mathbf{P})\leq\frac{\lambda_{1}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)+C\sqrt{\delta}\|\mathbf{\Delta}\|}{\lambda_{d}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)-C\sqrt{\delta}\|\mathbf{\Delta}\|}.\]

Proof.: Since \(\mathbf{\Delta}\) is a symmetric positive semidefinite matrix, its square root \(\mathbf{\Delta}^{1/2}\) is well-defined, as is that of \(\tilde{\mathbf{\Delta}}=\mathbf{X}^{T}\mathbf{X}/n\). Using basic spectral properties, \[\kappa(\mathbf{P})=\kappa(\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T})=\left|\frac{\lambda_{1}\left(\mathbf{I}_{p,q}\mathbf{X}^{T}\mathbf{X}/n\right)}{\lambda_{d}\left(\mathbf{I}_{p,q}\mathbf{X}^{T}\mathbf{X}/n\right)}\right|=\left|\frac{\lambda_{1}\left(\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}\right)}{\lambda_{d}\left(\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}\right)}\right|.
\tag{11}\] Adding and subtracting \(\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\), then applying the triangle inequality and basic properties of the spectral norm, we have \[\left\|\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right\| \leq\left\|\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\left(\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right)\right\|+\left\|\left(\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right)\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right\|\] \[\leq\left(\left\|\tilde{\mathbf{\Delta}}^{1/2}\right\|+\left\|\mathbf{\Delta}^{1/2}\right\|\right)\left\|\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right\|\] \[\leq 2\left\|\mathbf{\Delta}^{1/2}\right\|\left\|\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right\|+\left\|\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right\|^{2}.\] Since \(\tilde{\mathbf{\Delta}}\) and \(\mathbf{\Delta}\) are both positive semidefinite matrices, by Theorem X.1.1 in Bhatia (1997), we have \[\left\|\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\right\|\leq\left\|\tilde{\mathbf{\Delta}}-\mathbf{\Delta}\right\|^{1/2}\;\text{ and }\;\left\|\mathbf{\Delta}^{1/2}\right\|=\left\|\mathbf{\Delta}\right\|^{1/2}.\] Therefore, using the fact that \(\delta\in(0,1)\), we obtain \[\left\|\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}-\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right\| \leq 2\left\|\mathbf{\Delta}\right\|^{1/2}\left\|\tilde{\mathbf{\Delta}}-\mathbf{\Delta}\right\|^{1/2}+\left\|\tilde{\mathbf{\Delta}}-\mathbf{\Delta}\right\|\] \[\leq(2\sqrt{\delta}+\delta)\|\mathbf{\Delta}\|\leq 3\sqrt{\delta}\|\mathbf{\Delta}\|.\] Applying Weyl's inequality, it follows that \[\left|\lambda_{1}\left(\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}\right)-\lambda_{1}\left(\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right)\right|\leq C\sqrt{\delta}\|\mathbf{\Delta}\|\] and \[\left|\lambda_{d}\left(\tilde{\mathbf{\Delta}}^{1/2}\mathbf{I}_{p,q}\tilde{\mathbf{\Delta}}^{1/2}\right)-\lambda_{d}\left(\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right)\right|\leq C\sqrt{\delta}\|\mathbf{\Delta}\|.\] Applying these two bounds to Equation (11), it follows that \[\kappa(\mathbf{P})\geq\frac{\lambda_{1}\left(\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right)-C\sqrt{\delta}\|\mathbf{\Delta}\|}{\lambda_{d}\left(\mathbf{\Delta}^{1/2}\mathbf{I}_{p,q}\mathbf{\Delta}^{1/2}\right)+C\sqrt{\delta}\|\mathbf{\Delta}\|}=\frac{\lambda_{1}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)-C\sqrt{\delta}\|\mathbf{\Delta}\|}{\lambda_{d}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)+C\sqrt{\delta}\|\mathbf{\Delta}\|},\] and \[\kappa(\mathbf{P})\leq\frac{\lambda_{1}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)+C\sqrt{\delta}\|\mathbf{\Delta}\|}{\lambda_{d}\left(\mathbf{I}_{p,q}\mathbf{\Delta}\right)-C\sqrt{\delta}\|\mathbf{\Delta}\|},\] completing the proof.

For many distributions, Equation (10) holds with high probability for small choices of \(\delta\). As an example, suppose that for some constant \(K\geq 1\), \(\|\mathbf{x}_{i}\|_{2}\leq K(\mathbb{E}\|\mathbf{x}_{i}\|_{2}^{2})^{1/2}\) almost surely. Then \[\left\|\frac{\mathbf{X}^{T}\mathbf{X}}{n}-\mathbf{\Delta}\right\|\leq C\left(\sqrt{\frac{K^{2}d(\log d+\log n)}{n}}+\frac{K^{2}d(\log d+\log n)}{n}\right)\|\mathbf{\Delta}\|\] holds with probability at least \(1-2n^{-1}\). See Theorem 5.6.1 and Exercise 5.6.4 in Vershynin (2018).
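In the same spirit, one can check Lemma 3 numerically. The sketch below (ours; the latent-position law, signature, and sample sizes are illustrative assumptions) compares the condition number of a finite-sample \(\mathbf{P}=\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T}\), computed via the \(d\times d\) matrix \(\mathbf{I}_{p,q}\mathbf{X}^{T}\mathbf{X}/n\) carrying the nonzero spectrum of \(\mathbf{P}\), with its population counterpart built from \(\mathbf{I}_{p,q}\mathbf{\Delta}\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 2000, 2, 1
I_pq = np.diag([1.0] * p + [-1.0] * q)

def draw(m):
    # Illustrative law: first p coordinates Uniform[0.4, 0.6], last q
    # coordinates Uniform[0, 0.3], as in the earlier validity example.
    return np.hstack([rng.uniform(0.4, 0.6, (m, p)),
                      rng.uniform(0.0, 0.3, (m, q))])

def cond_indef(M):
    # M = I_pq S with S positive definite has a real spectrum, since M is
    # similar to the symmetric matrix S^{1/2} I_pq S^{1/2}.
    e = np.sort(np.linalg.eigvals(M).real)
    return abs(e[-1] / e[0])   # |lambda_1 / lambda_d|, as in Equation (11)

X = draw(n)
kappa_sample = cond_indef(I_pq @ (X.T @ X) / n)

Y = draw(500_000)   # large sample standing in for the population Delta
kappa_population = cond_indef(I_pq @ (Y.T @ Y) / Y.shape[0])

print(kappa_sample, kappa_population)  # close, as Lemma 3 predicts
```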
As another example, if the first \(p\) entries of \(\mathbf{x}_{1}\) are independently drawn from the uniform distribution over the interval \([1/(2\sqrt{p}),1/\sqrt{p}]\) and the last \(q\) entries are independently drawn from the uniform distribution over the interval \([0,1/(2\sqrt{q})]\), then one can show that \(\kappa(\mathbf{I}_{p,q}\mathbf{\Delta})\geq 5d-\frac{1}{2}\) and that \(\|\mathbf{x}_{i}\|_{2}\leq 3(\mathbb{E}\|\mathbf{x}_{i}\|_{2}^{2})^{1/2}\) almost surely, so that \(\kappa(\mathbf{P})=\Omega_{\mathbb{P}}(d)\). On the other hand, if we treat \(d\) as a constant with respect to \(n\), then \(\kappa(\mathbf{P})=O_{\mathbb{P}}(1)\) and Theorem 1 implies the following corollary.

**Corollary 1**.: _Under the GRDPG, with latent dimension \(d\) fixed with respect to \(n\), suppose that the latent position matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) satisfies \(2d\leq\kappa(\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T})=O(1)\) and \(\lambda_{d}(\mathbf{X}\mathbf{I}_{p,q}\mathbf{X}^{T})\geq\lambda_{\star}\). Then_ \[\inf_{\hat{\mathbf{X}}}\sup_{\mathbf{X}\in\mathcal{X}_{n}^{(p,q)}}\mathbb{E}\tilde{d}_{2,\infty}([\hat{\mathbf{X}}],[\mathbf{X}])\gtrsim\sqrt{\frac{\lambda_{\star}\wedge\log n}{n}}.\]

Under the RDPG, Xie and Xu (2020) derived a similar minimax lower bound for estimation in Frobenius norm, rather than \((2,\infty)\)-norm, under the setting where the latent dimension is a constant. For the sake of comparison, we restate their lower bound using our notation.

**Theorem 2** (Theorem 2 in Xie and Xu (2020)).: _Let \(\mathbf{A}\sim\mathrm{RDPG}(\mathbf{X})\) for some \(\mathbf{X}\in\mathcal{X}_{n}^{d}\), where \(d\) is a constant with respect to \(n\). Let \(\hat{\mathbf{X}}\) be an estimator of the latent position matrix \(\mathbf{X}\) satisfying \(\|\hat{\mathbf{X}}\|_{\mathrm{F}}\lesssim\sqrt{n}\) with probability one. Then_ \[\inf_{\hat{\mathbf{X}}}\sup_{\mathbf{X}\in\mathcal{X}_{n}^{d}}\mathbb{E}\left\{\frac{1}{n}\inf_{\mathbf{W}\in\mathbb{O}_{d}}\|\hat{\mathbf{X}}-\mathbf{X}\mathbf{W}\|_{\mathrm{F}}^{2}\right\}\gtrsim\frac{1}{n}.\]

Directly applying Theorem 2 in the RDPG setting and using the fact that \[\|\mathbf{Y}\|_{2,\infty}\geq\frac{\|\mathbf{Y}\|_{F}}{\sqrt{n}} \tag{12}\] for any \(\mathbf{Y}\in\mathbb{R}^{n\times d}\), we obtain a lower bound of order \(n^{-1/2}\). This has a gap of order \(\sqrt{\lambda_{\star}\wedge\log n}\) compared to our result in Corollary 1. Further, we note that the techniques used in Xie and Xu (2020) are specialized to the RDPG, and it is not obvious how to adapt their strategy to the more general setting considered here.

### Singular Subspace Estimation

For a matrix \(\mathbf{P}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), instead of estimating the latent positions, singular subspace estimation aims to estimate the matrix \(\mathbf{U}\in\mathbb{R}^{n\times d}\). There is a vast literature on singular subspace estimation, and we refer the interested reader to the recent survey by Chen et al. (2021). Vu and Lei (2013) derives a minimax lower bound for subspace estimation for sparse high-dimensional principal component analysis (PCA), and Cai et al. (2021b) provides a more general framework to establish lower bounds in structured PCA problems.
We note that PCA is distinct from the low-rank network models considered here, and that these two papers consider estimation in the Frobenius or spectral norm in the presence of Gaussian noise, while we are concerned with estimation under the \((2,\infty)\)-norm with Bernoulli-distributed noise. To the best of our knowledge, the prior work closest to the present manuscript is that by Zhou et al. (2021), where the authors obtain minimax lower bounds for singular subspace estimation of random bipartite graphs. A few existing works address minimax lower bounds for singular subspace estimation under the \((2,\infty)\)-norm. Cai et al. (2021a) provides a lower bound under the \((2,\infty)\)-norm for subspace recovery in an incomplete low-rank matrix setting. Lower bounds can also be found in Agterberg and Zhang (2022), derived from lower bounds on the spectral norm. Below, we discuss why such approaches result in lower bounds weaker than those proved in the present work.

As a corollary to Theorem 1, we also obtain a minimax lower bound for singular subspace estimation. The proof uses the same construction as Theorem 1, and thus details are omitted.

**Corollary 2**.: _Under the same setup as Theorem 1, we have_ \[\inf_{\hat{\mathbf{U}}}\sup_{\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})}\mathbb{E}\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\hat{\mathbf{U}}-\mathbf{U}\mathbf{W}\right\|_{2,\infty}\gtrsim\sqrt{\frac{\kappa_{\star}(\lambda_{\star}\wedge\log n)}{\lambda_{\star}n}}. \tag{13}\]

We note that the minimum in Equation (13) is taken over \(\mathbb{O}_{d}\) rather than \(\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\), since our proof of Theorem 1 only makes use of the fact that \(\mathbf{W}\in\mathbb{O}_{d}\), while the restriction to \(\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) is necessary to ensure that our distance on \(\tilde{\mathcal{X}}_{n}^{(p,q)}\) is well-defined. We remark that lower bounds for subspace estimation derived from the Frobenius norm or the spectral norm cannot be optimal in the \((2,\infty)\)-norm setting. These lower bounds use the fact that for any \(\mathbf{U}\in\mathbb{R}^{n\times d}\), \[\|\mathbf{U}\|_{2,\infty}\geq\frac{1}{\sqrt{n}}\|\mathbf{U}\|. \tag{14}\] Taking \(\hat{\mathbf{U}}=\mathbf{0}\) to be our estimator, we have \[\inf_{\hat{\mathbf{U}}}\sup_{\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})}\mathbb{E}\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\hat{\mathbf{U}}-\mathbf{U}\mathbf{W}\right\|\leq\sup_{\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})}\mathbb{E}\min_{\mathbf{W}\in\mathbb{O}_{d}}\|\mathbf{U}\mathbf{W}\|=1\] or \[\inf_{\hat{\mathbf{U}}}\sup_{\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})}\mathbb{E}\min_{\mathbf{W}\in\mathbb{O}_{d}}\left\|\hat{\mathbf{U}}-\mathbf{U}\mathbf{W}\right\|_{F}\leq\sqrt{d},\] where this second bound follows from the fact that \(\|\mathbf{U}\mathbf{W}\|_{F}=\sqrt{d}\) for any \(\mathbf{U}\in\mathcal{S}_{d}(\mathbb{R}^{n})\) and \(\mathbf{W}\in\mathbb{O}_{d}\). It follows that any lower bound on the \((2,\infty)\)-norm minimax rate can be no larger than \(O(\sqrt{d/n})\) if we derive it from the Frobenius norm or the spectral norm through Equation (12) or Equation (14), respectively. Comparing this with Equation (13), our lower bound in Corollary 2 improves on this rate by a factor of order \(\sqrt{(\lambda_{\star}\wedge\log n)\kappa_{\star}/\lambda_{\star}}\) if \(d\) is bounded by a constant.

### Upper Bounds

In order to see the tightness of our lower bounds in Theorem 1 and Corollary 2, we now consider upper bounds on the \((2,\infty)\)-norm estimation error in different asymptotic regimes.
Before doing so, we must introduce the concepts of average node degree and sparsity of a network. For a node in a network, its degree is defined as the number of edges connected to it. For a random network with \(n\) nodes generated from a probability matrix \(\mathbf{P}\), the \(i\)-th node has an expected degree of \(\sum_{j=1}^{n}\mathbf{P}_{ij}\). We define the average node degree of a network as the expected degree averaged over all nodes, which is given by \(n^{-1}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbf{P}_{ij}\). If the average node degree grows as \(\Theta(n)\), we are in the dense network regime. Random networks generated by the GRDPG model are dense networks. In applications, networks are observed to be sparse: the average node degree grows as \(o(n)\). To incorporate the sparse regime into the GRDPG model, we scale the probability matrix \(\mathbf{P}\) by a sparsity factor \(\rho_{n}\in(0,1]\), so that the probability matrix becomes \(\rho_{n}\mathbf{P}\), and its average node degree grows as \(\Theta(n\rho_{n})\). When \(\rho_{n}=1\), we recover the dense regime. Allowing \(\rho_{n}\to 0\) as \(n\to\infty\) produces sparse networks.

For latent position estimation under the GRDPG model, Theorem 3 in Rubin-Delanchy et al. (2022) established an upper bound on the estimation errors of the adjacency spectral embedding (ASE) under the \((2,\infty)\)-norm. We restate this result here.

**Theorem 3** (Theorem 3 in Rubin-Delanchy et al. (2022)).: _There exists a universal constant \(c>1\) and a matrix \(\mathbf{W}_{\star}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) such that, provided the sparsity factor satisfies \(n\rho_{n}=\omega(\log^{4c}n)\),_ \[\left\|\hat{\mathbf{U}}\hat{\mathbf{\Lambda}}^{1/2}\mathbf{W}_{\star}-\mathbf{U}\mathbf{\Lambda}^{1/2}\right\|_{2,\infty}=O_{\mathbb{P}}\left(\frac{\log^{c}n}{n^{1/2}}\right).\]

In the setting of Theorem 3, the condition number of the probability matrix satisfies \(\kappa=O(1)\) and \(\lambda_{d}=\Omega(n\rho_{n})=\omega(\log n)\). Applying Theorem 1, the lower bound in Equation (9) implies that the minimax estimation rate should be \(n^{-1/2}\log^{1/2}n\), which matches the upper bound up to a polylogarithmic factor. This also suggests the near optimality of the ASE in the GRDPG model. Note that Theorem 3 also applies to the RDPG model since the latter is a special case of the GRDPG model.

For singular subspace estimation of low-rank plus noise models like that in Definition 3, an upper bound for the estimation error of the truncated SVD estimator \(\hat{\mathbf{U}}\) is given by Theorem 4.2 in Chen et al. (2021). Adapted to our setting, Theorem 4.2 in Chen et al. (2021) states that there exists a matrix \(\mathbf{W}_{\star}\in\mathbb{O}_{d}\), such that \[\left\|\hat{\mathbf{U}}\mathbf{W}_{\star}-\mathbf{U}\right\|_{2,\infty}\lesssim\frac{\kappa\sqrt{\rho_{n}\mu}+\sqrt{\rho_{n}\log n}}{\lambda_{d}}, \tag{15}\] where \(\mu=n\left\|\mathbf{U}\right\|_{2,\infty}^{2}/d\) is the incoherence parameter of the probability matrix \(\mathbf{P}\). Notice that we always have \(\mu\geq 1\). Under the GRDPG, both \(\mu\) and \(\kappa\) are bounded by constants, and \(\lambda_{1}/n=O(\rho_{n})\). Hence, the lower bound in Equation (13) ensured by Theorem 1 also matches the upper bound in Equation (15) up to a constant.
More generally, by the Perron-Frobenius theorem, for any probability matrix \(\mathbf{P}\), we have \[\lambda_{1}\geq\min_{i\in[n]}\sum_{j=1}^{n}\mathbf{P}_{ij}.\] Hence, if we assume that \(\mathbf{P}=\rho_{n}\mathbf{P}_{0}\) for some probability matrix \(\mathbf{P}_{0}\) with entries strictly bounded between \(0\) and \(1\), then \(\lambda_{1}=\Theta(n\rho_{n})\), and our lower bound in Equation (13) can be rewritten as \(\Omega(\sqrt{\rho_{n}(\lambda_{d}\wedge\log n)}/\lambda_{d})\). In this setting, if we further assume that \(\mu=O(\log n)\), the upper bound in Equation (15) becomes \(O(\kappa\sqrt{\rho_{n}\log n}/\lambda_{d})\), and we see that there is an \(O(\kappa)\) gap (up to log factors) between the upper bound derived by Chen et al. (2021) and our lower bound in Corollary 2. We study this gap through simulations in Section 4 (see Figure 2 and Table 2). Based on those experiments, we conjecture that the upper bound in Chen et al. (2021) can be improved to match our lower bound (up to logarithmic factors), but we leave further exploration of this point for future work.

## 4 Experiments

In this section, we compare our theoretical lower bounds from Section 3 with the empirical estimation performance of the ASE, which, according to existing results (e.g., Theorem 3), matches these lower bounds up to logarithmic factors. Recall that for a pair of estimates \((\hat{\mathbf{U}},\hat{\boldsymbol{\Lambda}})\), the \((2,\infty)\) distance between it and the ground truth \((\mathbf{U}_{0},\boldsymbol{\Lambda}_{0})\) is given by \[\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\|\hat{\mathbf{U}}\hat{\boldsymbol{\Lambda}}^{1/2}\mathbf{W}-\mathbf{U}_{0}\boldsymbol{\Lambda}_{0}^{1/2}\|_{2,\infty}. \tag{16}\] Finding the exact minimizer of Equation (16) is non-trivial. Instead, we approximate it by first solving a similar Procrustes problem under the Frobenius norm, \[\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\|\hat{\mathbf{U}}\hat{\boldsymbol{\Lambda}}^{1/2}\mathbf{W}-\mathbf{U}_{0}\boldsymbol{\Lambda}_{0}^{1/2}\|_{F} \tag{17}\] and then plugging the minimizer into the \((2,\infty)\) distance. In practice, the minimizer under the Frobenius norm provides a good approximation to the exact minimizer. Indeed, the matrix \(\mathbf{W}_{\star}\) in Theorem 3 is precisely the minimizer of the Procrustes problem under the Frobenius norm, and therefore the same upper bound for the latent position estimation error still holds when \(\kappa=O(1)\). For details, we refer the reader to the proof of Theorem 3 in Rubin-Delanchy et al. (2022). In general, plugging the minimizer of Equation (17) into the \((2,\infty)\) distance yields a valid upper bound for Equation (16), and if it matches the lower bound, Equation (16) will as well.

Recall from Section 3.3 that when \(\kappa=O(1)\), our minimax lower bounds in Theorem 1 and Corollary 2 match the corresponding upper bounds up to logarithmic factors. On the other hand, when \(\kappa=\omega(1)\), as discussed in Section 3.3, there is no matching upper bound to our lower bound. Rather, the best upper bound of which we are aware has an \(O(\kappa\sqrt{\mu/\log n})\) gap with our minimax lower bound. In light of this, we consider two different asymptotic regimes, both under the sparse GRDPG as discussed in Section 3.3. In the first, we fix \(\kappa\) to be a constant and vary the growth rate of the sparsity factor \(\rho_{n}\).
In the second, where \(\kappa=\omega(1)\), we fix the sparsity \(\rho_{n}\) to be a constant and vary the growth rate of \(\kappa\). To emphasize the dependence of \(\kappa\) on \(n\), we also write \(\kappa\) as \(\kappa_{n}\) below. In both asymptotic regimes, we consider networks generated from a GRDPG with latent position dimension \(d=3\) and signature \((p,q)=(2,1)\). The probability matrix \(\mathbf{P}_{0}\in[0,1]^{n\times n}\) is set to be \(\mathbf{P}_{0}=\rho_{n}\mathbf{U}_{0}\boldsymbol{\Lambda}_{0}\mathbf{U}_{0}^{T}\), where \(\mathbf{U}_{0}\in\mathbb{R}^{n\times d}\) is constructed according to Equation (56) with suitably chosen constants and \[\boldsymbol{\Lambda}_{0}=\operatorname{diag}\left(\frac{n}{3},\frac{n}{3\kappa_{n}},-\frac{n}{3\kappa_{n}}\right).\] We vary \(n\) from \(9{,}000\) to \(20{,}000\) with a step size of \(1000\). In the setting where \(\kappa=O(1)\), we fix \(\kappa_{n}=6\) and vary \[\rho_{n}\in\left\{0.2,20n^{-1/2},90n^{-2/3},190n^{-3/4},300n^{-4/5},400n^{-5/6},1800n^{-1}\right\}, \tag{18}\] where the constants are chosen so that all the \(\rho_{n}\) are approximately equal to \(0.2\) when \(n=9000\). In the second setting, where \(\kappa=\omega(1)\), we fix \(\rho_{n}=0.9\) and vary \[\begin{split}\kappa_{n}\in\Bigg{\{}\frac{1207}{500}n^{1/10},\frac{971}{1000}n^{1/5},\frac{391}{1000}n^{3/10},\frac{157}{1000}n^{2/5},\\ \frac{63}{1000}n^{1/2},\frac{1}{40}n^{3/5},\frac{1}{100}n^{7/10},\frac{1}{250}n^{4/5}\Bigg{\}}.\end{split}\] (19) The constants here are chosen so that all \(\kappa_{n}\) are approximately equal to \(6\) when \(n=9000\). For each combination of \((n,\rho_{n},\kappa_{n})\), we generate \(240\) Monte Carlo trials when we keep \(\kappa_{n}=6\) and \(200\) trials when we keep \(\rho_{n}=0.9\). We approximate their latent position and subspace estimation errors as described by Algorithm 1.
```
0:\(\mathbf{P}_{0}\in\mathbb{R}^{n\times n},d=p+q,M\). for\(1\leq i\leq M\)do Sample an adjacency matrix \(\mathbf{A}_{i}\) from \(\mathbf{P}_{0}\). \((\hat{\mathbf{U}}_{i,p},\hat{\mathbf{\Lambda}}_{i,p})\leftarrow\text{TopEig}(\mathbf{A}_{i},p);\quad(\hat{\mathbf{U}}_{i,q},\hat{\mathbf{\Lambda}}_{i,q})\leftarrow\text{TopEig}(-\mathbf{A}_{i},q)\) \(\hat{\mathbf{U}}_{i}\leftarrow(\hat{\mathbf{U}}_{i,p},\hat{\mathbf{U}}_{i,q});\quad\hat{\mathbf{\Lambda}}_{i}\leftarrow\text{diag}(\hat{\mathbf{\Lambda}}_{i,p},\hat{\mathbf{\Lambda}}_{i,q})\). \(\mathbf{W}_{1i}\leftarrow\arg\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\|\hat{\mathbf{U}}_{i}\hat{\mathbf{\Lambda}}_{i}^{1/2}\mathbf{W}-\mathbf{U}_{0}\mathbf{\Lambda}_{0}^{1/2}\|_{F}\), \(\mathbf{W}_{2i}\leftarrow\arg\min_{\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}}\|\hat{\mathbf{U}}_{i}\mathbf{W}-\mathbf{U}_{0}\|_{F}\). \(\ell_{1i}\leftarrow\|\hat{\mathbf{U}}_{i}\hat{\mathbf{\Lambda}}_{i}^{1/2}\mathbf{W}_{1i}-\mathbf{U}_{0}\mathbf{\Lambda}_{0}^{1/2}\|_{2,\infty};\quad\ell_{2i}\leftarrow\|\hat{\mathbf{U}}_{i}\mathbf{W}_{2i}-\mathbf{U}_{0}\|_{2,\infty}\). endfor \(\ell_{\text{latent}}\leftarrow\frac{1}{M}\sum_{i=1}^{M}\ell_{1i};\quad\ell_{\text{subspace}}\leftarrow\frac{1}{M}\sum_{i=1}^{M}\ell_{2i}\). return\(\ell_{\text{latent}}\), \(\ell_{\text{subspace}}\).
```
**Algorithm 1** Simulation procedure for expected adjacency matrix \(\mathbf{P}_{0}=\mathbf{U}_{0}\mathbf{\Lambda}_{0}^{1/2}\mathbf{I}_{p,q}\mathbf{\Lambda}_{0}^{1/2}\mathbf{U}_{0}^{T}\) with signature \((p,q)\), based on \(M\) Monte Carlo trials.
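For concreteness, the following is one possible numpy implementation of Algorithm 1; it is a sketch under our own conventions rather than the code used for the experiments, and the helper names are ours. It exploits the fact that every \(\mathbf{W}\in\mathbb{O}_{d}\cap\mathbb{O}_{p,q}\) is block-diagonal, \(\mathbf{W}=\operatorname{diag}(\mathbf{W}_{p},\mathbf{W}_{q})\) with \(\mathbf{W}_{p}\in\mathbb{O}_{p}\) and \(\mathbf{W}_{q}\in\mathbb{O}_{q}\) (an orthogonal matrix commuting with \(\mathbf{I}_{p,q}\) must preserve its \(\pm 1\) eigenspaces), so each constrained Frobenius Procrustes problem in Algorithm 1 splits into two standard orthogonal Procrustes problems, each solved by an SVD.

```python
import numpy as np

def top_eig(A, k):
    # Return the k leading eigenpairs (largest eigenvalues) of symmetric A.
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx], vals[idx]

def procrustes(A, B):
    # Orthogonal Procrustes: argmin over orthogonal W of ||A W - B||_F is
    # U V^T, where A^T B = U S V^T is a singular value decomposition.
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def procrustes_blockdiag(A, B, p, q):
    # Minimize ||A W - B||_F over W in O_d \cap O_{p,q}: such W are
    # block-diagonal diag(W_p, W_q), so the problem splits by block.
    W = np.zeros((p + q, p + q))
    W[:p, :p] = procrustes(A[:, :p], B[:, :p])
    W[p:, p:] = procrustes(A[:, p:], B[:, p:])
    return W

def norm_2inf(M):
    # (2, infinity)-norm: the largest Euclidean row norm.
    return np.sqrt((M ** 2).sum(axis=1)).max()

def simulate(P0, p, q, M, rng):
    # Monte Carlo estimate of the ASE's latent position and subspace errors,
    # following Algorithm 1 (ground truth recomputed from P0 itself).
    n = P0.shape[0]
    U0p, l0p = top_eig(P0, p)       # leading positive part of P0
    U0q, l0q = top_eig(-P0, q)      # magnitudes of the negative part
    U0 = np.hstack([U0p, U0q])
    X0 = np.hstack([U0p * np.sqrt(l0p), U0q * np.sqrt(l0q)])
    err_latent = err_subspace = 0.0
    for _ in range(M):
        # Bernoulli edges on the strict upper triangle, then symmetrize.
        A = np.triu((rng.random((n, n)) < P0).astype(float), 1)
        A = A + A.T
        Up, lp = top_eig(A, p)
        Uq, lq = top_eig(-A, q)
        Uhat = np.hstack([Up, Uq])
        Xhat = np.hstack([Up * np.sqrt(np.maximum(lp, 0)),
                          Uq * np.sqrt(np.maximum(lq, 0))])
        W1 = procrustes_blockdiag(Xhat, X0, p, q)
        W2 = procrustes_blockdiag(Uhat, U0, p, q)
        err_latent += norm_2inf(Xhat @ W1 - X0)
        err_subspace += norm_2inf(Uhat @ W2 - U0)
    return err_latent / M, err_subspace / M
```

For instance, `simulate(P0, 2, 1, 200, np.random.default_rng(0))` would reproduce one cell of the \((n,\rho_{n},\kappa_{n})\) grid. At the simulated sizes, replacing the dense eigendecomposition in `top_eig` with a sparse eigensolver such as `scipy.sparse.linalg.eigsh` would be the natural optimization.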
We assume access to a function \(\text{TopEig}(\mathbf{A},k)\) for obtaining the top \(k\) eigenvalues and eigenvectors of a matrix \(\mathbf{A}\).

Figure 1 shows the results when we fix \(\kappa_{n}=6\) and vary \(\rho_{n}\). The left subplot shows the estimation errors for the latent positions as a function of the number of vertices \(n\). We see that the lines by and large overlap one another, indicating that the growth rate of \(\rho_{n}\) has little effect on the latent position estimation error rate, in agreement with what our lower bounds suggest. The right subplot shows the estimation error for subspace recovery, again as a function of the number of vertices \(n\). Examining the different lines in the plot, we see that as the growth rate of \(\rho_{n}\) gets smaller, the estimation error has a slower convergence rate, as suggested by our lower bound. Of course, our lower bounds make predictions about the precise slope these lines should have, a point we explore in more detail below (see Table 1 and discussion thereof).

Figure 1: Log-log plots of latent position estimation errors (left) and subspace estimation errors (right) as a function of the number of vertices \(n\) when \(\kappa_{n}=6\). The \(x\)-axis displays the number of vertices \(n\) in log scale and the \(y\)-axis displays the estimation error in log scale. Lines connect \((n,\rho_{n})\) pairs that have the same scaling of \(\rho_{n}\) with \(n\). Lines with darker colors correspond to sparser networks while lighter colors correspond to denser networks, with \(\rho_{n}\) varying as in Equation (18).

Figure 2 shows the results of the same experiment when we fix \(\rho_{n}=0.9\) and vary \(\kappa_{n}\), once again showing estimation error as a function of the number of vertices \(n\). The left subplot shows the estimation error for the latent positions while the right subplot shows the estimation error for the subspaces. In both subplots, the estimation error has a slower convergence rate as the growth rate of \(\kappa_{n}\) gets larger, again in agreement with our lower bounds in Theorem 1 and Corollary 2.

Figure 2: Log-log plots of latent position estimation error (left) and subspace estimation error (right) as a function of the number of vertices \(n\) when \(\rho_{n}=0.9\) and the condition number \(\kappa_{n}\) varies. The \(x\)-axis displays the number of vertices \(n\) on a log scale and the \(y\)-axis displays the estimation error on a log scale. Lines connect \((n,\kappa_{n})\) pairs that have the same scaling of \(\kappa_{n}\) with \(n\). Lines with darker colors correspond to networks generated with larger \(\kappa_{n}\) while lighter colors correspond to smaller \(\kappa_{n}\), with \(\kappa_{n}\) varying as in Equation (19).

The plots in Figures 1 and 2 suggest a roughly log-log linear relationship between the estimation error and the number of vertices \(n\). Given a pair \((\rho_{n},\kappa_{n})\), if the estimation error is of order \(n^{\alpha}\), then the log estimation error should be of order \(\alpha\log n\). Therefore, the slope of a line in the log-log plot provides an estimate of the exponent of the growth rate of the estimation error. To better compare the growth rates obtained from the simulations against our lower bounds in Theorem 1 and Corollary 2, we regress the log estimation errors against \(\log n\) for each \((\rho_{n},\kappa_{n})\)-pair in our simulation. That is, we fit a linear model to the points in each line in Figures 1 and 2.
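The slope fit just described amounts to ordinary least squares in log-log coordinates. A minimal sketch follows (ours; the error values below are a synthetic stand-in for the output of Algorithm 1, generated with a true exponent of \(-0.45\) plus noise), including the confidence half-width reported in the tables below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Grid of network sizes used in the experiments.
ns = np.arange(9_000, 20_001, 1_000).astype(float)

# Synthetic stand-in for the mean (2, infinity) errors from Algorithm 1
# for one (rho_n, kappa_n) pair: exponent -0.45 with multiplicative noise.
errors = 2.0 * ns ** -0.45 * np.exp(rng.normal(0.0, 0.01, ns.size))

# Fit log(error) = alpha log(n) + c; alpha estimates the rate exponent,
# and the coefficient covariance yields a 95% confidence interval for it.
(alpha, c), cov = np.polyfit(np.log(ns), np.log(errors), deg=1, cov=True)
half_width = 1.96 * np.sqrt(cov[0, 0])
print(f"rate: {alpha:.3f} (+/- {half_width:.3f})")
```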
The estimated slopes are listed in Tables 1 and 2 in the columns labeled "latent rate" and "subspace rate". We wish to compare these estimation rates against our theoretical lower bounds from Theorem 1 and Corollary 2. We note that these lower bounds include logarithmic factors, which have no bearing on the predicted slope of the lines in Figures 1 and 2 when \(n\) tends to infinity, but may lead to appreciably different lower bounds for finite \(n\). To account for this, we fit a second linear model, this time regressing the logarithm of our minimax lower bound against \(\log n\). The estimated slopes are listed in Tables 1 and 2 in the columns labeled "latent lower" and "subspace lower". As an example, if we exclude the \(\log n\) factor from our minimax lower bound in Theorem 1, then the "latent lower" column of Table 1 would all be equal to \(-0.5\), since our lower bound becomes \(\Omega(n^{-1/2})\). In comparison, fitting a linear model to the lower bounds with logarithmic terms included yields a fitted slope of \(-0.447\), in better agreement with the observed estimation rate.

Examining Tables 1 and 2, we see that the estimated error rates are close to the rates suggested by our lower bounds. We note, however, that for most \((n,\rho_{n},\kappa_{n})\) triples, the estimated error rates are slightly larger than predicted by the lower bounds. One reason for this might be that the ASE method is minimax optimal only up to logarithmic factors. Since the minimax lower bounds concern estimators that minimize the worst-case risk, the ASE may be near optimal in some of the worst cases, and the logarithmic factors in its rate can inflate the estimated rate in finite samples, making it slightly larger. It is also possible that randomness in our simulations still has some significant effect on our estimated slopes in the two tables, though we doubt this is the case. All told, we do not necessarily expect the estimated error rates to be exactly those appearing in our minimax lower bounds. Nonetheless, our simulations do seem to suggest that our lower bounds are near optimal. As mentioned in the beginning of this section, one of our goals is to see how the estimation errors grow when \(\kappa_{n}\) grows with \(n\), since in this setting there is a gap between our minimax results and the best known upper bound on subspace recovery. When we vary \(\kappa_{n}\), we see in Table 2 that the estimation error rates hew closely to our lower bounds, rather than approaching the upper bound in Equation (15), in agreement with our conjecture in Section 3.3.

## 5 Discussion

We have presented minimax lower bounds for estimation error of the latent positions and singular subspaces in the generalized random dot product graph and more general low-rank network models. We addressed the non-identifiability that arises due to the use of the indefinite inner product in the GRDPG model. To account for this non-identifiability, we defined a distance on the equivalence classes of latent positions. This distance includes as a special case a commonly used distance defined for the well-studied RDPG model. To derive our minimax lower bounds, we constructed packing sets of singular subspaces for probability matrices by stacking Hadamard matrices. We divided our analysis into two parts based on different regimes of the condition number \(\kappa=\lambda_{1}/\lambda_{d}\) of these probability matrices.
When \(\kappa=O(1)\), we proved minimax lower bounds that hold for sparse GRDPG models with a bounded latent position dimension \(\kappa>3d\). We note that this bound on \(d\) can be relaxed to \(\kappa>(1+\epsilon)d\) for any constant \(\epsilon>0\); we have used 3 here for the sake \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\rho_{n}\) & **latent** & **latent** & **subspace** & **subspace** \\ & **rate** & **lower** & **rate** & **lower** \\ \hline 0.2 & \(-0.465\)\((\pm 0.009)\) & \(-0.447\) & \(-0.952\)\((\pm 0.009)\) & \(-0.947\) \\ \(n^{-1/2}\) & \(-0.443\)\((\pm 0.009)\) & \(-0.447\) & \(-0.688\)\((\pm 0.009)\) & \(-0.697\) \\ \(n^{-2/3}\) & \(-0.435\)\((\pm 0.009)\) & \(-0.447\) & \(-0.596\)\((\pm 0.009)\) & \(-0.614\) \\ \(n^{-3/4}\) & \(-0.436\)\((\pm 0.009)\) & \(-0.447\) & \(-0.559\)\((\pm 0.009)\) & \(-0.572\) \\ \(n^{-4/5}\) & \(-0.433\)\((\pm 0.009)\) & \(-0.447\) & \(-0.529\)\((\pm 0.009)\) & \(-0.547\) \\ \(n^{-5/6}\) & \(-0.432\)\((\pm 0.009)\) & \(-0.447\) & \(-0.510\)\((\pm 0.009)\) & \(-0.531\) \\ \(n^{-1}\) & \(-0.418\)\((\pm 0.009)\) & \(-0.447\) & \(-0.420\)\((\pm 0.009)\) & \(-0.447\) \\ \hline \hline \end{tabular} \end{table} Table 1: Error rates for different choices of sparsity \(\rho_{n}\). The “latent rate” and “subspace rate” columns show simulated estimation error rates for latent positions and subspaces using the ASE method when \(\kappa_{n}\) is set to be 6 and we vary the growth rate of \(\rho_{n}\). The values in the brackets correspond to a 95% confidence interval. The first column \(\rho_{n}\) shows the growth rate of the sparsity factor \(\rho_{n}\) up to constants, whose exact choices of constants are given in Equation (18). The third column “latent lower” and the last columns “subspace lower” give the corresponding error rate lower bounds for latent position estimation and subspace estimation. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\kappa_{n}\) & **latent** & **latent** & **subspace** & **subspace** \\ & **rate** & **lower** & **rate** & **lower** \\ \hline \(n^{1/10}\) & \(-0.4072\)\((\pm 0.0096)\) & \(-0.3974\) & \(-0.8551\)\((\pm 0.0096)\) & \(-0.8474\) \\ \(n^{1/5}\) & \(-0.3471\)\((\pm 0.0098)\) & \(-0.3474\) & \(-0.7439\)\((\pm 0.0098)\) & \(-0.7475\) \\ \(n^{3/10}\) & \(-0.2974\)\((\pm 0.0097)\) & \(-0.2974\) & \(-0.6436\)\((\pm 0.0098)\) & \(-0.6474\) \\ \(n^{2/5}\) & \(-0.2416\)\((\pm 0.0100)\) & \(-0.2474\) & \(-0.5390\)\((\pm 0.0100)\) & \(-0.5478\) \\ \(n^{1/2}\) & \(-0.2072\)\((\pm 0.0095)\) & \(-0.1974\) & \(-0.4562\)\((\pm 0.0096)\) & \(-0.4486\) \\ \(n^{3/5}\) & \(-0.1524\)\((\pm 0.0098)\) & \(-0.1474\) & \(-0.3567\)\((\pm 0.0099)\) & \(-0.3528\) \\ \(n^{7/10}\) & \(-0.0957\)\((\pm 0.0101)\) & \(-0.0974\) & \(-0.2513\)\((\pm 0.0102)\) & \(-0.2545\) \\ \(n^{4/5}\) & \(-0.0471\)\((\pm 0.0097)\) & \(-0.0474\) & \(-0.1536\)\((\pm 0.0100)\) & \(-0.1563\) \\ \hline \hline \end{tabular} \end{table} Table 2: Error rates for different choices of condition number \(\kappa_{n}\). The “latent rate” and “subspace rate” columns show simulated estimation error rates for latent positions and subspaces using the ASE method when \(\rho_{n}\) is set to be 0.9 and we vary \(\kappa_{n}\). The values in the brackets correspond to a 95% confidence interval. The first column \(\kappa_{n}\) shows the growth rate of \(\kappa_{n}\) up to constants, whose exact choices of constants are given in Equation (19). 
The resulting lower bounds show that the adjacency spectral embedding (Sussman et al., 2012) for estimating the latent positions is minimax optimal up to logarithmic factors. We provided examples to show that the assumption \(\kappa>(1+\epsilon)d\) is not a stringent condition under both the GRDPG model and the RDPG model. In the regime where \(\kappa=\omega(1)\), we established minimax lower bounds that also hold for growing latent dimension \(d\), as long as \(\kappa>3d\). Here again, the constant 3 can be relaxed to \(1+\epsilon\) for any constant \(\epsilon>0\). Under this regime, we are not aware of any matching upper bound for latent position estimation or subspace estimation. The best upper bound currently known to us has a gap of \(O\left(\kappa\sqrt{\mu/\log n}\right)\) compared to our bound. To evaluate how close our lower bounds are to the actual performance of the adjacency spectral embedding, we conducted simulations under different regimes of \(\kappa\). The results are in agreement with our lower bounds.

In future work, we would like to relax the assumption on \(\kappa\). The main difficulty is that constructing packing sets for singular subspaces of probability matrices with small \(\kappa\) is nontrivial, as it requires a careful combinatorial analysis of the sign patterns of Hadamard matrices or other construction techniques. In addition, we would like to close the theoretical gap between the upper bounds and lower bounds when \(\kappa=\omega(1)\). As suggested by our simulation results, we conjecture that in the regime where the condition number is allowed to grow, the existing upper bounds are not sharp. A tighter upper bound requires a more careful study of how noise perturbs the singular subspaces and singular values of probability matrices. Lastly, low-rank matrices with a growing rank \(d\) are a less studied regime, yet they provide a more realistic model for many real-world networks (Sansford et al., 2023). Future work should investigate the estimation error when \(d\geq\kappa\).
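For readers wishing to reproduce the simulation protocol described above, the two computational ingredients are the adjacency spectral embedding and the log-log slope fit behind the "rate" columns of Tables 1 and 2. The following is a minimal Python sketch of one common variant of each, assuming numpy and a symmetric adjacency matrix; the function names are ours, not taken from the paper:

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding (Sussman et al., 2012):
    keep the d eigenpairs of A that are largest in absolute value
    (the GRDPG spectrum is indefinite, hence the absolute values),
    and scale each eigenvector by sqrt(|eigenvalue|)."""
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

def fitted_rate(ns, errors):
    """Estimate the error exponent by regressing log(error) on log(n),
    as done for the "latent rate" / "subspace rate" columns."""
    slope, _intercept = np.polyfit(np.log(ns), np.log(errors), 1)
    return slope
```

The same `fitted_rate` helper, applied to the theoretical lower bounds with their logarithmic terms included, yields the "latent lower" and "subspace lower" columns.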
2305.17959
Implications of $z>{\sim}12$ JWST galaxies for galaxy formation at high redshift
Using a semi-analytic galaxy-formation model, we study analogues of 8 recently discovered JWST galaxies at $z>{\sim}12$. We select analogues from a cosmological simulation with a $(311{\rm cMpc})^3$ volume and an effective particle number of $10^{12}$ enabling resolution of every atomic-cooling galaxy at $z{\le}20$. We vary model parameters to reproduce the observed UV luminosity function at $5{<}z{<}13$, aiming for a statistically representative high-redshift galaxy mock catalogue. Using the forward-modelled JWST photometry, we identify analogues from this catalogue and study their properties as well as possible evolutionary paths and local environments. We find faint JWST galaxies ($M_{\rm UV}>{\sim}-19.5$) to remain consistent with the standard galaxy-formation model and that our fiducial catalogue includes large samples of their analogues. The properties of these analogues broadly agree with conventional SED fitting results, except for having systematically lower redshifts due to the evolving UV luminosity function, and for having higher specific star formation rates as a result of burstier histories in our model. On the other hand, only a handful of bright galaxy analogues can be identified for the observed $z{\sim}12$ galaxies. Moreover, in order to reproduce the $z>{\sim}16$ JWST galaxy candidates, boosted star-forming efficiencies and reduced feedback regulation are necessary relative to models of lower-redshift populations. This suggests star formation in the first galaxies could differ significantly from their lower-redshift counterparts. We also find that these candidates are subject to low-redshift contamination, which is present in our fiducial results as either dusty or quiescent galaxies at $z{\sim}5$.
Yuxiang Qin, Sreedhar Balu, J. Stuart B. Wyithe
2023-05-29T08:46:43Z
http://arxiv.org/abs/2305.17959v2
# Implications of \(z\)\(\gtrsim\)12 JWST galaxies for galaxy formation at high redshift

###### Abstract

Using a semi-analytic galaxy-formation model, we study analogues of 8 recently discovered JWST galaxies at \(z\)\(\gtrsim\)12. We select analogues from a cosmological simulation with a (311cMpc)\({}^{3}\) volume and an effective particle number of 10\({}^{12}\) enabling resolution of every atomic-cooling galaxy at \(z\)\(\lesssim\)20. We vary model parameters to reproduce the observed UV luminosity function at 5\(<\)\(z\)\(<\)13, aiming for a statistically representative high-redshift galaxy mock catalogue. Using the forward-modelled JWST photometry, we identify analogues from this catalogue and study their properties as well as possible evolutionary paths and local environments. We find faint JWST galaxies (\(M_{\rm UV}\)\(\gtrsim-\)19.5) to remain consistent with the standard galaxy-formation model and that our fiducial catalogue includes large samples of their analogues. The properties of these analogues broadly agree with conventional SED fitting results, except for having systematically lower redshifts due to the evolving UV luminosity function, and for having higher specific star formation rates as a result of burstier histories in our model. On the other hand, only a handful of bright galaxy analogues can be identified for the observed \(z\)\(\sim\)12 galaxies. Moreover, in order to reproduce the \(z\)\(\gtrsim\)16 JWST galaxy candidates, boosted star-forming efficiencies and reduced feedback regulation are necessary relative to models of lower-redshift populations. This suggests star formation in the first galaxies could differ significantly from their lower-redshift counterparts. We also find that these candidates are subject to low-redshift contamination, which is present in our fiducial results as either dusty or quiescent galaxies at \(z\)\(\sim\)5.

keywords: cosmology: theory - dark ages, reionization, first stars - diffuse radiation - early Universe - galaxies: high-redshift - intergalactic medium

## 1 Introduction

If the Hubble Space Telescope (HST) gave us a glimpse of the \(z\)\(\gtrsim\)10 Universe through the keyhole, JWST has undoubtedly opened the door. Since its Early Release Observations (ERO; Pontoppidan et al., 2022) and Director's Discretionary Early Release Science (ERS) programs were made publicly available, a large number of \(z\)\(\gtrsim\)10 candidates have been reported by various independent groups using NIRCam images (Naidu et al., 2022, 2020; Castellano et al., 2022; Yan et al., 2023; Finkelstein et al., 2022; Atek et al., 2023; Donnan et al., 2023; Whitler et al., 2020; Harikane et al., 2023; Zavala et al., 2023; Rodighiero et al., 2023; Adams et al., 2023; Cullen et al., 2023; Furtak et al., 2023; Ono et al., 2022; Bradley et al., 2022; Robertson et al., 2023). These preliminary results include some discoveries which challenge the current concordance model (Boylan-Kolchin et al., 2009; Haslbauer et al., 2022; Parashari and Laha, 2023). Following up on the recent NIRSpec result (Curtis-Lake et al., 2023) pushing the earliest _spectroscopically confirmed_ high-redshift galaxy to \(z=13.2^{+0.04}_{-0.07}\), we aim to quantify whether standard galaxy formation models are consistent with these observables at the cosmic dawn.
According to the standard galaxy-formation model (see reviews by Somerville and Dave, 2015; Naab and Ostriker, 2017; Vogelsberger et al., 2020 and references therein), galaxies form from the gravitational collapse of over-dense regions of dark matter and gas, which subsequently grow through mergers and accretion. Gas cools and condenses to form stars, which illuminate the host galaxy and allow us to observe its complex structure. The standard model also accounts for the role of feedback processes, including energy, mass and metals injected by supernovae and central supermassive black holes. These have been proven essential to regulating star formation and shaping galaxy properties.

Using a semi-analytic galaxy-formation model (introduced in Section 2), we make realizations of the early Universe including NIRCam broad-band photometry for billions of galaxies at \(z\)\(\gtrsim\)5. By varying the efficiencies of star formation and feedback, this theoretical galaxy population is calibrated to represent summary statistics of the large sample observed before JWST, mostly at \(z\)\(\leq\)10. In this modelled galaxy catalogue, we then seek those having a spectral energy distribution (SED) close to the new JWST observations, for which we include both the NIRSpec targets and several NIRCam candidates1 at \(z\)\(\geq\)12. Our objective is to (1) probe the potential formation history and local environment of galaxies formed in the first \(\sim\)300 Myr of our Universe using modelled analogues, where identifiable; and otherwise (2) study the implications for the standard galaxy formation model where the modelled galaxies are inconsistent with observations. The cosmic chronology of our targets is illustrated in Fig. 1 and their inferred properties from various observational campaigns are summarized in Table 1. After discussing their implications for galaxy-formation models in Section 3, we present analogues of these JWST galaxies in Section 4. Section 5 concludes our results. Cosmological parameters from Planck 2018 (\(\Omega_{\rm m}\), \(\Omega_{\rm b}\), \(\Omega_{\Lambda}\), \(h\), \(\sigma_{8}\), \(n_{\rm s}\) = 0.312, 0.0490, 0.688, 0.675, 0.815, 0.968; Planck Collaboration et al., 2020) are adopted in this study.

## 2 Modelling the First Galaxies During the Epoch of Reionization

In this work, we use the _Meraxes_ semi-analytic model (SAM; Mutch et al., 2016), designed to study the Epoch of Reionization (EoR). The model is applied to dark matter halo merger trees introduced in Balu et al. (2022), which were constructed (with the VELOCIraptor halo-finder and TreeFrog algorithm by Elahi et al., 2019) from an \(N\)-body simulation of \(210h^{-1}\) cMpc (performed by Power et al. in prep using SWIFT by Schaller et al., 2023) that has been augmented (with DarkForest by Qiu et al., 2020) to resolve all atomic cooling halos at \(z\leq 20\) (i.e., \(M_{\rm vir}\geq 3\times 10^{7}\rm M_{\odot}\)). With halo properties inherited from the merger trees, our SAM assigns galaxies a baryonic component according to the cosmic mean and the strength of local photo-ionization. It then evaluates galaxy properties based on various astrophysical processes including gas accretion and cooling, stellar evolution and feedback, as well as metal enrichment, satellite infall, and merger events.
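As a quick check on the quoted timescales (e.g. the "first \(\sim\)300 Myr" window probed by the \(z\)\(\gtrsim\)12 targets), the adopted Planck 2018 parameters map redshift to cosmic age directly. A minimal sketch using astropy, which is our choice of tool for illustration and not necessarily what the authors used:

```python
from astropy.cosmology import FlatLambdaCDM

# Planck 2018 parameters as quoted in the text (h=0.675, Om=0.312, Ob=0.0490).
cosmo = FlatLambdaCDM(H0=67.5, Om0=0.312, Ob0=0.0490)

# Cosmic age at representative target redshifts:
# z~12.2 corresponds to an age of roughly 360 Myr, z~16 to ~250 Myr.
for z in (12.2, 13.2, 16.0):
    print(f"z = {z:5.1f}  ->  age = {cosmo.age(z).to('Myr'):.0f}")
```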
We also incorporate standard stellar population synthesis (using the instantaneous model with nebular continuum included from starburst99 by Leitherer et al., 1999) and estimate attenuation by the interstellar (ISM) and intergalactic medium (IGM; following Charlot & Fall, 2000 and Inoue et al., 2014) to calculate galaxy spectra as well as SEDs with the NIRCam wide-band filters. The model also includes feedback from AGN (Qin et al., 2017) but their UV emission is ignored in this work. We assume 15 per cent of UV ionizing photons and X-rays above 500 eV escape from the host galaxy, and study their impact on the large-scale neutral IGM (via an excursion-set algorithm based on 21cmFAST by Mesinger et al., 2011; see also Murray et al., 2020).

\begin{table}
\begin{tabular}{l l l l l l l l} \hline
Sec & ID & \(z\) & \(M_{\rm UV}\) & \(\log_{10}[M_{*}/\rm M_{\odot}]\) & SFR[\(\rm M_{\odot}\) yr\({}^{-1}\)]\({}^{a}\) & References\({}^{b}\) & w.r.t. our models \\ \hline
4.1 & JADES-GS-z13 & \(13.20^{+0.04}_{-0.07}\) & \(-18.5\pm 0.2\) & \(7.8^{+0.4}_{-0.5}\) & \(1.0^{+1.0}_{-0.5}\) & CL23, R22, D23b, H23a & \\
4.1 & JADES-GS-z12 & \(12.63^{+0.24}_{-0.08}\) & \(-18.8\pm 0.1\) & \(8.4^{+0.4}_{-0.7}\) & \(1.3^{+1.9}_{-0.9}\) & CL23, R22, H23a & \\
4.2 & S5-z12-1 & \(12.58^{+1.25}_{-0.46}\) & \(-20.2\pm 0.1\) & \(8.53^{+0.61}_{-0.69}\) & \(5.5^{+4.7}_{-4.4}\) & H23b & \\ \hline
4.3 & GLz12 & \(12.2^{+0.1}_{-0.2}\) & \(-21.0\pm 0.1\) & \(9.1^{+0.3}_{-0.4}\) & \(6^{+5}_{-2}\) & N22, B23, D23a, H23b & \\
4.4 & Maisie’s Galaxy & \(11.44^{+0.09}_{-0.08}\) & \(-20.32^{+0.08}_{-0.06}\) & \(8.50^{+0.29}_{-0.44}\) & \(2.1^{+4.8}_{-2.0}\) & FI22, AH23, D23a, H23ab, O22, Z22 & \\ \hline
4.5 & SMACS\_z16a & \(15.92^{+0.17}_{-0.15}\) & \(-20.59\pm 0.15\) & \(8.79^{+0.32}_{-0.33}\) & \(16.6^{+2.9}_{-16.4}\) & AT23, AD23, FU23, H23b & inconsistent with fiducial \\
4.5 & SMACS\_z16b & \(15.32^{+0.16}_{-0.13}\) & \(-20.96\pm 0.14\) & \(8.80^{+0.44}_{-0.25}\) & \(57.5^{+38.0}_{-29.4}\) & & \\ \hline
4.6 & S5-z16-1 & \(16.41^{+0.66}_{-0.55}\) & \(-21.6\pm 0.3\) & \(8.59^{+1.23}_{-0.31}\) & \(5.1^{+2.7}_{-1.8}\) & H23b, O22 & inconsistent with fiducial \\ \hline
\end{tabular}
\({}^{a}\) SFR is averaged over 50Myr except for JADES-GS-z12, JADES-GS-z13 and Maisie’s Galaxy in which 30Myr, 30Myr and 10Myr are considered, respectively.
\({}^{b}\) References are Adams et al. (2023b, AD23), Arrabal Haro et al. (2023, AH23), Atek et al. (2023, AT23), Bakx et al. (2023, B23), Curtis-Lake et al. (2023, CL23), Donnan et al. (2023a, D23a), Donnan et al. (2023b, D23b), Finkelstein et al. (2022, FI22), Furtak et al. (2023, FU23), Harikane et al. (2023a, b, H23ab), Naidu et al. (2022b, N22), Ono et al. (2022, O22), Popping (2023, P23), Robertson et al. (2023, R22), Santini et al. (2023, S22), Zavala et al. (2023, Z22).
\end{table}
Table 1: Properties of the \(z\gtrsim 12\) JWST targets explored in this work. The last column states whether the galaxy is consistent with our galaxy-formation models.

Figure 1: Eight \(z\geq 12\) JWST targets explored in this work with background illustrating the early stage of reionization (projected with a depth of 4 cMpc, a typical bubble radius for high-redshift bright galaxies) according to our fiducial model. References for these galaxy observations are listed in Table 1.
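The Charlot & Fall (2000) attenuation referenced above reduces to a two-component optical depth: photons from stars younger than \(\sim\)10 Myr see both their birth cloud and the diffuse ISM, while older stars see only the ISM. A minimal illustration follows; the normalizations `tau_bc` and `tau_ism` are placeholder values (the paper's calibrated parameters come from Qiu et al. 2019):

```python
import numpy as np

T_BC_YR = 1e7  # birth clouds dissipate after ~10 Myr (Charlot & Fall 2000)

def attenuated_flux(flux, wavelength_aa, stellar_age_yr,
                    tau_bc=1.0, tau_ism=0.3):
    """Two-component dust attenuation in the spirit of Charlot & Fall (2000),
    with a power-law optical-depth curve normalized at 5500 Angstrom."""
    tau = tau_ism * (wavelength_aa / 5500.0) ** -0.7
    if stellar_age_yr < T_BC_YR:
        # young stars are additionally embedded in their birth cloud
        tau += tau_bc * (wavelength_aa / 5500.0) ** -0.7
    return flux * np.exp(-tau)
```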
Fig. 1 illustrates the early stage of reionization according to our fiducial model, for which the late-time prediction and integrated EoR history are consistent with quasar/Lyman-alpha emitter observations (McGreer et al., 2015; Wang et al., 2020; Banados et al., 2018; Greig et al., 2017, 2019, 2022; Davies et al., 2018; Wold et al., 2022; Inoue et al., 2018; Morales et al., 2021; Ouchi et al., 2018; Whitler et al., 2020; Mason et al., 2018; Jung et al., 2020; Qin et al., 2021; Bolan et al., 2022; Campo et al. in prep) and _Planck_'s latest measurement of the cosmic microwave background (Planck Collaboration et al., 2020). This fiducial model also results in a high-redshift galaxy and quasar population that is statistically representative of the observed Universe, including the predicted stellar mass function and UV non-ionizing luminosity function calibrated against observations across cosmic time (\(z\)\(\sim\)5-10; Qin et al., 2017). The adopted fiducial parameters2 and how we tuned them are outlined in detail by Qiu et al. (2019), where Bayesian inference was performed against the observed UV luminosity function and colour-magnitude relation prior to JWST. Here, we illustrate the luminosity function between \(z\)=20 and 5 in Fig. 2 with some of the latest measurements, including those taking advantage of the early JWST data. While the model was only calibrated against observations at lower redshifts, its prediction at \(z\)\(>\)10 remains accurate, with only some discrepancies at the very bright end which we further examine in the next section.

Footnote 2: A further tuning was needed when the new merger trees were employed (see more in Balu et al., 2022).

## 3 Implications of Bright JWST Galaxies for a Standard Galaxy-Formation Model

GLz12 (Naidu et al., 2022) and Maisie's Galaxy (Finkelstein et al., 2022) are two bright galaxies at \(z\)\(\sim\)12. However, observing them in these small-volume ERS programs indicates a surprisingly large number density for high-redshift bright galaxies: GLz12 sets the number density for galaxies of \(M_{\rm UV}\)\(\sim\)\(-\)21 at \(\phi\)\(\sim\)\(10^{-5}\)Mpc\({}^{-3}\)mag\({}^{-1}\) together with another \(z\)\(\sim\)10 candidate reported in GLASS (Naidu et al., 2022), while Maisie's Galaxy suggests \(\phi\)\(\sim\)\(2\times\)\(10^{-5}\)Mpc\({}^{-3}\)mag\({}^{-1}\) at a luminosity \(\sim\)1 magnitude fainter. These values, although having large uncertainties, are consistent with each other and with more recent estimates from much larger samples (Donnan et al., 2023; Harikane et al., 2023; Perez-Gonzalez et al., 2023). On the other hand, theoretical models that tie galaxy formation closely to their host halo growth seem to struggle to simultaneously match both the bright and faint end of the luminosity function as well as its cosmic evolution (e.g. Dayal et al., 2014; Behroozi et al., 2019 used in Naidu et al., 2022; see also discussion in Mason et al., 2023; Yung et al., 2023). This is also the case for our model. We highlight the galaxy UV luminosity function at \(z\)=10-13 from multiple snapshots of _Meraxes_ output in the left panel of Fig. 3. To facilitate discussion of possible failures of semi-analytic galaxy formation models at high redshift, we also add results from:
1. _no dust_, a fiducial model where dust attenuation in stellar birth clouds and in the ISM is ignored, to explore the possibility that extrapolating our dust model to such an early Universe might be inaccurate;
2. _no dust or fb_, a _no dust_ model with supernova (and reionization) feedback further minimized, to study scenarios where feedback in the first galaxies is much weaker than previously expected;
3. _maxSF_, a _no fb_ model with star formation efficiency further maximized, to illustrate that some of these JWST candidates might be forming stars at much higher rates compared to galaxies at \(z\)\(\lesssim\)10.

Figure 2: Galaxy UV non-ionizing luminosity functions from \(z\)=20 to 5 predicted by our fiducial model, which was calibrated to reproduce the observational data (light grey) prior to JWST such as Finkelstein et al. (2015); Oesch et al. (2016); Livermore et al. (2017); Atek et al. (2018); Ishigaki et al. (2018); Bhatawdekar et al. (2019); Bouwens et al. (2021, 2023); Leethochawalit et al. (2022) and Kauffmann et al. (2022). Shaded regions and error bars indicate the 1\(\sigma\) Poisson error from the model and observations. We also highlight (dark grey) recent JWST results from Donnan et al. (2023); Finkelstein et al. (2022); Harikane et al. (2023); Naidu et al. (2022) and Pérez-González et al. (2023), which are still broadly consistent with our model prediction at least in the faint range. We also note that a \(z\)\(\sim\)16 candidate (CEERS-93316) selected by Donnan et al. (2023) and Harikane et al. (2023) has recently been refuted spectroscopically. However, Atek et al. (2023) reported two other \(z\)\(\sim\)16 candidates in the same field (SMACS J0723) and hence a revised number density will likely be at a similar level.

### Model modifications to reproduce more bright galaxies at high redshift

The dust model in our simulation is based on Charlot & Fall (2000), linking attenuation in stellar birth clouds and the ISM to star formation rates, metallicities and gas column densities. Its parameters were chosen after a rigorous Bayesian exploration (Qiu et al., 2019) with constraints from UV luminosity functions and colour-magnitude relations at relatively lower redshifts (\(z\lesssim 7\); Bouwens et al., 2014, 2015). At higher redshifts, the detection of GNz11 by Oesch et al. (2016) previously challenged the validity of these dust models at \(z\)\(>\)10 (Mutch et al., 2016). This is evident in Fig. 2, with the predicted number density being barely consistent with the value inferred from GNz11 (but see the latest spectroscopic result from Bunker et al. 2023, which places GNz11 at a fainter magnitude and lower redshift). However, the latest JWST results for fainter galaxies from Donnan et al. (2023a) and Harikane et al. (2023b) suggest that the model prediction is consistent with observations up to \(z\)\(\sim\)14. Should the dust model fail at \(z\)\(>\)10 for the brightest galaxies (see e.g. Ferrara et al. 2023; Markov et al. 2023), the detection of GLz12 presents a serious challenge to our model, as the predicted luminosity function at \(z\)\(\sim\)12 _when ignoring dust attenuation_ is still \(>\)2 times lower than GLz12 implies. To explain the properties of these luminous galaxies, Harikane et al. (2023b) considered modifying the initial mass function3 (IMF) and incorporating a top-heavy, PopIII-dominated IMF to increase the intrinsic UV luminosity (see also recent theoretical work by Haslbauer et al. 2022, Parashari & Laha 2023, Shen et al. 2023, Trinca et al. 2023, and Yung et al. 2023).
_Meraxes_ is being upgraded to enable accurate modelling of PopIII star formation to address this possibility (Ventura et al. in prep.). In this work, we limit our exploration to supernova feedback.

Footnote 3: Observational data shown in this work have been converted accordingly to match our Kroupa (2001) IMF using the astrodatapy package ([https://github.com/qyx268/astrodatapy](https://github.com/qyx268/astrodatapy)).

Supernova feedback is modelled as a thermal and kinetic source to inhibit gas collapse and star formation. Its efficiencies are tied to the maximum circular velocity (\(V_{\rm max}\)) of the host halo and increase in galaxies with shallower gravitational potentials (Murray et al., 2005; Guo et al., 2011). While the energy coupling efficiency has no redshift dependence for a given \(V_{\rm max}\), earlier results suggest larger mass-loading factors are necessary to heat more gas in the early Universe (Hopkins et al., 2014; Hirschmann et al., 2016; Cora et al., 2018). This again is based on matching relatively lower redshift observations (c.f. what we study in this work) and could fail at the early stages of the EoR. Assuming no supernova feedback at all (_no dust or fb_ in Fig. 3) leads to number densities that exceed even the upper limits from the cosmic-variance-free results of SuperBoRG at relatively lower redshifts (Leethochawalit et al., 2022, see also Bagley et al., 2022). We can further enhance the number density by increasing the star formation efficiencies4 (i.e. _maxSF_). These results suggest that adjusting feedback or star-forming efficiencies at \(z\)\(>\)10 is needed to better model these new observations from JWST.

Figure 3: _Left panel_: similar to Fig. 2 but focused on the luminosity function at redshifts around S5-z12-1 (Harikane et al., 2023b), GLz12 (Naidu et al., 2022) and Maisie’s Galaxy (Finkelstein et al., 2022); 17 consecutive snapshots from _Meraxes_ between \(z\)=10 and 13 are considered as independent realizations to reduce the sample variance for bright galaxies. In addition to the fiducial prediction, models that ignore dust attenuation (_no dust_), supernova feedback (_no dust or fb_) and maximize star formation efficiency (_maxSF_) are shown for comparison together with the early JWST estimates by Naidu et al. (2022); Donnan et al. (2023a); Harikane et al. (2023b); Pérez-González et al. (2023) and Finkelstein et al. (2022). _Right panel_: luminosity function at redshifts around \(z\)=16. Our simulation results are based on 10 consecutive snapshots between \(z\)=15 and 17 while observational estimates come from Harikane et al. (2023b). As noted in Fig. 2, CEERS-93316, which was considered as a \(z\)\(\sim\)16 candidate by Harikane et al. (2023b), is in fact a \(z\)\(\sim\)4.9 low star-forming dusty galaxy (Arrabal Haro et al., 2023). However, as the two \(z\)\(\sim\)16 candidates reported by Atek et al. (2023) remain promising and were found in the same field as what Harikane et al. (2023b) explored, a revised number density will likely be at a similar level.

### JWST \(z\)\(\sim\)16 candidates are inconsistent with the standard model

The necessity of alternative galaxy-formation models has become increasingly pressing as observations have moved towards higher redshifts. The right panel of Fig. 3 highlights our fiducial predictions for the galaxy population at \(z\)\(\sim\)16, which presents very few galaxies that are brighter than \(-\)20 mag at these redshifts.
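The argument developed next is an expected-count one: for a number density \(\phi\), a comoving volume \(V\) and a magnitude bin \(\Delta m\), the expected number of objects is \(N=\phi V\Delta m\). A minimal sketch with the numbers quoted in this subsection (the 1-mag bin is our illustrative assumption):

```python
# Expected count N = phi * V * dm for the z~16 extrapolation below.
phi = 1e-12               # fiducial density at M_1600 = -21.6 [Mpc^-3 mag^-1]
V = (210.0 / 0.675) ** 3  # simulation volume, (311 cMpc)^3, in Mpc^3
dm = 1.0                  # illustrative magnitude bin [mag]

print(f"expected analogues in the box: {phi * V * dm:.1e}")  # ~3e-5, i.e. none
```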
However, the faintest reported \(z\)\(\sim\)16 candidate has a UV magnitude of \(-20.4\pm 0.2\). This implies that there are no analogues for any of the \(z\)\(\sim\)16 candidates in our fiducial catalogue. One might argue that these early JWST surveys are limited to a small effective volume and can be potentially biased by sample variance (see estimates by Yung et al., 2023). However, our fiducial/standard galaxy-formation model extrapolates to a number density of \(\lesssim\)10\({}^{-12}\)Mpc\({}^{-3}\)mag\({}^{-1}\) at the magnitude of the brightest candidate (\(M_{1600}\)=\(-\)21.6). This implies that if these galaxies are spectroscopically confirmed to be at \(z\)\(\sim\)16, they will only exist in our fiducial outputs if the simulation volume covers the entire observable Universe. Given this, we argue that the existence of those \(z\)\(\sim\)16 candidates is inconsistent with our standard galaxy-formation model, which we emphasize again has a predicted galaxy population that is statistically representative of observations at relatively lower redshifts and/or luminosities. At such high redshifts, dust is expected to have been produced in only trace amounts and therefore to not affect the UV luminosity function. However, we can still only reproduce \(z\)\(\sim\)16 galaxies with intrinsic UV magnitudes around \(-\)21 when feedback is assumed ineffective. In fact, to be consistent with the estimated number density for UV bright galaxies at \(z\)\(\sim\)16 (Harikane et al., 2023), we need to effectively turn off supernova feedback and maximize star formation efficiency (see _maxSF_ in the right panel of Fig. 3). It is worth noting that when Harikane et al. (2023, see also Donnan et al. 2023 and Naidu et al. 2022) estimated the number density of \(z\)\(\sim\)16 bright galaxies, CEERS-93316 was considered as a candidate. However, a recent NIRSpec result has refuted its \(z\)\(\sim\)16 nature and determined that it is in fact a low star-forming dusty galaxy at \(z=4.9\) (Arrabal Haro et al., 2023). On the other hand, there are two additional \(z\)\(\sim\)16 candidates reported by Atek et al. (2023, and studied further in the next section) which remain promising. Since they were found in a sub-field that Harikane et al. (2023) explored, a revised number density that considers these two candidates is likely to be at a similar level as estimated by Harikane et al. (2023). The detection of these extremely bright candidates at \(z\)\(\sim\)16 further illustrates that while feedback and regulated star formation are essential to galaxy formation across most of cosmic time, this may not be the case at \(z\)\(\gtrsim\)16.

## 4 JWST galaxies, observed and modelled

In this section, the eight high-redshift JWST galaxies (see Table 1) are discussed in order of their redshifts and luminosities: we start from intrinsically faint objects at relatively low redshifts, and then move on to bright ones found at much earlier times, for which identifying analogues becomes increasingly challenging, sometimes even with additional tuning of our model.

### JADES-GS-z13 & JADES-GS-z12

Curtis-Lake et al. (2023) and Robertson et al. (2023) reported four spectroscopically confirmed galaxies at \(z\)\(>\)10. The two targets studied here, JADES-GS-z13 and JADES-GS-z12, of redshifts \(z\)=13.20\({}^{+0.04}_{-0.07}\) and \(z\)=12.63\({}^{+0.24}_{-0.08}\) respectively, come from an epoch even earlier than the previous record of high-redshift galaxies with spectroscopic confirmation, GNz11 (Oesch et al., 2016, see also Bunker et al. 2023 for an updated spectrum with NIRSpec).
These galaxies are results from the JWST Advanced Deep Extragalactic Survey (JADES), which combines NIRCam and NIRSpec targeting of the GOODS (i.e., Great Observatories Origins Deep Survey) South (GS) field, reaching a 5\(\sigma\) magnitude limit of \(\sim\)28.4 mag for spectroscopy. When fitting the SEDs, Curtis-Lake et al. (2023) utilised the full spectra while Robertson et al. (2023) focused on the photometric data, leading to similar (but not identical) physical properties: both JADES-GS-z13 and JADES-GS-z12 are quite small, with an intrinsic UV magnitude fainter than \(-\)19 mag and a stellar mass of only \(\sim\)10\({}^{8}\)M\({}_{\odot}\).

#### 4.1.1 Analogue selection

In this work, we focus on the SEDs and inferred galaxy properties reported by Robertson et al. (2023) when identifying analogues within our simulation. In particular, we look for modelled galaxies that have an SED consistent with the measurement by requiring the magnitude in bands F200W, F277W, F356W and F444W (i.e., the ones above the Lyman-\(\alpha\) break) to be within 2\(\sigma\) of the observational uncertainties5 while luminosities also have to be under the 2\(\sigma\) threshold for bands of non-detection (i.e. F090W, F115W and F150W). We further apply a prior on the redshift range (i.e., \(z\in[10.8,16.9]\) over 31 snapshots) to avoid low-redshift contamination and speed up the selection process.

Footnote 5: Throughout this paper, a 10% error floor is additionally considered in the observed photometry for all targets to account for potentially underestimated systematics (see e.g. Naidu et al., 2022). This includes the two \(z\)\(\sim\)16 candidates found in lensing fields (see Section 4.5) before errors of the lensing model are further added. In addition, the negative 2\(\sigma\) threshold of non-detection is reset to zero during analogue selection.

It is worth highlighting an implicit prior built into our analogue selection as a result of the evolving luminosity function: within a cosmological simulation box, there are more galaxies with fainter magnitudes or lower redshifts (see Fig. 2). Therefore, when marginalising the analogue sample distribution onto luminosity-vs-redshift, galaxies with low luminosities and redshifts are dominant. This often leads to lower values of these two properties (and other properties sharing a degeneracy) compared to observational results that do not impose such a prior. Our selection leads to 1296 and 397 analogues in our fiducial output for JADES-GS-z13 and JADES-GS-z12, respectively, whose properties are summarized in Figs. 4 and 5.

#### 4.1.2 Galaxy properties

We find both observed SEDs to be consistent with modelled star-forming galaxies at \(z\)\(\sim\)12.6 or \(\sim\)13.2, as is evident from the two modes in the redshift distribution. With minor differences, the high-redshift mode also indicates higher luminosities, lower metallicities and less dust extinction. However, the overall distribution does not alter significantly after applying the redshift prior inferred from the spectral break at Lyman-\(\alpha\) (Curtis-Lake et al., 2023, i.e. comparing the blue and red distributions), with the predicted galaxy properties comparable to estimates from Robertson et al. (2023): both JADES-GS-z13 and JADES-GS-z12 analogues have an intrinsic UV magnitude around \(M_{1600}=-18.5\), a stellar mass less than 10\({}^{8}\)M\({}_{\odot}\) and a size of only \(\sim\)60 physical parsecs, with very low metallicities and suffering little dust attenuation.
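The band-matching criterion of Section 4.1.1 (within 2\(\sigma\) in the detected bands, below the 2\(\sigma\) limit in the non-detection bands) reduces to a pair of boolean masks over the mock catalogue. A minimal sketch, with array shapes and names of our own choosing:

```python
import numpy as np

def select_analogues(model_det, obs_det, obs_err, model_nondet, lim_2sigma):
    """model_det / model_nondet: (n_gal, n_band) modelled fluxes in the
    detected (e.g. F200W...F444W) and non-detection (e.g. F090W...F150W)
    bands; obs_det, obs_err, lim_2sigma: per-band observed values."""
    # within 2 sigma of the observation in every detected band
    within = np.all(np.abs(model_det - obs_det) <= 2.0 * obs_err, axis=1)
    # below the 2 sigma upper limit in every non-detection band
    faint = np.all(model_nondet <= lim_2sigma, axis=1)
    return within & faint
```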
The major difference between our result and, for instance, Robertson et al. (2023) comes from the star formation history (SFH). The SFH considered by Robertson et al. (2023, see also other observational studies mentioned in this work) during the SED fitting is already flexible: it includes 6 snapshots between \(z\)\(\sim\)12 and 20 in order to capture the burstiness of high-redshift star formation. On the other hand, there are 35 snapshots in our simulation between these redshifts and, with such a high cadence, we are able to accurately simulate the SFH in the presence of time-resolved feedback (massive stars can take much longer to become supernovae than the dynamical timescale of the galactic disc; Mutch et al. 2016).

Figure 4: JADES-GS-z13 analogues. _Lower-left corner plot:_ marginalized galaxy property distributions of the model analogues with the redshift prior set to be between \(z\)=10.8 and 16.9 (red) as well as based on the 2\(\sigma\) uncertainties of the spectroscopic result (blue, Curtis-Lake et al. 2023), respectively. Note that in the 1D distributions, the vertical axes present number densities in linear scale with the total integral normalized to 1 for both. From left to right or top to bottom, these are redshift, intrinsic UV magnitude, star formation rate averaged over 30Myr and normalised by stellar mass (sSFR), stellar mass, metallicity, dust extinction, galaxy size, halo virial mass, the fraction of gas that is accessible to star formation, and the radius of the surrounding ionized bubble. Indicated on the top of each 1D distribution are the median value and 1\(\sigma\) uncertainties ([16,84] percentiles) based on the larger redshift prior. For comparison, estimates from Robertson et al. (2023) are shown as the grey shaded regions and inside the parentheses. _Top right panel:_ modelled SED and spectra for an example analogue with thick solid and thin dashed lines indicating whether dust attenuation is considered. The nominated SEDs and 2\(\sigma\) uncertainties from Robertson et al. (2023, using Prospector) are shown with black circles while upper limits are presented as 5\(\sigma\). _Central-right corner plot:_ star formation rate in the past 100Myr for 10 randomly selected analogues, to illustrate the bursty star formation nature of these low-mass galaxies in our simulation as a reason for the inferred high sSFR.

In addition, our model does not impose the bursty-continuity prior as in Robertson et al. (2023), but rather requires galaxies to accumulate enough cold and dense gas before being able to form stars. For instance, Figs. 4 and 5 show that JADES-GS-z13 and JADES-GS-z12 are likely hosted by halos of \(\sim\)10\({}^{10}\)M\({}_{\odot}\) and, with around 40 per cent of their gas accessible to star formation (\(f_{\rm gas}\))6, these galaxies might have 5 times more (in mass) star-forming gas than their stellar components. Also because our modelled galaxies have to reach this critical mass before forming stars, the analogues possess a much burstier SFH than in Robertson et al. (2023), leading to higher recent star formation rates (SFRs averaged over the past 30Myr, and hence specific SFRs) and/or lower integrated stellar masses.

Footnote 6: _Meraxes_ reserves a so-called ejected gas reservoir for each modelled galaxy, as a response to supernova feedback. Gas in this reservoir is considered to have a cooling timescale much longer than the Hubble time and therefore does not contribute to star formation.
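Since several of the quoted quantities are window-averaged SFRs (30 Myr here, 50 or 10 Myr for other targets per Table 1), it is worth being explicit about the convention. A minimal sketch, with array names of our own choosing:

```python
import numpy as np

def averaged_sfr(ages_myr, mass_formed_msun, window_myr=30.0):
    """SFR averaged over the past `window_myr`, as used for the sSFR
    panels of Figs. 4 and 5: stellar mass formed within the window,
    divided by the window length (result in Msun/yr)."""
    recent = ages_myr < window_myr
    return mass_formed_msun[recent].sum() / (window_myr * 1e6)
```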
Finally, our model also predicts that (assuming all galaxies have a UV ionizing escape fraction of 0.15) JADES-GS-z13 and JADES-GS-z12 are likely located in ionized bubbles of \(\lesssim\)2 cMpc in radius. However, in rare cases where the analogue coexists with a more massive neighbour, it may have a much larger ionized bubble of up to \(\sim\)4 cMpc in radius (see also Qin et al. 2022; Whitler et al. 2023).

Figure 5: Similar to Fig. 4 but for JADES-GS-z12 and its analogues.

### GLz12

#### 4.3.1 Analogue selection

As in Section 4.1, we require the modelled SED in bands F200W, F277W, F356W and F444W (all above the Lyman-\(\alpha\) break) to be within 2\(\sigma\) of the observational uncertainties while the flux also has to be below the 2\(\sigma\) threshold for bands of non-detection (i.e. F090W, F115W and F150W). We identify only one analogue, which is found at the nominated photometric redshift of GLz12 using eazy (Brammer et al., 2008), i.e. \(z\)=12.2, with the same inferred intrinsic UV magnitude of \(-21.0\) mag. The top panel of Fig. 7 shows the spectrum and SED of this analogue. As with the two spectroscopically confirmed galaxies, it also shows the spectral features of typical star-forming galaxies at high redshift, with a UV slope of \(\lesssim\)\(-2\) (Bouwens et al., 2014) and negligible dust attenuation.

#### 4.3.2 Local environment

The lower panels of Fig. 7 present the local environment of our single analogue of GLz12 - from left to right, we illustrate the local ionization at \(z=12.2\), 15 and 10 as well as the distribution of galaxies within the Hii bubble, as in the normalized cumulative number of UV ionizing photons and the number density of galaxies as a function of luminosity.

Figure 6: Similar to Figs. 4 and 5 but for S5-z12-1 and its analogues. Note the upper limit in band F150W is 2\(\sigma\) and the blue colour now indicates a small number of analogues having M\({}_{1600}\) consistent with the 2\(\sigma\) uncertainties of the inferred luminosities from Harikane et al. (2023b). Since the randomly selected analogue spectrum represents a low-luminosity galaxy, the top-left panel also highlights a second analogue with an intrinsic magnitude comparable to the observation. To ease comparison we also replace the sSFR panels with SFR, which is now averaged over 50Myr following Harikane et al. (2023b).
With a total stellar mass of \(10^{8.3}\)M\({}_{\odot}\), this analogue has ionized its surrounding IGM to a radius of 2.7 cMpc, larger than the majority of analogues presented in Sections 4.1 and 4.2. However, despite being the most massive galaxy in its ionized region and at least 4 magnitudes brighter than all the other neighbouring galaxies in the Hii bubble, this analogue only contributes \(\sim\)30% of the local UV ionizing photons. Such a high and early local ionization is only possible when a significant portion (i.e. 15 per cent in the fiducial model) of ionizing photons manage to escape from the numerous low-mass galaxies that are as faint as \(M_{1600}\)\(\sim\)\(-\)15 to \(-\)12 mag. The analogue is also in an over-dense region of the universe. When counting the number of galaxies within the Hii region, the density is an order of magnitude higher than the average field inferred from the UV luminosity function (c.f. Fig. 2). For instance, follow-up of GLz12 with deeper JWST observations (e.g. WDEEP led by Finkelstein et al. 2021 aims to reach a magnitude limit of F356W=30.7) is expected to uncover another two neighbouring galaxies within 2.2'\(\times\)2.2' (\(\Delta z\)\(\sim\)\(\pm\)0.5 is considered). This is shown in Fig. 7 (see also recent follow-ups on bright Lyman-\(\alpha\) emitting galaxies at relatively lower redshifts, e.g. Leonova et al. 2022; Tacchella et al. 2023; Witten et al. 2023; Whitler et al. 2023). However, among these neighbours, none sits inside the ionized bubble of the GLz12 analogue, since reionization is still at its infant stage at these high redshifts and the ionized regions correspond to sizes of only \(\Delta z\sim\pm\)0.01. When probing the progenitors and descendants of GLz12, our model suggests that it is among the first galaxies that start reionization, and it remains highly efficient in forming stars and contributing UV ionizing photons across cosmic time.
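The cumulative photon budget plotted in the lower-right panels of Fig. 7 (and quoted above as the \(\sim\)30% contribution of the analogue) is a sorted cumulative sum over the galaxies inside the ionized region. A minimal sketch, assuming numpy arrays of UV magnitudes and ionizing photon counts:

```python
import numpy as np

def cumulative_photon_budget(m_uv, n_ion):
    """Normalized cumulative count of UV ionizing photons as a function of
    UV magnitude for galaxies inside an ionized region, as in the
    lower-right panels of Fig. 7 (brightest galaxies first)."""
    order = np.argsort(m_uv)  # most negative magnitude = brightest
    return m_uv[order], np.cumsum(n_ion[order]) / np.sum(n_ion)
```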
Figure 7: GLz12 analogue. _Top panel:_ modelled SED and spectra are shown in colour while the nominated observed SED and the 2\(\sigma\) uncertainties or upper limits from Naidu et al. (2022b) are presented in black. _Lower-left panel:_ local environment at the observed redshift with background indicating Hii ionization (projection depth and colour follow Fig. 1). The apparent and intrinsic UV magnitudes are listed in the lower-left corner for the analogue, followed by its stellar mass. Presented in the lower-right corners are the radii of the Hii bubble (with the mean indicated by thick red circles and thin ones for the 1\(\sigma\) uncertainties), the number of neighbouring bright galaxies (F356W\(<\)30.7 mag) within the mean Hii bubble (indicated by filled stars) and that (indicated by the transparent stars if not inside the bubble) within half of the _JWST_ NIRCam FoV (2.2’\(\times\)2.2’, 2D projection with \(\Delta z\simeq\pm 0.5\), see the green squares). Numbers in the brackets indicate counting down to F356W=32.7 mag (with dark and light dots indicating faint galaxies within the bubble or within the FoV). These two magnitude thresholds are motivated by upcoming _JWST_ deep and lensing fields, respectively. _Lower-middle panels:_ local environment at earlier and later times to visualize the evolution around the analogue. _Lower-right panels:_ the normalized cumulative number of UV ionizing photons and number density as a function of UV magnitude for galaxies within the ionized region. The UV magnitude of the GLz12 analogue as well as its progenitor and descendant is indicated by the circles.

At earlier redshifts such as \(z\sim 15\), the analogue sits in a non-spherical ionized region of around 1.5 to 1.8 cMpc in radius, which is already more than half the size it will grow to by \(z\)=12.2. Its intrinsic UV magnitude is fainter than \(-\)16 mag. Therefore, unlike GLz12 at \(z\)=12.2, the progenitor is instead \(\sim\)3 magnitudes fainter than the brightest galaxy in the region and makes a very minor contribution to reionization at these early times. On the other hand, its dominance grows at lower redshifts. For instance, with a stellar mass exceeding 10\({}^{9}\)M\({}_{\odot}\) at \(z\)=10, this descendant alone contributes 40 per cent of the local ionizing budget, expanding its Hii territory to nearly 5 cMpc in radius. From the lower-right panels of Fig. 7, we also find the ionized region is increasingly biased towards higher redshifts, with fainter galaxies becoming relatively more significant to reionization.

#### 4.3.3 Formation history & subsequent evolution

We next look at the possible evolutionary path of GLz12 using Fig. 8, in which the history of our analogue in terms of its UV magnitude, SFR, stellar mass, halo mass, gas content, size, metallicity and optical depth to UV non-ionizing photons is presented. The inferred properties using Prospector (Leja et al., 2017) reported in Naidu et al. (2022) as well as estimates from Harikane et al. (2023) and Ono et al. (2022) are also shown in the figure for comparison. We see that the GLz12 analogue grows its UV luminosity very rapidly at \(z\)\(\geq\)12, showing a 100\(\times\) increase from \(z\)\(\sim\)15 to 12, a span of less than 100 Myr. When averaged over 50 Myr, this analogue possesses a steady growth of SFR in the past and reaches \(\sim\)10M\({}_{\odot}\)yr\({}^{-1}\) at \(z\)\(\sim\)12.2, consistent with the observational results. On the other hand, the snapshot-averaged SFR presents a bursty SFH, similar to analogues of less massive galaxies discussed in the previous two sections. However, due to its relatively large dark matter component, which has created a deeper gravitational potential, a greater amount of gas has been accreted by the GLz12 analogue to fuel star formation for a longer period of cosmic time. Therefore, its SFH is less bursty compared to the low-mass counterparts such as JADES-GS-z12 and JADES-GS-z13 (c.f. Figs. 4 and 5). Our model predicts the GLz12 analogue has a stellar mass of only 2\(\times\)10\({}^{8}\)M\({}_{\odot}\), which is 5 times lower than Naidu et al. (2022) but more consistent with Harikane et al. (2023). These two studies adopt the same continuity prior for the SFH, highlighting the potentially underestimated systematics in these ERS results, which can result in such a large difference in the inferred galaxy properties (see more discussion in Naidu et al. 2022a,b).
The halo mass evolution suggests that a number of merger events might have occurred in the formation history of galaxies like GLz12. For this particular analogue, its progenitor merges into a more massive halo at \(z\)\(\sim\)13, which introduces a significant increase in its star-forming gas component and triggers a re-ignition of star formation at a rate of more than 1M\({}_{\odot}\)yr\({}^{-1}\). Later on, around \(z\)=12.2, the analogue encounters another major merger, further boosting star formation activities and reaching an SFR of \(\gtrsim\)10M\({}_{\odot}\)yr\({}^{-1}\). These also lead to significant fluctuations in the predicted galaxy size (around \(z\)\(\sim\)12.2) between the measured values of \(\sim\)0.5 pkpc (Naidu et al., 2022) and 0.06\(\pm\)0.01 pkpc (Ono et al., 2022), which chose different point spread functions and images during the analysis. Note that in our model, star-forming discs are assumed to be rotationally supported, conserving specific angular momenta when the gas cools from an initially virialized state, and following an exponential surface density profile. These assumptions, made by many theoretical models (e.g. Henriques et al., 2015; Stevens et al., 2016) to facilitate the evaluation of galaxy sizes from their host halo properties, have been shown successful when predicting low-redshift observations - the scale radius is \(R_{\rm s}=R_{\rm vir}(\lambda/\sqrt{2})\), where \(R_{\rm vir}\) and \(\lambda\) are the virial radius and halo spin parameter, while the effective radius (or half-light radius) is \(R_{\rm e}\simeq 1.68R_{\rm s}\). However, the increasing merger rate towards higher redshifts implies that galaxies may not have enough time to recover from previous merger events despite having shorter dynamical timescales (Poole et al., 2016). Although GLz12 shows no sign of multiple clumps down to 0.05 kpc, there are an increasing number of observations suggesting that galaxies at high redshift do not have a simple disc-like morphology and present signs of interaction (e.g. Treu et al., 2023; Witten et al., 2023; Whitler et al., 2023). Therefore, we caution against over-interpreting our prediction of galaxy sizes.

Figure 8: GLz12 analogue history as in (1) the intrinsic UV magnitude (thick and thin lines consider or exclude dust attenuation); (2) SFR averaged over each snapshot (blue) or \(\sim\)50Myr following Naidu et al. (2022, red); (3) stellar mass; (4) halo mass; (5) fraction of gas accessible to star formation; (6) galaxy half-light radius; (7) ISM metallicity; and (8) optical depths for photons with a wavelength of 1600Å (thick and thin lines for \(\tau_{1600}\) in the birth cloud of the emitting star or in the ISM). GLz12 properties estimated by Naidu et al. (2022), Harikane et al. (2023) and Ono et al. (2022) are also indicated in corresponding panels. For comparison, thin grey lines illustrate property histories for the next five brightest galaxies (only dust-attenuated M\({}_{1600}\) and \(\tau_{1600}\) in the birth cloud are shown to not crowd the plot).

The fate of the GLz12 analogue is to steadily increase its UV luminosity and stellar content, with a stellar-to-halo-mass ratio that increases from 0.5 per cent at \(z\)=12.2 to 5 per cent at \(z\)\(\sim\)6. It is evident from Fig. 8 that despite the bursty nature of star formation in GLz12's analogue, its extremely bright UV radiation is not transient. For comparison, we show the property histories for the next 5 most luminous galaxies identified at \(z\)=12.2.
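The disc-size relations quoted above translate directly into code. A minimal sketch; the example halo numbers are illustrative, not taken from the simulation:

```python
import numpy as np

def half_light_radius(r_vir_pkpc, spin):
    """Rotationally supported exponential disc: scale radius
    R_s = (lambda / sqrt(2)) * R_vir and effective (half-light)
    radius R_e ~ 1.68 * R_s, as assumed in the text."""
    return 1.68 * (spin / np.sqrt(2.0)) * r_vir_pkpc

# e.g. R_vir ~ 5 pkpc and a typical spin lambda ~ 0.03 give R_e ~ 0.18 pkpc,
# bracketed by the measured GLz12 sizes of 0.06 and ~0.5 pkpc.
print(half_light_radius(5.0, 0.03))
```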
More than half of these galaxies become much fainter than the GLz12 analogue at later times, with some even fading for \(\gtrsim\)100Myr after \(z\)=12. Finally, in agreement with the expectation (e.g. Bouwens et al., 2010, 2014) for galaxies with a UV continuum slope of \(\sim\)\(-2.3\pm 0.1\) (Naidu et al., 2022b, see also Cullen et al., 2023 for a larger JWST sample), our model also suggests that the GLz12 analogue experiences negligible dust attenuation at \(z\)\(\gtrsim\)12.2. This is mainly driven by its low metallicity10, which is only \(\sim\)10 per cent of the solar value and aligns with the recent ALMA follow-up finding no strong [OIII] emission from GLz12 (Bakx et al., 2023, see also Popping, 2023). The theoretical interpretation for such massive galaxies experiencing little dust attenuation is that either they have ejected their dust contents or current star-forming clouds are segregated from dust that was generated in earlier episodes of star formation (see e.g. Ziparo et al., 2023). As the gas fraction of the GLz12 analogue remains high at \(z\)\(\sim\)12, it is likely that UV emitting regions and dust are indeed of different origins in high-redshift galaxies (see e.g. Behrens et al., 2018; Sommovigo et al., 2020). In fact, the star-forming disc only reaches the level of solar metallicity at \(z\)\(\sim\)8, where the optical depth to UV non-ionizing photons inside the birth cloud of stars exceeds \(\tau_{1600}=1\). At this point, the cloud can absorb a significant fraction of UV photons (e.g. compare the thick and thin coloured lines in panel 1 of Fig. 8) before it dissipates after 10Myr (Charlot & Fall, 2000). UV photons also experience attenuation by the diffuse ISM dust, although our model suggests this only becomes significant at very late times (\(z\)\(\sim\)6).

Footnote 10: Mass and radius of galaxies also play a role in determining dust attenuation according to our model (Qiu et al., 2019). However, the impact of differences between the predicted and measured galaxy sizes can be minimized by a normalization factor, which was calibrated against the observed UV luminosity function and colour across a large magnitude range.

### Maisie's Galaxy

Maisie's Galaxy at \(z=11.44^{+0.09}_{-0.08}\) was reported by multiple teams including Finkelstein et al. (2022), Donnan et al. (2023a), Harikane et al. (2023b) and Arrabal Haro et al. (2023), showing consistent inferred properties11. Using its photometric data (e.g. Finkelstein et al., 2022; \(z=11.8^{+0.2}_{-0.3}\)), it is estimated that Maisie's Galaxy possesses a low SFR of \(\sim\)2M\({}_{\odot}\)yr\({}^{-1}\), a stellar mass around \(10^{8.5}\)M\({}_{\odot}\), an intrinsic UV magnitude of \(-20.3\), a steep UV slope of \(-2.5\), a low dust extinction of \(A_{\rm v}\)\(\sim\)0.1, and an effective radius around 0.34 pkpc. Given its lower luminosity, identifying analogues for this galaxy should be less challenging than for GLz12.

Footnote 11: Maisie’s Galaxy was initially considered to be at \(z\)=14.3\({}^{+0.1}_{-1.1}\) (Finkelstein et al., 2022), and therefore became the second object that we looked into, as it had an even higher redshift than GLz12, which was previously thought to be at \(z\)\(\sim\)13. Using the SED from the first version of its pre-print, we identified 2 analogues showing similar evolution and environment as the analogue presented here, but with a burstier SFH.
#### 4.4.1 Analogue selection

Within our 31 simulation snapshots between \(z\)=10.8 and 16.9, 7 galaxies are identified having fluxes consistent within at least 2\(\sigma\) of the observed photometry (Finkelstein et al., 2022) in the filters F150W, F200W, F277W, F356W, and F444W, as well as being lower than the 2\(\sigma\) upper limit in the non-detection band F115W. Although we do not forward model luminosities from the HST or JWST F410M filters, the spectra of our analogues are also found to be consistent with these measurements/upper limits. Our analogues share similar physical properties with the observations, including an intrinsic UV magnitude around \(-20.3\), a stellar mass of \(\sim\)\(10^{8.5}\)M\({}_{\odot}\), and a redshift between 11 and 12. To be concise, only two example analogues, dubbed analogue-a and b, are presented here, which are found at \(z\)=11.3 and 12.0. From the top panel of Fig. 9, we see the SED fitting is well-performed overall, with a slightly flatter predicted spectrum compared to the observation. We notice the same challenge in fitting the UV slope was also faced by Finkelstein et al. (2022, fig. 4) and Harikane et al. (2023b, fig. 8), while Donnan et al. (2023a, fig. A6) found a better fit at \(z=12.3\) instead. However, there are still differences between these observational results: for instance, F200W is measured to be 27.3 mag by Finkelstein et al. (2022) and 27.8 mag by the other two groups, while the colour F200W-F356W is \(-0.4\) in Donnan et al. (2023a) unlike the \(-0.6\) by the rest of the teams. These might explain why Maisie's Galaxy is reported to be brighter by Finkelstein et al. (2022) and less blue by Donnan et al. (2023a). Nevertheless, such a steep UV slope is consistent with most high-redshift star-forming galaxies having low metallicities (Bouwens et al., 2014), aligning with the properties of our analogues, which suffer little dust attenuation. We note that Zavala et al. (2023) reported no detection of Maisie's Galaxy in a number of far-infrared and millimetre observations such as SCUBA-2, Spitzer and Herschel, and hence ruled out the scenario of strong dust emission. Moreover, while preparing this manuscript, Arrabal Haro et al. (2023) presented the NIRSpec result for Maisie's Galaxy, which verifies its high-redshift nature: both the spectroscopic redshift (\(z=11.44^{+0.09}_{-0.08}\)) and inferred galaxy properties remain consistent with the photometric results.

#### 4.4.2 Local environment and evolutionary paths

Interestingly, despite being fainter than the GLz12 analogue presented in Section 4.3, both analogues of Maisie's Galaxy are located in slightly larger ionized bubbles of a radius around 3.6 cMpc. This implies a dense local environment for these two analogues, which is evident in the central and lower-left panels of Fig. 9. We see that analogues-a and b have crowded local environments at \(z\)\(\sim\)11.8, with 3 or 4 galaxies brighter than 30.7 mag in F356W within the corresponding Hii bubbles (c.f. zero in the lower-left panel of Fig. 7). A deep photometric follow-up would identify \(\sim\)5 or 50 depending on whether the field is lensed. Looking further at their evolutionary histories, we see analogue-a is in fact a satellite galaxy at \(z\)=15, with the central galaxy having already ionized the surrounding IGM. The two are likely undergoing a merger at \(z\)\(\sim\)13, with our analogue having its mass stripped first before the final merger. This is indicated by a 50 Myr trough in the halo mass history.
The merger triggers an influx of star-forming gas, leading to a subsequent star formation rate (averaged over 10 Myr following Finkelstein et al., 2022) of \(\sim\)20 M\({}_{\odot}\)yr\({}^{-1}\) and a much higher luminosity than observed. By \(z\)=12 (see the lower right panels of Fig. 9), at which we identify analogue-a, its SFR drops to \(<\)1 M\({}_{\odot}\)yr\({}^{-1}\) and its luminosity becomes more consistent with Maisie's Galaxy.

Figure 9: Maisie's Galaxy analogues. See captions of Figs. 7 & 8 for more plot details. However, two example analogues are presented here, with additional photometry from HST and JWST F410M (still consistent with the analogue despite not being included during selection) indicated in grey in the top panel. Also note that SFR at \(z\)\(\gtrsim\)11.5 is averaged over several snapshots with total intervals of \(\sim\)10 Myr following Finkelstein et al. (2022), while at later times it is averaged over one snapshot with an increasing time step from 10 to 20 Myr. Observational results are taken from Finkelstein et al. (2022) and Harikane et al. (2023).

On the other hand, analogue-b has a much younger stellar age, as most of its stars are formed in a burst at \(z\)=11.3 with an SFR of nearly 30 M\({}_{\odot}\)yr\({}^{-1}\). This burst consumes all the star-forming gas that analogue-b has gradually accumulated over the past 100 Myr, during which its SFR is kept low at \(<\)1 M\({}_{\odot}\)yr\({}^{-1}\). Moreover, the predicted galaxy size (\(\sim\)0.25 pkpc) and metallicity (on the order of 0.001 to 0.01) are similar and consistent with what is suggested by the observation. As for the potential subsequent evolution of Maisie's Galaxy, properties of the two analogues diverge after \(z\)\(\sim\)12. As analogue-b has consumed all of its star-forming gas, its subsequent star formation becomes quenched. On the other hand, analogue-a remains highly efficient in forming stars, and its stellar mass at \(z\)\(\sim\)11 reaches nearly an order of magnitude more than that of analogue-b. At \(z\)\(\sim\)11, a major merger event happens to analogue-b, bringing it onto a similar evolutionary path as analogue-a from then on, and both of them keep forming stars at a high level of 10-100 M\({}_{\odot}\)yr\({}^{-1}\) until \(z\)\(\sim\)6.

### SMACS_z16a & SMACS_z16b

In the field of SMACS, Atek et al. (2023) identified two galaxies at \(z\)\(\sim\)16 - SMACS_z16a and SMACS_z16b (see also Adams et al. 2023b; Harikane et al. 2023b) - which have intrinsic UV magnitudes of around \(-\)20.5. To account for gravitational lensing, we also demagnify their observed SEDs by factors of 2.18 and 1.13, respectively, and update the uncertainties to further incorporate errors of the lensing models. The exact values are taken as the average magnification among different strong lensing models based on Furtak et al. (2023).

#### 4.5.1 Analogue selection

As we have argued in Section 3 using the estimated number density of \(z\)\(\sim\)16 candidates, these candidates are too bright to remain consistent with our fiducial model. Therefore, we instead seek star-forming analogues in the _maxSF_ output in this section. However, as these candidates are still subject to spectroscopic confirmation and may be low-redshift quiescent or dusty galaxies (see e.g. Naidu et al. 2022a; Harikane et al. 2023b; Arrabal Haro et al. 2023), we also present low-redshift analogues _in the fiducial output_ for comparison.
We apply the same criteria when performing high- or low-redshift analogue searches - the modelled SED has to be within the 2\(\sigma\) observational uncertainties for filters above the break (F200W, F277W, F356W, F444W) and lower than the 2\(\sigma\) flux threshold for non-detection (F090W & F150W). However, the redshift range for the star-forming analogue search is extended to 57 snapshots between \(z\)=10 and 23, while for the low-redshift search, it is limited to \(z\)\(\geq\)5, as our _N_-body simulation has not reached later times yet. From _maxSF_, we identify 2886(505) high-redshift galaxies possessing similar SEDs to the observed one for SMACS_z16a(b). On the other hand, when looking at the fiducial model, only 34(3) low-redshift analogues are found at \(z\)\(\geq\)5. As these two \(z\)\(\sim\)16 candidates possess qualitatively similar SEDs, we only discuss analogues of SMACS_z16a, and present SED examples of its analogues as well as their distribution as functions of various properties in Fig. 10.

#### 4.5.2 High-redshift solutions

We identify two modes for the high-redshift star-forming analogues found in _maxSF_ (see also Figs. 4 and 5). Therefore, we further split the sample (approximately) based on redshift and the dust extinction parameter (\(A_{V}\))12. This results in a \(z\)\(\sim\)11.5 population with \(A_{V}\)\(\sim\)1, which we consider to be high-redshift, dusty galaxies, while the second group is centred at \(z\)=15 with very little dust attenuation. For SMACS_z16a, these two solutions correspond to the different redshifts inferred by Atek et al. (2023, \(z=15.92^{+0.17}_{-0.15}\)) and by Harikane et al. (2023b, \(z=10.61^{+0.51}_{-8.55}\)), with the latter rejecting SMACS_z16a as being at \(z\)\(\sim\)16 because the colour F200W-F277W is not red enough. This is further illustrated by the example high-redshift, dusty analogue shown in the top panel of Fig. 10, whose F200W flux sits at the 2\(\sigma\) upper limit of the observation while F277W is closer to the lower threshold.

Footnote 12: Moderate degeneracy between redshift, dust extinction, UV magnitude and metallicity is present in the high-dimensional distribution.

As expected, these high-redshift analogues are forming stars at very high rates (\(\sim\)10 M\({}_{\odot}\)yr\({}^{-1}\)), and have managed to build a relatively large stellar content with a stellar-to-halo mass ratio of nearly 10 per cent (i.e., \(M_{*}\)\(\sim\)10\({}^{9}\) M\({}_{\odot}\) and \(M_{\rm vir}\)\(\sim\)10\({}^{10}\) M\({}_{\odot}\)). As they represent areas where the first episode of star formation occurs in our universe _which assumes negligible stellar feedback_, they have been able to convert all accreted gas into stars (\(f_{\rm gas}\)\(\sim\)1), fuelling long-lasting star-forming events. However, this also leads to an over-prediction of the ISM metallicity compared to results from Furtak et al. (2023). This is because in _maxSF_, while supernovae do not provide thermal or kinetic feedback to remove metals (and gas) from the host galaxy, they continue polluting their environment. As our dust attenuation model assumes the optical depth increases towards later times and scales (almost) linearly with the metallicity (see more in Qiu et al. 2019), this results in the two SED solutions we see here - while the higher-redshift analogues prefer lower metallicities and are dust-free, the lower-redshift ones most likely exhibit the opposite trend.
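The competition just described, between metallicity build-up and the time dependence of the dust model, is what separates the dusty and dust-free solutions. A purely schematic sketch of such a birth-cloud optical depth (the linear-in-metallicity, linear-in-time form and all normalisations here are illustrative assumptions; see Qiu et al. 2019 for the actual model):

```python
def birth_cloud_optical_depth(z_gas, t_cosmic_myr, tau_norm=1.0,
                              z_sun=0.02, t_norm_myr=650.0):
    """Schematic UV optical depth that grows with cosmic time and
    (almost) linearly with gas metallicity, as described in the text."""
    return tau_norm * (z_gas / z_sun) * (t_cosmic_myr / t_norm_myr)

# A low-metallicity analogue at z~15 (t ~ 270 Myr) stays optically thin ...
print(birth_cloud_optical_depth(0.002, 270.0))  # tau ~ 0.04 -> dust-free SED
# ... while a metal-enriched analogue at z~11.5 (t ~ 400 Myr) turns dusty.
print(birth_cloud_optical_depth(0.02, 400.0))   # tau ~ 0.6 -> noticeable attenuation
```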
#### 4.5.3 Low-redshift solutions

The low-redshift analogues found in our fiducial model can also be divided into two groups - based on the extinction parameter, galaxies with \(A_{V}>\) 0.2 are considered to be dusty star-forming galaxies and are separated from their quiescent counterparts. Example SEDs and property distributions for these two scenarios are shown in Fig. 10 for comparison. Although it is a small sample13, we see that the low-redshift analogues are on average 3 magnitudes fainter than the high-redshift cases, with a much lower stellar-to-halo mass ratio of around 2 per cent - they are galaxies with similar stellar mass (\(\sim\)10\({}^{9}\) M\({}_{\odot}\)) but inside much larger halos (\(\sim\)5\(\times\)10\({}^{10}\) M\({}_{\odot}\)). Compared to the dusty high-redshift scenario, the dusty analogues at low redshift are also forming stars at \(\sim\)10 M\({}_{\odot}\)yr\({}^{-1}\), but out of a relatively smaller gas content (mostly \(f_{\rm gas}\)\(\lesssim\)20 per cent) as a result of the supernova feedback implemented in standard galaxy-formation modelling (i.e., our fiducial). In addition, with similar disc sizes and metallicities, the low-redshift dusty galaxies suffer significant attenuation. On the other hand, the quiescent analogues have zero star formation rate, larger discs and low attenuation. Observationally, even though quiescent galaxies with such low masses are not common at \(z\)\(\gtrsim\)5, there have been tentative reports with JWST of quiescent galaxies at early times (Looser et al. 2023) and lower masses (Strait et al. 2023).

### S5-z16-1

S5-z16-1 (Harikane et al. 2023b) is a \(z\)=16.41\({}^{+0.66}_{-0.55}\) candidate identified from Stephan's Quintet. With an intrinsic UV magnitude of \(-\)21.6, S5-z16-1 is even brighter than the two lensed candidates discussed in the previous section. Therefore, identifying its analogues becomes more challenging, and the _maxSF_ model presents only five galaxies that share its SED (selection criteria identical to Sec. 4.5). As mentioned in Sec. 3.2, CEERS-93316 was also a bright \(z\)\(\sim\)16 candidate (Donnan et al. 2023a; Harikane et al. 2023b; Naidu et al. 2022a) but is now considered to be a \(z\)=4.9 dusty galaxy (Arrabal Haro et al. 2023). The NIRSpec observation reveals that strong nebular lines such as [OIII] and H\(\alpha\) have boosted its NIRCam photometry (particularly for band F277W), leading to an apparent break as well as the biased interpretation of its redshift. In fact, because these two galaxies have very similar SEDs, their inferred galaxy properties (when assuming they are at \(z\)\(\sim\)16) are also very close (see data points at \(z\)\(\sim\)16 in Fig. 11). One of the five analogues we find for S5-z16-1 happens to also be the only analogue we can find for CEERS-93316 in _maxSF_.

Figure 10: Analogues of the \(z\)\(\sim\)16 candidate from Atek et al. (2023, ID: SMACS_z16a). _Top panel:_ we show examples for four different scenarios, including quiescent and dusty galaxies at \(z\)\(\sim\)6 in the fiducial as well as high-redshift, star-forming counterparts with or without noticeable dust attenuation at \(z\)\(\sim\)10–17 in _maxSF_, where feedback is turned off and star formation efficiency is maximized. We additionally show the brightest galaxy in the fiducial model for comparison. See the caption of Figs. 7 and 9 for more figure details, but here we offset the central wavelength of the modelled SEDs for better visualization.
_Bottom panels:_ distribution of galaxy properties including redshift; intrinsic UV magnitude; star formation rate averaged over 50 Myr; stellar mass; halo mass; the fraction of gas available for forming stars; size; metallicity; and dust extinction parameters for 471 high-redshift (dusty) analogues, 2415 high-redshift (dust-free) analogues, 30 low-redshift (dusty) analogues and 4 low-redshift (quiescent) analogues. Median values and [16,84] percentiles are presented in the top corner of each subpanel, with estimated intrinsic UV magnitude, stellar mass and size from Atek et al. (2023) and SFR from Furtak et al. (2023) indicated by shaded regions.

Therefore, the revised interpretation of CEERS-93316 provides a warning that SED fitting, including that done in this work, requires better handling of the emission-line profiles.

#### 4.6.1 High-redshift solutions

Figure 11 shows the SED and possible evolutionary path of S5-z16-1 with a comparison against the observations. The property histories for the other 4 analogues of S5-z16-1 are also included, and we see that the prediction at \(z\)\(\sim\)16 from our _maxSF_ model agrees very well with the observational results. The S5-z16-1 analogues are extremely bright, with stellar masses around 10\({}^{9}\) M\({}_{\odot}\) and high stellar-to-halo mass ratios of \(\sim\)10 per cent (Harikane et al., 2023b). When averaged over 50 Myr, the star formation rates are on the order of 10 M\({}_{\odot}\)yr\({}^{-1}\). It is also remarkable that all 5 analogues show consistent formation histories before \(z\)\(\sim\)13 - they build their stellar contents very rapidly in this 200 Myr interval, starting with \(M_{*}\)\(\sim\)10\({}^{7}\) M\({}_{\odot}\) and \(M_{\rm vir}\)\(\sim\)3\(\times\)10\({}^{8}\) M\({}_{\odot}\) at \(z\)\(\sim\)25, and reaching \(M_{*}\)\(\sim\)3\(\times\)10\({}^{9}\) M\({}_{\odot}\) and \(M_{\rm vir}\)\(\sim\)2\(\times\)10\({}^{10}\) M\({}_{\odot}\) at \(z\)\(\sim\)13. This is because _maxSF_ includes no feedback, which allows these analogues to convert all their gas into stars and significantly reduces the stochasticity in the formation history. We see that from \(z\)\(\sim\)13, the evolution of these analogues starts to diverge as a result of different merger histories.

#### 4.6.2 Low-redshift solutions

Using the broad-band photometry measured by Harikane et al. (2023b), no low-redshift analogues can be identified in our fiducial output (which stops at \(z\)=5).

Figure 11: _Top panel_: (a) spectrum (thick black curve) and SED (black squares) of a star-forming galaxy at \(z\)\(\sim\)16 in _maxSF_, which can be considered as an analogue for both S5-z16-1 (red circles; Harikane et al. 2023b) and CEERS-93316 (if it were at \(z\)\(\sim\)16; blue circles; Donnan et al. 2023a; Harikane et al. 2023b; Naidu et al. 2022a). (b) spectrum and SED (thick, purple) of a dusty galaxy (in the fiducial catalogue) that is an analogue to S5-z16-1 _after line correction_ (purple circles). Note that the central wavelengths of these SEDs are offset for better visualization. _Bottom panels:_ possible evolution in (1) intrinsic UV magnitude; (2) star formation rate averaged over \(\sim\)50 Myr; (3) stellar mass; (4) halo mass; (5) fraction of star-forming gas; and (6) galaxy disc size for the analogues shown above (thick lines), with observational results from Harikane et al. (2023b) and Donnan et al. (2023a) also indicated for comparison (red and blue circles; also note that the two results overlap with each other and sizes are measured by Ono et al. 2022).
Stellar mass and star formation rate of CEERS-93316 at \(z\)=4.9 obtained by Arrabal Haro et al. (2023) are shown (purple circles) as a comparison for our dusty analogue identified at \(z\)\(\sim\)5.3. We additionally show the brightest galaxy (thin grey lines) in the fiducial model as well as the remaining 4 star-forming analogues (thin coloured lines) of S5-z16-1 found in _maxSF_, which stops at \(z\)\(\sim\)10.

However, based on the line correction Arrabal Haro et al. (2023) inferred for CEERS-93316, we alter the SED of S5-z16-1 by +0.75, +0.50 and +0.25 mag in F277W, F356W and F444W to account for potential contamination14. Using this updated SED (see purple circles in Fig. 11), we successfully identify 1225 dusty analogues between \(z\)=5 and 6 in the fiducial catalogue.

Footnote 14: We choose not to perform line correction for SMACS_z16a(b) in Section 4.5 due to the highly uncertain emission strength for high-redshift dusty galaxies. However, assuming the flux of the emission line is proportional to the continuum, we expect similar levels of alteration in magnitude for SMACS_z16a(b). Qualitatively, this improves the fitting of the two example low-redshift analogues shown in the top panel of Fig. 10.

Figure 11 presents an additional example analogue for low-redshift dusty galaxies (at \(z\)\(\sim\)5.3). This analogue has a dramatically different evolutionary path compared to all high-redshift analogues we have studied in this work. The example analogue is first identified15 at \(z\)\(\sim\)15 with a halo mass of \(\sim\)6\(\times\)10\({}^{9}\) M\({}_{\odot}\) and a less than 0.1 per cent stellar content. Although bursty, it keeps forming stars at \(\lesssim\)1 M\({}_{\odot}\)yr\({}^{-1}\) (when averaged over 50 Myr) until \(z\)\(\sim\)8, when its stellar mass reaches 2\(\times\)10\({}^{8}\) M\({}_{\odot}\) and the stellar-to-halo-mass ratio increases to 0.3 per cent. Afterwards, this galaxy quickly gains mass and, at \(z\)\(\sim\)5.3, its halo and stellar masses become nearly 10\({}^{11}\) M\({}_{\odot}\) and 10\({}^{9}\) M\({}_{\odot}\), respectively. While the metallicity of this galaxy is twice the solar level at \(z\)\(\lesssim\)6, its shrinking disc size caused by a reducing halo spin makes the disc more opaque, and therefore a significant fraction of the UV radiation is absorbed, making this galaxy a non-detection in JWST's F090W and F150W bands (see the top panel of Fig. 11). It is worth noting that this dusty galaxy has an extinction parameter of \(A_{V}\)\(\sim\)3.5, which is consistent with typical values at 3\(\lesssim\)\(z\)\(\lesssim\)6 (e.g. Barrufet et al., 2023; Rodighiero et al., 2023).

Footnote 15: In this work, augmentation is not applied to halos identified at such low redshifts when reionization has already finished.

## 5 Conclusion

JWST has delivered unprecedented data of our early Universe, revealing galaxy formation in the first 300 Myr of cosmic time. In this work, we utilize a large-volume, high-resolution cosmological simulation coupled with a semi-analytic galaxy-formation model to study the possible evolutionary paths as well as the local environment of eight JWST galaxy candidates at \(z\)\(\geq\)12. These include three faint (\(M_{\rm UV}\)\(\gtrsim\)\(-\)19.5) galaxies at \(z\)\(\sim\)12 - JADES-GS-z13, JADES-GS-z12, S5-z12-1; two bright galaxies at \(z\)\(\sim\)12 - GLz12, Maisie's Galaxy; and three bright galaxies at \(z\)\(\sim\)16 - SMACS_z16a, SMACS_z16b, S5-z16-1 (e.g., Atek et al., 2023; Curtis-Lake et al., 2023; Finkelstein et al., 2022; Harikane et al., 2023b; Naidu et al., 2022b).
We find faint JWST galaxies to be consistent with the standard galaxy-formation model, while the bright ones are challenging or inconsistent depending on their redshift. Using our fiducial model, which is statistically representative of the observed Universe across most cosmic time (\(5<z<13\)) and the observed magnitude range, we show that:

1. large samples of analogues have broad-band photometry that is consistent with the faint JWST galaxies. The distribution of our modelled galaxy properties is also broadly aligned with the SED fitting results to observations. But due to the burstier nature of star formation predicted by our model, the inferred stellar masses are lower than those of observations that commonly assume a more continuous star formation history;
2. as a result of low number density, bright \(z\)\(\sim\)12 galaxies only have a handful of analogues in the fiducial model, whose properties are similar to values obtained through inverse modelling of observed SEDs. Although a small sample, these analogues in general suggest that bright JWST targets have a rapid build-up of their stellar content and are located in dense regions, with their local environment having diverse possibilities; and
3. our fiducial simulation does not contain bright analogues for \(z\)\(\sim\)16 candidates found in the small volume of these ERS programs. However, the observed SED of these \(z\)\(\sim\)16 candidates can still be reproduced by low-redshift galaxies in the fiducial model, which either are experiencing strong dust attenuation of their UV radiation or have quenched star formation to exhibit a Balmer break.

To reproduce bright \(z\)\(\sim\)16 JWST candidates, we find that highly efficient star formation with no feedback regulation is required. The formation history of these extremely bright analogues in this model demonstrates that they have an incredibly high stellar-to-halo mass ratio that is close to the cosmic mean baryon fraction. This suggests that while feedback and regulated star formation are essential to galaxy formation during most of the cosmic time, the confirmation of \(z\)\(\sim\)16 galaxies would indicate that this was not the case for the first massive galaxies formed during the cosmic dawn.

## Acknowledgements

We thank S. Finkelstein, S. Tacchella and D. Breitman for their comments. This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project #CE170100013. Part of this work was performed on the OzSTAR and Gadi national computational facilities. YQ acknowledges HPC resources from ASTAC Large Programs and the RCS NCI Access scheme as well as its Cloud at The University of Melbourne.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2305.07586
Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping
Planetary science research involves analysing vast amounts of remote sensing data, which are often costly and time-consuming to annotate and process. One of the essential tasks in this field is geological mapping, which requires identifying and outlining regions of interest in planetary images, including geological features and landforms. However, manually labelling these images is a complex and challenging task that requires significant domain expertise and effort. To expedite this endeavour, we propose the use of knowledge distillation using the recently introduced cutting-edge Segment Anything (SAM) model. We demonstrate the effectiveness of this prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights. Our work reveals that with a small set of annotations obtained with the right prompts from the model and subsequently training a specialised domain decoder, we can achieve satisfactory semantic segmentation on this task. Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation and improve the efficiency of image segmentation tasks. This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.
Sahib Julka, Michael Granitzer
2023-05-12T16:30:58Z
http://arxiv.org/abs/2305.07586v2
# Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping

###### Abstract

Planetary science research involves analysing vast amounts of remote sensing data, which are often costly and time-consuming to annotate and process. One of the essential tasks in this field is geological mapping, which requires identifying and outlining regions of interest in planetary images, including geological features and landforms. However, manually labelling these images is a complex and challenging task that requires significant domain expertise and effort. To expedite this endeavour, we propose the use of knowledge distillation using the recently introduced cutting-edge Segment Anything (SAM) model. We demonstrate the effectiveness of this prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights. Our work reveals that with a small set of annotations obtained with the right prompts from the model and subsequently training a specialised domain decoder, we can achieve satisfactory semantic segmentation on this task. Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation and improve the efficiency of image segmentation tasks. This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.

Keywords: Segment Anything Model (SAM), Semantic Segmentation, Knowledge Distillation, Geological Mapping

## 1 Introduction

We have recently witnessed a paradigm shift in AI with the advent of _foundation models_ utilising astronomical amounts of data. The fields of natural language processing and multi-modal learning have been revolutionised by the emergence of ChatGPT and the like [19, 22]. The very first foundation models, such as CLIP [23], ALIGN [13], and DALLE [24], focused on pre-training approaches but are not suited to image segmentation. Recently, however, Segment Anything (SAM) [18] was released: a large Vision Transformer (ViT)-based [6] model trained on a large visual corpus (SA-1B) containing more than 11 million images and one billion masks. SAM is designed to generate a valid segmentation result for any prompt. However, SAM is trained on general-world scenarios with common structures. Recent studies have revealed that SAM can fail on typical medical image segmentation tasks [5; 9] and other challenging scenarios [4; 11; 12; 25]. Since SAM's training set mainly contains natural image datasets, it may not be directly transferable to niche tasks on data such as magnetic resonance imaging (MRI) or HiRISE imagery 1, amongst other specialised data formats. Nonetheless, SAM is still a powerful tool: it has a strong image encoder, and its prompt functionality can significantly boost the efficiency of manual annotation. In the planetary science domain, where vast amounts of remote sensing data are gathered, annotation is an intensive task. An approach that reduces the effort on the domain experts' end is highly desired. In these scenarios, active learning [15; 17] and knowledge distillation [7] via training a specialised model with relatively few samples can be highly valuable.

Footnote 1: "High-Resolution Imaging Science Experiment" is a camera aboard the Mars Reconnaissance Orbiter (MRO) spacecraft, which is designed to capture high-resolution images of the Martian surface and provide detailed information about the planet's geology and atmosphere.
### Segment Anything Model

SAM utilises a vision transformer-based [10] approach to extract image features and prompt encoders to incorporate user interactions for segmentation tasks. The extracted image features and prompt embeddings are then processed by a mask decoder to generate segmentation results and confidence scores. There are four 2 types of prompts supported by SAM, namely _point_, _text_, _box_, and _mask_ prompts.

Footnote 2: The text prompt is currently not released.

For the point prompt, SAM encodes each point with a Fourier positional encoding and two learnable tokens that specify foreground and background. The bounding box prompt is encoded by using the point encoding of its top-left and bottom-right corners. SAM employs the pre-trained text encoder from CLIP for encoding the free-form text prompt. The mask prompt has the same spatial resolution as the input image and is encoded by convolutional feature maps. Finally, SAM's mask decoder consists of two transformer layers with a dynamic mask prediction head and an Intersection-over-Union (IoU) score regression head. The mask prediction head generates three downscaled masks, corresponding to the whole object, a part, and a subpart of the object. SAM supports three main segmentation modes: fully automatic, bounding box, and point mode.

### Landform detection on Mars using HiRISE images

Mapping planetary landforms plays a crucial role in various tasks such as surveying, environmental monitoring, resource management, and planning. On Earth, for example, the presence of water triggers several geological and geomorphological processes [1]. Similarly, on Mars, researchers have found correlations between the presence of certain landforms such as pits, sinkholes, and landslides and the possible presence of water [2; 3]. However, identifying, classifying, and drawing regions of interest manually is a complex and time-consuming process [20], one that would greatly benefit from automation. In this regard, the identification and segmentation of various Martian landforms have gained increasing attention in recent years [14, 16, 20, 21]. Figure 2 shows an overview of some of the pits and skylights that can be identified on the Martian terrain. In this study, we focus only on these landforms, utilising a dataset prepared exclusively for it (cf. Section 2.1). Automatic detection and segmentation of these landforms have the potential to accelerate the identification of potential landing sites for future missions, study the geological history of Mars, and contribute to a better understanding of the planet's potential habitability.

Figure 1: Overview of our deployed approach by extending SAM. It consists of SAM's image encoder, which learns an embedding of the image, and a specialised decoding unit to learn the domain-specific semantics. SAM's prompt encoder and mask decoder, represented within the orange bounding box, are utilised only for annotating incrementally the \(\boldsymbol{\Delta}(\mathbb{N})\) training samples. While training the domain decoder, the image encoder is frozen so as not to update its weights.

Figure 2: Principal types of pits and skylights found on Mars terrain: (a) Skylight with possible cave entrance (Type 1a). (b) Pit with possible relation to cave entrance (Type 1b). (c) "Bowl" pit with a possible connection to lava tubes (Type 2a). (d) Pit with uncertain connection to lava tubes or dikes (Type 2b). (e) Coalescent pits (Type 3). (f) Pit with a possible connection to lava tubes (Type 4) [20].
Therefore, this endeavour is of significant importance in planetary science.

## 2 Method

### Dataset

The data used in this work are images acquired by image sensors operating in the visible (VIS) and near-infrared (NIR) spectral ranges on board probes orbiting Mars. This data set is composed of images from the HiRISE instrument, downloaded in both Reduced Data Record (RDR) and Experiment Data Record (EDR) formats from public space archives such as the PDS Geosciences Node Orbital Data Explorer (ODE) 1. From these, Nodjoumi _et al_. [20] released a processed dataset with 486 samples. This dataset is split into 405 images for training, 25 for validation and the rest for testing. In their work, they train a Mask-RCNN using all images annotated manually. In order to explore the applicability of knowledge distillation, we incrementally select training samples for annotation and subsequently train the domain decoder with these. This, in effect, is analogous to learning correct prompts for the task with the fewest annotated samples.

Footnote 1: [https://ode.rsl.wustl.edu/](https://ode.rsl.wustl.edu/)

### Prompt-mode selection for annotation

We conducted an evaluation of the SAM model using the three different prompt settings: (a) In the automatic prompt setting, SAM generates single-point input prompts in a grid pattern across the input image and selects high-quality masks using non-maximal suppression. All parameters were set to their default values. In the case of multiple masks being obtained, we selected the mask with the highest returned IoU score. (b) In the point prompt setting, we used the centre of the ground-truth regions of interest as the point prompts for SAM. (c) In the box prompt setting, we computed the bounding box for SAM around the ground-truth mask.

Figure 3: An overview of the generation of segmentation masks with the three different prompt settings in SAM. The box prompt delineates the land mass from the adjacent shadow in comparison to the point prompt.

Figure 3 illustrates the mask generation on an exemplary sample for the three modes. Clearly, the automatic prompt simply segments all regions in a semantics-agnostic way. Point and box prompts generate high-quality segmentation masks, with an average image-level IoU above 90 %. Although point and box prompts performed comparably on simpler cases, we empirically found the box prompt to be most reliable in occluded and shadowy scenes and thus chose it for the final annotations. In practice, the expert would need only a few seconds to draw boxes around all relevant regions of interest on a sample.
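A minimal sketch of this box-prompt annotation step, using the publicly released `segment_anything` package (the checkpoint path and box coordinates are placeholders, and the dummy image stands in for a HiRISE tile):

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM checkpoint (ViT-H weights; the file name is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((900, 900, 3), dtype=np.uint8)  # stand-in for a HiRISE tile (RGB)
predictor.set_image(image)  # runs the heavy image encoder once per image

# One expert-drawn box per region of interest, as (x0, y0, x1, y1) pixels.
box = np.array([150, 200, 420, 510])
masks, scores, _ = predictor.predict(box=box, multimask_output=True)
annotation = masks[np.argmax(scores)]  # keep the highest-scoring mask as the label
```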
### Domain Decoder

#### 2.3.1 Why not directly fine-tune the SAM decoder?

A recent work [8] from the medical domain corroborates our observation that the model underperforms significantly in comparison to the state of the art when used without training, with just the use of prompts. So fine-tuning the model would be necessary. However, we also observe that the decoder in SAM has learnt patterns other than those specific to the task and is prone to detecting other regions not relevant to our task. In our case, we empirically observed the fine-tuned model 4 to give spurious results. Figure 4 illustrates an exemplary fail-case of fine-tuning the SAM decoder with the labels. The SAM decoder, even when fine-tuned, performs well only when prompts are available, and is thus hard to use without a human in the loop or additional information from the ground truth. All of the recently developed works [8, 9, 11] use prompts derived from the ground-truth labels for the problem-specific task. This is not a realistic scenario in our application. We, therefore, choose to train a separate decoder to learn the problem-specific semantics.

Footnote 4: The SAM decoder is fine-tuned via training with a set of 25 annotated images for 100 epochs.

Figure 4: Landforms of interest are harder to detect without prompts while using the SAM decoder. While the untuned model will segment all surrounding regions, the fine-tuned model still struggles with ignoring the regions of non-interest.

We employ a lightweight decoder (cf. Figure 1) comprised of only three upsampling layers using deconvolutions, which maps the bottleneck \(z\) to an image of the desired size, in our case (\(3\times 900\times 900\)). The bottleneck is obtained by passing the image through SAM's encoder. During training, only the weights of the decoder are updated. We use a sigmoid activation to map the logits into the range \([0,1]\) for binary segmentation. In this manner, we train the decoder with incremental sets of SAM-annotated images. The incremental function \(\mathbf{\Delta}(\mathbb{N})\) is used in step sizes with \(\mathbb{N}\in\{5,10,15,20,25,50\}\). All models are trained for a total of 100 training epochs, without additional control. We compare the performance using mean Intersection over Union (mIoU), micro F1, accuracy, and micro precision and recall. Micro metrics are chosen to better represent the performance under data imbalance. Figure 5 shows the evolution of the metrics. We observe that the performance improvement with additional training samples after a handful is non-significant, with any differences being representative of stochasticity in evaluation rather than true information gain. By observing the metrics above and with qualitative evaluation, it can be inferred that, depending on the complexity of the domain-specific task, a very small number of annotations can suffice for a representative performance (cf. Figure 6).

Figure 5: Development of the evaluation metrics with increasing sizes \(\mathbf{\Delta}(\mathbb{N})\) of annotated training samples. Increasing the training size beyond a handful of samples yields trivial overall improvement.

Further, we compare the performance of this approach with \(\mathbf{\Delta}(5)\) against the Mask-RCNN model proposed in the existing literature (cf. Table 1) for the same task, which serves as the benchmark for our comparison. This model is trained with the full training size of 405 manually annotated images. The authors in [20] only reported macro metrics and noted that about 1000 positive labels were required for satisfactory performance. We clearly see that knowledge distillation through SAM, utilising a relatively minuscule number of labels, surpasses the benchmark on most reported metrics. In spite of the precision being slightly lower, the recall is substantially higher. It is to be noted that in tasks like these, recall should be given higher importance than precision, since missing a region of interest is more critical than falsely identifying one.

| **model** | **macro F1** | **accuracy** | **macro precision** | **macro recall** |
| --- | --- | --- | --- | --- |
| Mask-RCNN [20] | 0.811 | 0.774 | **0.952** | 0.706 |
| ours (\(\mathbf{\Delta}(5)\)) | **0.86** | **0.96** | 0.89 | **0.93** |

Table 1: Comparison of the state of the art vs our proposed approach trained only with 5 labelled samples. The authors in [20] train their model with 405 samples and report macro metrics.
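A minimal PyTorch sketch of the domain decoder and distillation training loop described in Section 2.3 (the channel widths, kernel sizes and learning rate are our assumptions; SAM's ViT image encoder outputs a 256-channel, 64×64 embedding, which in practice replaces the dummy tensor below):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDecoder(nn.Module):
    """Three transposed-convolution upsampling stages mapping the SAM image
    embedding to a 900x900 binary segmentation map (widths are assumptions)."""
    def __init__(self, in_ch=256, out_size=900):
        super().__init__()
        self.out_size = out_size
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 128, kernel_size=4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=2, stride=2),
        )

    def forward(self, z):
        logits = self.net(z)  # (B, 1, 1024, 1024) for a (B, 256, 64, 64) input
        logits = F.interpolate(logits, size=(self.out_size, self.out_size),
                               mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)  # probabilities in [0, 1]

decoder = DomainDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)

# Dummy batch standing in for one Delta(N) SAM-annotated sample; in practice
# compute z with torch.no_grad() around sam.image_encoder(image) (frozen encoder).
z = torch.randn(1, 256, 64, 64)
mask = torch.zeros(1, 1, 900, 900)   # SAM-generated box-prompt annotation

pred = decoder(z)
loss = F.binary_cross_entropy(pred, mask)
opt.zero_grad(); loss.backward(); opt.step()
```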
## 3 Conclusion

In this work, we extended the SAM framework and applied it to the segmentation of landforms like pits and skylights on the surface of Mars using HiRISE images. We observed that SAM has a high accuracy in separating various semantic regions; however, it cannot be directly applied to domain-specific tasks due to a lack of problem-specific bias. To this end, we developed and applied a domain-specific decoder that takes the image embedding generated by SAM's image encoder and learns the problem-specific semantics with substantially fewer labels. By training the domain decoder with only 5 labelled images sampled randomly, we demonstrated an equivalent, if not superior, performance to the existing Mask-RCNN method for the same task, which was trained with over 400 labelled images. We also explored the applicability of SAM's decoder for annotation using the various out-of-the-box prompts. We observed that the fully automatic mode is prone to marking irrelevant regions, and can also miss some regions of interest if it does not know where to look. The point-based mode can be ambiguous at times. In contrast, the bounding-box-based mode can clearly specify the ROI and obtain reasonable segmentation results without multiple rounds of trial and error. We can therefore conclude that the bounding-box-based segmentation mode can be a useful setting for rapid annotation by the domain expert. In conclusion, our study reveals that SAM can effectively be exploited to accelerate domain-specific segmentation tasks. This work presents the first attempt to adapt SAM to geological mapping by fine-tuning through knowledge distillation. As part of future work, it might be worthwhile to investigate how the process of annotation can be automated, further lowering the load on the human in the loop. We hope this work will motivate more studies to build segmentation foundation models in the planetary science domain.
2308.08891
Multimode ion-photon entanglement over 101 kilometers of optical fiber
A three-qubit quantum network node based on trapped atomic ions is presented. The ability to establish entanglement between each of the qubits in the node and a separate photon that has travelled over a 101km-long optical fiber is demonstrated. By sending those photons through the fiber in close succession, a remote entanglement rate is achieved that is greater than when using only a single qubit in the node. Once extended to more qubits, this multimode approach can be a useful technique to boost entanglement distribution rates in future long-distance quantum networks of light and matter.
V. Krutyanskiy, M. Canteri, M. Meraner, V. Krcmarsky, B. P. Lanyon
2023-08-17T09:56:19Z
http://arxiv.org/abs/2308.08891v1
# Multimode ion-photon entanglement over 101 kilometers of optical fiber

###### Abstract

A three-qubit quantum network node based on trapped atomic ions is presented. The ability to establish entanglement between each of the qubits in the node and a separate photon that has travelled over a \(101\,\mathrm{km}\)-long optical fiber is demonstrated. By sending those photons through the fiber in close succession, a remote entanglement rate is achieved that is greater than when using only a single qubit in the node. Once extended to more qubits, this multimode approach can be a useful technique to boost entanglement distribution rates in future long-distance quantum networks of light and matter.

+
Footnote †: Correspondence should be sent to [email protected]

## I Introduction

Envisioned quantum networks consist of matter-based nodes for information processing and storage, interconnected with photonic links for the establishment of entanglement between the nodes [1; 2]. Such networks could span distances from a few meters to a world-wide quantum network and would enable applications in computing, sensing and communication. Photon-mediated entanglement has been established across elementary networks consisting of two [3; 4; 5; 6; 7; 8; 9; 10; 11] and three [12] remote matter qubits, distributed over distances of up to 1.5 kilometers [13]. Recently, two atoms \(400\,\mathrm{m}\) apart were entangled over a spooled \(33\,\mathrm{km}\)-long fiber channel [14]. A key requirement for long-distance quantum networking is the ability to entangle a matter qubit with a photon and to distribute that photon over many tens of kilometers. That ability has been demonstrated using a range of different systems including trapped ions [15; 16; 17] and atoms [18], for distances of up to 50 kilometers [17]. A second key requirement is the ability to integrate multiple quantum-logic capable qubits into network nodes [2]. Nodes consisting of two co-trapped atoms [19], two qubits in a diamond-defect system [20] and two trapped ions [21; 22; 23; 24] have been demonstrated. One advantage of having multiple qubits in network nodes is the possibility to perform _multimode_ entanglement distribution [25]. With a single matter qubit, one has to wait at least the light travel time to learn whether entanglement distribution between the nodes was successful before trying again; otherwise, the entanglement with the first photon is lost. For example, over 100 kilometers of optical fiber, the light travel time limits the maximum attempt rate for establishing remote entanglement to \(1\,\mathrm{kHz}\), which, given the \(1\,\%\) transmission probability using standard optical fibers at \(1550\,\mathrm{nm}\), would yield a maximum possible success rate of \(10\,\mathrm{Hz}\). This limit could be overcome by sending many photons into the channel, each entangled with a different matter qubit in the node, thereby performing multiple entanglement distribution attempts within the single photon travel time. In this paper we present two main results. First, matter-photon entanglement is achieved over a spooled \(101\,\mathrm{km}\)-long fiber channel: twice the distance of previous works (e.g., [15; 16; 17; 18]) and requiring a matter-qubit coherence time on the order of the photon travel time (500 µs) to achieve. Second, using three co-trapped matter qubits in the node, we demonstrate a multimoding enhancement of the rate of entanglement distribution.
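The back-of-the-envelope limit quoted above, and the multimoding gain that motivates this work, can be made concrete with the numbers from the text (the three-qubit case simply assumes one photon per qubit sent within one travel time):

```python
attempt_rate = 1e3     # Hz: single-qubit attempt rate over ~100 km (from the text)
p_transmit = 0.01      # ~1 % fiber transmission at 1550 nm over ~100 km

print(attempt_rate * p_transmit)             # -> 10 Hz ceiling with one matter qubit
n_qubits = 3                                 # photons sent in close succession
print(n_qubits * attempt_rate * p_transmit)  # -> 30 Hz, the ideal multimode gain
```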
## II Experimental setup and sequence

A conceptual schematic of the experimental setup is presented in Figure 1 and now summarised. Our network node includes three \({}^{40}\)Ca\({}^{+}\) ions confined in a 3D linear Paul trap and at the position of the waist of an optical cavity for photon collection at \(854\,\mathrm{nm}\) [26; 27]. The ions are positioned at anti-nodes of the vacuum cavity standing wave. Because the cavity axis is not quite perpendicular to the ion-string axis (differing by a designed angle of 5 degrees), it is not possible to position the ions in the same anti-node. Instead, there is a unique axial confinement (and corresponding axial centre-of-mass frequency \(\omega_{z}\)) at which the ions can be positioned in neighbouring anti-nodes. A calibration process is performed to find that unique value, see Appendix A, yielding \(\omega_{z}=0.869(20)\,\mathrm{MHz}\). Given \(\omega_{z}\), we calculate that the ion spacing is 5.26(10) µm and that the angle between the ion string and the cavity axis is \(85.3(1)^{\circ}\). Single photons are generated via a bichromatic cavity-mediated Raman transition (BCMRT) [26; 29], driven via a \(393\,\mathrm{nm}\) Raman laser beam with a 1.2 µm waist at the ions [23]. A Raman laser pulse on an ion in the state \(|S\rangle=|4^{2}S_{1/2,m_{j}=-1/2}\rangle\) ideally generates the maximally-entangled state \[|\psi(\theta)\rangle=(|D^{\prime},V\rangle+e^{i\theta}|D,H\rangle)/\sqrt{2}, \tag{1}\] where \(|D^{\prime}\rangle\) and \(|D\rangle\) are the respective Zeeman states \(|3^{2}D_{5/2},m_{j}=-3/2\rangle\) and \(|3^{2}D_{5/2},m_{j}=-5/2\rangle\), \(|V\rangle\) and \(|H\rangle\) are the respective vertical and horizontal polarization components of a single photon emitted into the cavity vacuum mode, and \(\theta\) is a phase set by the relative phase of the two frequency components in the bichromatic beam [29]. After exiting the vacuum chamber through an optical viewport, photons are coupled into single-mode optical fiber and then converted to 1550 nm (telecom C band) via the polarisation-preserving single-photon frequency conversion system of [17; 28]. Telecom photons are then sent into a 101 km-long single-mode fiber spool with a calculated photon travel time of 494 µs and a measured total transmission probability of 1.36(4)%. Neither the optical length nor the temperature of the fiber spool is actively stabilised. Finally, the photon polarisation is analysed in a chosen basis using a combination of motorised waveplates, a polarising beam splitter and two superconducting nanowire single-photon detectors (Figure 1(d)). The experimental pulse sequence consists of three parts: initialisation, photon generation and ion-qubit measurement. Initialisation consists of 7 ms of Doppler cooling followed by 20 µs of optical pumping into the state \(\ket{S}\). Photon generation consists of a sequence of pulses, which we call an attempt, repeated up to 15 times (15 attempts). Each attempt begins with 50 µs of Doppler cooling and 20 µs of optical pumping, which serve to reinitialise the ions after any previous attempt. Next comes a 50 µs Raman laser pulse on each ion sequentially, spaced by 12 µs to allow, e.g., the laser focus to be switched between ions using an acousto-optic deflector. The ideal result is a train of three photons, in which each photon is maximally entangled with the ion that emitted it.
Next comes a 503 µs wait time to allow all three photons to traverse the 101 km fiber spool and be detected. At the beginning of that wait time, the \(\ket{D}\) electron population of all ions is moved to the state \(\ket{S}\) via a 6.4 µs \(\pi\)-pulse using a laser at 729 nm. As such, the ion-qubits are encoded in superpositions of the \(\ket{S}\) and \(\ket{D^{\prime}}\) states while the photons travel. After 243.6 µs of the wait time from the last 729-nm pulse, a 729 nm \(\pi\)-pulse then swaps the \(\ket{S}\) and \(\ket{D^{\prime}}\) populations of all ion qubits, realising a spin echo. The pulse sequence for a single attempt is now completed. In the cases in which no photons are detected within the expected arrival-time windows, another attempt is performed. In the cases in which at least one photon is detected within the expected arrival-time windows, further attempts are aborted and ion-qubit measurement is executed. Ion-qubit measurement begins with an optional 729 nm \(\pi/2\)-pulse implemented on the \(\ket{S}\) to \(\ket{D^{\prime}}\) transition on all ions. The optional pulse is implemented when the ion-qubits are to be measured in the Pauli \(\sigma_{x}\) or \(\sigma_{y}\) basis: we set the optical phase of the pulse to determine in which of the two bases the measurement is made. The optional pulse is not implemented when the ion-qubit is to be measured in the \(\sigma_{z}\) basis. Finally, single-ion-resolved state detection is performed via electron shelving for 1.5 ms on all three ions simultaneously, at which point the experimental sequence is concluded. The chosen ion-qubit measurement basis and photon polarisation-qubit measurement basis are fixed throughout a single execution of the experimental pulse sequence. The experimental pulse sequence is repeated sufficiently many times, and in sufficiently many measurement bases, to allow for reconstruction of the two-qubit states \(\rho_{ij}\) of all nine possible combinations of one ion-qubit (\(i\)) and one photon-qubit (\(j\)), via state tomography. States are reconstructed via the maximum-likelihood method and are conditional on successful detection of photon \(j\). Uncertainties in parameters derived from the states \(\rho_{ij}\) are obtained via the Monte Carlo technique. We use the concurrence \(C\) [30] to quantify the degree of entanglement in the states \(\rho_{ij}\), where \(0\leq C\leq 1\) and \(C=1\) is a maximally entangled state, achieved e.g. by the state of Equation 1.

Figure 1: **Experimental schematic.** (a) Three \({}^{40}\)Ca\({}^{+}\) ions at neighbouring antinodes of an 854 nm vacuum standing-wave mode in an optical cavity. Sequential laser pulses, one on each ion, generate three photons, each entangled by polarisation with the ion that emitted it. Inset: Atomic energy level diagram. \(\ket{S}\!=\!\ket{4^{2}S_{1/2,m_{j}=-1/2}}\), \(\ket{P}\!=\!\ket{4^{2}P_{3/2,m_{j}=-3/2}}\), \(\ket{D}\!=\!\ket{3^{2}D_{5/2,m_{j}=-5/2}}\), \(\ket{D^{\prime}}\!=\!\ket{3^{2}D_{5/2,m_{j}=-3/2}}\). The frequency difference \(\Delta_{2}-\Delta_{1}\) is equal to the one between \(\ket{D^{\prime}}\) and \(\ket{D}\). (b) The photons are converted to 1550 nm via quantum frequency conversion (QFC) using the system of [17; 28]. (c) A 101 km-long single-mode fiber spool (SMF-28). (d) Polarisation analysis involving half (\(\lambda/2\)) and quarter (\(\lambda/4\)) waveplates, a filter network, a polarising beam splitter (PBS) and superconducting nanowire single-photon detectors (SNSPDs). The narrowest element of the filter network is an air-spaced Fabry-Pérot cavity with a 250 MHz linewidth centred at 1550 nm [17].
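A minimal numerical sketch of this entanglement measure, via the Wootters formula for the concurrence of a two-qubit density matrix [30] (the basis ordering chosen for Equation 1 is our assumption):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4, trace one)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy          # spin-flipped state
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Sanity check on the ideal state of Equation 1 with theta = 0,
# ordering the basis as {|D',H>, |D',V>, |D,H>, |D,V>}:
psi = np.array([0, 1, 1, 0]) / np.sqrt(2)      # (|D',V> + |D,H>)/sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))  # -> 1.0
```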
## III Experimental results

In our first experiment we characterise the ion-photon states \(\rho_{ij}\) before the photon conversion setup, at the emitted photon wavelength of 854 nm. For those states we use the new notation \(\rho_{ij}^{0}\), reflecting that the photons have traveled over zero kilometers of fiber. A modified setup is used in which the fiber-coupled photons after the cavity output are sent to a polarisation analysis setup that is similar to the one shown in Figure 1, but with optics and detectors optimised for 854 nm. The experimental pulse sequence has the following differences compared to the one described in the previous section: only one attempt to make a photon train is made per sequence, the 503 µs wait time is removed as well as the spin echo, and a Raman pulse length of 60 µs is used on each ion. The modified pulse sequence was implemented over a 42 minute period during which \(A=41645\) attempts were made to generate a photon from each ion. Figure 2(a) shows a histogram of the single-photon detection events, in which three single-photon wavepackets are clearly visible. Photons detected in the first, second and third 65 µs-long time windows are the ones expected to have been produced by the corresponding Raman laser pulse applied to the first, second and third ion, respectively. The total numbers of counts recorded in those windows are 13127, 14465, 13326, corresponding to estimated detection probabilities for 854 nm photons of 0.315(3), 0.347(3), and 0.320(3), where uncertainties are based on Poissonian photon detection statistics. Only in 14 attempts was more than one detection event registered in the same time window, illustrating the single-photon character of our source. In \(N_{single}=18337\) cases, exactly one photon detection event was registered in one of the windows. In \(N_{double}=9037\) cases, exactly two photon detection events were registered in different windows. In \(N_{triple}=1485\) cases, exactly three detection events were registered in different windows. The total probability to detect at least one photon within one photon generation attempt was \((N_{single}+N_{double}+N_{triple})/A=0.693(4)\). The expectation value of the number of photons detected in an attempt was \((N_{single}+2\times N_{double}+3\times N_{triple})/A=0.981(5)\). Each measured single-photon wavepacket in Figure 2(a) is well described by a theoretical model based on a master equation, with model parameters for each wavepacket that differ only in the values used for the ion-cavity coupling strengths of the corresponding ion (see Appendix B). The differences in those values are consistent with the effect of the Gaussian profile of the vacuum cavity mode across the ion string. The simulations include an overall detection path efficiency of 0.518 for each of the photons, which is consistent with a value of 0.53(3) obtained from independent calibrations (see Appendix C). The detection path efficiency includes all losses encountered by a photon after emission into the cavity, from the finite probability of exiting the cavity into the output mode (independently measured to be 0.78(2) [26]) all the way to the average 854 nm detector efficiencies (independently measured to be 0.87(2) [26]). The simulations predict probabilities of 0.575, 0.664, 0.575 for emission of photons into the cavity from ions 1, 2 and 3, respectively.
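As a quick cross-check of the counting statistics quoted above (detection probabilities with Poissonian errors, and the per-attempt success probability and mean photon number):

```python
A = 41645                                  # photon-generation attempts
for ion, n in zip((1, 2, 3), (13127, 14465, 13326)):
    # -> 0.315(3), 0.347(3), 0.320(3)
    print(ion, round(n / A, 3), "+/-", round(n**0.5 / A, 3))

n1, n2, n3 = 18337, 9037, 1485             # single, double, triple detections
print((n1 + n2 + n3) / A)                  # -> 0.693, P(at least one photon)
print((n1 + 2 * n2 + 3 * n3) / A)          # -> 0.981, mean photons per attempt
```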
Lower detected photon efficiencies are achieved in this work, compared to [26], largely due to the lack of ground-state cooling and a sub-optimal Raman laser beam direction with respect to the principal magnetic field (quantisation) axis. Both issues could be corrected by reconfiguring the experimental setup in future. In all presented density matrices in this paper we use the following notation for ion-qubit states: \(|D^{\prime}\rangle=|\uparrow\rangle\) and \(|S\rangle=|\downarrow\rangle\). Figure 2(b) presents the absolute values of all nine tomographically reconstructed two-qubit ion-photon density matrices \(\rho_{ij}^{0}\). The concurrences of the three states \(\rho_{11}^{0}\), \(\rho_{22}^{0}\) and \(\rho_{33}^{0}\) are 0.90(1), 0.91(1), 0.92(1), respectively, proving strongly entangled states.

Figure 2: **Ion-photon entanglement over 0 km**. (a) Histograms of 854 nm photon arrival times. Probability densities are shown on the vertical axis: number of counts normalized by the number of attempts A and by the time-bin width of 1 µs, measured before QFC in Figure 1 and without the fiber spool. Three single-photon wavepackets are visible. Colour is used to mark the ion that is expected to have produced the photon, following the ion colouring scheme in Figure 1. Time zero is when an acousto-optic modulator received a radio-frequency signal to send a laser pulse to ion 1. Detection efficiencies and quantum states are determined within the 65 µs-long time windows shown via coloured vertical lines. Dashed black lines show results of a theoretical model. (b) Absolute values of measured density matrices \(\rho_{ij}^{0}\) of all nine ion-photon pairs. The ion (photon) involved in each row (column) is constant and indicated by the coloured ball on the left (photon wavepacket above). States \(\rho_{i=j}^{0}\) are coloured red, green and blue. States \(\rho_{i\neq j}^{0}\) are shown in grey.

The concurrences of the remaining six states \(\rho^{0}_{i\neq j}\) are zero. The fidelities of the absolute values of the states \(\rho^{0}_{11}\), \(\rho^{0}_{22}\) and \(\rho^{0}_{33}\) with \(\ket{\psi(0)}\) (Equation 1 for \(\theta=0\)) are 0.945(6), 0.950(5), 0.952(4), respectively. The fidelities of all the states \(\rho^{0}_{i\neq j}\) with the maximally mixed two-qubit state are greater than 0.96 to within three standard deviations of uncertainty. We use the fidelity defined as \(\mathrm{Tr}\left(\rho^{0}_{i=j}\ket{\psi(0)}\bra{\psi(0)}\right)\). The entangled states \(\rho^{0}_{i=j}\) are locally rotated with respect to each other. Specifically, the phases of the large coherence terms (\(\ket{\downarrow,H}\bra{\uparrow,V}\) and its complex conjugate) are \(0.731(5)\pi,0.632(5)\pi,0.530(7)\pi\), for \(\rho^{0}_{11},\rho^{0}_{22}\) and \(\rho^{0}_{33}\), respectively. Those phases are consistent with a \(\sigma_{z}\) rotation of the ion-qubit states as a function of time due to an incorrect setting of the frequency difference between the two fields in the Raman laser drive by 689 Hz. Indeed, with successive Raman pulses separated by \(\sim\)72 µs (a 60 µs pulse plus 12 µs switching time), a 689 Hz offset accumulates \(2\pi\times 689\,\mathrm{Hz}\times 72\,\mathrm{µs}\approx 0.1\pi\) of relative phase, consistent with the observed \(\sim\)0.1\(\pi\) steps. That frequency difference should ideally be equal to the one between the \(\ket{D}\) and \(\ket{D^{\prime}}\) states (Figure 1(a)). The incorrect setting was due to a miscalibration in the transition frequencies and could be reduced to below the Hertz level by a more careful calibration using 729 nm spectroscopy. Alternatively, such frequency offsets can be corrected by spin echos implemented on the ion-qubits during the photon travel time, as we demonstrate in the next experiment.
The physical origins of the remaining imperfections in the entangled states are not yet known and identifying them will be the subject of future work. We conclude from analysis of the data in Figure 2 that an 854 nm photon can be generated that is strongly entangled with any desired ion in the string. In our second experiment the ion-photon states \(\rho_{ij}\) are characterised using the full setup of Figure 1. For these states we use the new notation \(\rho^{101}_{ij}\), reflecting that the photons have traveled over 101 kilometers of optical fiber. Measurements were taken over 45 minutes, during which \(A^{101}=882,982\) attempts were made. Figure 3(a) shows a histogram of the recorded single-photon detection events. The three photon wavepackets are spaced over a total of 172 µs and thus simultaneously fit well within the travel time of the fiber spool. The total numbers of counts recorded in the three sequential 50 µs-long time windows were 572, 693 and 643, corresponding to detection probabilities of \(p_{1}=6.5(3)\times 10^{-4}\), \(p_{2}=7.8(3)\times 10^{-4}\), and \(p_{3}=7.3(3)\times 10^{-4}\), respectively. Only in two attempts was more than one detection event registered in the same time window. In \(N_{single}=1900\) cases, exactly one photon detection event was registered in one of the windows. In \(N_{double}=4\) cases, exactly two photon detection events were registered in different windows. There were no cases in which three or more detection events were registered in different windows. The total probability to detect (successfully distribute) at least one photon over 101 km per attempt was \(2.16(5)\times 10^{-3}\). The expectation value of the number of photons detected in an attempt was also \(2.16(5)\times 10^{-3}\). The measured wavepackets of Figure 3(a) are well described by the ones from the master-equation model. The only model parameter values that differ from those used to produce the simulations in Figure 2(a) are: a lower total detection path efficiency of \(1.26\times 10^{-3}\); the shorter Raman laser pulse used; a 7 % lower Raman laser Rabi frequency; and ion-cavity coupling strengths that differ by up to 10 %, consistent with an ion-string shift of 1.4 µm away from the cavity axis (see Appendix B). The lower path efficiency and Rabi frequencies are consistent with independent calibrations. The shift of the string was not calibrated; however, the calibration process used to position the middle ion in the centre of the cavity waist had not been performed for a month before the data was taken, and therefore a relative drift of 1.4 µm due to thermal effects is not unreasonable. Figure 3(b) presents the absolute values of the three tomographically-reconstructed states \(\rho^{101}_{i=j}\), after the application of the same local two-qubit rotation to each state. A local two-qubit rotation is a tensor product of single-qubit rotations - one on the ion and one on the photon - which cannot change the entanglement content. The method used to obtain that local rotation is now described. First, the data from all matched ion-photon pairs (\(i=j\)) were added up and used to tomographically reconstruct a single ion-photon state \(\rho^{101}\). Second, a numerical search was performed over local rotations that maximises the fidelity of \(\rho^{101}\) with the state \(\left|\psi(0)\right\rangle\left\langle\psi(0)\right|\), yielding the optimum local rotation and a fidelity of 0.89(2).

Figure 3: **Ion-photon entanglement over 101 km**. (a) Histograms of 1550 nm photon arrival times. Probability densities are shown on the vertical axis: normalized by the number of attempts A\({}^{101}\) and by the time-bin width of 1 µs, measured using the setup of Figure 1. The colour scheme is as described in Figure 2. Dashed black lines show results of a theoretical model. (b) Absolute values of measured ion-photon density matrices \(\rho^{101}_{i=j}\). The presented states are locally-rotated from the ones reconstructed directly from the data, as described in the main text. (c) Conceptual schematic of the experimental sequence. One attempt, involving three Raman laser pulses, took 757 µs (dashed line labelled ii). Attempts using a single ion would have taken 633 µs (dashed line labelled i). Re. Init. is the 70 µs-long cooling and optical pumping performed after each attempt.
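A minimal sketch of such a local-rotation search, here demonstrated on a mock state built by locally rotating and depolarising the ideal \(\ket{\psi(0)}\) (the angle parametrisation, optimiser choice and basis ordering are our assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def u3(t, p, l):
    """Generic single-qubit unitary parametrised by three angles."""
    return np.array([[np.cos(t/2), -np.exp(1j*l)*np.sin(t/2)],
                     [np.exp(1j*p)*np.sin(t/2), np.exp(1j*(p+l))*np.cos(t/2)]])

def fidelity(angles, rho, psi):
    u = np.kron(u3(*angles[:3]), u3(*angles[3:]))   # ion rotation x photon rotation
    return np.real(psi.conj() @ u @ rho @ u.conj().T @ psi)

psi0 = np.array([0, 1, 1, 0]) / np.sqrt(2)          # |psi(0)>, assumed basis order

# Mock "measured" state: a locally rotated Bell state with 10 % white noise.
u_true = np.kron(u3(0.3, 0.5, -0.2), u3(-0.4, 0.1, 0.2))
phi = u_true @ psi0
rho = 0.9 * np.outer(phi, phi.conj()) + 0.1 * np.eye(4) / 4

res = minimize(lambda a: -fidelity(a, rho, psi0), x0=np.zeros(6),
               method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-12})
print(-res.fun)   # should approach 0.9 + 0.1/4 = 0.925 for this mock state
```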
Probability densities are shown on the vertical axis: normalised by the number of attempts \(A^{101}\) and by the time-bin width of 1 μs, measured using the setup of Figure 1. The colour scheme is as described in Figure 2. Dashed black lines show results of a theoretical model. (b) Absolute values of measured ion-photon density matrices \(\rho^{101}_{i=j}\). The presented states are locally rotated from the ones reconstructed directly from the data, as described in the main text. (c) Conceptual schematic of the experimental sequence. One attempt, involving three Raman laser pulses, took 757 μs (dashed line labelled ii). Attempts using a single ion would have taken 633 μs (dashed line labelled i). Re. Init. is the 70 μs-long cooling and optical pumping performed after each attempt. The concurrence of the state \(\rho^{101}\) is 0.76(4). The concurrences of the states \(\rho^{101}_{11}\), \(\rho^{101}_{22}\) and \(\rho^{101}_{33}\) are 0.71(8), 0.80(6), 0.83(6), respectively. After the local rotation, the fidelities of those states with \(\left|\psi(0)\right\rangle\left\langle\psi(0)\right|\) are 0.85(4), 0.88(3) and 0.90(3), respectively. No statistically significant rotation of the ion-qubit states with respect to each other is evident. In Appendix D we describe a model of the effect of our photon detector background counts on ideal ion-photon entangled states. The results of the model show that the infidelities in the tomographically-reconstructed states \(\rho^{101}_{i=j}\) are statistically consistent with the effects of those imperfections alone. Decoherence of the ion-qubits during the 494 μs photon travel time is insignificant: coherence times of 62(3) ms are expected in our system when using optical spin echoes [23]. We turn now to consider the achieved multimoding enhancement. Each attempt in the 101 km experiment took \(\tau=757\) μs (Figure 3(c)) and provided three opportunities to succeed in detecting a photon (one from each ion). The total probability for at least one successful photon detection per attempt was \(P=2.16(5)\times 10^{-3}\), which yields an effective success rate of \(P/\tau=2.85(7)\) Hz. If instead we had used only one ion in the string, completing each attempt would have taken 633 μs (Figure 3(c)), as in addition to the reinitialisation and photon generation times, one has to wait 494 μs for the photon to travel and be detected, before trying again. For the probability of success for that attempt we take the value from our experiment, for the most efficient central ion, of \(p_{2}=7.8(3)\times 10^{-4}\). One then calculates a predicted effective success rate of 1.23(5) Hz for the single-ion case. Therefore, by using all three ions we achieved a success rate increase by a factor of 2.3(1). That factor is reduced from the ideal value of three due to three separate effects: slightly lower photon emission probabilities from the ions not in the centre of the string; the times for switching the focus of the laser between the ions; and the need to wait for all three photons to (potentially) arrive before trying to generate new photons. The last effect could be eliminated in future, after development of a single-ion-focused reinitialisation scheme.
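The rate comparison above can be re-derived from the quoted numbers alone; the following sketch does so, using only the probabilities and durations stated in the text.

```python
# Sketch: effective entanglement-distribution rates over 101 km, recomputed
# from the quoted per-ion detection probabilities and attempt durations.
p1, p2, p3 = 6.5e-4, 7.8e-4, 7.3e-4   # per-ion photon detection probabilities
tau_three = 757e-6                    # s, attempt duration with three ions
tau_single = 633e-6                   # s, attempt duration with a single ion

# Probability of at least one detection per three-ion attempt
P = 1 - (1 - p1) * (1 - p2) * (1 - p3)         # ~2.16e-3
rate_three = P / tau_three                     # ~2.85 Hz
rate_single = p2 / tau_single                  # central ion only, ~1.23 Hz
print(rate_three, rate_single, rate_three / rate_single)   # factor ~2.3
```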
Even considering a scenario in which the time for generating and reinitialising photons was effectively zero, such that each attempt took 494 μs, the 101 km success rate for a single ion emitting in the string would still be only 1.59(7) Hz. ## IV Conclusion and outlook In conclusion, ion-photon entanglement was achieved over a 101 km-long fiber channel with a Bell state fidelity largely limited by detector background counts. The use of three co-trapped ion qubits allowed entanglement to be distributed at a higher rate than when using a single ion, by overcoming the attempt rate limit set by the photon travel time over the channel. In future, photon detection after the fiber channel could be used to swap entanglement to a duplicate remote ion node via entanglement swapping [3; 11; 31]. Here, each remote node sends a train of photons, and coincident photon detection between matching temporal modes heralds entanglement of known remote ion pairs. The quantum processing and coherence times possible in ion-qubit registers could then be used to store the established entanglement for extended periods of time, as well as to purify the distributed entanglement and to grow the number of remote Bell pairs over time. The multimoding depth in our system could be significantly increased in future by coupling more ions in the node to travelling photons. For example, longer ion strings could perhaps be shuttled stepwise through the cavity mode by modulating the trap electrodes, allowing generation of a photon from each one. Alternatively, a single stationary ion could be used to generate photons sequentially, and have its quantum state transferred to co-trapped ions after each attempt via quantum logic operations, as demonstrated for two ions in [24]. Benefiting from multimoding with hundreds or thousands of ions would require significantly shortening the current single photon wavepacket lengths (Figure 3(a)) such that they all fit simultaneously within the light travel time. Achieving that without compromising photon generation efficiency requires an increased ion-cavity coupling strength afforded, e.g., by smaller mode volume cavities [32; 33; 34]. Datasets are available online [35]. ###### Acknowledgements. This work was funded in part by the Austrian Science Fund (FWF) START prize project Y 849-N20 and FWF Standalone project QMAP with project number P 34055. This work received funding from the DIGITAL-2021-QCI-01 Digital European Program under Project number No 101091642 and project name 'QCI-CAT', and the European Union's Horizon Europe research and innovation programme under grant agreement No. 101102140 and project name 'QIA-Phase 1'. We acknowledge funding for V. Krutyanskiy by the Erwin Schrödinger Center for Quantum Science & Technology (ESQ) Discovery Programme, and for B.P.L. by the CIFAR Quantum Information Science Program of Canada. The opinions expressed in this document reflect only the authors' view and in no way reflect the European Commission's opinions. The European Commission is not responsible for any use that may be made of the information it contains. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. Experimental data taking was done by V.Kru., M.M., V.Krc. and M.C. Development of the experimental setup was done by V.Kru., M.C., M.M. and B.P.L. Data analysis and interpretation was done by V.Kru., M.C., M.M. and B.P.L. Modelling was done by V.Kru.
The manuscript was written by B.P.L. and V.Kru.; all authors provided detailed comments. The project was conceived and supervised by B.P.L. ## Appendix A Positioning the ion string in the cavity The process is carried out in three steps. The goal of the first step is to overlap the centre of the ion trap (equivalently, the middle ion in the string) with the centre of the waist of the cavity's \(854\,\mathrm{nm}\) TEM\({}_{00}\) mode. This is achieved using a single ion following the method described in Appendix B.1d of [26]. The goal of the second step is to obtain an ion-ion distance such that, when projected onto the cavity axis, the ions are spaced by \(427\,\mathrm{nm}\): the distance between nodes (and anti-nodes) in the \(854\,\mathrm{nm}\) vacuum cavity standing wave. That is achieved by varying the axial confinement of the three-ion string and, for each value, performing measurements similar to the ones presented and explained in [36]. Specifically, we record the \(854\,\mathrm{nm}\) photons when illuminating all three ions with a broadly-focused \(393\,\mathrm{nm}\) beam together with an \(854\,\mathrm{nm}\) and \(866\,\mathrm{nm}\) repumper. For each axial confinement we minimise the rate of the detected \(854\,\mathrm{nm}\) cavity photons by fine adjustment of the cavity position along its axis using an in-vacuum translation stage. The axial confinement that offers the lowest count rate is found and interpreted as the situation in which each ion is located at a node of the vacuum-cavity standing wave. The goal of the third step is to position each of the three ions at a cavity anti-node. That is achieved using the single-ion focused Raman beam and repumpers to generate cavity photons from the central ion, then adjusting the cavity position along its axis using in-vacuum translation stages until the count rate is maximised. ## Appendix B Numerical simulations of photon wavepackets Numerical simulations were performed to obtain estimates of the photon generation efficiencies and single photon wavepackets. Specifically, the master-equation model of the laser-atom-cavity system is used from [26]. The model parameters include the experimental geometries of the Raman laser, cavity and magnetic field, which are the same as described in [23] (in particular, see Sec. I.B and III.C of the supplementary material). A key model parameter is the maximum strength of the coherent coupling between a single photon in the cavity and a single ion, which is calculated to be \(g_{0}=2\pi\times 1.53\,\mathrm{MHz}\) in our system, based on the cavity geometry and the properties of the atomic transition. Here we consider the \(\left|P\right\rangle-\left|D\right\rangle\) and \(\left|P\right\rangle-\left|D^{\prime}\right\rangle\) transitions but do not take into account the different Clebsch-Gordan coefficients for the two transitions or the projection of the transition polarizations onto the cavity-photon polarizations, both of which are accounted for separately in simulations [29]. The coupling strength of the bichromatic cavity-mediated Raman transition on a given ion in a string is reduced by, e.g., the ion's motion in the trap and by any displacement of the ion's position away from the cavity axis. We model those effects using a reduced ion-cavity coupling strength for ion \(i\) as \(g_{i}=x_{i}\gamma g_{0}\), where \(0\leq x_{i}\leq 1\) and \(0\leq\gamma\leq 1\).
The parameter \(x_{i}\) quantifies any reduction in ion-cavity coupling strength due to ion \(i\) not being positioned at the cavity axis. The parameter \(\gamma\) quantifies any other reduction in the coupling strength of the bichromatic cavity-mediated Raman process, e.g., due to the motion of the ion in the trap. We use a single value for \(\gamma\) for all ions and determine its value by comparing measured single-photon temporal wavepackets with simulated wavepackets based on numerical integration of the master equation for a range of values of the coupling strength [26]. Another key model parameter is the strength of the bichromatic drive. In order to determine that strength we measure the AC Stark shift of the Raman transition via spectroscopy, as described in Sec. I.B and III.C of the supplementary material of [23]. The bichromatic drive field polarization in the experiment (and simulations) is set to linear and perpendicular to the magnetic field and thus consists of an equal superposition of two circularly polarized components \(\sigma^{-}\) and \(\sigma^{+}\). While only the \(\sigma^{-}\) component is set to resonantly drive the bichromatic cavity-mediated Raman transition, both polarisation components contribute to the AC Stark shift. In the simulations we set the strength of the bichromatic drive \(\Omega^{-}\) (the Rabi frequency with which the \(\sigma^{-}\) transition from \(\left|S\right\rangle=|4^{2}S_{1/2},m_{j}=-1/2\rangle\) to \(|4^{2}P_{3/2},m_{j}=-3/2\rangle\) is driven) to the value for which the model predicts the same total AC Stark shift as measured in the experiment. The model requires specifying both Rabi frequencies, \(\Omega_{1}^{-}\) and \(\Omega_{2}^{-}\), of the two \(\sigma^{-}\)-polarized frequency components of the bichromatic drive. Here, \(\Omega_{1}^{-}\) stands for the component that drives \(\left|S\right\rangle-\left|D^{\prime}\right\rangle\) and results in a vertically polarized (\(V\)) photon, and \(\Omega_{2}^{-}\) stands for the component that drives \(\left|S\right\rangle-\left|D\right\rangle\) and results in a horizontally polarized (\(H\)) photon. For all the simulations we set \((\Omega_{1}^{-})^{2}+(\Omega_{2}^{-})^{2}=(\Omega^{-})^{2}\) and \(\Omega_{1}^{-}/\Omega_{2}^{-}=0.81\): the value for which the model predicts equal probabilities for the generation of the \(H\) and \(V\) polarized photons. Now we provide more information about the simulations for the experiment in which \(854\)-nm photons are detected, shown in Figure 2 of the manuscript. Considering the ion string to be centred on the cavity axis, and using the Gaussian cavity mode profile, we calculate the values of \(\{x_{i}\}\) to be \(\{0.83,1,0.83\}\) for the three ions. Next, \(\gamma=0.784\) is found to provide a close match between the measured and simulated wavepackets. We measured an AC Stark shift of the Raman transition of \(0.88(2)\,\mathrm{MHz}\) for all three ions. In simulations we use the value of \(\Omega^{-}=2\pi\times 31.47\,\mathrm{MHz}\) for which the model predicts an AC Stark shift of the Raman transition of \(0.88\,\mathrm{MHz}\).
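These \(\{x_{i}\}\) values follow from the Gaussian transverse profile of the cavity mode, \(x_{i}=\exp(-u_{i}^{2}/w_{0}^{2})\), with \(u_{i}\) the offset of ion \(i\) from the mode axis. The short sketch below reproduces numbers close to those quoted; note that the waist \(w_{0}\) and the projected ion offset are not given in this paper and are assumed here purely for illustration.

```python
import numpy as np

# Sketch of the Gaussian-mode coupling reduction x_i = exp(-u_i**2 / w0**2).
# The waist w0 and outer-ion offset a are assumed values chosen to roughly
# reproduce the quoted factors; they are not taken from the paper.
w0 = 12.2   # cavity mode waist, micrometres (assumption)
a = 5.3     # transverse offset of the outer ions, micrometres (assumption)

def x(u):
    return np.exp(-(u / w0) ** 2)

print([round(x(u), 2) for u in (-a, 0.0, a)])       # ~[0.83, 1.0, 0.83]
s = 1.4  # 1.4 micrometre string displacement, as inferred in this appendix
print([round(x(u - s), 2) for u in (-a, 0.0, a)])   # ~[0.74, 0.99, 0.90]
```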
We now provide more information about the simulations for the experiment in which 1550-nm photons are detected, shown in Figure 3 of the manuscript. We use \(\gamma=0.784\) as found using the data of Figure 2. We use \(\{x_{i}\}\) values of \(\{0.739,0.987,0.894\}\), which are calculated from the Gaussian cavity mode profile in the case of a \(1.4\,\mu\mathrm{m}\) displacement of the ion string along the trap axis direction with respect to the centre of the cavity mode. This shift is found by comparing simulated wavepackets and another experiment performed on the same date as the one presented in Figure 3 but involving \(50\,\mathrm{km}\) of fiber instead of \(101\,\mathrm{km}\) (in which the measurement statistics are better due to the higher photon detection efficiency). In the \(101\,\mathrm{km}\) experiment, we measured AC Stark shifts of the Raman transition of \(0.82(2)\,\mathrm{MHz}\) for all three ions. In simulations we used the value of \(\Omega^{-}=2\pi\times 30.41\,\mathrm{MHz}\) for which the model predicts AC Stark shifts of the Raman transition of \(0.82\,\mathrm{MHz}\). ## Appendix C Photon path efficiency Here, a detailed efficiency budget is presented for the photon detection path. When not given explicitly, uncertainties in given probabilities are half of the last significant digit. The beginning of each element in the following list gives the probability associated with a distinct part or process in the experiment. The detection path efficiencies provided in the main text should be compared with a product of these probabilities (or a subset thereof, for the data taken involving \(854\,\mathrm{nm}\) photons); a numerical cross-check is sketched after the list. For the photons detected at \(854\,\mathrm{nm}\), the total probability of the list is \(0.53(3)\). For the photons detected at \(1550\,\mathrm{nm}\), the total probability of the list is \(15(1.2)\times 10^{-4}\). 1. \(0.78(2)\): probability that, once a cavity photon is emitted into the cavity, the photon exits the cavity into free space on the other side of the output mirror [26]. 2. \(0.96(1)\): transmission of free-space optical elements that are between the cavity output mirror and the first fiber coupler (see \(P_{el}\) in [26]). 3. \(0.81(3)\): efficiency of coupling the photons into the first single mode fiber [26]. This value should rather be considered an upper bound, since it was measured some days before the data presented in this paper was taken. We anticipate that coupling could be improved in future with better couplers and an anti-reflection coated fiber end facet. 4. In the case of detection of \(854\,\mathrm{nm}\) photons (Figure 2) the list ends here with \(0.87(2)\): detection efficiency of either of the single photon detectors for \(854\,\mathrm{nm}\) [26]. In the case of detection of \(1550\,\mathrm{nm}\) photons (Figure 3) this item is not relevant and the list continues. 5. \(0.30(1)\): fiber-input to free-space-output efficiency of the quantum frequency conversion setup together with the spectral filtering and polarization analysis optics (see panels (b) and (d) of Figure 1). This value was measured with laser light directly before acquiring the data presented in Figure 3. 6. \(0.0136(4)\): measured transmission of the \(101\)-km fiber, consisting of two \(50.4\)-km SMF-28 fiber spools and one fiber connector. See panel (c) of Figure 1 for the position of the fiber in the optical path. 7. \(0.95\): transmission of a fiber joiner present in the path. 8. \(0.83(3)\): efficiencies of coupling the telecom photons into the detector's single mode fibers, see panel (d) of Figure 1. 9. \(0.75(2)\): detection efficiency of either of the telecom single photon detectors [23].
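As the cross-check promised above, multiplying the central values of the listed probabilities recovers the quoted totals:

```python
import math

# Sketch: multiply the efficiency budget above to recover the quoted totals.
eta_854 = [0.78, 0.96, 0.81, 0.87]            # items 1-4 (854 nm detection)
eta_1550 = [0.78, 0.96, 0.81,                 # items 1-3 (shared with 854 nm)
            0.30, 0.0136, 0.95, 0.83, 0.75]   # items 5-9 (telecom path)

print(math.prod(eta_854))    # ~0.53, matching the quoted 0.53(3)
print(math.prod(eta_1550))   # ~1.5e-3, matching the quoted 15(1.2)e-4
```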
## Appendix D Infidelity due to photon detector background counts We model the effect of background photon detector counts (defined as detector clicks that did not result from a photon from the ion) on the 1550-nm ion-photon states presented in Figure 3. For this, the background count rate is extracted from the measured counts in the tomography experiments by looking far outside the time windows in which the photons from the ions arrive and summing the contributions from both detectors, giving \(2.0(1)\) cps. The infidelity that those background counts would contribute, when added to a perfect maximally-entangled Bell state, is simulated numerically. Specifically, the expected background count probabilities in our photon time windows are added to the expected measurement outcome probabilities for a perfect state; then, after renormalisation, a new 'noisy' state density matrix is reconstructed via Maximum Likelihood tomography. Using the background count rate of \(2\) cps we thereby calculate the maximum observable fidelities to be \(0.88\), \(0.90\), \(0.90\) for the three ion-photon pairs respectively, as ordered elsewhere in the manuscript. These fidelities can be compared with the ones obtained in the experiment of \(0.85(4)\), \(0.88(3)\) and \(0.90(3)\).
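A rough analytic shortcut to this numerical model treats each background click as admixing the maximally mixed two-qubit state, \(\rho^{\prime}=(1-\epsilon)\ket{\psi}\bra{\psi}+\epsilon\,\mathbb{1}/4\), with \(\epsilon\) the fraction of clicks due to background. This is a simplification of the Maximum Likelihood procedure described above, not the model actually used, but it lands very close to the quoted fidelities:

```python
# Back-of-the-envelope version of the background-count model (a simplification
# of the Maximum Likelihood procedure described above, not the actual model).
bg_rate = 2.0          # background counts per second, both detectors summed
window = 50e-6         # photon time-window length in seconds
p_bg = bg_rate * window                    # ~1e-4 background prob. per window
p_det = [6.5e-4, 7.8e-4, 7.3e-4]           # measured detection probabilities

for p in p_det:
    eps = p_bg / p                         # fraction of clicks from background
    F = 1 - 0.75 * eps                     # F of (1-eps)|psi><psi| + eps*I/4
    print(round(F, 2))                     # ~0.88, 0.90, 0.90
```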
2306.00961
The eROSITA Final Equatorial Depth Survey (eFEDS): Complex absorption and soft excesses in hard X-ray-selected active galactic nuclei
Context. The soft excess, a surplus of X-ray photons below 2 keV with respect to a power law, is a feature of debated physical origin found in the X-ray spectra of many type-1 active galactic nuclei (AGN). The eROSITA instrument aboard the Spectrum-Roentgen-Gamma (SRG) mission will provide an all-sky census of AGN suitable for spectral analysis. Aims. The primary goal of this work is to test a variety of models for the soft X-ray emission of AGN (thermal emission, non-thermal emission, ionised absorption, or neutral partial covering absorption) to help identify the physical origin of the soft X-ray spectral complexity. Differences between these models are examined in the context of this sample to understand the physical properties. Methods. We used Bayesian X-ray analysis to fit a sample of 200 AGN from the eFEDS hard X-ray-selected sample with a variety of phenomenological and physically motivated models. Model selection was performed using the Bayes factor to compare the applicability of each model. Results. We find that 29 sources have evidence for a soft excess at a confidence level >97.5%, all of which are better modelled by an additional soft power law than by thermal blackbody emission. We find 23 of these sources prefer a warm corona model, while six sources prefer relativistic blurred reflection. Additionally, many sources show evidence for complex absorption, with 29 preferring a warm absorber and 25 a partial covering absorber. Sources with a soft excess show a significantly higher Eddington ratio than those with warm absorbers. We discuss the implications of these results for the physical processes in the central regions of AGN. Conclusions. Spectral fitting with Bayesian statistics is ideal for the identification of complex absorption and soft excesses in the X-ray spectra of AGN and can allow one to distinguish between different physical interpretations. (Abridged)
Sophia G. H. Waddell, Kirpal Nandra, Johannes Buchner, Qiaoya Wu, Yue Shen, Riccardo Arcodia, Andrea Merloni, Mara Salvato, Thomas Dauser, Thomas Boller, Teng Liu, Johan Comparat, Julien Wolf, Tom Dwelly, Claudio Ricci, Joel R. Brownstein, Marcella Brusa
2023-06-01T17:55:14Z
http://arxiv.org/abs/2306.00961v1
The eROSITA Final Equatorial Depth Survey (eFEDS): Complex absorption and soft excesses in hard X-ray-selected active galactic nuclei ###### Abstract Context: The soft excess, a surplus of X-ray photons below 2 keV with respect to a power law, is a feature of debated physical origin found in the X-ray spectra of many type-1 active galactic nuclei (AGN). The eROSITA instrument aboard the Spectrum-Roentgen-Gamma (SRG) mission will provide an all-sky census of AGN. Spectral fitting of these sources can help identify the physical origin of the soft excess. Aims: The eROSITA Final Equatorial Depth Survey (eFEDS) field, designed to mimic the expected average equatorial depth of the all-sky survey, provides the ideal sample to test the power of eROSITA. The primary goal of this work is to test a variety of models for the soft X-ray emission of AGN (thermal emission, non-thermal emission, ionised absorption, or neutral partial covering absorption) to help identify the physical origin of the soft X-ray spectral complexity. Differences between these models are examined in the context of this sample to understand the physical properties. Methods: We used Bayesian X-ray analysis to fit a sample of 200 AGN from the eFEDS hard X-ray-selected sample with a variety of phenomenological and physically motivated models. Model selection is performed using the Bayes factor to compare the applicability of each model for individual sources as well as for the full sample, and source properties are compared and discussed. Black hole masses and Eddington ratios were estimated from optical spectroscopy. Results: We find that 29 sources have evidence for a soft excess at a confidence level \(>97.5\%\), all of which are better modelled by an additional soft power law, as opposed to thermal blackbody emission. Applying more physically motivated soft excess emission models, we find that 23 sources prefer a warm corona model, while only six sources are best fit with relativistic blurred reflection. Sources with a soft excess show a significantly higher Eddington ratio than the remainder of the sample. Of the remainder of the sample, many sources show evidence for complex absorption, with 29 preferring a warm absorber, and 25 a partial covering absorber. Many (18/26) sources that show significant neutral absorption when modelled with an absorbed power law in fact show evidence that the absorber is ionised, which has important implications for the understanding of obscured AGN. In contrast to the soft excesses, warm absorber sources show significantly lower Eddington ratios than the remainder of the sample. We discuss the implications of these results for the physical processes in the central regions of AGN. Conclusions: Spectral fitting with Bayesian statistics is ideal for the identification of complex absorption and soft excesses in the X-ray spectra of AGN, and can allow one to distinguish between different physical interpretations. Applying the techniques from this work to the eROSITA all-sky survey will provide a more complete picture of the prevalence and origin of soft excesses and warm absorbers in type-1 AGN in the local Universe. ## 1 Introduction eROSITA (extended ROentgen Survey with an Imaging Telescope Array: Merloni et al. 2012; Predehl et al. 2021) is the soft X-ray instrument aboard the Spectrum-Roentgen-Gamma (SRG; Sunyaev et al. 2021) mission, which successfully launched in July 2019.
The primary operation mode of SRG/eROSITA is continuous scanning, and the mission was designed to create an eight-pass map of the entire X-ray sky, providing X-ray variability information as well as spectroscopy in the \(0.2-8\) keV band. The most numerous class of objects to be detected will be millions of Active Galactic Nuclei (AGN), powered by accreting supermassive black holes at the centres of galaxies. According to the AGN unification model, the presence or lack of broad lines in the optical spectra of AGN can be explained according to their viewing angle, where type-1 (or Seyfert 1) galaxies offer a direct view of the central accretion disc and broad emission line region, while type-2 (or Seyfert 2) galaxies are viewed through an obscuring, dusty torus (Antonucci, 1993; Urry & Padovani, 1995). Observing type-1 AGN at X-ray energies allows for the direct study of the innermost regions of the central engine, where extreme relativistic effects can occur, while observing type-2 AGN can probe the physical properties of the torus. The primary source of X-ray emission in AGN is the corona, which consists of hot and/or relativistic electrons located at some height (few \(r_{\rm g}\) to 100s \(r_{\rm g}\)) above the inner accretion disc (e.g. Haardt & Maraschi, 1991, 1993; Merloni et al., 2000; Fabian et al., 2004). The resulting coronal emission arises from Compton up-scattering of lower energy photons, and takes the form of a power law that dominates the X-ray spectra of AGN above energies of \(\sim 2\) keV. In the soft X-ray band, the spectra of many AGN show evidence for a soft excess (Pravdo et al., 1981; Arnaud et al., 1985; Singh et al., 1985), a surplus of photons over the primary power-law continuum below \(1-2\) keV. The origin of this component is highly debated, and a variety of physical mechanisms have been proposed to explain this feature. Initially, it was proposed that the tail of the disc blackbody from the hottest, innermost regions of the accretion disc may be responsible, as the soft excess shape is well fitted with a blackbody with temperatures of \(\sim 0.1\) keV (e.g. Gierlinski & Done, 2004). The temperature of the disc, however, should scale with black hole mass as T \(\propto\) M\({}_{\rm BH}^{-1/4}\) (Shakura & Sunyaev, 1973). Given this relationship, the blackbody photons even from the innermost accretion discs in AGN with typical masses of \(10^{7}-10^{9}\) solar masses are highly unlikely to be visible in the X-ray regime. Furthermore, the expected trend between the fitted blackbody temperature and the black hole mass has not been found (e.g. Gierlinski & Done, 2004; Crummy et al., 2006). Instead, it has been proposed that the soft excess may be due to a blurred reflection component (e.g. Ross et al., 1999; Ross & Fabian, 2005). Some of the photons from the corona will be incident upon the accretion disc. These can be Compton back-scattered from the disc, or excite fluorescence or recombination processes, thus producing a multitude of absorption and emission features. If the accretion disc is highly ionised, these features are concentrated primarily at low energies below 2 keV. Due to the proximity of the inner accretion disc to the black hole, these features are relativistically broadened. This produces smooth emission in excess of the power law at low energies, with a form similar to a blackbody, as well as a broad iron line and an absorption edge in the hard X-ray band.
Blurred reflection modelling has been used to successfully explain the spectral shape and variability of numerous type-1 AGN (e.g. Fabian et al., 2004; Zoghbi et al., 2008; Wilkins et al., 2017; Gallo et al., 2019; Jiang et al., 2019; Waddell et al., 2019; Boller et al., 2021). In another interpretation, the soft excess can be produced by a secondary, warm corona (e.g. Done et al., 2012; Petrucci et al., 2018, 2020) which is optically thick (\(\tau\sim 10\); Petrucci et al., 2020) and cooler than the primary hot X-ray corona. Blackbody seed photons from the disc undergo Comptonisation in the warm corona. Due to the lower temperature (\(\sim 0.1-1\) keV) and higher optical depth of the secondary corona, compared to that producing the hard power law, the tail of the Comptonisation spectrum can be seen in the soft X-rays, which can then produce a soft excess over the power law continuum. In general, the warm corona is interpreted to be a slab above the accretion disc (e.g. Done et al., 2012). The warm corona may only be stable under certain restrictive conditions (e.g. the gas cannot be too hot or too cold; see Ballantyne & Xiang, 2020) and may also sometimes produce significant absorption features for temperatures below \(10^{7}\) K (Garcia et al., 2019). This model has been shown to fit the spectral shape of numerous type-1 AGN (e.g. Done et al., 2012; Ehler et al., 2018; Tripathi et al., 2019; Petrucci et al., 2018, 2020). In contrast to these emission mechanisms, it has been proposed that the soft excess is an artefact of improperly modelled absorption features (e.g. Gierlinski & Done, 2004; Tanaka et al., 2004; Parker et al., 2014; Fang et al., 2015; Gallo et al., 2015; Boller et al., 2021; Parker et al., 2021). In a neutral partial covering absorption scenario, X-rays from the corona pass through an absorber with moderate column density and covering fraction, producing significant absorption in the soft X-ray. This produces a flat spectrum that appears to have some excess emission. This model also produces a deep iron absorption edge at \(\sim 7\) keV (Tanaka et al., 2004; Parker et al., 2021), which can explain the hard X-ray curvature observed in some AGN spectra. Rapid variability observed in the spectra of some type-1 AGN has also been attributed to changes in the column density or covering fraction of the absorber. Instead of an excess of soft photons, many AGN show evidence for complex ionised (warm) absorption which produces features concentrated in the soft X-ray. In particular, partially ionised neon, oxygen and iron absorption lines and edges in the \(0.5-2\) keV band are observed in many sources (e.g. George et al., 1998; Kaastra et al., 2000; Kaspi et al., 2000; Blustin et al., 2004, 2005; Gierlinski & Done, 2004; McKernan et al., 2007; Laha et al., 2014; Mizumoto et al., 2019). It has been proposed that these warm absorbers can be physically connected to disc winds launched in some AGN systems, or even to large-scale outflows (e.g. Blustin et al., 2004, 2005; Kallman & Dorodnitsyn, 2019). These low-velocity winds are well-studied in AGN, especially using high resolution (e.g. using gratings) spectroscopy (e.g. Blustin et al., 2004). Proper characterisation of warm absorption or partial covering absorption is essential for characterising not only the soft excess, but also the hard X-ray continuum, as absorption can create an apparent soft excess at low energies. 
Distinguishing between these various models for the soft excess is not a straightforward task, and many previous attempts to do so have shown that all models can sometimes reproduce the observed X-ray spectral shape (e.g. Ehler et al., 2018; Tripathi et al., 2019; Waddell et al., 2019; Chalise et al., 2022). Each model typically has caveats, and it is often difficult to simultaneously explain the spectral shape as well as the short and long term variability. It is also likely that more than one soft excess component exists simultaneously, often complicated by the superposition of multiple absorption components (Boller et al., 2021). Since each of the different soft excess models has very different physical interpretations, the consequences for the understanding of X-ray emission from AGN differ. X-ray reverberation mapping (see Uttley et al., 2014, for a review), where a search is performed for time lags between different X-ray energy bands, can also be used to probe the geometry and height of the corona above the black hole (e.g. Zoghbi et al., 2013; Wilkins & Gallo, 2015; Kara et al., 2016), and the lags can be interpreted as light travel time between two coronas (Chainakun & Young, 2017) or between the hot X-ray corona and the accretion disc (De Marco et al., 2013). Other timing methods including fractional variability analysis (Vaughan et al., 2003) or principal component analysis (Parker et al., 2015) can also be compared with simulations to distinguish between models. In this work, the X-ray spectra of 200 AGN from the hard X-ray-selected sample (Nandra et al. in prep.) of the eROSITA Final Equatorial Depth Survey (eFEDS) field (Brunner et al., 2022; Liu et al., 2022; Salvato et al., 2022) are fit with a variety of phenomenological and physically motivated models to search for the presence of soft excesses and attempt to determine their physical origin. The eROSITA bandpass of \(0.2-8\,\mathrm{keV}\) is ideal for such measurements, as it provides excellent coverage and resolution as well as high effective area for energies below \(1\,\mathrm{keV}\). In Sect. 2, the data reduction and techniques used in this work are described. In Sect. 3, the preliminary models used for spectral fitting are described, and the absorption and soft excess samples are constructed. Section 4 describes the physical models applied to the sample of sources with soft excesses. In Sect. 5, the spectral properties identified in this work are presented, and in Sect. 6, a further discussion of results is given. Finally, conclusions are drawn in Sect. 7. ## 2 Data reduction and fitting ### The eFEDS hard X-ray-selected sample The eFEDS field was observed in the eROSITA calibration and performance verification phase and covers \(\sim 140\) deg\({}^{2}\) (Brunner et al., 2022). This equatorial survey overlaps with a plethora of multi-wavelength data, facilitating source characterisation and classification, as well as redshift measurements (Salvato et al., 2022). The eFEDS field is slightly deeper than, but comparable to, the average equatorial exposure of the originally planned eROSITA All Sky Survey (eRASS:8; eight passes of the entire sky), with an average exposure per pixel of \(2.2\,\mathrm{ks}\). The eFEDS data are therefore representative of and can be used to predict the all-sky survey performance. More details of this survey and the resulting data products, including the source detection algorithm and data reduction techniques, are presented in Brunner et al. (2022).
All eFEDS data were made public in June 2021 with the Early Data Release (EDR) of the eROSITA German consortium1. Footnote 1: https://erosita.mpe.mpg.de/edr/ The main X-ray source catalogue in the eFEDS field is assembled from sources detected in the \(0.2-2.3\,\mathrm{keV}\) band (Brunner et al., 2022; Liu et al., 2022). For the current work, proper characterisation of the hard X-ray emission is crucial for understanding the strength and shape of the soft excess component, so sources which only have detections below \(2.3\,\mathrm{keV}\) are less suitable for our analysis. Therefore, this work makes use of the hard X-ray-selected catalogue, based on the detection likelihood in the \(2.3-5\,\mathrm{keV}\) band (DET_LIKE_3 \(>10\); see Nandra et al. in prep.). This sample is nearly spectroscopically complete, with 197/246 sources having a spectroscopic redshift from a variety of sources. Since most objects which are significantly detected in the hard X-ray also have significant soft emission, the total of 246 sources in the hard X-ray-selected catalogue largely overlap with the 27,910 sources presented in the main catalogue, with 20 sources being present only in the hard catalogue. These sources are classified according to the process outlined in Salvato et al. (2022). This classification combines the results of three independent methods and considers the multi-wavelength properties of the sources' optical/IR counterparts. The best matches are determined based on a combination of the astrometric and photometric information (see Salvato et al. (2022)). We then apply the selection methods presented in Nandra et al. (in prep) and Salvato et al. (2022) wherein sources with secure counterparts, which are likely extragalactic based on their redshift and colour-colour diagnostics, and which have secure photo-z or spectroscopic redshifts, are considered to be AGN. Applying these classifications and cuts, a final sample of 200 hard X-ray-selected AGN is obtained. Of the remaining 46 sources, around one-third are likely Galactic, one-third do not have sufficient data quality for counterpart identification, and around one-third do not have sufficiently secure redshifts for spectral modelling. For completeness, all AGN from the sample, including those only detected in the hard band, are included. All sources are listed using their eROSITA name and source ID in Appendix B. Most AGN in this sample are high-flux, low-redshift (median \(z\sim 0.35\)) sources (see Nandra et al. in prep), and targeting of bright eFEDS sources as part of several SDSS follow-up programmes has resulted in a high level of spectroscopic coverage (Nandra et al. in prep., Merloni et al. in prep.). A total of 156 sources have usable optical spectra obtained as part of SDSS-IV (Gunn et al., 2006; Smee et al., 2013; Dawson et al., 2016; Blanton et al., 2017; Ahumada et al., 2020), or from SDSS-V (Bowen & Vaughan, 1973; Gunn et al., 2006; Smee et al., 2013; Kollmeier et al., 2017; Wilson et al., 2019; Almeida et al., 2023), see also Anderson et al. (in prep.) and Kollmeier et al. (in prep.). For sources without a spectroscopic redshift, photometric redshifts were computed according to the method outlined in Salvato et al. (2022). To ensure accurate photometric redshifts are obtained, IR, optical, and UV data are used in order to construct an SED, and this is fit to measure the redshift. Independent methods (LePhare; Ilbert et al. (2006) and DNNZ; Nishizawa et al.
in prep) are compared, and the most reliable redshifts are those which agree between the two methods. Only these sources are considered in spectral fitting (for more details on redshift measurements, see Salvato et al., 2022). Since we have very high spectral coverage for the AGN sample, for sources which rely on a photo-z, the peak of the probability density function for each redshift is taken, and associated errors are not considered. Redshift and luminosity distributions are presented in Sect. 5.1, and also in Nandra et al. (in prep). ### X-ray spectral analysis X-ray spectral extraction was performed in the manner described in Liu et al. (2022) and Nandra et al. (in prep), using the eROSITA standard analysis software system (eSASS) version c001 (Brunner et al., 2022). Spectral fits were also performed in a manner similar to those described in Liu et al. (2022) and Nandra et al. (in prep), with the exception that here the spectra were not rebinned before modelling. This maximises the spectral information at the expense of computational speed. Following Simmonds et al. (2018), a parametric spectral model for the eROSITA background (see Freyberg et al., 2020) is learned empirically using all eFEDS background spectra (Brunner et al., 2022; Liu et al., 2022). First, the parametric model is determined using principal component analysis (PCA) in logarithmic count space. Next, the background spectrum of each source is iteratively fitted by adding principal components in logarithmic space, and further adding Gaussian lines in linear space, as required by the data according to the Akaike information criterion (AIC). In this way, when the addition of further Gaussian lines no longer changes the AIC, the background model is considered satisfactory. During the joint source and background fit, the normalisation of the background shape is a free parameter, while the shape parameters are kept fixed; however, when the relative areas of the source and background regions are accounted for, this value is almost always 1. This technique ensures that improper subtraction of the background does not affect the spectral fits, particularly at higher energies, which is highly relevant for this sample. After the background model has been applied, spectral fitting is performed using Bayesian X-ray analysis (BXA; Buchner et al., 2014; Buchner, 2019). BXA combines the X-ray spectral fitting packages and models used in XSPEC (Arnaud, 1996) with UltraNest (Buchner, 2021), a nested sampling algorithm. By using BXA, the full range of parameter space can be explored to ensure that the best fit is found. Input priors on parameters are used to constrain the values to a reasonable parameter space, and posterior distributions can be examined after fitting to better understand the constraints that can be placed on parameters. Using BXA for spectral fitting, we can also perform model comparison. The Bayesian evidence (\(Z\)) is computed for each spectral fit. This value encompasses both the available parameter space and the fit quality, so it can be used to compare models. This is done using the Bayes factor, K: \[K=\frac{Z_{M1}}{Z_{M2}} \tag{1}\] where M1 and M2 are the models to be compared. While this value cannot directly be linked to a confidence interval or significance, Bayes factors can still be used to identify the best fitting spectral models.
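Since nested sampling returns the evidence in logarithmic form, Equation (1) is evaluated in practice as a difference of log-evidences; a minimal illustration (with made-up \(\ln Z\) values) is:

```python
import math

# Equation (1) in log space: ln K = ln Z_M1 - ln Z_M2.
# The two log-evidence values below are made up for illustration.
lnZ_M1 = -1520.1   # e.g. the more complex model
lnZ_M2 = -1523.4   # e.g. the baseline power law
lnK = lnZ_M1 - lnZ_M2
print(lnK, math.exp(lnK))   # K > 1 favours model M1
```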
This method will be used in the following sections in order to robustly compare spectral models, and select sources with soft excesses and warm absorbers on a sound statistical basis. ## 3 Preliminary spectral modelling Before applying more physically motivated spectral models to each source, we seek a simple characterisation of the spectral shape. This baseline model can then be rejected if the data show statistical evidence in favour of a more complex model including a soft excess or absorption component. The continuum can be modelled using a power law, modified by absorption from the Milky Way (taken from Willingale et al. (2013)) as well as the host galaxy, and the results of this fit are discussed in Sect. 3.1. The method for performing model comparison is summarised in Sect. 3.2, with more details in Appendix A. Next, the complex absorption modelling is presented in Sect. 3.3; here, neutral partial covering absorption and warm (ionised) absorption are compared. In Sect. 3.4, two different toy models are presented and compared to characterise the shape of the soft excess: a second power law component, and a blackbody component. Sect. 3.5 discusses the model comparison in more detail and presents a final sample of sources with complex absorption and soft excesses. A full list of the XSPEC models is given in Table 1, and an example source (ID 00011) with all models applied along with the residuals for each of the best fits is shown in Fig. 1. All sources are listed in Appendix B along with the PL model fit parameters, information on which model provided the best fit to the source, and complex model parameters where relevant. ### Baseline model: Absorbed power law Each source is first fit with an absorbed power law model (PL). Two absorption components are added; one component has a redshift of zero and the column density fixed to the value of the Milky Way (taken from Willingale et al. (2013) for each source), while the other has a redshift matching the host galaxy and column density, log(NH\({}_{z}\)), left free to vary in order to account for absorption in the host galaxy (e.g. originating in galaxy-scale gas, torus, or other absorbing material). The column density of the host galaxy absorber is allowed to vary between \(\simeq 10^{20}\) cm\({}^{-2}\) and \(\simeq 10^{25}\) cm\({}^{-2}\), where the lower limit is much smaller than the absorption in the Milky Way and is thus difficult or impossible to measure, and the upper limit represents an entirely obscured spectrum. The index of the power law component is constrained to be between one and three. This will allow for the identification of very hard and soft sources while ensuring that most sources have reasonable values of \(\Gamma\sim 1.8-2.0\) (e.g. Nandra & Pounds, 1994; Reeves & Turner, 2000; Nandra et al., 2007; Waddell & Gallo, 2020, 2022; Liu et al., 2022). For the normalisation, a broad, log-uniform prior ensures that the broad range of AGN fluxes found in the eFEDS field can be adequately characterised. The full list of priors is summarised in Table 2.
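To make the fitting procedure concrete, the sketch below shows how the baseline PL model and its priors might be configured in BXA's PyXspec interface. The file name, Galactic column, and redshift are placeholders, and the prior-helper calls reflect the public BXA API as we understand it; this is a schematic, not the pipeline used in this work.

```python
import bxa.xspec as bxa
from xspec import AllData, Fit, Model

# Schematic BXA set-up for the baseline PL model (not the authors' pipeline).
AllData("source.pha")               # placeholder file name
Fit.statMethod = "cstat"

m = Model("tbabs*ztbabs*powerlaw")
m.TBabs.nH.values = 0.03            # Galactic column, 10^22 cm^-2 (placeholder)
m.TBabs.nH.frozen = True
m.zTBabs.Redshift.values = 0.35     # fixed to the source redshift (placeholder)
m.zTBabs.nH.values = (1.0, 0.01, 1e-2, 1e-2, 1e3, 1e3)   # 10^20 - 10^25 cm^-2
m.powerlaw.PhoIndex.values = (1.9, 0.01, 1.0, 1.0, 3.0, 3.0)

transformations = [
    bxa.create_loguniform_prior_for(m, m.zTBabs.nH),        # log-uniform NH_z
    bxa.create_uniform_prior_for(m, m.powerlaw.PhoIndex),   # uniform in [1, 3]
    bxa.create_loguniform_prior_for(m, m.powerlaw.norm),    # broad log-uniform
]

solver = bxa.BXASolver(transformations=transformations,
                       outputfiles_basename="fit_pl")
results = solver.run(resume=True)
print(results["logz"], results["logzerr"])   # ln(Z) used for Bayes factors
```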
\begin{table} \begin{tabular}{l l} \hline Name & XSPEC implementation \\ \hline PL & tbabs \(\times\) ztbabs \(\times\) powerlaw \\ PL+PCF & tbabs \(\times\) ztbabs \(\times\) zpcfabs \(\times\) powerlaw \\ PL+WA & tbabs \(\times\) ztbabs \(\times\) cwa18 \(\times\) powerlaw \\ PL+BB & tbabs \(\times\) ztbabs \(\times\) (powerlaw + constant \(\times\) blackbody) \\ PL+PL & tbabs \(\times\) ztbabs \(\times\) (powerlaw + constant \(\times\) powerlaw) \\ \hline \end{tabular} \end{table} Table 1: Abbreviated model names as referenced in this work as well as their implementation in XSPEC. Figure 1: Comparison of different models and residuals for ID 00011 (z = 0.5121), a source best fit with a double power law soft excess model. The top panel shows the folded spectrum along with each of the best fit models, and the second, third, fourth, fifth and sixth panels show the residuals for the power law (grey), warm absorber (blue), partial covering (orange), blackbody (dark red) and double power law (red) models, respectively. The Bayesian evidence is also given for each model, to ease comparison. The spectrum and residuals are re-binned for clarity. The best fit is a power law soft excess, with a Bayes factor of K\({}_{pl}\sim 1.22\times 10^{7}\) and a significance of \(>\)99%. Data have been re-binned for display purposes. Some results from this preliminary fit are shown in Fig. 2. The left-hand panel shows the distribution of median values of the host-galaxy column density, NH\({}_{z}\). There is clearly a large peak at column densities of NH\({}_{z}\approx 10^{20}\) cm\({}^{-2}\) (at the limit of the prior so consistent with no additional absorption component beyond Galactic absorption), with a smaller, secondary peak at \(\simeq 10^{22.5}\) cm\({}^{-2}\). Since most sources have low column densities, this suggests that the soft excess component or warm absorption, if present, should be easily detectable in most sources. The right-hand panel shows the distribution of median values of the photon indices (\(\Gamma\)), with most having values of \(\Gamma\simeq 2\). This is likely a selection effect, as the value is slightly steeper than found by some samples (e.g. Nandra & Pounds 1994; Reeves & Turner 2000; Waddell & Gallo 2020, 2022), but in agreement with previous eROSITA modelling presented in Liu et al. (2022) and Nandra et al. (in prep.). There are also a number of sources with very steep (\(\Gamma>2.3\)) or very flat (\(\Gamma<1.4\)) values. With typical error bars of the order of \(\pm 0.2\), these values are not in agreement with the expected median of \(\Gamma\simeq 2\). These likely indicate the presence of more complex spectral features, such as soft excess emission or complex absorption, motivating further investigation. ### Model comparison summary In the rest of this work, the best model for each spectrum is identified with Bayesian model comparison. This relies on the computation of Bayes factors, which examine the Bayesian evidence for two models to determine which is preferred. Simulations are used to assess the significance of selecting one model over another. These are described in detail in Appendix A. One thousand simulated spectra are generated using an absorbed power law (PL) model using the average sample properties, and these spectra are then fit with each of the more complex models subsequently defined in this work.
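For reference, the five model expressions of Table 1 might be written as follows, with the warm-absorber XSTAR table applied through XSPEC's mtable{} mechanism and bbody standing in for the blackbody component (our rendering, not the exact pipeline strings):

```python
# The five spectral models of Table 1 as XSPEC model expressions (a sketch;
# the mtable{} syntax is standard XSPEC usage for multiplicative tables).
MODELS = {
    "PL":     "tbabs*ztbabs*powerlaw",
    "PL+PCF": "tbabs*ztbabs*zpcfabs*powerlaw",
    "PL+WA":  "tbabs*ztbabs*mtable{cwa18.fits}*powerlaw",
    "PL+BB":  "tbabs*ztbabs*(powerlaw + constant*bbody)",
    "PL+PL":  "tbabs*ztbabs*(powerlaw + constant*powerlaw)",
}

for name, expr in MODELS.items():
    print(f"{name:7s} -> {expr}")
```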
Purity thresholds can then be defined based on the Bayes factor values which yield a given number of instances where modelling falsely selects the more complex model as the correct one; in this work, thresholds of 95%, 97.5% and 99% are considered, and more detail is given on these selections in Appendix A. For these simulated spectra, false detections are defined when a model both has the highest Bayesian evidence of all models and the Bayes factor exceeds the threshold. In this way, for a real source to be classified as having a warm absorber, soft excess, or partial covering absorber, this model must have the highest Bayesian evidence, and the Bayes factor must exceed the threshold. For this reason, there is no overlap between the true soft excess, partial covering, or warm absorber samples. ### Absorption modelling The first model used to model a complex absorption component in this work is a neutral partial covering scenario (PL+PCF), where emission from the corona passes through an absorber before reaching the observer (e.g. Tanaka et al. 2004). This absorbs hard X-ray photons while allowing leakage in the soft X-ray, which flattens the observed power law slope, gives the appearance of a soft excess, and can produce a deep edge at 7 keV depending on the column density. However, often multiple absorbing zones with a variety of ionisation states, column densities and covering fractions are required to fit the observed spectral shape. In this work, one neutral partial covering absorber is applied to each source. The full XSPEC implementation is given in Table 1. Two more free parameters are present: the absorption column density is allowed to vary between \(10^{20}\) and \(10^{25}\) cm\({}^{-2}\), and the fraction of emission which passes through the absorber (the covering fraction) is allowed to vary between zero (no absorption) and one (full covering). Table 2: Summary of the free parameters of each model and the priors adopted for them; the prior ranges are given in the text.
The redshift of the absorber is set to that of the host galaxy such that absorption in the vicinity of the corona is modelled. All other parameters and priors are the same as the baseline PL model. While the use of a single, neutral absorber to explain the observed curvature in the spectral shape may be an over-simplification of a true physical absorber, this simple implementation still allows for a preliminary model check, and the Bayesian evidence can be compared with other physical scenarios. In the warm absorber model (PL+WA), rather than passing through a neutral absorber, the X-ray emission passes through an ionised medium, which produces absorption features in the soft X-ray spectrum due to partially ionised materials including neon, oxygen and iron (e.g. George et al., 1998; Kaastra et al., 2000; Kaspi et al., 2000; Blustin et al., 2004, 2005; Gierlinski & Done, 2004; McKernan et al., 2007; Laha et al., 2014; Mizumoto et al., 2019). These warm absorber features have been physically linked to low-velocity (e.g. 100s-1000s km s\({}^{-1}\)) outflows or disc winds which intercept the line of sight (e.g. Kallman & Dorodnitsyn, 2019). To model the warm absorber, an XSPEC-compatible table model (cwa18.fits; cwa18) was generated using XSTAR. The construction of this model is described in Nandra et al. (2007). This model also has two more free parameters than the baseline PL model: the column density and the ionisation of the absorber. The ionisation of the absorber is allowed to vary broadly between \(10^{-4}\) and \(10^{4}\) ergs cm s\({}^{-1}\) to account for a broad range of wind ionisation states, and the column density between \(10^{20}\) and \(10^{24}\) cm\({}^{-2}\). The full list of priors for the partial covering absorber and warm absorber models is summarised in Table 2. The significance of each of the absorption components is assessed using simulations, described in detail in Appendix A, and all individual fit parameters are given in Appendix B. Using these simulations and the Bayes factor as given in Table 3, it is found that 29/200 sources (14.5%) have evidence for a warm absorber and 25 sources (12.5%) have evidence for partial covering absorbers, both at the 97.5% confidence level. For completeness, most figures will show all three determined purity levels (95%, 97.5%, and 99% significance) for comparison. Fig. 3 shows the distribution of the warm absorber parameters: column density and ionisation (\(\xi\)).
All three purity levels are shown; sources which have purity at the 95% level (K\({}_{\rm wa}>0.815\)) are shown as translucent blue circles, sources at the 97.5% level (K\({}_{\rm wa}>1.126\)) are shown as blue rings, and sources with 99% purity (K\({}_{\rm wa}>2.040\)) are shown as dark blue circles. Sources which do not show significant evidence for warm absorbers are indicated with black crosses. Sources lacking evidence for warm absorption have lower column densities, while sources with warm absorbers have higher column densities of \(\geq 10^{21}\) cm\({}^{-2}\), with typical values around \(10^{22}-10^{23}\) cm\({}^{-2}\). In general, the column densities are not well constrained and can extend to lower values, as there is significant degeneracy between the column density and the ionisation of the absorber, as well as between the host-galaxy absorbing column density and the warm absorber column density. Interestingly, the significant warm absorbers show a wide range of column densities and ionisations, suggesting some diversity in absorbers across different AGN. Typically, warm absorbers studied in the X-ray have been found to have ionisations of the order of \(\xi\sim 10-1000\) ergs cm s\({}^{-1}\) and column densities of the order of \(10^{20}\)-\(10^{23}\) cm\({}^{-2}\) (e.g. Blustin et al., 2004, 2005; McKernan et al., 2007; Tombesi et al., 2010; Mizumoto et al., 2019). The results from this work are broadly in agreement with this; however, several low-ionisation absorbers with \(\xi\sim 0.01-1\) ergs cm s\({}^{-1}\) are also found, including some with very high significance (e.g. a very large improvement of the Bayesian evidence compared to a power law). The error bars on the ionisation of these sources are large, and the marginal posterior probability distributions can be complex (see sections 5.2 and 6.4 for more details). These results should be interpreted with caution due to the known degeneracy between the ionisation and the column density of the absorber; however, they appear significantly different from sources best fit with neutral absorbers. It is also interesting to note that the sources with higher ionisations (\(\sim 10^{2}\) ergs cm s\({}^{-1}\)) have relatively low redshifts (\(z<0.5\)), while the sources with lower ionisations occupy a much broader range of redshifts, with many having \(0.5<z<1\). These sources will be discussed in more detail in later sections. An example of a source (ID 00016) best fit with a warm absorber model is shown in Fig. 4. The background (black dashed line), the warm absorber model (blue) and an absorbed power law model (grey) are shown over-plotted with the folded spectrum. The warm absorber model clearly provides a better fit than the simple absorbed power law (PL) model, in particular to the softest X-ray energies, as well as in the \(2-5\) keV band. In this case, the partial covering absorber model fails to reproduce the absorption features. Fig. 5 is the corresponding corner plot for the warm absorber model fit to source ID 00016, with variable names provided in the figure caption. For this source, most parameters are very well independently constrained, though some degeneracy exists between the column density and the ionisation of the warm absorber. Here the column density of the warm absorber and the host galaxy absorber are independently constrained, and the host galaxy column density is consistent with the minimum value.
More discussion on parameter correlations and degeneracies for this model is found in Sects. 5.2 and 6.4. Finally, Fig. 6 shows the distribution of partial covering column densities and covering fractions obtained from the PL+PCF model, with the Bayes factor as given in Table 3. As in Fig. 3, sources which have purity at the 95% level (K\({}_{pcf}\) \(>\) 1.392) are shown as translucent orange pentagons, sources at the 97.5% level (K\({}_{pcf}\) \(>\) 1.555) are shown as orange unfilled pentagons, and sources with 99% purity (K\({}_{pcf}\) \(>\) 2.646) are shown as darker orange pentagons.

Figure 3: Warm absorption parameters for all AGN in the sample. Sources with warm absorption components of various purity levels (95%, 97.5% and 99%) are indicated with blue circles (translucent, unfilled and opaque, respectively). Typical error bars are indicated with a black cross.

It is clear that sources which are best fit with the partial covering model have higher covering fractions than those which do not; the median is \(\sim 0.4\) for sources with no evidence for partial covering absorption, and \(\sim 0.7\) for those with evidence for partial covering components. Typical column densities are \(\sim 10^{23}\) cm\({}^{-2}\), with a few sources having higher column densities near \(10^{24}\) cm\({}^{-2}\). This is similar to the warm absorbers. Many of the highest significance sources (filled orange pentagons) also have very steep photon indices (e.g. those in the top right-hand corner of Fig. 6). The high covering fraction creates deep absorption edges around 7 keV for sources with sufficiently high column densities, which are difficult to see in the data due to the high background at high energies. An example of a source (ID 00030) best fit with the neutral partial covering model is shown in Fig. 7. As for the warm absorber spectrum, the background (black dashed line), the partial covering model (orange) and an absorbed power law model (grey) are shown over-plotted with the folded spectrum. This source is also well fit with a power law soft excess model, but evidence comparison reveals that the partial covering absorption model provides the best fit, highlighting the importance of considering a variety of models to explain the soft spectrum. The corner plot for the partial covering model of ID 00030 is shown in Fig. 8. Some parameters are less well constrained for this model, and the photon index is found to be extremely high compared to expected values of \(\sim 1.9-2.0\). In this source, there are degeneracies between the partial covering fraction and photon index, as well as between the partial covering fraction and the normalisation of the coronal power law component. In some sources, there are also degeneracies between the partial covering fraction and column density. These column density degeneracies will be discussed in Sect. 6.4.

### Soft excess modelling

In order to account for an intrinsic soft excess component, two separate spectral models are used. First, a blackbody component is used (PL+BB). The normalisation of the blackbody is linked to that of the coronal power law, with a constant factor applied to set the relative spectral flux density of the power law and blackbody components at 1 keV. For the second model, the blackbody component is replaced by a soft power law component (PL+PL), where the relative normalisations of the soft and hard power laws are again set using a constant factor. Motivated by the results of Liu et al.
(2022) from fitting the eFEDS main sample, a wide range of values is adopted for the priors of the blackbody temperature \(kT\) and the soft power law index \(\Gamma_{s}\) in order to characterise a wide variety of soft excess shapes. Priors are listed in Table 2. The significance of the soft excess is evaluated for each source by computing the Bayes factor (equation 1), as given in Table 3. The resulting Bayes factors are then compared between models for each source to assess which model is better able to characterise the shape of the soft excess. The results are shown in Fig. 9 for all sources, including those which show evidence for obscuration. Sources which do not show evidence for a soft excess are shown in grey.

Figure 4: ID 00016 (z = 0.2907), a source best fit with a warm absorption model. The spectrum (re-binned for display) is shown in black, the background model is shown as a black dashed line, the power law model is shown as a grey line, and the warm absorber model is shown in blue. The bottom two panels show the residuals for the power law and a warm absorber, respectively. The source has relatively high signal-to-noise, and has a warm absorber column density of \(\sim 10^{22}\) cm\({}^{-2}\) and an ionisation of \(\xi\sim 10^{2}\) ergs cm s\({}^{-1}\). Data have been re-binned for display purposes.

Figure 5: Corner plot for source ID 00016, best fit with a warm absorption model. The diagonal panels show the marginal posterior probability distribution for each parameter, while the other panels show the conditional probability distribution functions for each pair of parameters. Here, log(nH) is the host galaxy absorber column density (in units of \(\times 10^{22}\) cm\({}^{-2}\)), LOGNH is the column density of the warm absorber (cm\({}^{-2}\)), LOGXI is the ionisation of the warm absorber (ergs cm s\({}^{-1}\)), log(norm) is the power law normalisation, PhoIndex is the photon index of the power law, and norm is the relative renormalisation of the background model with respect to the source model, which is consistent with unity.

Most sources are better fit with the PL+PL model (e.g. lie below the line), and all but one of the few sources which are better fit with the PL+BB model have very low K\({}_{pl}\) and K\({}_{bb}\) values and therefore likely do not have a soft excess. The sole exception to this is eFEDS ID 00016, which appears to have a strong soft excess with both models, but in fact is better fit with a warm absorber. It can therefore be concluded that the PL+PL model is a better representation of the soft excess, and the PL+BB model is discarded for the remainder of this work. The double power law model will be used to select sources with significant soft excesses. As with the complex absorber modelling, the significance of each of the soft excess components is assessed using simulations, described in detail in Appendix A. The distributions of parameters for the soft excess models are shown in Fig. 10. All three purity levels are shown; sources which have purity at the 95% level (K\({}_{pl}>1.392\)) are shown as translucent red squares, sources at the 97.5% level (K\({}_{pl}>2.586\)) are shown as unfilled red squares, and sources with 99% purity (K\({}_{pl}>8.613\)) are shown as dark red squares. Choosing the 97.5% purity level, this leaves 29/200 sources with soft excesses, or \(\sim 14.5\%\) of the full sample, the same number as found for a warm absorption model and slightly more than found using a partial covering model.
However, significantly more sources are found to have soft excesses at the 95% and 99% confidence levels. Sources with soft excesses display a surprisingly large variation in primary photon index, which may suggest that some sources have additional absorption components not considered in this model, or that the two power law model is too simplistic to characterise the spectral shape and complexity for some sources. There also appear to be two clusters of soft photon indices for sources with soft excesses: a cluster around \(\Gamma_{s}=4.5\), and another around \(\Gamma_{s}=6.5\). Both of these values are too steep to be produced in a corona with reasonable opacity and temperature (e.g. Petrucci et al., 2018), and depending on the assumed temperature, these may exceed the steepness of the exponential cut-off. Rather, they likely highlight some diversity in the shape of observed soft excesses, or may be competing with the host galaxy absorption to attempt to match the observed spectral shape. This will be explored further in Sect. 4.

Figure 6: Partial covering parameters for all AGN in the sample. Sources with partial covering components of various purity levels (95%, 97.5% and 99%) are indicated with orange pentagons (translucent, unfilled and opaque, respectively). Typical error bars are indicated with a black cross.

Figure 7: ID 00030 (z = 0.4263), a source best fit with a partial covering absorption model. The spectrum (re-binned for display) is shown in black, the background model is shown as a black dashed line, the power law model is shown as a grey line, and the neutral partial covering absorption model is shown in orange. The bottom two panels show the residuals for the power law and a partial covering absorber, respectively. The source has a moderate covering fraction of \(\sim 0.6\) and a moderate column density of \(4\times 10^{22}\) cm\({}^{-2}\). Data have been re-binned for display purposes.

Figure 8: Corner plot for source ID 00030, best fit with a partial covering absorber. Here, the first instance of log(nH) is the host galaxy absorber column density (in units of \(\times 10^{22}\) cm\({}^{-2}\)), log(norm) is the power law normalisation, PhoIndex is the photon index of the power law, the second instance of log(nH) is the partial covering absorber column density (\(\times 10^{22}\) cm\({}^{-2}\)), CvrFrac is the covering fraction of the absorber, and norm is the relative renormalisation of the background model with respect to the source model, which is consistent with unity.

An example of a source (ID 00039) best fit with the double power law model is shown in Fig. 11. As for the previous models, the background (black dashed line), the double power law soft excess model (red) and an absorbed power law model (grey) are shown over-plotted with the folded spectrum. The fit improvement from adding the second power law is visually apparent throughout the spectrum, and in particular in the hard band: the single power law model tends to approximate the softer end of the spectrum, where the instrument is more sensitive, while the second power law provides a better fit across all energies. The corresponding corner plot for this source is shown in Fig. 12, and the component labels are explained in the caption. Here there are some additional degeneracies between parameters, including between the photon indices and the normalisations, as well as between the photon indices and the relative normalisation factor.
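As an aside, the fitted PL+PL parameters fix the energy at which the two power-law components carry equal flux density (used below as E\({}_{\rm cross}\)). With both normalisations defined at 1 keV and linked by the constant factor \(c\), equality of \(cE^{-\Gamma_{s}}\) and \(E^{-\Gamma_{h}}\) gives \(E_{\rm cross}=c^{1/(\Gamma_{s}-\Gamma_{h})}\). The sketch below assumes this parameterisation, with purely illustrative inputs.

```python
def e_cross(gamma_hard, gamma_soft, log_factor):
    """Rest-frame energy (keV) at which the soft and hard power laws of
    the PL+PL model have equal flux density, assuming both
    normalisations are defined at 1 keV and linked by the factor c."""
    c = 10.0 ** log_factor
    return c ** (1.0 / (gamma_soft - gamma_hard))

# Illustrative values: a typical hard index, a steep soft index, and a
# soft component at 10 per cent of the hard component at 1 keV.
print(e_cross(1.8, 4.5, -1.0))  # ~0.43 keV, near the 0.55 keV median
```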
While some parameters are less well independently constrained, the double power law model is still highly informative in characterising the shape of the soft excess. ### The soft excess, warm absorber and partial covering samples After analysing these phenomenological models, based on selecting samples with 97.5% purity, this work finds 29 sources with true soft excesses, 29 with warm absorbers, and 25 sources with partial covering absorbers, where by definition there is no overlap between each of these samples. This is because we require the model to provide the best fit to the data as well as satisfying the Bayes factor criteria. With these defined samples, it is possible to search for possible sources of bias in the relatively limited parent sample used in this work. Figure 13 shows the detection likelihood in the \(2.3-5\) keV (DET_LIKE_3) band for sources best fit with each model. To assess any potential differences in the distributions, a Kolmogorov-Smirnov test (KS-test) is used, which compares two samples to compute the likelihood that they are drawn from the same parent sample. No differences between distributions of hard band detection likelihoods are found, with KS-test p-values \(>0.1\) when comparing each sub-sample, and soft excesses and warm absorbers are found in sources with the lowest and highest computed DET_LIKE_3 values alike. This shows that the selection of the hard X-ray-selected sample does not heavily bias the detection of soft excesses or complex absorbers, and indeed, likely facilitates these measurements as the hard power law can better be constrained. This also motivates using hard X-ray-selected samples of AGN for further investigations of future eROSITA samples. Considering the true soft excess, it is also of interest to examine the energy at which the two power laws have the same flux. This point marks where the soft excess begins to dominate over the hard corona power law. This value (E\({}_{\rm cross}\)) has been computed, and the resulting histogram is shown in Fig. 14, using the 97.5% purity samples. Regardless of which model is the best fit, the PL+PL model is necessarily used to compute the E\({}_{\rm cross}\) value. All results are shown in the rest-frame, and the typical error bar is also shown in black in the top right-hand corner. Most sources have E\({}_{\rm cross}\) values of less than \(1\) keV, and all are below \(2\) keV. The median value for the soft excess sample of E\({}_{\rm cross}=0.55\) keV is indicated with a solid red vertical line. This demonstrates the importance of having a good fit in the softest energy X-rays in order to properly characterise the soft excess; fits performed only above \(0.5-1\) keV will likely not be able to fit the true soft X-ray shape. Figure 10: Soft excess (double power law) parameters for all AGN in the sample. Sources with soft excess components of various purity levels (95%, 97.5% and 99%) are indicated with red squares (translucent, unfilled and opaque, respectively). Typical error bars are indicated with a black cross. Figure 9: Comparison of Bayes factors between soft excess models. The Bayes factor for the double power law is shown on the horizontal axis, while the Bayes factor for the blackbody model is shown on the vertical axis. The dashed grey line shows the one-to-one relation, with sources lying below the line being better fit with the PL+PL model, and sources above the line being better fit with the PL+BB model. 
Sources which show evidence for a soft excess (K\({}_{pl}>2.586\), corresponding to 97.5% significance) are shown in black, and sources which do not show evidence for a soft excess are shown in grey. In addition to examining the Bayes factors and crossing energies, the soft excess can also be quantified in terms of the soft excess strength (SE). In this work, this is defined as \[\mathrm{SE}=\frac{\mathrm{F_{SE}}}{\mathrm{F_{PL}}} \tag{2}\] where \(\mathrm{F_{SE}}\) is the unabsorbed, rest-frame flux of the soft power law in the \(0.2-1\) keV band, and \(\mathrm{F_{PL}}\) is the unabsorbed, rest-frame flux of the hard power law in the \(0.2-1\) keV band. The soft excess strength therefore indicates how much excess flux is provided by the soft excess component in the \(0.2-1\) keV band. Separately, the soft flux fraction (SFF) can also be defined as \[\mathrm{SFF}=\frac{\mathrm{F_{0.2-1}}}{\mathrm{F_{0.2-10}}} \tag{3}\] where \(\mathrm{F_{0.2-1}}\) is the unabsorbed, rest-frame flux in the \(0.2-1\) keV band, and \(\mathrm{F_{0.2-10}}\) is the unabsorbed, rest-frame flux in the \(0.2-10\) keV band. Fluxes are measured using the PL+PL model. This flux ratio therefore indicates the fraction of the broad-band flux which is emitted in the soft X-ray band. Histograms for both of these quantities are shown in Fig. 15, with the soft excess strength shown in the top panel and the soft flux fraction in the bottom panel. In both panels, it is apparent that the SE and SFF values for sources with soft excesses are higher on average than for those which do not. This seems reasonable, as it would be expected that sources with statistically significant soft excesses would have stronger soft emission and would therefore emit a higher fraction of their total flux in the soft X-ray. Furthermore, it is shown that sources with warm absorbers also appear to have very strong soft excesses and soft flux fractions with this model, likely due to the fact that very steep soft photon indices are preferred for these sources in order to approximate the shape of the absorption features, which highlights the importance of using the correct model when characterising the soft excess.

Figure 11: ID 00039 (\(z=0.3893\)), a source best fit with a double power law soft excess model. The spectrum (re-binned for display) is shown in black, the background model is shown as a black dashed line, the power law model is shown as a grey line, and the double power law model is shown in red. The bottom two panels show the residuals for the power law and a soft excess, respectively. Data have been re-binned for display purposes.

Figure 12: Corner plot for source ID 00039, best fit with a double power law soft excess. Here, the first instance of log(nH) is the host galaxy absorber column density (in units of \(\times 10^{22}\) cm\({}^{-2}\)), log(norm) is the power law normalisation, log(factor) is the relative normalisation of the soft power law with respect to the hard power law, the first instance of PhoIndex is the photon index of the hard power law, the second instance of PhoIndex is the photon index of the soft power law, and norm is the relative renormalisation of the background model with respect to the source model, which is consistent with unity.

Figure 13: Distributions of detection likelihood in the \(2.3-5\) keV band (DET_LIKE_3) shown for sources best fit by each model. The vertical axis is given in log-space to highlight the true distribution of the sample.
Sources best fit with a power law are shown with a black dotted line, sources best fit with a soft excess are shown as a red solid line, sources best fit with a warm absorber are shown as a blue dash-dot line, and sources best fit with partial covering absorption are shown with an orange dotted line.

To better quantify these differences, a KS-test is performed comparing the distribution of soft excess sources to those best fit with a power law. For the soft excess strength and for the soft flux fraction, the KS-test returns a significance level of \(4\times 10^{-6}\) and \(6\times 10^{-9}\), respectively. This implies that for both qualifications of the soft excess, the null hypothesis that both distributions are drawn from the same parent sample can be rejected with \(>99.999\%\) confidence. As would be expected, the soft excess strengths and soft flux fractions are significantly higher on average for sources with soft excesses than for those without. This may suggest that sources with and without soft excesses are two distinct populations of AGN. This conclusion, however, still cannot confirm the physical origin of the soft excess; to do this, physically motivated models must be fit to each spectrum, and the evidence compared.

## 4 Physical interpretation for the soft excess

### Soft Comptonisation

One physical interpretation for the soft excess is that it is produced via Comptonisation of disc blackbody photons in a secondary warm corona, which is cooler than the hot corona responsible for the primary hard power law \(\Gamma_{h}\). The warm corona is hypothesised to have a higher optical depth than the hot corona (e.g. Done et al. 2012; Petrucci et al. 2018, 2020), but as the temperature is lower, the resulting X-ray emission will be a steeper power law which dominates at low energies. This interpretation has been used successfully to model steep but very smooth soft excesses in type-1 AGN, as the soft Comptonisation will not produce emission or absorption features. To model this in XSPEC, nthComp (Zdziarski et al. 1996; Zycki et al. 1999) is used, with one nthComp component modelling the optically thin hot corona, and another modelling the optically thick, warm corona.

\begin{table} \begin{tabular}{l l} \hline \hline Name & xspec implementation \\ \hline NTH & tbabs\(\times\)ztbabs\(\times\)(nthComp + constant \(\times\)nthComp) \\ REL & tbabs\(\times\)ztbabs\(\times\)relxill \\ \hline \hline \end{tabular} \end{table} Table 4: Physical soft excess model abbreviations, and corresponding xspec implementations.

Figure 14: Distribution of computed rest-frame E\({}_{\rm cross}\) values. Sources best fit with a power law are shown with a black dotted line, sources best fit with a soft excess are shown as a red solid line, sources best fit with a warm absorber are shown as a blue dash-dot line, and sources best fit with partial covering absorption are shown with an orange dotted line. The median of E\({}_{\rm cross}\) = 0.55 keV for the soft excess sample is indicated with a solid red line. The typical error bar is shown in black.

Figure 15: Distributions of soft excess strength and soft flux fraction. Top: Distribution of soft excess strengths (\(F_{SE}/F_{PL}\)) shown for sources best fit by each model.
Sources best fit only with a single absorbed power law are shown with a black dotted line, sources best fit with a soft excess (PL+PL) are shown as a red solid line, sources best fit with a warm absorber are shown as a blue dash-dot line, and sources best fit with partial covering absorption are shown with an orange dotted line. Bottom: same as top, but shown for the soft flux fraction, \(F_{0.2-1}/F_{0.2-10}\). The vertical black line shows the expected SFF for a source with \(\Gamma=2\) and nH = \(5\times 10^{22}\) cm\({}^{-2}\).

The blackbody seed temperature, \(kT_{\rm bb}\), is fixed at 1 eV and is linked between the two coronae (e.g. Petrucci et al., 2018, 2020). The X-ray spectral shape is not dependent on this parameter so long as it remains at a reasonable disc temperature of a few eV (Petrucci et al., 2018). For the hot corona, the photon index is allowed to vary uniformly between one and three in order to capture the likely parameter space. The electron temperature is frozen at 100 keV, well outside the eROSITA bandpass and in agreement with the assumptions of other works (e.g. Fabian et al., 2015). For the warm corona, the photon index is allowed to vary uniformly between two and 3.5, and the electron temperature is allowed to vary uniformly between \(0.1\) and \(1\) keV. These parameter ranges are based on fits and simulations performed by, for example, Petrucci et al. (2018, 2020), where it is demonstrated that these parameters are reasonable when assuming an optical depth of \(\sim 10-20\). Finally, as in the PL+PL model, the normalisations of the two nthComp components are linked, and the flux of the soft corona relative to the hard is set using a cross-normalisation constant. The normalisation is given a log-uniform prior between \(-10\) and 1, and the cross-normalisation is given a log-uniform prior between \(-3\) and 1. All priors of free parameters are listed in Table 5.

### Blurred reflection

In a blurred reflection scenario, some of the X-ray photons emitted from the corona are reflected from the innermost regions of an accretion disc, producing a reflection spectrum (e.g. Ross et al., 1999; Ross and Fabian, 2005; Dauser et al., 2012; Garcia et al., 2013). As photons strike the disc, they are absorbed, producing deep absorption features and edges. As the atoms de-excite, they produce mostly emission features. For an ionised disc, the reflected emission is concentrated in the soft X-ray, but also includes a prominent Fe K\(\alpha\) emission line and a hard reflection continuum. Due to the fast rotation of the disc and the gravitational redshift induced by the central black hole, the features in the reflection spectrum are relativistically blurred, producing a soft excess (e.g. Crummy et al., 2006; Jiang et al., 2019). Understanding the reflection spectrum can reveal many properties of the innermost regions of the AGN, including the height and structure of the corona, the ionisation and abundances in the accretion disc, and changes in these parameters over time. This model has been successfully used to probe the geometry of the corona as well as to explain the variability and spectral shape of many type-1 AGN (e.g. Zoghbi et al., 2008; Dauser et al., 2012; Gallo et al., 2019; Waddell et al., 2019; Boller et al., 2021).
Here, the reflection spectrum and power law are both modelled using the relxill model (Dauser et al., 2012, 2014; Garcia et al., 2013). There are many free parameters in this model, including the inner emissivity index \(q_{1}\), which describes the illumination pattern of the corona onto the accretion disc. This parameter is allowed to vary uniformly between three and ten, while the outer emissivity index (\(q_{2}\)) is fixed to 3. The inclination, or viewing angle, is allowed to vary uniformly between ten and 80 degrees. This parameter should strictly be distributed uniformly in cosine space; however, the inclination is typically poorly constrained and difficult to measure correctly, so the uniform prior is acceptable. The black hole is assumed to have maximum spin, in part due to selection effects which make maximum spin AGN brighter and thus easier to detect in flux-limited samples (e.g. Vasudevan et al., 2016; Baronchelli et al., 2018; Arcodia et al., 2019), and also due to the fact that the spin is difficult to constrain without high signal-to-noise data in the \(4-10\) keV band, where the iron K\(\alpha\) line can be modelled (e.g. Bonson and Gallo, 2016). The inner radius of the disc is fixed at the innermost stable circular orbit (ISCO; \(1.235r_{g}\) for a maximum spin black hole with \(a=0.998\)), and the outer radius is fixed somewhat arbitrarily to \(400r_{g}\), beyond which significant reflection of X-ray photons is not expected for moderate coronal heights. The iron abundance in the disc is fixed to solar, and the disc ionisation (\(\xi=4\pi F/n\), where F is the illuminating flux and n is the hydrogen number density of the disc) is allowed to vary between log(\(\xi\)) of zero and four. Since the coronal power law component is also included in relxill, no separate power-law model is included. The photon index, \(\Gamma\), is allowed to vary between one and three, to account for a very broad range of possible indices. The reflection fraction, which describes the fraction of flux from the corona which is reflected from the accretion disc, is allowed to vary uniformly between 0.1 and 10. Here, a reflection fraction (R) of 0.1 would indicate strong beaming (e.g. the corona is outflowing or forms the base of the jet), a reflection fraction of one suggests that half the flux from the corona is reflected off the disc while the other half is observed directly, and a reflection fraction of 10 is a strong indicator of light bending (e.g. the corona is close to the disc such that the gravitational pull of the black hole bends the path of the light towards the disc). Finally, the normalisation is given a log-uniform prior between \(-10\) and 1 so that AGN with an extreme range of fluxes can be modelled. All priors of free parameters are listed in Table 5. There are several different flavours of relxill, all intended to model different physical properties of the innermost regions of the AGN (e.g. Dauser et al., 2016; Jiang et al., 2019). Users can choose to assume a lamp-post geometry (relxilllp) or a varying disc density (relxillD), among other changes. We also freeze many parameters in our analysis (e.g.
the iron abundance, outer emissivity index, and black hole spin), although these have been shown to vary, in some cases dramatically, between AGN (Zoghbi et al., 2008; Fabian et al., 2009; Daly and Sprinkle, 2014; Reynolds, 2019). In particular, many parameters are best constrained using the iron K\(\alpha\) line profile, as the iron line is broadened by the strong relativistic effects in the central region. Given the limited eROSITA sensitivity and high background levels above \(\sim 5\) keV, this is very difficult for most sources, even those in the hard sample presented in this work. Nevertheless, this simplified treatment of relativistic reflection still has the potential to capture sources which display the typical characteristics of a blurred reflection spectrum.

\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ Model name & log(NH\({}_{\rm c}\)) & \(\Gamma_{\rm h}\) & \(\Gamma_{\rm s}\) & log(c) & \(kT_{e}\) & q\({}_{1}\) & Inclination & log(\(\xi\)) & Reflection fraction & norm \\ & [cm\({}^{-2}\)] & & & & [keV] & & (degrees) & & & \\ \hline Prior & log-uniform & uniform & uniform & log-uniform & log-uniform & uniform & uniform & log-uniform & log-uniform & log-uniform \\ \hline NTH & \(20-25\) & \(1-3\) & \(2-3.5\) & \(-3\) to 1 & \(0.1-1\) & - & - & - & - & \(-10\) to 1 \\ REL & \(20-25\) & \(1-3\) & - & - & - & \(3-10\) & \(10-80\) & \(0-4\) & \(0.1-10\) & \(-10\) to 1 \\ \hline \hline \end{tabular} \end{table} Table 5: Priors used for the physical true soft excess models. Column (1) gives the shortened name of the model. Column (2) shows the lower and upper limit of the host galaxy absorption. Column (3) shows the lower and upper limit of the hard (coronal) power law index. Columns (4) through (6) show the upper and lower limits of the warm Comptonisation parameters, and columns (7) through (10) show the constraints placed on the blurred reflection parameters. Column (11) gives the constraint placed on the power law normalisation.

### Model selection

Having fit both of the models described above to the 29 sources in the soft excess sample, the evidence for each model can be computed and compared in order to determine which model is preferred, with Bayes factors computed as given in Table 3. The results of this comparison are presented in Table 6. The final column in the table indicates the preferred model for each source. Out of the 29 sources, six are better fit with blurred reflection and the remaining 23 are best fit with soft Comptonisation. More closely examining the sample, it is apparent that many of the sources which are best fit with blurred reflection have very small differences in Bayes factors between models. This is not the case for sources best fit with soft Comptonisation, where some sources have much larger differences in Bayesian evidence than with blurred reflection. This indicates that while not all sources are fit well with all models, all sources are relatively well fit with soft Comptonisation. This effect can also be seen in Fig. 16, which shows the Bayes factor (K\({}_{nth}\)) for the warm corona for each source plotted against the normalised difference in Bayes factor values, (K\({}_{rel}\) - K\({}_{nth}\))/K\({}_{nth}\). Sources which lie above zero on the y-axis (shown with green triangles) are best fit with blurred reflection, and sources which lie below zero (shown as purple squares) are best fit with the warm corona model.
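The quantity plotted in Fig. 16 reduces to a simple operation on the per-source evidences; a minimal sketch with stand-in numbers is given below.

```python
import numpy as np

# Stand-in Bayes factors relative to the baseline power-law fit,
# K = ln(Z_model) - ln(Z_PL), for a handful of sources.
k_nth = np.array([5.2, 3.1, 4.4, 6.0])  # warm corona (NTH)
k_rel = np.array([4.9, 3.3, 2.9, 5.6])  # blurred reflection (REL)

best_is_rel = k_rel > k_nth              # above zero in Fig. 16
norm_diff = (k_rel - k_nth) / k_nth      # the quantity on the y-axis
```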
For many of the sources best fit with the warm corona model, the evidence preference for this model is substantial. Furthermore, sources which have more statistically significant soft excesses (shown with filled symbols) are far more likely to be best fit with a warm corona, with only 2/20 preferring a blurred reflection model. The last line of Table 6 shows the evidence comparison for the full sample, following, e.g., Baronchelli et al. (2018). Unsurprisingly, the best fitting model for the full sample is the soft Comptonisation model. Individual model parameters and their errors for each source are given in Appendix B, and we note that many parameter values are poorly constrained and have large errors for these complex models.

\begin{table} \begin{tabular}{c c c c} \hline \hline (1) & (2) & (3) & (4) \\ eROID & ln(Z\({}_{nth}\)) - ln(Z\({}_{best}\)) & ln(Z\({}_{rel}\)) - ln(Z\({}_{best}\)) & Best model \\ \hline \hline 00001 & 0.0 & -0.69 & NTHCOMP \\ 00004 & 0.0 & -3.28 & NTHCOMP \\ 00007 & 0.0 & -1.26 & NTHCOMP \\ 00011 & 0.0 & -2.83 & NTHCOMP \\ 00029 & 0.0 & -0.60 & NTHCOMP \\ 00034 & 0.0 & -2.77 & NTHCOMP \\ 00035 & -0.25 & 0.0 & RELXILL \\ 00038 & 0.0 & -0.85 & NTHCOMP \\ 00039 & 0.0 & -2.80 & NTHCOMP \\ 00045 & 0.0 & -2.30 & NTHCOMP \\ 00054 & 0.0 & -1.88 & NTHCOMP \\ 00057 & -0.16 & 0.0 & RELXILL \\ 00076 & 0.0 & -0.32 & NTHCOMP \\ 00121 & 0.0 & -0.92 & NTHCOMP \\ 00122 & -0.15 & 0.0 & RELXILL \\ 00153 & 0.0 & -0.98 & NTHCOMP \\ 00176 & -0.05 & 0.0 & RELXILL \\ 00200 & 0.0 & -0.75 & NTHCOMP \\ 00204 & -0.39 & 0.0 & RELXILL \\ 00216 & 0.0 & -2.01 & NTHCOMP \\ 00237 & 0.0 & -0.74 & NTHCOMP \\ 00288 & 0.0 & -1.15 & NTHCOMP \\ 00340 & 0.0 & -1.18 & NTHCOMP \\ 00358 & 0.0 & -1.36 & NTHCOMP \\ 00426 & 0.0 & -1.64 & NTHCOMP \\ 00760 & -0.26 & 0.0 & RELXILL \\ 00784 & 0.0 & -0.27 & NTHCOMP \\ 01136 & 0.0 & -0.44 & NTHCOMP \\ 01736 & 0.0 & -1.16 & NTHCOMP \\ \hline \hline ALL & 0.0 & -30.93 & NTHCOMP \\ \hline \hline \end{tabular} \end{table} Table 6: Evidence comparison for the 29 sources in the soft excess sample. The source ID is listed in column (1). Columns (2) and (3) show the Bayesian evidence, where the values are all normalised by subtracting the highest fit value. A negative number in the column therefore indicates the worse fitting model, while a value of zero shows the preferred model. In the final row, the same exercise is performed for the full sample. Column (4) lists the name of the best fitting model.

Figure 16: Comparison of Bayes factors for the warm corona and relativistic blurred reflection models. Bayes factors for the warm corona model are shown on the horizontal axis, and the difference between Bayes factors normalised by the warm corona Bayes factor is shown on the vertical axis. Open shapes indicate soft excesses with 97.5% confidence, and filled shapes indicate 99% confidence. Sources best fit with a warm corona are shown as purple squares, and sources best fit with blurred reflection are shown as green triangles.

### Model parameters

It is also of interest to compare the properties derived from the phenomenological double power law model of the soft excess sources, shown in Fig. 17. Sources best fit with blurred reflection are again shown in dark green, and sources best fit with a warm corona model are shown in purple. Median values for each sub-sample are shown with vertical lines. While the distributions of hard X-ray photon indices are very similar, the distributions of soft photon indices differ, with the median value being much higher for sources best fit with blurred reflection.
While this result is not significant when using e.g. the Anderson-Darling test, there are very few sources best fit with blurred reflection, so it is difficult to make firm conclusions. Nevertheless, the result is intriguing, as it may present a diagnostic tool to differentiate between a warm corona and a blurred reflection soft excess, and will be discussed further in Sect. 6.2. Regarding the parameters of the warm Comptonisation models, the median hot corona photon index is \(\Gamma=1.61\), and the median warm corona photon index is \(\Gamma=3.15\). The median warm corona temperature is kT \(=0.45\) keV, which is consistent with other studies (Petrucci et al., 2018, 2020). The distributions of the best-fit parameters for all soft excess sources are shown in Fig. 18, where the warm corona values and errors are indicated with black squares. The red lines show different values of the optical depth, where the warm corona optical depths are between \(\tau=5\) and \(\tau=20\) and the optical depths for the hot corona are \(\simeq 1\). While the warm corona photon index is also consistent with previous works, the hot corona has a much flatter spectral index than expected (e.g. \(\Gamma\sim 1.8-1.9\) in previous studies, and \(\Gamma\sim 2.0\) in typical eROSITA sources). This result is unexpected and is further discussed in Sect. 6.3. Moving to the parameters of the blurred reflection modelling, it is apparent that many parameters are not well constrained, likely due to the lower quality of many eFEDS spectra as well as the absence of a high signal-to-noise iron line. Examining the best-fitting parameters for the two blurred reflection sources with soft excesses at \(>\)99% significance (corresponding to the filled green triangles in Fig. 16), both have intermediate disc ionisations of \(\xi\sim 100\), intermediate inclinations of \(\sim 40\) degrees (as expected for type-1 AGN), and, of particular note, high reflection fractions \(R\gg 1\). In fact, examining all sources best fit with blurred reflection, it is found that all have best fit \(R>1\), although not all are constrained to be \(>1\). This suggests that the spectral fitting method presented in this work preferentially identifies sources with very strong reflection components. Indeed, in some cases of sources best fit with blurred reflection, there seems to be more excess emission around \(0.7-0.9\) keV; this may correspond to the iron-L complex, which is present in the reflection spectrum but not in the Comptonisation spectrum, and may explain why the blurred reflection model is preferred. Spectral modelling of brighter sources with more counts in the \(4-7\) keV band, or including data above 8 keV, would help to identify the iron line and Compton hump if present and thus better constrain blurred reflection parameters. To visually demonstrate differences between the spectral models, Fig. 19 shows an example source, ID 00034, which is best fit with a warm corona model. The data are shown in black along with the background model in a black dashed line, an absorbed power law in a grey dotted line, the blurred reflection model in a green dash-dot line, and the best fit warm corona model in purple. From the spectrum, it can be seen that the blurred reflection model under-estimates the flux in the \(0.2-0.3\) keV energy band, and over-estimates the flux around 0.5 keV where the blurred reflection model features strong iron emission.
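For reference, constant-\(\tau\) curves like those in Fig. 18 can be approximated with the standard non-relativistic relation for unsaturated Comptonisation, in which the energy index \(\alpha=\Gamma-1\) satisfies \(\alpha=-3/2+\sqrt{9/4+4/y}\), with Compton parameter \(y\simeq 4\theta\,\tau(1+\tau)\) and \(\theta=kT_{e}/m_{e}c^{2}\). This is indicative only (and marginal for the \(\sim\)100 keV hot corona), not necessarily how the curves in the figure were computed.

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy in keV

def optical_depth(gamma, kT_e):
    """Thomson optical depth implied by a photon index and electron
    temperature (keV), inverting alpha = -3/2 + sqrt(9/4 + 4/y) with
    y = 4 * theta * tau * (1 + tau). Non-relativistic approximation."""
    theta = kT_e / M_E_C2
    y = 4.0 / ((gamma + 0.5) ** 2 - 2.25)
    return 0.5 * (np.sqrt(1.0 + y / theta) - 1.0)

print(optical_depth(3.15, 0.45))   # median warm corona -> tau ~ 10
print(optical_depth(1.61, 100.0))  # median hot corona  -> tau ~ 1
```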
Both models, however, clearly provide a much better fit than the absorbed power law model, which fails to reproduce the spectral shape at most energies.

Figure 17: Distributions of soft and hard photon indices separated by best soft excess model. Top: Distributions of hard photon index obtained from the PL+PL modelling for sources in the soft excess sample. Sources which are best fit with soft Comptonisation are shown with a purple dotted line and sources best fit with blurred reflection are shown with a solid dark green line. Median values for each sample are shown with vertical lines in the corresponding colours. The shaded histograms indicate the sources with 99% significance on the soft excess. Bottom: As top, but showing the soft X-ray photon index.

## 5 Spectral properties

### Luminosity-redshift plane

To study the distribution of soft excesses and warm absorbers in a parameter space which can easily be compared to other surveys, sources are plotted in the L-z plane in Fig. 20, where most redshifts are spectroscopic. The \(2-10\) keV luminosity is estimated from the baseline absorbed power law model. Sources best fit with an absorbed power law are shown as grey crosses, sources best fit with warm absorbers are shown as blue circles, sources best fit with partial covering are shown as orange pentagons, and sources best fit with soft excesses are shown as red squares. In this way, we differentiate from the luminosity-redshift plane already presented in Nandra et al. (in prep.). Different opacities and fill-styles indicate different purity thresholds for the best fit model, as previously defined. The rest-frame, absorption-corrected X-ray luminosities of sources with soft excesses and warm absorbers follow those of sources best fit with an absorbed power law. Most sources with soft excesses can be detected up to about \(z=0.5-0.6\), above which the soft excess is presumably shifted out of the observed band, while complex absorption can be detected up to about \(z=1\), with a few sources at higher redshifts. Interestingly, the highest redshift source in the sample, with a spectroscopic redshift of \(z=3.277\) (see Nandra et al. in prep.), shows complex absorption best fit by a warm absorber. However, because this source has very few counts below \(\sim 1\) keV, it is hard to determine definitively the nature of the soft X-ray complexity without data with a higher signal-to-noise ratio.

### Characterising the soft excess and complex absorption

Having completed the modelling for all 200 sources in the hard X-ray-selected sample of AGN in eFEDS, the soft excess, warm absorption and partial covering sub-samples can be examined in more detail to search for distinctive characteristics. When considering the full sample of hard X-ray-selected AGN, only \(\sim 15\%\) of sources show strong statistical evidence for a soft excess. However, this fraction can increase significantly when only considering a small parameter space. Figure 21 shows the distribution of photon index and host-galaxy column density. These values are obtained from the ztbabs component of the baseline PL model (and thus do not include any additional spectral components), regardless of the best-fit model for each source. In this way, the properties of sources can be compared for a naive approach wherein it is assumed that all spectra can only be fit with a power law. Here, the sources with soft excesses are heavily clustered at large photon indices and small host galaxy column densities.
This is sensible, as the model attempts to explain both the high-energy component and the steep soft excess with a single power law, increasing the slope. Selecting only sources with photon indices larger than two and host galaxy column densities less than \(2\times 10^{20}\) cm\({}^{-2}\), 42% of the sources have soft excesses. These are not intrinsic properties of these sources; indeed, it is found that when the soft excess is modelled correctly, the measured mean photon index decreases by 0.35, from a mean of 2.15 when using only one power law to a mean of 1.8 for the hard photon index when the second power law is added. These values are similar to the mean value of the sample of sources best fit with only a power law.

\begin{table} \begin{tabular}{l c c c c c} \hline (1) & (2) & (3) & (4) & (5) & (6) \\ eROID & q\({}_{1}\) & Inclination & \(\Gamma\) & log(\(\xi\)) & log(\(R\)) \\ \hline 00035 & 7.1 \({}^{+2.0}_{-2.3}\) & 36 \({}^{+21}_{-18}\) & 2.45 \({}^{+0.10}_{-0.11}\) & 1.8 \({}^{+1.4}_{-1.2}\) & 0.4 \({}^{+0.5}_{-0.8}\) \\ 00204 & 4.3 \({}^{+2.6}_{-1.0}\) & 43 \({}^{+19}_{-21}\) & 1.86 \({}^{+0.22}_{-0.25}\) & 2.0 \({}^{+0.8}_{-0.9}\) & 0.8 \({}^{+0.1}_{-0.3}\) \\ \hline \hline \end{tabular} \end{table} Table 7: Summary of blurred reflection parameters for the two sources with highly significant soft excesses (\(>99\%\)) which are best fit with blurred reflection. Column (1) gives the eROSITA ID, column (2) gives the emissivity index, column (3) gives the disc inclination, column (4) gives the photon index, column (5) gives the disc ionisation and column (6) gives the reflection fraction.

Figure 19: ID 00034 (z = 0.1027), a source best fit with a warm corona model. The spectrum (re-binned for display) is shown in black, the background model is shown as a black dashed line, the power law model is shown as a grey dotted line, a blurred reflection model is shown as a green dash-dot line, and the best fitting warm corona model is shown in purple. The bottom three panels show the residuals for the power law, blurred reflection, and a warm corona, respectively. Data have been re-binned for display purposes.

Figure 18: Warm corona photon indices and temperatures derived from the soft Comptonisation modelling. Lines of constant optical depth are shown in red.

Examining now the parameter space region populated by the complex absorption sources, many of the sources with extremely low photon indices (\(\Gamma<1.4\)) and a range of column densities are best fit with warm absorption or partial covering, which likely explains why these photon indices appear so flat compared to the more typical values of \(\Gamma\sim 1.8\). When the correct absorption model is applied, the photon index increases to more reasonable values for many of these sources. However, there also appears to be a large cluster of sources with column densities of \(>10^{22}\) cm\({}^{-2}\) and photon indices around \(\Gamma\sim 2\) (see upper middle of Fig. 21). These sources may be of particular interest, as they suggest the presence of Compton-thin AGN in eFEDS, which may show absorption and scattered emission from the torus. Almost all sources with column densities \(>10^{22}\) cm\({}^{-2}\) (18/26), and all eight sources with column densities \(>10^{23}\) cm\({}^{-2}\), have evidence for a warm absorber.
This raises the very interesting possibility that many of the AGN that might have been classified as Compton-thin obscured AGN are actually better described with a warm absorber model, and suggests that eROSITA is more likely to probe these complex absorption sources as opposed to AGN obscured by neutral, distant gas. Such column densities are likely too large to be associated with absorption on host-galaxy scales, and must instead originate in the torus. However, these large column densities would not be expected in type-1 AGN, which are typically unobscured. Searching for obscured (\(>10^{22}\) cm\({}^{-2}\)) sources which also have SDSS spectra, 17 such sources were found. Of these, six are type-1 AGN with constrained black hole masses and accretion rates (with broad H\(\beta\), Mg II or C IV lines), and four of these six have evidence for a warm absorber. The other optical spectra do not have sufficient data quality to confirm whether they are type-1 or type-2 AGN (see Sects. 5.3 and 6.4 for more details). Examining each of these sources individually, many show deep absorption edges in the \(\sim 0.5-1\) keV band. When modelled using a single absorbed power law, the absorption edge is well fit with the single absorber, but the emission is significantly under-fit at low energies (e.g. \(\sim 0.2-0.5\) keV). An example of this is shown in Fig. 22, which shows a spectrum (ID 00439, also presented in Brusa et al. 2022) folded with the background model, an absorbed power law model, a partial covering absorber, and a warm absorber. Without a warm absorber, the flux is significantly under-estimated at the softest X-ray energies (below \(0.4-0.5\) keV). When the warm absorber is added, the host galaxy column density modelled using ztbabs is consistent with \(10^{20}\) cm\({}^{-2}\), and the absorption edge better describes the emission below \(\sim 1\) keV. The partial covering absorber also fails to produce the correct spectral shape. Furthermore, when the warm absorber is added to these sources, the host-galaxy column densities are all consistent with \(10^{20}\) cm\({}^{-2}\), comparable to the values found in the Milky Way and in other host galaxies. This can also be seen in the corner plot for the warm absorption model of source ID 00439, shown in Fig. 23. Interestingly, the optical spectrum of this source was found to show evidence for an ionised outflow (Brusa et al. 2022), further supporting the warm absorber X-ray model. This source has relatively few X-ray counts as it is heavily absorbed in the soft band, and key parameters are less well constrained. There is some degeneracy between the power law index and normalisation, and the ionisation appears to be low, but the marginal posterior probability distribution has a small secondary peak at a higher ionisation. Most strikingly, the column densities of the host galaxy and warm absorber cannot be independently constrained, though the host galaxy absorption is consistent with \(10^{20}\) cm\({}^{-2}\), and the warm absorber component significantly improves the fit. The constraining of the absorbing column densities will be discussed in further detail in Sect. 6.4.

Figure 21: Distributions of photon indices and the host-galaxy absorption column densities. These values are always measured using the baseline PL model, irrespective of the true best-fit model for each source.
Sources with soft excesses are shown as red squares, sources with warm absorbers are shown with blue circles, sources best fit with partial covering are shown with orange pentagons, and sources best fit with the baseline power law model are shown as black crosses. Marker styles represent samples of different purities, as described in Sect. 3. The typical error bars are shown in the top right corner, and the horizontal grey line indicates a column density of \(10^{22}\) cm\({}^{-2}\).

Figure 20: Distributions of redshifts and \(2-10\) keV un-absorbed X-ray luminosity for each source. Sources best fit with an absorbed power law are shown as grey crosses, sources with soft excesses are shown as red squares, and sources with warm absorbers are shown with blue circles.

### Relationship with optically derived properties

A total of 172 of the AGN in our eFEDS hard X-ray-selected sample have optical spectra available from SDSS I-V (Bowen & Vaughan 1973; Gunn et al. 2006; Smee et al. 2013; Dawson et al. 2016; Blanton et al. 2017; Kollmeier et al. 2017; Wilson et al. 2019; Ahumada et al. 2020). By fitting these spectra, and using certain assumptions, the optical luminosity obtained from spectral fitting can be used to estimate the bolometric luminosity, L\({}_{\rm bol}\) (see e.g. Shen et al. 2011), and the width of the broad optical lines can be used to estimate the black hole mass. The spectral fitting procedure applied to AGN in the hard X-ray-selected sample of eFEDS AGN is described in more detail in Nandra et al. (in prep.). In short, the optical spectral fitting programme PyQSOFit (Guo et al. 2018) is used to measure the continuum luminosity and the widths of the optical/UV broad lines (H\(\beta\), Mg II or C IV), such that L\({}_{\rm bol}\) and M\({}_{\rm BH}\) can be estimated. Using the black hole masses and bolometric luminosities (derived from the optical luminosity), the Eddington luminosity (L\({}_{\rm Edd}\)) can be defined as \[L_{\rm Edd}=1.26\times 10^{38}\Big{(}\frac{M_{\rm BH}}{M_{\rm sun}}\Big{)}\ \ {\rm[erg\,s^{-1}]}, \tag{4}\] and the Eddington ratio, \(\lambda_{\rm Edd}\), can then be defined as \[\lambda_{\rm Edd}=\frac{L_{\rm bol}}{L_{\rm Edd}}. \tag{5}\]

Figure 23: Corner plot for source ID 00439, best fit with a warm absorption model but which shows evidence for Compton-thin absorption when modelled with an absorbed power law. The diagonal panels show the marginal posterior probability distribution for each parameter, while the other panels show the conditional probability distribution functions for each pair of parameters. Here, log(nH) is the host galaxy absorber column density (in units of \(\times 10^{22}\) cm\({}^{-2}\)), LOGNH is the column density of the warm absorber (cm\({}^{-2}\)), LOGXI is the ionisation of the warm absorber (ergs cm s\({}^{-1}\)), log(norm) is the power law normalisation, PhoIndex is the photon index of the power law, and norm is the relative renormalisation of the background model with respect to the source model, which is consistent with unity.

Figure 22: Example spectrum of ID 00439 (z = 0.6027), which is best fit with a warm absorber model. The spectrum is shown along with the background model (black dashed line), the absorbed power law model (grey), a partial covering absorption model (orange), and the best-fit warm absorber model (blue). The bottom three panels show the residuals for the power law, warm absorber and partial covering models, respectively. Data have been re-binned for display purposes.
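Equations (4) and (5) translate directly into a one-line computation; a minimal sketch with an illustrative mass and luminosity follows.

```python
def eddington_ratio(l_bol, m_bh):
    """Eddington ratio from the bolometric luminosity (erg/s) and the
    black hole mass (solar masses), following Eqs. (4) and (5)."""
    l_edd = 1.26e38 * m_bh  # Eddington luminosity in erg/s
    return l_bol / l_edd

# e.g. a 5e8 Msun black hole radiating 1e45 erg/s (illustrative values):
print(eddington_ratio(1e45, 5e8))  # ~0.016
```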
Figure 24: Distribution of black hole masses and bolometric luminosities calculated from the optical spectra. Sources best fit with an absorbed power law are shown as grey crosses, sources with soft excesses (97.5%) are shown as red squares, sources with warm absorbers (97.5%) are shown with blue circles, and sources with partial covering (97.5%) absorbers are shown with orange pentagons.

We then apply the selection criteria presented in Wu & Shen (2022) to select a clean sample of line measurements, namely that the flux of the line divided by the error on the flux must be \(>2\), which excludes three sources from the sample. The analysis is restricted to sources where the accretion rate can be constrained, that is, where the error on the accretion rate does not exceed the value of the accretion rate, which removes an additional 13 sources. Using this approach, a total of 154 AGN have constrained accretion rates, measured from the H\(\beta\), Mg II or C IV optical lines. Fig. 24 shows the distribution of black hole masses and bolometric luminosities for the sources included in the sample. There are several sources with relatively low black hole masses which appear to host highly significant soft excesses. Furthermore, there is a large cluster of sources with black hole masses of the order of \(5\times 10^{8}M_{\rm sun}\). At these masses, there appears to be some evidence that the sources with soft excesses have higher bolometric luminosities than those with partial covering absorbers or warm absorbers. In Nandra et al. (in prep.), the distributions of redshift, black hole mass, bolometric luminosity and Eddington ratio are discussed in detail. Here, these distributions are re-examined, but separating the sources based on the best fitting model from this work. These distributions are shown in Fig. 25, with median values for each sub-sample indicated with vertical dashed lines in the corresponding colours. No significant differences were found in the bolometric luminosities (top left) or black hole masses (top right) between models. However, the distributions of the FWHM of the optical broad lines (bottom left), as well as the distributions of Eddington ratios (bottom right), differ significantly. To quantify this, we use the Anderson-Darling test (AD test), which is more sensitive to small changes in the wings of the distributions than the KS-test, and is ideal for the treatment of smaller samples. Using this, it is found that the measured FWHM values for sources with warm absorbers are significantly higher (AD test p-value 0.012) than those with soft excesses. In general, it is also seen that sources with soft excesses tend to have lower than average FWHM values (as compared to the full sample), while those with warm absorbers tend to have larger FWHM. Also of interest is to examine the sources with FWHM values consistent with those found in narrow-line Seyfert 1 (NLS1) galaxies (e.g. Boller et al., 1996), which are classified based on their H\(\beta\) line widths and other optical line properties (Osterbrock & Pogge, 1985; Goodrich, 1989) and are believed to host younger, lower mass black holes accreting at a high fraction of the Eddington limit (e.g. Pounds et al., 1995; Grupe, 2004; Gallo, 2018; Waddell & Gallo, 2020). For sources with FWHM \(<2000\) km s\({}^{-1}\), four out of five sources show evidence for a soft excess, which is consistent with previous findings that NLS1s typically have strong and steep soft excesses (e.g. Boller et al., 1996; Waddell & Gallo, 2020).
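The two-sample comparisons used in this section (KS for the broader distribution shapes, AD for greater sensitivity in the tails) are both available in scipy; the sketch below uses stand-in FWHM distributions rather than the measured values.

```python
import numpy as np
from scipy.stats import anderson_ksamp, ks_2samp

rng = np.random.default_rng(seed=2)
fwhm_soft_excess = rng.lognormal(mean=8.0, sigma=0.4, size=29)  # km/s
fwhm_warm_abs = rng.lognormal(mean=8.6, sigma=0.4, size=29)     # km/s

# Two-sample Kolmogorov-Smirnov test:
ks_stat, ks_p = ks_2samp(fwhm_soft_excess, fwhm_warm_abs)

# k-sample Anderson-Darling test; note that scipy clips the returned
# significance level to the range 0.1 per cent - 25 per cent.
ad_res = anderson_ksamp([fwhm_soft_excess, fwhm_warm_abs])
print(ks_p, ad_res.significance_level)
```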
Given that the two parameters used to construct the Eddington ratio are the FWHM and the luminosity, and that there are no differences in luminosities between samples, it should be assumed that differences in FWHM are likely driven by differences in the Eddington ratios of the respective samples. Indeed this is seen, and the bottom-right panel of Fig. 25 shows the distributions of Eddington ratios for sources best fit with each model, as well as the median values for each sample, which are indicated with vertical lines in the corresponding colours. While the median Eddington ratio for the partial covering sample agrees with the power law sample, the median values for the soft excess and warm absorber samples differ significantly, with the soft excess sources having a much higher median Eddington ratio (AD test p-value 0.0059). The median value for the warm absorbers is also significantly lower than for the power law sample (AD test p-value 0.029). This may indicate some intrinsic differences between the sources which show evidence for soft excesses compared to those which show evidence for warm absorbers. To look at this in more detail, it is useful to examine the fraction of sources best fit by each model as a function of accretion rate. This is shown in Fig. 26, where the fractions of sources best fit with a soft excess, warm absorber or partial covering component are shown, using the same binning as in the bottom-right panel of Fig. 25. Sources best fit with a soft excess tend to have higher Eddington ratios, and the fraction of sources with soft excesses increases with increasing Eddington ratio. By contrast, the sources with warm absorbers typically have lower Eddington ratios, with a small secondary peak at \(\lambda_{\rm Edd}\sim 0.1\). The sources with partial covering appear unremarkable, with most sources having median Eddington ratios of \(\lambda_{\rm Edd}\sim 0.005-0.1\). To confirm these results, several potential sources of bias were examined. First, the lowest and highest Eddington ratio bins had very few sources, which may lead to erroneously large fractions of sources best fit with a given model in those bins. The distribution was then recomputed using just four bins, where the fractions from the lowest two and highest two bins were summed. Again, the fraction of sources with warm absorbers appeared to decrease with increasing Eddington ratio with a small secondary increase around \(\lambda_{\rm Edd}\sim 0.1\), and the fraction of sources best fit with a soft excess increases with increasing Eddington ratio, suggesting that the binning is not significantly influencing this result. Next, while soft excesses were mostly detected up to redshifts of \(z\sim 0.5-0.6\) (see Fig. 20), warm absorbers could be found up to much higher redshifts, with some even found above \(z=1\) (Fig. 20). The analysis was therefore repeated using only sources with \(z\leq 0.6\). The median Eddington ratio was again found to be significantly higher for sources with a soft excess than for those with a warm absorber (AD test p-value 0.0031), and the fractions of sources per bin show the same trends as for the full sample. Sources with soft excesses typically have higher X-ray spectral counts than sources best fit with a power law, as there are additional counts in the soft band. The analysis is therefore repeated by selecting a narrow range of counts (in the \(0.2-5\) keV band), here between 25 and 1000 counts.
This count selection removes the very high signal-to-noise spectra, but also removes the very low count spectra where additional spectral components may be difficult to identify. The differences remain significant, with an AD test p-value of 0.015. If the minimum number of counts is increased to 50, the AD test p-value increases slightly to 0.035. Many of the higher accretion rate sources have soft excesses, but also have high counts. By contrast, many of the warm absorber sources have lower counts as they are heavily absorbed, and also have lower accretion rates. However, when only using sources with between 50 and 1000 counts, a large fraction of sources has been removed from the sample and it is difficult to make statistical evaluations. We note that no correlations exist between the number of counts and the statistical significance of a warm absorber, partial covering, or soft excess component, and that all these components are found in sources with a broad range of counts. Finally, to maximise the number of sources with measurable accretion rates, there is no signal-to-noise cut placed on the optical spectra. However, this can lead to erroneous measurements of the black hole mass (e.g. Coffey et al. (2019) and references therein). Therefore, the analysis is again repeated using a minimum median SDSS spectrum signal-to-noise ratio of 5, as proposed in these works. Again, the results are confirmed - the Eddington ratios were found to be significantly higher for sources with a soft excess than for those with a warm absorber (AD test p-value 0.0042), and the trends observed in the fractions per bin were unchanged. These tests suggest that the differences observed are intrinsic to the sources and not simply related to biases in the analysis. For the subsequent analysis and discussions, no cuts are made on the redshift, X-ray counts, or SDSS signal-to-noise. ### X-ray and optical-derived parameter correlations Previous works (e.g. Shemmer et al. 2008; Risaliti et al. 2009; Brightman et al. 2013; Trakhtenbrot et al. 2017) have shown a shallow correlation between the X-ray photon index and the Eddington ratio, although this has been debated or refuted in other works (e.g. Laurenti et al. 2022). This correlation is often explained in the context of the eigenvector 1 (EV1) space, wherein the main variance in samples of quasars is shown to take the form of an anti-correlation between the strength of the FeII and [OIII] optical emission, which has been confirmed to be driven by the Eddington ratio (Boroson & Green 1992; Shen & Ho 2014; Wolf et al. 2020). To examine this in the context of eFEDS, the (hard) photon index from the best fitting spectral model is shown in Fig. 27, plotted against the Eddington ratios, with histograms also shown for each parameter. There is a general trend where sources with steeper photon indices also have higher Eddington ratios; the slope is constrained to be 0.07\(\pm\)0.05, shallow but positive, and in agreement with the slope found in Trakhtenbrot et al. (2017).

Figure 25: Histograms of key optical-derived properties, shown as: all sources (grey), sources best fit with a power law (black dotted line), sources best fit with a soft excess (red solid line), sources best fit with a warm absorber (blue dot-dash line) and sources best fit with partial covering (orange dashed line). Median values for each sub-sample are indicated with vertical lines in corresponding colours. Top left: distributions of bolometric luminosities. Top right: distributions of black hole masses. Bottom left: distributions of FWHM of the H\(\beta\), Mg II, or C IV optical lines. Bottom right: distributions of the Eddington ratios (\(\lambda_{\mathrm{Edd}}\)).
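A plain least-squares version of the \(\Gamma\)-\(\log\lambda_{\rm Edd}\) fit quoted above is sketched below on mock data; the published slope comes from a regression that accounts for the measurement errors, so this version is purely illustrative:

```python
# Illustrative Gamma vs log(lambda_Edd) fit on mock data (errors ignored).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
log_ledd = rng.uniform(-3, 0, size=154)                     # mock log Edd. ratios
gamma = 1.9 + 0.07*log_ledd + rng.normal(0, 0.2, size=154)  # mock photon indices

fit = linregress(log_ledd, gamma)
print(f"slope = {fit.slope:.3f} +/- {fit.stderr:.3f}, intercept = {fit.intercept:.2f}")
```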
Performing a KS test on the photon indices for the sources best fit with warm absorbers and soft excesses, a p-value of 0.5 is obtained, suggesting that there is no difference in photon index. This also suggests that some additional intrinsic property of these systems is responsible for the observed differences. Separately, the Eddington ratio was also checked along with the soft excess strength (F\({}_{SE}\)/F\({}_{PL}\)); however, no significant correlation between these parameters was found. These findings will be discussed in more detail in Sect. 6. With the sources that show evidence for a soft excess and also have reliable SDSS spectra, we also attempted to identify a physical parameter that was strongly correlated with the Eddington ratio, black hole mass, or bolometric luminosity in an attempt to explain why sources with soft excesses are found in an increasing fraction at higher Eddington ratios. Examples of interest are shown in Fig. 28, with the hot corona photon index and Eddington ratio shown in the top panel, the warm corona photon index and the Eddington ratio shown in the middle panel, and the warm corona temperature and black hole mass shown in the bottom panel. The best fit lines obtained from a linear regression are shown as shaded purple regions in the top panel, and comparisons of best-fit lines found by Shemmer et al. (2008); Risaliti et al. (2009); Brightman et al. (2013) and Trakhtenbrot et al. (2017) are shown with black solid, dashed, dotted and dash-dot lines, respectively. Neither correlation is significant, due to large errors on the data points and shallow slopes. That being said, the photon indices obtained from the warm corona are lower than, but broadly consistent within errors with, those found by these previous studies. This is also discussed in Sects. 6.2 and 6.3. No correlation is apparent between the warm corona temperature and the black hole mass (or Eddington ratio), and errors for the warm corona temperature are large for most sources. ## 6 Discussion ### Soft X-ray spectral properties of eFEDS AGN The eFEDS survey covering 140 deg\({}^{2}\) of the sky was designed to demonstrate the capabilities of eROSITA for extragalactic survey science, in anticipation of the all-sky survey. Even with a very small fraction of the final survey area, eFEDS has demonstrated the power of eROSITA not only to detect new sources and source populations, but to characterise their properties and provide physical insights via their X-ray spectra. This work uses those capabilities to identify AGN with a soft excess, and to differentiate between various models for the excess, which may be an artefact of complex absorption or a true emission component. More specifically, this work uses Bayesian fitting methods and reliable model selection to characterise the X-ray spectra of a sample of 200 hard X-ray-selected AGN, finding the following results:

* In addition to the underlying continuum modelled with a power law, an additional power law describes the shape of the soft excess better than a blackbody component. This suggests a non-thermal origin for the soft excess is preferred over a blackbody originating in the inner disc.
* Making use of simulations and spectral fitting, 29 (14.5%) sources show evidence for a soft excess, 29 (14.5%) sources show evidence for a warm absorber, and 25 (12.5%) sources show evidence for a partial covering absorber, all with 2.5% (\(\sim 5\)) spurious detections estimated from simulations. By design, there is no overlap between these samples.
* Examining these sources in colour-colour space reveals differences between sources best fit with soft excesses, complex absorbers, and only power laws, which can be used for comparison with soft X-ray-selected samples and the eROSITA all-sky survey.
* Of the 29 sources in the selected soft excess sample, 23 appear best fit with the soft Comptonisation interpretation, and six appear best fit with blurred reflection.
* Many sources which display evidence for Compton-thin absorbers (\(>10^{22}\) cm\({}^{-2}\)) from the baseline absorbed power law model are actually warm absorbers, including all sources with column densities \(>10^{23}\) cm\({}^{-2}\), suggesting that a population of apparently absorbed type-1 AGN actually display evidence for warm absorption from e.g. a disc wind rather than absorption in the distant torus.
* Sources with lower Eddington ratios tend to more frequently host warm absorbers, and the fraction of sources with warm absorbers appears to decrease with increasing Eddington ratio.
* Sources with higher Eddington ratios more frequently have soft excesses, and the fraction of sources with soft excesses increases with increasing Eddington ratio.

These findings and the consequences for the all-sky survey are further discussed in the following sections. ### Soft excesses in eFEDS The sub-sample of 29/200 sources with soft excesses identified in this work proves a powerful tool for studying soft excesses in AGN. First, it should be noted that this fraction of sources with soft excesses is low, with less than half (83 sources, or 41.5%) of sources showing evidence for some form of additional component (soft excess or complex absorption). Furthermore, only 29 sources (14.5% of the full sample) show evidence for a soft excess which cannot be explained with complex absorption alone.

Figure 26: Fraction of the total sources best fit with each model (soft excess, warm absorber, partial covering), shown per bin using the same binning as the bottom-right panel of Fig. 25. Sources best fit with soft excesses are shown in red, sources best fit with warm absorbers are shown as blue dashed lines, and sources best fit with partial covering are shown as orange dash-dot lines.

Previous works studying samples of soft excesses typically assume that all sources have a soft excess and apply the models accordingly, and indeed some works have claimed ubiquitous or near-ubiquitous detections (Piconcelli et al. 2005; Done et al. 2012; Scott et al. 2012), while in this work it is demonstrated that little or no information can be gained from fitting a soft excess component to sources without a soft excess. Soft excess measurements are not biased by the selection of the hard X-ray sample (Fig. 13), and while sources with soft excesses typically have higher fluxes, there are also very bright sources with no evidence for a soft component. It remains unclear if this work finds a lower number of AGN with soft excesses due to low signal-to-noise for some data sets or overly stringent selection criteria, or if this is reflective of the true fraction of sources with soft excesses. Further investigation is needed to properly estimate this fraction.
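The per-bin fractions shown in Fig. 26 can be computed with a short sketch of the following form; the arrays are mock placeholders, and the simple binomial error bars are an assumption, not necessarily the estimator used for the figure:

```python
# Sketch: fraction of sources best fit with a soft excess per Eddington-ratio bin.
import numpy as np

rng = np.random.default_rng(1)
log_ledd = rng.uniform(-3.5, 0.5, size=154)              # mock log Edd. ratios
is_soft_excess = rng.random(154) < (log_ledd + 3.5)/4.0  # mock best-fit labels

bins = np.linspace(-3.5, 0.5, 9)
idx = np.digitize(log_ledd, bins) - 1
for b in range(len(bins) - 1):
    n_tot = np.sum(idx == b)
    if n_tot == 0:
        continue
    frac = np.sum(is_soft_excess[idx == b]) / n_tot
    err = np.sqrt(frac*(1 - frac)/n_tot)                 # binomial error
    print(f"bin {b}: f_SE = {frac:.2f} +/- {err:.2f} (N = {n_tot})")
```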
While this work has found that most sources are best fit or at least well fit with soft Comptonisation, there are six sources which show some preference for blurred reflection. Sources best fit with blurred reflection have higher soft \(\Gamma\) values when fitting with the PL+PL model (see Fig. 17). Part of this may be due to modelling biases or uncertainties; however, this may imply that sources best fit with blurred reflection have steeper soft excesses. Furthermore, it is found that sources best fit with blurred reflection have a higher median soft excess strength, and high reflection fraction values. This suggests that the sources with extremely strong reflection spectra are still identified as best fit with blurred reflection, while sources with weaker measured reflection spectra are in fact unlikely to be best fit with blurred reflection, and the soft excess is more likely produced by a warm corona. This may be a limitation of the eROSITA bandpass, where emission above 5 keV is very difficult to detect due to the combination of low effective area and high background. While the soft components can have a similar shape, reflection models predict a more prominent and relativistically broadened line at \(\sim 6\) keV due to fluorescent emission from Fe K\(\alpha\), which may differ from the more limited reprocessing in the warm corona model. Strong and steep soft excesses are seen in a majority of X-ray observed NLS1 galaxies (e.g. Boller et al. 1996). Individually, many NLS1s have been shown to be well fit with blurred reflection, absorption, or a combination of the two (e.g. Tanaka et al. 2004; Fabian et al. 2009; Gallo et al. 2015, 2019; Jiang et al. 2019; Waddell et al. 2019; Boller et al. 2021). In aggregate, NLS1s also appear to have stronger soft excess strengths, a strong correlation between the soft excess strength and the strength of the Compton hump, and a weak anti-correlation between the photon index and soft excess strength (Waddell & Gallo, 2020). Together, this may suggest that some AGN display NLS1-type spectra, and that these sources are more likely fit with blurred reflection models compared to more typical broad-line Seyfert 1 (BLS1) sources with flatter, smoother and weaker soft excesses, which are well fit with the soft Comptonisation model (Waddell & Gallo, 2020). It is also noteworthy that while the warm corona model fits most sources in this sample, other works have clearly demonstrated the presence of broad lines and deep absorption edges associated with Fe K\(\alpha\) emission and absorption. These features cannot easily be explained with the soft Comptonisation model; rather, they require an absorption component, blurred reflection, or some combination of models. In this work, sources with a very strong reflection component (e.g. \(R\gg 1\)) were typically best fit with a relativistic blurred reflection model, suggesting that sources with very strong reflection components are preferentially detected.

Figure 27: Distribution of Eddington ratios and photon indices measured from the best fitting spectral model. Sources shown in grey are best fit with a power law, sources shown with red squares are best fit with a soft excess, sources shown in blue circles are best fit with warm absorbers, and sources shown as orange pentagons are best fit with partial covering. Histograms for each parameter are also shown, and median values are indicated with solid lines in the corresponding colours.
Further investigation with a larger sample (e.g. eRASS:1) as well as follow-ups with high resolution spectroscopy (e.g. XRISM; Tashiro et al., 2020) will help to identify more sources with high energy spectral curvature and verify if there are sub-samples of AGN which require blurred reflection to explain the spectral shape. ### The warm corona model Despite the caveats discussed above, the evidence comparison for individual sources shows that most are well fit or best fit with a warm corona model (Table 6). The combined evidence comparison also shows that if we assume that the underlying model for all sources is the same, then the warm corona model is preferred over blurred reflection. From the top panel of Fig. 28, it is evident that the warm corona model tends to produce fairly flat hot corona photon indices, with a median value of \(\sim 1.6\). This is flatter than the typical population values of \(\Gamma\sim 1.8-1.9\). Interestingly, these flatter photon indices are not found when examining the PL+PL model (see the left-hand panel of Fig. 17). As discussed in Sect. 5.2, the hard X-ray photon index is correlated with the detection likelihood in the \(2.3-5\) keV band such that sources with very flat photon indices are also very close to the background in the hard X-ray, which may explain why flatter slopes are measured. However, the fact that the PL+PL modelling returns more reasonable slope values of \(\Gamma\sim 1.85\) may suggest that the stricter priors placed on the soft photon index may also be artificially flattening the hard X-ray photon index. This may also be a result of the combined shape of the two nthComp components, in particular when considering the steep cut-off present in the soft power law: indeed, Xu et al. (2021) also find a very flat slope when fitting the type-1 AGN ESO 362-G18 with a double nthComp model as compared to modelling with blurred reflection. While these flat slopes may be consistent with the expected photon indices given their low accretion rates of \(\sim 0.001-0.01\) times the Eddington limit (see top panel of Fig. 28), Laurenti et al. (2022) demonstrate that this correlation may not necessarily hold, and that sources accreting at approximately their Eddington limit display a broad range of photon indices. These findings suggest that the model parameters obtained in this work are acceptable for this analysis. One limitation to modelling the warm corona using the method outlined in this work is that it fails to characterise the effect one corona has on the emission from the other. In a more realistic scenario, photons from the warm corona would be incident on the hot corona, and vice versa. It is even plausible that the hard and soft components are part of the same, multi-temperature and multi-density cloud of electrons. Therefore, it is interesting to investigate evidence for interplay between the two coronae. One way to do this is by examining the soft excess strength, that is, the ratio of the fluxes emitted by each of the two coronae in the same energy band. Examining the distribution of soft excesses (Fig. 15), most of the soft excesses span a relatively small parameter space of a factor of \(\sim 10\), with few extreme values.

Figure 28: Relationship between warm corona parameters and optical-derived parameter measurements. Top: Relationship between Eddington ratio and hot corona photon index, shown for all sources with soft excesses. The best fit line and errors are shown by the shaded purple region. Correlations between these parameters found in previous works are shown in black lines of various line styles. Middle: Same as top, but for the warm corona photon index. Bottom: Relationship between black hole mass and warm corona temperature, kT.
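As a concrete illustration of the soft excess strength used here (the ratio of the fluxes of the two components in a common band), the sketch below integrates two power laws over \(0.5-2\) keV; the normalisations and slopes are hypothetical:

```python
# Minimal sketch of F_SE/F_PL: energy fluxes of two power laws in 0.5-2 keV.
from scipy.integrate import quad

def energy_flux(norm, gamma, e_lo=0.5, e_hi=2.0):
    """Energy flux of a photon spectrum N(E) = norm * E**-gamma (keV units)."""
    return quad(lambda e: norm * e * e**(-gamma), e_lo, e_hi)[0]

f_pl = energy_flux(norm=1.0e-3, gamma=1.85)  # primary (hot corona) power law
f_se = energy_flux(norm=4.0e-4, gamma=3.0)   # soft (warm corona) power law
print(f"soft excess strength F_SE/F_PL = {f_se/f_pl:.2f}")
```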
Interestingly, using a similar definition for the soft excess strength and modelling broad-line Seyfert 1 galaxies observed with _Suzaku_, Waddell & Gallo (2020) found that soft excess strengths span about a factor of \(\sim 10\). By contrast, the observed soft excess strengths in NLS1 galaxies spanned a much larger parameter space (a factor of \(\sim 100\)), extending to very strong soft excess strengths (\(\sim 10\)), similar to the extreme values found for some sources in this work. This may suggest some coupling between the two coronae which only permits certain ratios of fluxes between the two components, while this restriction does not exist for blurred reflection dominated sources, where the flux ratio between components is more dependent on the height of the corona above the accretion disc. ### Dense absorption in eFEDS may be warm and complex When modelling eFEDS sources with the baseline absorbed power law model, many sources appear to show evidence for Compton-thin absorption with column densities \(>10^{22}\) cm\({}^{-2}\). However, examining these sources more closely, a majority (19/26) show evidence for complex absorption (18 warm absorbers and one partial covering absorber), as seen in Fig. 21. This includes all eight sources with column densities above \(10^{23}\) cm\({}^{-2}\), as measured by the baseline model, which are all best fit by a warm absorber model. This raises an intriguing possibility that some, if not most, of the apparently Compton-thin absorbers observed in the Universe are associated with ionised gas hosted in disc winds, and not necessarily in a more distant obscurer such as the torus. In sources with complex absorbers, while the cold and warm absorption column densities are not individually well constrained, the sums of the simple (ztbabs) and complex (cwa18 or zpcfabs) absorbers are constrained (see also Fig. 5, Fig. 8, Fig. 12 and Fig. 23). Fig. 29 shows the distributions of total column density for each model. In contrast to Fig. 21, the definition of the column density changes depending on the best-fit model for each source; for sources best fit with a single power law or soft excess, this corresponds to the column density measured in the ztbabs component. For sources best fit with a warm absorber, this corresponds to the sum of the column densities from the ztbabs model and the cwa18 grid, and for sources best fit with neutral partial covering, this corresponds to the sum of the column densities measured from the ztbabs and zpcfabs components. A majority of sources best fit with a power law have low column densities (\(<10^{21}\) cm\({}^{-2}\)). This can be associated with absorption in the host galaxy, with a small minority showing evidence for some form of Compton-thin obscuration. However, the majority of absorbed sources are obscured by a partial covering absorber or a warm absorber, and thus not a distant torus. Indeed, no significant difference is found between the soft excess and power law samples, demonstrating that most of the absorption originates in partial covering or warm absorbers. To investigate this in more detail, the 19 apparently obscured sources best fit with complex absorption are re-fit with a more physical torus model, UXClumpy (Buchner et al. 2019), which describes the X-ray spectrum of a clumpy absorber illuminated by the corona.
The Galactic column density is also included in the model, as before. Free parameters include the coronal photon index (given a uniform prior between one and three, as in all other models in this work), the line-of-sight column density (given a log-uniform prior between \(10^{20}\) cm\({}^{-2}\) and \(10^{25}\) cm\({}^{-2}\)), and the torus inclination, vertical extent of the clouds, and covering factor, all of which were given uniform priors between the minimum and maximum values allowed by the model and none of which were well constrained in spectral fits. The energy cutoff is fixed at 400 keV, but does not influence the spectral shape for large cut-off values given the limited hard X-ray coverage of eROSITA. The Bayes factor is then computed for each source to ease comparison with other models. Of these 19 sources, only six are better fit with UXClumpy. In all of these cases the column density in the torus is \(\sim 10^{23}\) cm\({}^{-2}\), and other torus parameters cannot be constrained. Furthermore, very steep photon indices of \(\Gamma\sim 2-3\), with many in agreement with the upper limit of \(\Gamma=3\), are required to explain the spectral shape. Four of these sources have clear evidence for optical broad lines and thus have type-1 optical spectra. This seems highly unlikely, as type-1 AGN do not typically show evidence for absorption with column densities \(>10^{22}\) cm\({}^{-2}\) (e.g. Shimizu et al. 2018), and this is at odds with the AGN unification model wherein type-1 AGN offer a direct view of the central region and only type-2 AGN are viewed through the torus (e.g. Antonucci 1993; Urry & Padovani 1995). Furthermore, the high photon indices are likely unphysical and are atypical for obscured AGN studied with high quality spectra (e.g. Ricci et al. 2017). Therefore, the warm absorber model, which yields more reasonable photon indices of \(\Gamma\sim 1.8-2\), is preferred for these sources. Future modelling with higher signal-to-noise spectra or expanded energy coverage (e.g. with _NuSTAR_) would be required to confirm this result (Waddell et al. in prep.). If indeed the majority of Compton-thin obscuration in this sample is caused by warm absorption, this may change our view of the way obscuration is considered and treated in AGN. If these warm absorbers are indeed physically produced by winds launched from the accretion disc, these winds are believed to play a significant role in the evolution of the system.

Figure 29: Distribution of total absorbing column density for each model. For sources best fit with a single power law or soft excess, this corresponds to the column density measured in the ztbabs component. For sources best fit with a warm absorber, this corresponds to the sum of the column densities from the ztbabs model and the XSTAR grid, and for sources best fit with neutral partial covering, this corresponds to the sum of the column densities measured from the ztbabs and zpcfabs components. Sources best fit with soft excesses are shown in red, sources best fit with warm absorbers are shown as blue dash-dot lines, and sources best fit with partial covering are shown as orange dashed lines. The typical error bar is shown in black.
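For reference, a Bayes factor comparison like the UXClumpy test above can be assembled directly from the log-evidences returned by the nested sampling runs; the values below are invented for illustration:

```python
# Sketch: Bayes factor from two (hypothetical) BXA log-evidences (natural log).
import numpy as np

logz = {"warm_absorber": -1043.2, "uxclumpy": -1046.9}  # made-up values
ln_k = logz["warm_absorber"] - logz["uxclumpy"]
print(f"ln K = {ln_k:.1f} (K = {np.exp(ln_k):.0f} in favour of the warm absorber)")
```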
In a comprehensive work on absorption in AGN, Buchner et al. (2015) studied the evolution of AGN which appear unobscured (column densities \(<10^{22}\,\mathrm{cm^{-2}}\)), Compton-thin (column densities \(10^{22}-10^{24}\,\mathrm{cm^{-2}}\)), and Compton-thick (column densities \(>10^{24}\,\mathrm{cm^{-2}}\)), and found some evidence that Compton-thin AGN are evolving faster at redshifts \(z=0.5-4\) (specifically, their space density rises more rapidly and peaks earlier than unobscured and Compton-thick AGN). This is most apparent for sources with column densities \(10^{22}-10^{23.5}\,\mathrm{cm^{-2}}\), where a majority of the sources in this work (15/23) show evidence for warm absorption. This might suggest that disc winds are playing a role in the more rapid space density evolution of apparently Compton-thin sources in this redshift range. A more in-depth analysis incorporating more sources spanning a larger redshift range would be required to more fully understand this result. ### The Eddington ratio distinction between spectral models Perhaps the most important result in this work is that sources with warm absorbers tend to have low Eddington ratios (with a few having higher values of \(\lambda_{\mathrm{Edd}}\sim 0.1\)), while sources with soft excesses have higher Eddington ratios, with an increasing fraction of sources at higher Eddington ratios showing soft excesses. The result is robust to various tests of selection effects and bias. This suggests that this is an intrinsic property of the objects. It is therefore of interest to investigate which physical mechanisms may be responsible for these observed differences. With very few sources having both warm absorbers and Eddington ratios available (due to incomplete optical coverage or lack of a well-defined broad line), it is difficult to identify possible correlations between parameters here. However, among the sources with warm absorbers, those with higher Eddington ratios appear to have higher warm absorber ionisations. There are not enough sources to confirm this, but such a correlation may well be expected if the warm absorber is indeed associated with a disc wind. There are three primary launch mechanisms for these low-velocity disc winds: a thermally driven wind, in which the wind is formed when the disc loses upper layers due to irradiation of the outer disc; a radiation pressure driven wind, wherein the wind is launched via radiation pressure in the disc; or magnetic fields, which give rise to magneto-rotational instabilities in the disc (e.g. Lubow et al., 1994; Fukumura et al., 2017; Mizumoto et al., 2021). Thermal winds are likely less important in many systems, given that radiation pressure driven winds can begin to dominate in systems as cool as \(10^{5}\) K. While it is well known that magnetic fields are important in the context of AGN, it is easiest to understand disc winds in the context of radiation pressure. It is possible that the sources with higher accretion rates have higher radiation pressures in the disc, resulting in the launching of ionised winds. However, if the radiation pressure becomes too high, the wind may become over-ionised such that the velocity of the wind is not sufficient to escape the system, resulting in a failed wind (Schurch & Done, 2006; Parker et al., 2017, 2018; Pinto et al., 2018; Gallo et al., 2019; Giustini & Proga, 2019, 2021; Boller et al., 2021).
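To make the over-ionisation argument concrete, an order-of-magnitude sketch of the ionisation parameter \(\xi=L_{\rm ion}/(nr^{2})\) and the local escape velocity at the launch radius is given below; all input values are illustrative assumptions:

```python
# Order-of-magnitude sketch: wind ionisation parameter and escape velocity.
import numpy as np

G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g
C = 2.998e10       # speed of light, cm/s

m_bh = 5e8 * M_SUN             # black hole mass (illustrative)
r = 1e3 * G*m_bh/C**2          # launch radius: 1000 gravitational radii
l_ion = 1e45                   # ionising luminosity, erg/s (illustrative)
n = 1e8                        # wind density, cm^-3 (illustrative)

xi = l_ion / (n * r**2)        # ionisation parameter, erg cm s^-1
v_esc = np.sqrt(2*G*m_bh/r)    # escape velocity, cm/s
print(f"xi ~ {xi:.1e} erg cm s^-1, v_esc ~ {v_esc/1e5:.0f} km/s")
```

Raising \(L_{\rm ion}\) at fixed density and radius increases \(\xi\), illustrating how higher accretion rates can push a wind towards over-ionisation.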
The over-ionisation argument suggests that highly ionised winds must be launched close to the accretion disc and must not become over-ionised in order to be detected, whereas low ionisation winds can be launched at larger disc radii and should be easier to detect (e.g. Fukumura et al., 2017). A tentative positive correlation is found between the Eddington ratio and warm absorber ionisation for sources in this work, supporting this interpretation. Of particular interest here is the proposition of Schurch & Done (2006); Giustini & Proga (2019, 2021) that these failed winds may be associated with a physical component: the broad-line region (BLR), an X-ray obscurer on BLR scales, or a warm corona. This would nicely explain the observed distributions of fractions of sources in each accretion rate bin presented in Fig. 26. For sources with lower \(\lambda_{\mathrm{Edd}}\), the disc exerts less radiation pressure and thus produces fewer failed winds, so a warm absorber is more frequently detected in the spectrum (provided the wind intersects the line of sight). At intermediate \(\lambda_{\mathrm{Edd}}\), there is sufficient radiation pressure exerted near the inner disc for some winds to become over-ionised, and the velocity is insufficient for the wind to escape. It remains in the system, producing an X-ray obscurer which is detected as a partial covering absorber. Finally, at larger \(\lambda_{\mathrm{Edd}}\), a larger fraction of winds will have insufficient velocity to escape the system and will become failed winds. These failed winds may remain closer to the disc, forming the warm corona and resulting in a large fraction of sources with soft excesses also featuring larger \(\lambda_{\mathrm{Edd}}\). In this framework, sources best fit with a power law only do not have strong enough winds intercepting the line of sight to be detected, or have not had failed winds with the correct conditions to form an absorber or soft excess. All these physical scenarios are illustrated in Fig. 30, with one panel showing each of the three Eddington ratio regimes. This interpretation is consistent with the tentative correlation found between the warm absorber ionisation and the Eddington ratio, and may also explain why there are no correlations found between the Eddington ratio and the properties of the warm corona, as it suggests that a warm corona is more likely to form for sources with higher \(\lambda_{\mathrm{Edd}}\), but is not necessarily hotter, denser or stronger. It should also be stressed here that this work does not attempt to model the ultra-fast outflows (UFOs; e.g. Tombesi et al., 2010; Igo et al., 2020; Matzeu et al., 2023) which typically have higher column densities (\(\sim 10^{23}\) cm\({}^{-2}\)), higher ionisations (\(\xi\sim 10^{3}-10^{6}\) erg cm s\({}^{-1}\)) and large outflow velocities (v \(\sim 0.033c-0.3c\)). Such outflows are often detected based on absorption features in the \(7-10\,\mathrm{keV}\) range, at the edge of or outside of the eROSITA bandpass. UFOs are often found in high Eddington ratio sources, but the typical ionisations and large outflow velocities are mostly inconsistent with the ionisation range (\(10^{-4}-10^{4}\) erg cm s\({}^{-1}\)) and the zero outflow velocity used in this work. More in-depth modelling with a broader parameter space and likely including data from _XMM_ or _NuSTAR_ would be required to attempt to identify signatures of UFOs in this sample. One caveat to this analysis is that this work does not attempt to characterise properties of potential warm absorbers and disc winds simultaneously.
To address this, all sources which showed evidence for a soft excess or warm absorber were also fit with a model including both a disc wind (warmabs) and a secondary power law component (in XSPEC, this corresponds to \(\texttt{tbabs}\times\texttt{ztbabs}\times\texttt{cwa18}\times(\texttt{powerlaw}+\texttt{powerlaw})\)). The same priors as in Sect. 3 were used, and Bayes factors were again calculated for each source. For most sources, the inclusion of both components did not result in a better fit to the spectrum, and many spectral parameters could not be well constrained. Therefore, it appears that for the eFEDS analysis, the components can only be treated separately, which may lead to some modelling errors. Laha et al. (2013) discuss the complications of simultaneously modelling a soft excess and warm absorption component in the NLS1 IRAS 13349+2438, and demonstrate that different ionising continua produce different ionisation structures in the warm absorption component, whereas including a soft excess tends to decrease the ionisation while the column density remains consistent. More recently, Parker et al. (2022) discuss the degeneracies between X-ray winds and relativistic blurred reflection, where systematic biases are found in the derived outflow parameters of the wind when the emission from the disc is not properly characterised. These are particularly crucial in the case of broad features, where even microcalorimeter resolution does not help to break some degeneracies (Parker et al., 2022). Care should therefore be taken in interpreting the ionisation parameters in this work, and further analysis should be done to better understand any potential superposition between these components. ## 7 Conclusions In this work, the 200 sources classified as AGN from the eFEDS hard X-ray-selected sample are modelled with a variety of phenomenological and physically motivated models in order to investigate the nature of the X-ray soft excess. X-ray spectra are fit using BXA so that the Bayesian evidence can be compared to select the best fitting models. This work demonstrates that eROSITA can be used to identify signatures of both complex absorption and soft excesses, and using simulations, the significance of these features can be evaluated. This analysis identifies a total of 29 sources that have warm absorbers (14.5% of the sample), 25 sources that have neutral partial covering absorbers (12.5% of the sample), and 29 sources (14.5% of the sample) with soft excesses (all with 97.5% purity), which clearly shows that soft excesses and complex absorbers are key features for understanding the properties of large samples of AGN. It is shown that most sources with true soft excesses are best explained by a warm corona model as opposed to a relativistic blurred reflection scenario. Follow-up observations of these sources with better sensitivity in the hard X-ray (e.g. with _XMM_ or _NuSTAR_) can help to search for the presence of a broad iron line or Compton hump which are strong signatures of blurred reflection (e.g. Fabian et al., 1989; Ross & Fabian, 2005), and a timing analysis incorporating reverberation mapping (e.g. following the prescription of Uttley et al., 2014) can also help to distinguish between soft excess and absorption models.
Several interesting results were also found in studying the properties of the warm absorbers and soft excesses, including that warm absorbers are likely the true nature of the absorption in many apparently Compton-thin AGN, and that soft excesses are found in higher Eddington ratio sources while warm absorbers are found in lower Eddington ratio sources. This Eddington ratio division may be explained in the context of winds which escape the system and intercept the line of sight at low \(\lambda_{\rm Edd}\), but which become over-ionised failed winds resulting in the formation of a partial covering absorber at intermediate \(\lambda_{\rm Edd}\) or the warm corona at high \(\lambda_{\rm Edd}\). However, it is difficult to confirm these findings or to completely explain their physical interpretation using eFEDS alone. Using the results from this work and repeating this analysis using the all-sky survey (e.g. eRASS:1, eRASS:4) will provide a large sample of thousands of AGN for enhanced analysis. Furthermore, using data with very high spectral resolution (e.g. XRISM and Athena) will help to confirm these results, and to create a more complete picture of the nature of warm absorbers and soft excesses in AGN in the local Universe. ###### Acknowledgements. We thank the referee for their careful reading of this manuscript and for their very helpful comments and suggestions which improved this work. This work is based on data from eROSITA, the soft X-ray instrument onboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roscosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The SRG spacecraft was built by the Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nürnberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig-Maximilians-Universität Munich also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS/NRTA software system developed by the German eROSITA consortium. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), the Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the US Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is [http://www.sdss.org/](http://www.sdss.org/). The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington. MB acknowledges support from the European Innovative Training Network (ITN) "BiD4BEST" funded by the Marie Sklodowska-Curie Actions in Horizon 2020 (GA 860744).

Figure 30: Cartoon schematic showing how the emission may vary with Eddington ratio. Left: For sources with relatively lower Eddington ratios (a few \(\times 10^{-3}\)), there is sufficient radiation pressure to launch line driven winds, but not so much that the winds become over-ionised, so more warm absorbers are detected. Middle: For sources with intermediate Eddington ratios (a few \(\times 10^{-2}\)), some winds will become over-ionised; their velocities will not exceed the escape velocity and they will fail, but will remain somewhat bound to the system, creating patchy partial covering absorbers. Right: For sources accreting at \(0.1-1\) Eddington, high radiation pressure results in too many ionising photons and the wind fails, remaining tightly bound to the system and forming the warm corona.
2302.03747
Symmetric higher rank topological phases on generic graphs
Motivated by recent interest in fracton topological phases, we explore the interplay between gapped 2D $\mathbb{Z}_N$ topological phases which admit fractional excitations with restricted mobility and the geometry of the lattice on which such phases are placed. We investigate the properties of the phases in a new geometric context -- graph theory. By placing the phases on a 2D lattice consisting of two arbitrary connected graphs, $G_x\boxtimes G_y$, we study the behavior of fractional excitations of the phases. We derive the formula for the ground state degeneracy of the phases, which depends on the invariant factors of the Laplacian.
Hiromi Ebisu
2023-02-07T20:53:00Z
http://arxiv.org/abs/2302.03747v2
# Symmetric higher rank topological phases on generic graphs ###### Abstract Motivated by recent interest in fracton topological phases, we explore the interplay between gapped 2D \(\mathbb{Z}_{N}\) topological phases which admit fractional excitations with restricted mobility and the geometry of the lattice on which such phases are placed. We investigate the properties of the phases in a new geometric context - graph theory. By placing the phases on a 2D lattice consisting of two arbitrary connected graphs, \(G_{x}\boxtimes G_{y}\), we study the behavior of fractional excitations of the phases. We derive the formula for the ground state degeneracy of the phases, which depends on the invariant factors of the Laplacian. ## 1 Introduction The importance of the discovery of topologically ordered phases can hardly be overstated [1, 2, 3, 4, 5, 6]. They provide a paradigm shift in the understanding of phase transitions, away from one based purely on symmetry breaking. Topologically ordered phases also admit exotic phenomena, such as fractionalized quasi-particle excitations (i.e., anyons) [2, 7] and topologically protected ground state degeneracy, independent of the local geometry of the system [8]. These phases also have a great advantage for the purposes of quantum computing, as operations on a state in a subspace of degenerate vacua, realized by braiding anyons, are immune to local perturbations [9, 10]. Theoretical frameworks to describe these phases have been well developed, such as topological quantum field theory [11, 8, 12] and the modular tensor category [13]. Recently, new types of topological phases have been proposed which are beyond these frameworks, often called fracton topological phases [14, 15, 16]. A novel feature of these phases is that they exhibit a sub-extensive ground state degeneracy (GSD) dependence. Due to the UV/IR mixing property, one cannot have an effective field theory description in the long wavelength limit. The key insight to understand such unusual GSD dependence is that the mobility of the quasiparticle excitations is sensitive to the local geometry of the system, which is contrasted with conventional topologically ordered phases where the properties of the excitations depend only on the global topology of the system. Therefore, fracton topological phases hold value for exploring new geometric phases. The theoretical formalism of these phases has yet to be completed. Due to the sensitivity to UV physics in fracton topological phases, it would be interesting to study the phases on a curved geometry. Indeed, several works studied gapless theories with fracton-like mobility constraints on a curved geometry [17, 18, 19], and gapped fracton topological phases on generic lattices [20, 21, 22]. In this paper, we introduce unusual gapped \(\mathbb{Z}_{N}\) topological phases where deconfined fractional excitations are subject to a mobility constraint in a similar fashion to the fracton topological phases, and explore the geometric properties of the fractional excitations by placing the phases on generic lattices beyond the typical square one. In particular, we highlight the behavior of the fractional excitations of the phases in a new geometric context - _graph theory_. (There are a few attempts tackling the problem in this direction; see, e.g., [23, 24, 25].) Introducing a 2D lattice composed of two arbitrary connected graphs, we study the behavior of the excitations and the superselection sectors (i.e., distinct types of excitations) of the model on this lattice.
By making use of the formalism of graph theory, one can systematically study the properties of the excitations. As we will see in later sections, the properties of the fractional excitations are determined by the Laplacian matrix (the Laplacian, in short), which is the graph theoretical analogue of the second order spatial derivative. The Laplacian plays a pivotal role in graph theory. For instance, one can study the connectivity of the graph by evaluating the eigenvalues of the Laplacian [26]. In our context, the fusion rules of the fractional excitations follow from the form of the Laplacian of the graph, and the GSD depends on \(N\) and the invariant factors of the Laplacian. Our study might contribute to a better understanding of fracton topological phases in view of graph theory. The outline of this paper is as follows. In Sec. 2, we introduce the model Hamiltonian. We demonstrate that our simple model of the topological phase is obtained by gapping the gauge group from \(U(1)\) to \(\mathbb{Z}_{N}\) via the Higgs mechanism in an unusual Maxwell theory. After obtaining the Hamiltonian, in Sec. 3, we consider placing the phase on the 2D lattice constructed by the product of two arbitrary graphs. Sec. 4 is devoted to elucidating the properties of fractional excitations of the model and identifying the GSD. We show that the fusion rules of the fractional excitations are determined by the form of the Laplacian of the graph and that the superselection sectors are associated with the kernel and cokernel (the Picard group) of the Laplacian. We further show that the GSD depends on the invariant factors of the Laplacian. In Sec. 5, we give a simple example of the lattice to see how our result works. Physical intuition on our result is also given. Finally, in Sec. 6, we conclude our work with a few future research directions. ## 2 Model Hamiltonian In this section, we introduce the model Hamiltonian. For the sake of clearer illustration, we first focus on the Hamiltonian on flat space. The key insight to obtain the model is gapping the gauge group from \(U(1)\) to \(\mathbb{Z}_{N}\) via the Higgs mechanism in an unusual Maxwell theory, referred to as the higher rank Maxwell theory in this paper, where the kinetic and potential terms are described by the second order spatial derivative of the gauge potential, instead of the first order. Accordingly, we dub the phases obtained by this procedure _higher rank topological phases_. This procedure is contrasted with the case where the \(\mathbb{Z}_{N}\) topological phase (toric code) is obtained from the conventional Maxwell theory via the Higgs mechanism. See [27, 28, 29, 30] for more explanations on other types of higher rank Maxwell theories and their Higgs phases. ### Higher rank Maxwell theory Before going into the details of the model Hamiltonian, it is useful to discuss the \(U(1)\) higher-rank Maxwell theory in the continuum limit. The difference between this theory and the usual Maxwell theory is that the first order spatial derivative operator, which enters the Gauss law and gauge invariant operators in the conventional Maxwell theory, is replaced with the second order derivative.
We start by introducing \(U(1)\) gauge fields in 2D, \(A^{k}(\mathbf{x})\), \(E^{k}(\mathbf{x})\) (\(k=x,y\), \(\mathbf{x}\): spatial coordinate), which form canonically conjugate pairs: \[[A^{k}(\mathbf{x}),E^{l}(\mathbf{y})]=i\delta_{k,l}\delta(\mathbf{x}-\mathbf{y}) \tag{1}\] Introducing the charge density operator \(\rho(\mathbf{x})\) and the second order spatial derivative operator, \(D_{k}=\partial_{k}^{2}\), the Gauss law is given by \[\rho(\mathbf{x})=D_{k}E^{k}(\mathbf{x}), \tag{2}\] where the repeated indices are summed over. Define the magnetic flux, which is invariant under the gauge transformations generated by the Gauss law (2), by \[B(\mathbf{x})=D_{x}A^{y}(\mathbf{x})-D_{y}A^{x}(\mathbf{x}). \tag{3}\] An interesting property of this theory is that not only the charge but also dipole and quadrupole moments are conserved, which is in contrast with the conventional Maxwell theory where only the charge is conserved. To see this, transform the dipole moment \(\int d^{2}x(x\rho)\) as \[\int d^{2}x(x\rho)=\int d^{2}x(xD_{k}E^{k}(\mathbf{x}))=(\text{boundary term})+\int d^{2}x(\partial_{x}^{2}(x)E^{x})=(\text{boundary term}). \tag{4}\] Here we have referred to (2) and integrated by parts twice; since \(\partial_{x}^{2}(x)=0\), only the boundary term (which is constant) remains. Likewise, one can show that \(\int d^{2}x\rho\), \(\int d^{2}x(y\rho)\), and \(\int d^{2}x(xy\rho)\), corresponding to charge, dipole and quadrupole moments, are conserved. As we see later, depending on the geometry, conservation of these moments corresponds to the conservation of dipole and quadrupole moments of the fractional excitations in the higher rank \(\mathbb{Z}_{N}\) topological phase. Now we place this theory on the 2D square lattice and gap it to \(\mathbb{Z}_{N}\) via the Higgs mechanism, which can be accomplished in two steps. First, discretize the spatial coordinate \(\mathbf{x}\) by introducing lattice coordinates so that \(\mathbf{x}\to(x,y)\in\lambda\left(\mathbb{Z},\mathbb{Z}\right)\) with \(\lambda\) being the lattice spacing. The two pairs of gauge potential and electric field, which are canonically conjugate, are now labeled by \((A^{k}_{(x,y)},E^{l}_{(x,y)})\) with the relation \[[A^{k}_{(x,y)},E^{l}_{(x^{\prime},y^{\prime})}]=i\delta_{k,l}\delta_{x,x^{\prime}}\delta_{y,y^{\prime}}.\] We then transform the second order spatial derivative into the discretized form (\(D_{k}\to\nabla_{k}^{2}\)). The Gauss law (2) becomes \[\rho_{(x,y)}=\nabla_{x}^{2}E^{x}_{(x,y)}+\nabla_{y}^{2}E^{y}_{(x,y)}=(E^{x}_{(x+1,y)}+E^{x}_{(x-1,y)}-2E^{x}_{(x,y)})+(E^{y}_{(x,y+1)}+E^{y}_{(x,y-1)}-2E^{y}_{(x,y)}). \tag{5}\] Similarly, the magnetic flux operator, corresponding to (3), is defined as \[B_{(x,y)}=\nabla_{x}^{2}A^{y}_{(x,y)}-\nabla_{y}^{2}A^{x}_{(x,y)}=(A^{y}_{(x+1,y)}+A^{y}_{(x-1,y)}-2A^{y}_{(x,y)})-(A^{x}_{(x,y+1)}+A^{x}_{(x,y-1)}-2A^{x}_{(x,y)}). \tag{6}\] The second step is condensing charge \(N\) excitations, reducing the \(U(1)\) gauge group down to \(\mathbb{Z}_{N}\). As a consequence, the gauge fields take \(\mathbb{Z}_{N}\) values: \(A^{k}_{(x,y)}\in\frac{2\pi\mathbb{Z}}{N}\) (mod \(2\pi\)). The gauge and electric fields are expressed via \[Z_{1,(x,y)}=e^{iA^{x}_{(x,y)}},\,X_{1,(x,y)}=\omega^{E^{x}_{(x,y)}},\,Z_{2,(x,y)}=e^{iA^{y}_{(x,y)}},\,X_{2,(x,y)}=\omega^{E^{y}_{(x,y)}}, \tag{7}\] where \(\omega\) denotes the \(N\)-th root of unity, i.e., \(\omega=e^{i2\pi/N}\).
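The moment conservation in (4) can be checked numerically on the lattice; the sketch below builds the discretized charge density (5) from a random electric field of compact support and verifies that the total charge, both dipole moments, and the \(xy\) quadrupole moment vanish (unit lattice spacing assumed):

```python
# Sketch: conserved moments of rho = lap_x(E^x) + lap_y(E^y), cf. Eqs. (4)-(5).
import numpy as np

rng = np.random.default_rng(7)
L = 20
Ex = np.zeros((L, L)); Ey = np.zeros((L, L))
Ex[5:15, 5:15] = rng.normal(size=(10, 10))  # compact support away from edges
Ey[5:15, 5:15] = rng.normal(size=(10, 10))

def lap_x(f):  # discrete second derivative along x (axis 0)
    return np.roll(f, -1, 0) + np.roll(f, 1, 0) - 2*f

def lap_y(f):  # discrete second derivative along y (axis 1)
    return np.roll(f, -1, 1) + np.roll(f, 1, 1) - 2*f

rho = lap_x(Ex) + lap_y(Ey)
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
moments = [("charge", 1), ("x-dipole", x), ("y-dipole", y), ("xy-quadrupole", x*y)]
for name, w in moments:
    print(f"{name}: {np.sum(w*rho):+.1e}")  # all vanish up to float round-off
```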
The operators defined in (7) act on the local \(N\times N\)-dimensional Hilbert space \(\ket{a}_{x}\ket{b}_{y}\) \((a,b\in\mathbb{Z}_{N})\) as \[Z_{1,(x,y)}\ket{a}_{x}\ket{b}_{y}=\omega^{a}\ket{a}_{x}\ket{b}_{y},\,Z_{2,(x,y)}\ket{a}_{x}\ket{b}_{y}=\omega^{b}\ket{a}_{x}\ket{b}_{y}\] \[X_{1,(x,y)}\ket{a}_{x}\ket{b}_{y}=\ket{a+1}_{x}\ket{b}_{y},\,X_{2,(x,y)}\ket{a}_{x}\ket{b}_{y}=\ket{a}_{x}\ket{b+1}_{y} \tag{8}\] indicating that (7) represents the \(\mathbb{Z}_{N}\) Pauli algebra. From the expression (7), we can define the \(\mathbb{Z}_{N}\) Gauss and flux operators as (see Fig. 1a) \[V_{(x,y)}\equiv\omega^{\rho_{(x,y)}}=X_{1,(x+1,y)}X_{1,(x-1,y)}(X_{1,(x,y)}^{\dagger})^{2}X_{2,(x,y+1)}X_{2,(x,y-1)}(X_{2,(x,y)}^{\dagger})^{2}\] \[P_{(x,y)}\equiv e^{iB_{(x,y)}}=Z_{1,(x,y+1)}^{\dagger}Z_{1,(x,y-1)}^{\dagger}Z_{1,(x,y)}^{2}Z_{2,(x+1,y)}Z_{2,(x-1,y)}(Z_{2,(x,y)}^{\dagger})^{2}. \tag{9}\] By construction, these two operators commute. It is important to note that the form of the operators (9) is determined by the discretized second order derivative, \(\nabla_{k}^{2}\). The Hamiltonian of the \(\mathbb{Z}_{N}\) Higgs phase, whose ground state is a state without charge and flux, is defined by \[H_{\mathbb{Z}_{N}}=-\sum_{x,y}(V_{(x,y)}+P_{(x,y)}). \tag{10}\] This model shares several features with the toric code [9]: the ground state \(\ket{\Omega}\) is the stabilized state satisfying \(V_{(x,y)}\ket{\Omega}=P_{(x,y)}\ket{\Omega}=\ket{\Omega}\), and the model admits two types of deconfined excitations, carrying \(\mathbb{Z}_{N}\) electric and magnetic charges. However, there is a crucial difference between our model and the toric code: there is a mobility constraint on the fractional excitations, yielding an unusual GSD dependence on the lattice. ### The simplest example: \(N=2\) on the square lattice - decoupled surface codes To get a handle on the physical intuition behind the Hamiltonian (10), and to see how the GSD of the model drastically changes depending on the lattice, it is useful to take a closer look at the model in the simplest case by setting \(N=2\) on the square lattice, before considering the phases on generic lattices constructed from graphs. For the moment, we consider the 2D square lattice without boundary. In the case of \(N=2\), the two terms (9) simplify (Fig. 1b): \[V_{(x,y)}=X_{1,(x+1,y)}X_{1,(x-1,y)}X_{2,(x,y+1)}X_{2,(x,y-1)}\] \[P_{(x,y)}=Z_{1,(x,y+1)}Z_{1,(x,y-1)}Z_{2,(x+1,y)}Z_{2,(x-1,y)}. \tag{11}\] The Hamiltonian (10) with (11) resembles the \(\mathbb{Z}_{2}\) toric code [9], with the crucial difference that the terms \(V_{(x,y)}\) and \(P_{(x,y)}\) involve four _next_-nearest neighboring Pauli operators in the horizontal and vertical directions, not nearest neighbors. Due to this property, one can classify the mutually commuting terms (11) into the following four groups: \[\mathrm{I}:\{V_{(2m,2n)},P_{(2m^{\prime}-1,2n^{\prime}-1)}\}\ \mathrm{II}:\{V_{(2m-1,2n)},P_{(2m^{\prime},2n^{\prime}-1)}\}\] \[\mathrm{III}:\{V_{(2m,2n-1)},P_{(2m^{\prime}-1,2n^{\prime})}\}\ \mathrm{IV}:\{V_{(2m-1,2n-1)},P_{(2m^{\prime},2n^{\prime})}\}\ \ (m,n,m^{\prime},n^{\prime}\in\mathbb{Z}) \tag{12}\] We portray these configurations of the terms in Fig. 1c; they are reminiscent of the ones found in the \(\mathbb{Z}_{2}\) surface code [31]. Now we impose boundary conditions on the lattice and evaluate the GSD.

Figure 1: (a) The two terms defined in (9) on the 2D square lattice. (b) In the case of \(N=2\), these two terms take the simple form (11), each of which resembles the ones defined in the \(\mathbb{Z}_{2}\) surface code. (c) Configuration of the mutually commuting terms belonging to I-IV defined in (12). (d) Configuration of the terms which belong to I and the one of \(V_{(x,y)}\) belonging to II (pink dashed lines and red dots), in the case where the periodic boundary condition is imposed with \((n_{x},n_{y})=\)(odd, even) [top] and with \((n_{x},n_{y})=\)(even, even) [bottom]. For illustration purposes, we slightly extend the geometry, identifying the vertices with the same symbols (yellow star and rhombus) due to the periodic boundary condition.
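The \(\mathbb{Z}_{N}\) Pauli (clock and shift) algebra underlying (7)-(9) is easy to realise explicitly; a minimal sketch verifying the defining relations, including \(ZX=\omega XZ\):

```python
# Z_N clock and shift matrices realising Eq. (8): Z|a> = w^a|a>, X|a> = |a+1>.
import numpy as np

N = 4
omega = np.exp(2j*np.pi/N)
Z = np.diag(omega**np.arange(N))   # clock matrix
X = np.roll(np.eye(N), 1, axis=0)  # shift matrix: X|a> = |a+1 mod N>

assert np.allclose(Z @ X, omega * (X @ Z))                   # Z X = omega X Z
assert np.allclose(np.linalg.matrix_power(X, N), np.eye(N))  # X^N = 1
assert np.allclose(np.linalg.matrix_power(Z, N), np.eye(N))  # Z^N = 1
print("Z_N clock/shift algebra verified for N =", N)
```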
Suppose we impose the periodic boundary condition with the lattice lengths \(n_{x}\), \(n_{y}\) being even numbers of sites in both the \(x\)- and \(y\)-directions, which is schematically described by \((n_{x},n_{y})=(\text{even, even})\). In this case, the Hamiltonian (10) with (11) can be decomposed into four according to (12), i.e., the Hamiltonian consists of four decoupled \(\mathbb{Z}_{2}\) surface codes. Since the GSD of each \(\mathbb{Z}_{2}\) surface code on the torus is given by 4, the GSD of the model is found to be \(4^{4}=256\). The situation differs when the length of the lattice is set to be odd. For instance, when the length of the lattice in the \(x\)-direction is odd while the one in the \(y\)-direction is kept even, i.e., \((n_{x},n_{y})=(\text{odd, even})\), one cannot separate the terms belonging to I and II, as well as III and IV. Indeed, the terms which belong to I are "connected" with the ones belonging to II. For instance, as demonstrated in the top geometry in Fig. 1d, the term \(P_{(n_{x}-2,2n^{\prime}-1)}\), which belongs to I, and \(P_{(n_{x}+1,2n^{\prime}-1)}\), belonging to II, are located adjacent to each other, as opposed to the case with \(n_{x}\) even, where \(P_{(n_{x}-2,2n^{\prime}-1)}\) and \(P_{(n_{x}+1,2n^{\prime}-1)}\) are decoupled (bottom geometry in Fig. 1d). A similar argument holds for the terms \(V_{(x,y)}\). An analogous line of thought leads to the conclusion that one cannot separate the terms belonging to III and IV. Therefore, the mutually commuting terms fall into two groups: \[\text{I}^{\prime}:\{V_{(m,2n)},P_{(m^{\prime},2n^{\prime}-1)}\}\ \text{III}^{\prime}:\{V_{(m,2n-1)},P_{(m^{\prime},2n^{\prime})}\},\] implying that we have two decoupled \(\mathbb{Z}_{2}\) surface codes. Thus, the GSD is given by \(4^{2}=16\). One can similarly discuss the GSD in the other cases of the lattice lengths. Overall, we have \[\text{GSD}=\begin{cases}256\ \big{[}(n_{x},n_{y})=(\text{even, even})\big{]}\\ 16\ \big{[}(n_{x},n_{y})=(\text{odd, even}),(\text{even, odd})\big{]}\\ 4\ \big{[}(n_{x},n_{y})=(\text{odd, odd})\big{]}.\end{cases} \tag{13}\] To summarize this subsection: in the simplest case, each term constituting the Hamiltonian involves next-nearest neighbors, corresponding to the second order derivative of the higher rank Maxwell theory, and due to this property the GSD changes drastically depending on whether the lengths of the lattice are even or odd. As we will see in later sections, this feature can be understood in terms of graph theory. Indeed, the GSD depends on \(N\) and the invariant factors of the Laplacian.
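The GSD pattern (13) can also be verified directly by counting independent stabilizers over GF(2): for a stabilizer code on \(n\) qubits with \(s\) independent generators, the GSD is \(2^{n-s}\). The sketch below builds the \(V\) and \(P\) generators of (11) on a periodic lattice (sizes kept \(\geq 3\) so that \(x+1\) and \(x-1\) are distinct sites):

```python
# Verify Eq. (13) for N = 2 by computing stabilizer ranks over GF(2).

def gf2_rank(rows):
    """Rank over GF(2); each row of the binary matrix is an int bit mask."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot  # lowest set bit serves as the pivot column
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def gsd_n2(nx, ny):
    """GSD of Hamiltonian (10) with terms (11) on an nx x ny periodic lattice."""
    q = lambda t, x, y: t*nx*ny + (x % nx)*ny + (y % ny)  # qubit -> bit position
    v_rows, p_rows = [], []
    for x in range(nx):
        for y in range(ny):
            v = (1 << q(0, x+1, y)) | (1 << q(0, x-1, y)) \
              | (1 << q(1, x, y+1)) | (1 << q(1, x, y-1))  # X-support of V
            p = (1 << q(0, x, y+1)) | (1 << q(0, x, y-1)) \
              | (1 << q(1, x+1, y)) | (1 << q(1, x-1, y))  # Z-support of P
            v_rows.append(v)
            p_rows.append(p)
    n_qubits = 2*nx*ny
    return 2**(n_qubits - gf2_rank(v_rows) - gf2_rank(p_rows))

for dims in [(4, 4), (5, 4), (5, 5)]:
    print(dims, "GSD =", gsd_n2(*dims))  # expect 256, 16, 4 per Eq. (13)
```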
## 3 Putting the theory on graphs

In this section, we introduce a lattice consisting of two arbitrary connected graphs and place the model Hamiltonian (10) on it. The central idea is that when placing the Hamiltonian (10) on a graph, we replace the derivative operators \(\nabla_{k}^{2}\) defined on the square lattice with the _Laplacian_, which is the graph theoretical analogue of the second order derivative [26]. In accordance with this replacement, the Gauss law and the flux operator given in (9) are modified. ### Notations from graph theory Let us first give a formal definition of a graph \(G=(V,E)\). It is a pair consisting of a set of vertices \(V\) and a set of edges \(E\) composed of pairs of vertices \(\{v_{i},v_{j}\}\). Throughout this paper, we assume that the graph is _connected_, i.e., there is a path from a vertex to any other vertex (there is no isolated vertex), and that the graph does not have an edge that emanates from and terminates at the same vertex. We also define two quantities, \(\deg(v_{i})\) and \(l_{ij}\), which play pivotal roles in this paper. The former, \(\deg(v_{i})\), denotes the _degree_ of the vertex \(v_{i}\), i.e., the number of edges emanating from the vertex \(v_{i}\); the latter, \(l_{ij}\), represents the number of edges between the two vertices \(v_{i}\) and \(v_{j}\) (we have \(l_{ij}=0\) when there is no edge between the two vertices \(v_{i}\) and \(v_{j}\)). Using these two quantities, the _Laplacian matrix_ of the graph is defined. For a given graph \(G=(V,E)\), the Laplacian matrix \(L\) (which we abbreviate as Laplacian in the rest of this work) is the matrix with rows and columns indexed by the elements of the vertex set \(\{v_{i}\}\in V\), with \[L_{ij}=\begin{cases}\deg(v_{i})\ (i=j)\\ -l_{ij}\ (i\neq j)\end{cases}. \tag{14}\] The Laplacian is singular due to the connectivity of the graph (summing over all rows or columns gives zero). As an example, the Laplacian of the cycle graph \(C_{3}\) (i.e., a triangle) consisting of three vertices and three edges, where there is a single edge between each pair of vertices, is given by \[L=\begin{pmatrix}2&-1&-1\\ -1&2&-1\\ -1&-1&2\end{pmatrix}.\] ### 2D lattice and Hamiltonian With these preparations, we now introduce the 2D lattice. Let \(G_{x}(V_{x},E_{x})\) and \(G_{y}(V_{y},E_{y})\) be two connected graphs. We denote the vertices of these two graphs as \(v_{i}^{x}\) and \(v_{j}^{y}\) (\(1\leq i\leq n_{x}\), \(1\leq j\leq n_{y}\)), where \(n_{x}\) (\(n_{y}\)) represents the total number of vertices in graph \(G_{x}\) (\(G_{y}\)). Moreover, the Laplacian of the graph \(G_{x}\) (\(G_{y}\)) is denoted as \(L_{x}\) (\(L_{y}\)), whose matrix elements are defined by (14), i.e., the Laplacian \(L_{x}\) is defined by \[(L_{x})_{i,i^{\prime}}=\begin{cases}\deg^{x}(v_{i}^{x})\ (i=i^{\prime})\\ -l_{ii^{\prime}}^{x}\ (i\neq i^{\prime})\end{cases}\qquad(1\leq i,i^{\prime}\leq n_{x}),\] and the Laplacian \(L_{y}\) is similarly introduced. The 2D lattice is defined by the product of the two graphs, \(G_{x}\boxtimes G_{y}\), where each coordinate of a vertex is represented by \((v_{i}^{x},v_{j}^{y})\). Intuitively, the lattice is constructed by "stacking the graph \(G_{x}\) along the graph \(G_{y}\)", meaning the graph \(G_{x}\) is attached at each vertex \(v_{j}^{y}\) of the graph \(G_{y}\), and how these \(G_{x}\)'s are connected follows from the edges of the graph \(G_{y}\). We portray examples of such lattices in Figs. 2a and 2b. (Since each graph consists of vertices and edges, corresponding to 0- and 1-simplices, it is regarded as a 1D lattice. Hence \(G_{x}\boxtimes G_{y}\) is a 2D lattice.)
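As a quick illustration of (14), here is a minimal Python sketch (the helper `laplacian` and the edge-list encoding are our own choices, not from the paper) that builds the Laplacian from an edge multiset and reproduces both the \(C_{3}\) example and the singularity noted above:

```python
import numpy as np

def laplacian(n_vertices, edges):
    """Graph Laplacian per Eq. (14): deg(v_i) on the diagonal, -l_ij off it.
    `edges` is a list of vertex pairs; repeated pairs encode multi-edges."""
    L = np.zeros((n_vertices, n_vertices), dtype=int)
    for i, j in edges:
        L[i, j] -= 1
        L[j, i] -= 1
        L[i, i] += 1
        L[j, j] += 1
    return L

# The cycle graph C_3 (a triangle) reproduces the matrix quoted above
L3 = laplacian(3, [(0, 1), (1, 2), (2, 0)])
print(L3)
# Singularity: each row sums to zero, so (1,1,...,1)^T lies in the kernel,
# and the rank is n - 1 for a connected graph
print(L3.sum(axis=1), np.linalg.matrix_rank(L3))  # [0 0 0] 2
```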
The square lattice (without taking the boundary into account) can be reproduced by setting \(\deg^{x}(v_{i}^{x})=\deg^{y}(v_{j}^{y})=2\), \(l_{i,i^{\prime}}^{x}=\delta_{i,i^{\prime}\pm 1}\), \(l_{j,j^{\prime}}^{y}=\delta_{j,j^{\prime}\pm 1}\). We place the higher rank \(\mathbb{Z}_{N}\) topological phase on this lattice \(G_{x}\boxtimes G_{y}\) by defining the \(U(1)\) higher rank Maxwell theory on the graph and gapping the gauge group to \(\mathbb{Z}_{N}\), similarly to the case of the square lattice presented in the previous section. Since the procedure closely parallels the one in the previous section, except that we define the second order derivative via the Laplacians \(L_{x}\) and \(L_{y}\), we outline the procedure succinctly. On the 2D lattice \(G_{x}\boxtimes G_{y}\), we introduce two pairs of canonically conjugate \(U(1)\) gauge potentials and electric fields, \((A^{k}_{(v_{i}^{x},v_{j}^{y})},E^{k}_{(v_{i}^{x},v_{j}^{y})})\), acting on the coordinate \((v_{i}^{x},v_{j}^{y})\) with the relation \[[A^{k}_{(v_{i}^{x},v_{j}^{y})},E^{l}_{(v_{i^{\prime}}^{x},v_{j^{\prime}}^{y})}]=i\delta_{k,l}\delta_{i,i^{\prime}}\delta_{j,j^{\prime}}.\] Replacing \(\nabla_{k}^{2}\) with \(-L_{k}\), the Gauss law and magnetic flux are defined by \[\rho_{(v_{i}^{x},v_{j}^{y})} = -L_{x}E^{x}_{(v_{i}^{x},v_{j}^{y})}-L_{y}E^{y}_{(v_{i}^{x},v_{j}^{y})}\] \[B_{(v_{i}^{x},v_{j}^{y})} = -L_{x}A^{y}_{(v_{i}^{x},v_{j}^{y})}+L_{y}A^{x}_{(v_{i}^{x},v_{j}^{y})}. \tag{15}\] We gap the gauge group from \(U(1)\) down to \(\mathbb{Z}_{N}\) via the Higgs mechanism. Introducing two types of generalized \(\mathbb{Z}_{N}\) qubit states (\(\mathbb{Z}_{N}\) clock states) on each vertex of the 2D lattice, labeled by \(|a\rangle_{v_{i}^{x}}|b\rangle_{v_{j}^{y}}\) (\(a,b\in\mathbb{Z}_{N}\)), we define the operators acting on these qubits as \[Z_{1,(v_{i}^{x},v_{j}^{y})}=e^{i\hat{A}^{x}_{(v_{i}^{x},v_{j}^{y})}},\,X_{1,(v_{i}^{x},v_{j}^{y})}=\omega^{E^{x}_{(v_{i}^{x},v_{j}^{y})}},\,Z_{2,(v_{i}^{x},v_{j}^{y})}=e^{i\hat{A}^{y}_{(v_{i}^{x},v_{j}^{y})}},\,X_{2,(v_{i}^{x},v_{j}^{y})}=\omega^{E^{y}_{(v_{i}^{x},v_{j}^{y})}}. \tag{16}\] Analogously to (7), they form the \(\mathbb{Z}_{N}\) algebra. Similarly to (9), we define the \(\mathbb{Z}_{N}\) Gauss and flux terms at each vertex \((v_{i}^{x},v_{j}^{y})\) by \[V_{(v_{i}^{x},v_{j}^{y})}=\omega^{\rho_{(v_{i}^{x},v_{j}^{y})}},\,P_{(v_{i}^{x},v_{j}^{y})}=e^{iB_{(v_{i}^{x},v_{j}^{y})}}.\] Referring to (14) and (15), one can rewrite these terms as \[V_{(v_{i}^{x},v_{j}^{y})} = \left(X_{1,(v_{i}^{x},v_{j}^{y})}^{\dagger}\right)^{\deg^{x}(v_{i}^{x})}\prod_{s\neq i}X_{1,(v_{s}^{x},v_{j}^{y})}^{l_{is}^{x}}\times\left(X_{2,(v_{i}^{x},v_{j}^{y})}^{\dagger}\right)^{\deg^{y}(v_{j}^{y})}\prod_{t\neq j}X_{2,(v_{i}^{x},v_{t}^{y})}^{l_{jt}^{y}}\] \[P_{(v_{i}^{x},v_{j}^{y})} = \left(Z_{2,(v_{i}^{x},v_{j}^{y})}^{\dagger}\right)^{\deg^{x}(v_{i}^{x})}\prod_{s\neq i}Z_{2,(v_{s}^{x},v_{j}^{y})}^{l_{is}^{x}}\times Z_{1,(v_{i}^{x},v_{j}^{y})}^{\deg^{y}(v_{j}^{y})}\prod_{t\neq j}\left(Z_{1,(v_{i}^{x},v_{t}^{y})}^{\dagger}\right)^{l_{jt}^{y}}. \tag{17}\] We portray these terms in Fig. 2c on the same 2D lattice as in Fig. 2a. It is straightforward to check that every term given in (17) commutes with every other one.
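This commutation can also be checked numerically. A pure-\(X\) clock operator with exponent vector \(\mathbf{a}\) and a pure-\(Z\) one with exponent vector \(\mathbf{b}\) commute iff \(\sum_{q}a_{q}b_{q}=0\ (\mathrm{mod}\ N)\); reading the exponents of \(V\) and \(P\) off (15) and (17), the pairing vanishes identically because \(L_{x}\) and \(L_{y}\) are symmetric. A Python sketch of this check follows (all function names are ours, and random graphs stand in for generic \(G_{x}\), \(G_{y}\)):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_connected_laplacian(n):
    """Laplacian of a random simple connected graph on n vertices."""
    while True:
        A = np.triu(rng.integers(0, 2, (n, n)), 1)
        A = A + A.T
        L = np.diag(A.sum(axis=1)) - A
        if np.linalg.matrix_rank(L) == n - 1:  # connected graph
            return L

def V_exponents(Lx, Ly, i, j):
    """X-exponents of V_(i,j): -L_x row i on species 1, -L_y row j on species 2."""
    a1 = np.zeros((len(Lx), len(Ly)), dtype=int)
    a2 = np.zeros_like(a1)
    a1[:, j] = -Lx[i, :]
    a2[i, :] = -Ly[j, :]
    return np.concatenate([a1.ravel(), a2.ravel()])

def P_exponents(Lx, Ly, i, j):
    """Z-exponents of P_(i,j): +L_y row j on species 1, -L_x row i on species 2."""
    b1 = np.zeros((len(Lx), len(Ly)), dtype=int)
    b2 = np.zeros_like(b1)
    b1[i, :] = Ly[j, :]
    b2[:, j] = -Lx[i, :]
    return np.concatenate([b1.ravel(), b2.ravel()])

Lx, Ly = random_connected_laplacian(5), random_connected_laplacian(4)
pairings = [V_exponents(Lx, Ly, i, j) @ P_exponents(Lx, Ly, k, l)
            for i in range(5) for j in range(4)
            for k in range(5) for l in range(4)]
print(max(abs(p) for p in pairings))  # 0: every V commutes with every P
```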
Using these mutually commuting terms, we introduce the Hamiltonian \[H=-\sum_{i,j}V_{(v_{i}^{x},v_{j}^{y})}-\sum_{i,j}P_{(v_{i}^{x},v_{j}^{y})}. \tag{18}\] The ground state is the stabilized state satisfying \(V_{(v_{i}^{x},v_{j}^{y})}\left|\Omega\right\rangle=P_{(v_{i}^{x},v_{j}^{y})}\left|\Omega\right\rangle=\left|\Omega\right\rangle\). In the next section, we discuss the properties of the excitations.

Figure 2: (a)(b) Two examples of the 2D lattice comprised of two connected graphs, \(G_{x}\boxtimes G_{y}\). (c) Two terms given in (17) which are defined on the lattice \(G_{x}\boxtimes G_{y}\) given in (a).

## 4 Superselection sectors Now we come to the main part of this paper. In this section, we discuss the properties of the excitations of the model on the graphs defined in Sec. 3. ### Fusion rules Similarly to the toric code, there are two types of excitations in our model, carrying \(\mathbb{Z}_{N}\) electric and magnetic charges, which violate the conditions \(V_{(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle=\left|\Omega\right\rangle\) and \(P_{(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle=\left|\Omega\right\rangle\), respectively. We label these two excitations at coordinate \((v^{x}_{i},v^{y}_{j})\), whose eigenvalues of \(V_{(v^{x}_{i},v^{y}_{j})}\) and \(P_{(v^{x}_{i},v^{y}_{j})}\) are \(\omega\), by \(e_{(v^{x}_{i},v^{y}_{j})}\) and \(m_{(v^{x}_{i},v^{y}_{j})}\). Also, we label their conjugates with eigenvalue \(\omega^{-1}\) by \(\overline{e}_{(v^{x}_{i},v^{y}_{j})}\) and \(\overline{m}_{(v^{x}_{i},v^{y}_{j})}\). One can systematically discuss the fusion rules of these fractional excitations. Let us focus on the fusion rules of the electric charges. Applying the \(\mathbb{Z}_{N}\) operator \(Z_{1,(v^{x}_{i},v^{y}_{j})}\) on the ground state at the coordinate \((v^{x}_{i},v^{y}_{j})\) violates the condition \(V=1\) at the vertex with coordinate \((v^{x}_{i},v^{y}_{j})\) and at the ones connected to it by edges in the horizontal direction, namely, \[V_{(v^{x}_{i},v^{y}_{j})}(Z_{1,(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle)=\omega^{-\deg^{x}(v_{i})}(Z_{1,(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle),\,\,V_{(v^{x}_{s},v^{y}_{j})}(Z_{1,(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle)=\omega^{l^{x}_{is}}(Z_{1,(v^{x}_{i},v^{y}_{j})}\left|\Omega\right\rangle)\,\,\,(s\neq i).\] The fusion rule is schematically described by (see also Fig. 3 for an example) \[I\rightarrow(\overline{e}_{(v^{x}_{i},v^{y}_{j})})^{\deg^{x}(v_{i})}\otimes\prod_{s\neq i}(e_{(v^{x}_{s},v^{y}_{j})})^{l^{x}_{is}}, \tag{19}\] where \(I\) denotes the vacuum sector. Likewise, if we apply \(Z_{2,(v^{x}_{i},v^{y}_{j})}\) on the ground state, we have the fusion rule \[I\rightarrow(\overline{e}_{(v^{x}_{i},v^{y}_{j})})^{\deg^{y}(v_{j})}\otimes\prod_{t\neq j}(e_{(v^{x}_{i},v^{y}_{t})})^{l^{y}_{jt}}. \tag{20}\] The fusion rules (19) (20) are generalizations of the ones in 2D topologically ordered phases, where a pair of anyons is created. One can rewrite the fusion rules (19) (20) more succinctly by using the Laplacian. On a lattice \(G_{x}\boxtimes G_{y}\) at given \(v^{y}_{j}\), we define an \(n_{x}\)-dimensional vector, each entry of which takes a \(\mathbb{Z}_{N}\) value, by \[\mathbf{r}_{v^{y}_{j}}=(r_{1},r_{2},\cdots,r_{n_{x}})^{T}\in\mathbb{Z}_{N}^{n_{x}} \tag{21}\] from which we introduce multiple sets of \(Z_{1}\) operators, \(Z^{r_{1}}_{1,(v_{1}^{x},v_{j}^{y})}Z^{r_{2}}_{1,(v_{2}^{x},v_{j}^{y})}\cdots Z^{r_{n_{x}}}_{1,(v_{n_{x}}^{x},v_{j}^{y})}\), acting on the ground state.
For the sake of simplicity, in the following we omit the subscript of \(\mathbf{r}_{v_{j}^{y}}\) on the left hand side of (21) and write it as \(\mathbf{r}\) until the point where it is necessary to mention the \(v_{j}^{y}\) dependence. Introducing the fundamental basis vectors \(\{\mathbf{\lambda}_{i}\}\) as \(\mathbf{\lambda}_{i}=(\underbrace{0,\cdots,0}_{i-1},1,\underbrace{0,\cdots,0}_{n_{x}-i})^{T}\in\mathbb{Z}_{N}^{n_{x}}\), the fusion rule (19) is rewritten as \[I\to e^{a_{1}^{x}}_{(v_{1}^{x},v_{j}^{y})}\otimes e^{a_{2}^{x}}_{(v_{2}^{x},v_{j}^{y})}\otimes\cdots\otimes e^{a_{n_{x}}^{x}}_{(v_{n_{x}}^{x},v_{j}^{y})}\left(a_{i}^{x}\in\mathbb{Z}_{N}\right) \tag{22}\] with \[\mathbf{f}_{e}^{x}:=(a_{1}^{x},a_{2}^{x},\cdots,a_{n_{x}}^{x})^{T}=-L_{x}\mathbf{\lambda}_{i}. \tag{23}\] Note that in the fusion rule (22), charge conservation is satisfied, i.e., \(\sum_{i}a_{i}^{x}=0\ (\text{mod}\ N)\), as the Laplacian \(L_{x}\) is singular (summing the matrix elements along the \(i\)-th column gives zero). One can similarly describe the fusion rule (20) in terms of the Laplacian \(L_{y}\). We can also systematically discuss the fusion rules of the electric charges induced by applying _multiple sets_ of \(Z_{1}\) or \(Z_{2}\) operators on the ground state instead of applying a single operator. When we apply \(Z^{r_{1}}_{1,(v_{1}^{x},v_{j}^{y})}Z^{r_{2}}_{1,(v_{2}^{x},v_{j}^{y})}\cdots Z^{r_{n_{x}}}_{1,(v_{n_{x}}^{x},v_{j}^{y})}\) on the ground state, characterized by the vector \(\mathbf{r}\) (21), the fusion rule of the electric charges has the same form as (22) by setting \[\mathbf{f}_{e}^{x}=-L_{x}\mathbf{r}. \tag{24}\] One can write the fusion rules obtained by applying sets of \(Z_{2}\) operators, as well as the ones for magnetic charges, in a similar manner. Since the discussion of these fusion rules closely parallels what we have just presented, we do not repeat it here. As we will see in the next subsection, the way we describe the fusion rules (22)(24) turns out to be useful for discussing the number of distinct fractional charges in our model on the graph. ### Ground state degeneracy In this subsection, we derive the formula for the GSD of our model on the graph. To this end, we count the distinct types of quasiparticle excitations. The spirit behind such counting is analogous to [24]. In the derivation, we will use a key property of the Laplacian: introducing invertible integer matrices \(P\) and \(Q\), the Laplacian can be transformed into the diagonal form (_Smith normal form_) via \[PLQ=\text{diag}(u_{1},u_{2},\cdots,u_{n-1},0):=D, \tag{25}\] where the \(u_{i}\) are positive integers satisfying \(u_{i}|u_{i+1}\) for all \(i\) (i.e., \(u_{i}\) divides \(u_{i+1}\) for all \(i\)) [32]. Since the Laplacian is singular, the last diagonal entry is zero. The diagonal elements \(u_{i}\), referred to as the _invariant factors_ of the Laplacian, play a pivotal role in graph theory. In what follows, we will see that the GSD is characterized by these invariant factors of the Laplacian. This is achieved in two steps. Firstly, we count the number of distinct loops in the horizontal direction. Secondly, we evaluate the number of distinct configurations of such loops up to deformation in the vertical direction. #### 4.2.1 The number of closed loops in the horizontal direction To start, we count the number of distinct loops of electric charges in the horizontal direction, i.e., the number of closed loops of the electric charges at a given \(v_{j}^{y}\).
The loop is constructed from a "string" of the operators, \(Z^{r_{1}}_{1,(v_{1}^{x},v_{j}^{y})}Z^{r_{2}}_{1,(v_{2}^{x},v_{j}^{y})}\cdots Z^{r_{n_{x}}}_{1,(v_{n_{x}}^{x},v_{j}^{y})}\), characterized by the vector \(\mathbf{r}\) (21). The loops must commute with the terms \(V_{(v_{i}^{x},v_{j}^{y})}\) defined in (17), which means the composite of the operators \(Z^{r_{1}}_{1,(v_{1}^{x},v_{j}^{y})}Z^{r_{2}}_{1,(v_{2}^{x},v_{j}^{y})}\cdots Z^{r_{n_{x}}}_{1,(v_{n_{x}}^{x},v_{j}^{y})}\) does not create an excitation. This condition amounts to requiring that the fusion rule induced by such a product of operators be trivial. Referring to (22)(24), this condition is rewritten as \[L_{x}\mathbf{r}=\mathbf{0}\mod N. \tag{26}\] Therefore, to count the distinct loops of the electric charges in the horizontal direction, we need to evaluate the kernel of the Laplacian, \(L_{x}\). Note that since the graph is connected, meaning that summing over the entries of the Laplacian along any row gives zero, there are at least \(N\) solutions of (26), \(\mathbf{r}=h(1,1,\cdots,1)^{T}\)\((h\in\mathbb{Z}_{N})\). To proceed, we transform the Laplacian \(L_{x}\) into the Smith normal form (25). Introducing integer matrices \(P_{x}\) and \(Q_{x}\) whose determinants have absolute value one, we can transform the Laplacian into the Smith normal form: \[P_{x}L_{x}Q_{x}=\text{diag}(u_{1}^{x},\cdots,u_{n_{x}-1}^{x},0):=D_{x}, \tag{27}\] from which we have \[(26)\Leftrightarrow P_{x}^{-1}D_{x}Q_{x}^{-1}\mathbf{r}=\mathbf{0}\mod N\] \[\Leftrightarrow D_{x}\mathbf{\tilde{r}}=\mathbf{0}\mod N. \tag{28}\] When moving from the second to the third equation, we have used the fact that \(P_{x}\) is an integer matrix, and we have defined \(\mathbf{\tilde{r}}:=Q_{x}^{-1}\mathbf{r}\). Suppose there are \(m_{x}\) invariant factors of \(L_{x}\) which are greater than one, i.e., \[D_{x}=\text{diag}(\underbrace{1,\cdots,1}_{n_{x}-1-m_{x}},\underbrace{p_{1},\cdots,p_{m_{x}}}_{m_{x}},0), \tag{29}\] then, from (28), it follows that the first \(n_{x}-1-m_{x}\) components of the vector \(\mathbf{\tilde{r}}\) are zero: \[\tilde{r}_{a^{\prime}}=0\mod N\ (1\leq a^{\prime}\leq n_{x}-1-m_{x}). \tag{30}\] Regarding the elements \(\tilde{r}_{a+n_{x}-1-m_{x}}\)\((1\leq a\leq m_{x})\), one finds \[p_{a}\tilde{r}_{a+n_{x}-1-m_{x}}=0\mod N\ \Leftrightarrow\ p_{a}\tilde{r}_{a+n_{x}-1-m_{x}}=Nt_{a}\ (1\leq a\leq m_{x},\ t_{a}\in\mathbb{Z}). \tag{31}\] Decomposing \(N\) and \(p_{a}\) into two integers as \[N=N^{\prime}_{a}\gcd(N,p_{a}),\ p_{a}=p^{\prime}_{a}\gcd(N,p_{a}), \tag{32}\] where \(\gcd\) stands for the greatest common divisor and \(N^{\prime}_{a}\) and \(p^{\prime}_{a}\) are coprime, (31) becomes \[p^{\prime}_{a}\tilde{r}_{a+n_{x}-1-m_{x}}=N^{\prime}_{a}t_{a}.\] Since \(N^{\prime}_{a}\) and \(p^{\prime}_{a}\) are coprime, one finds \[\tilde{r}_{a+n_{x}-1-m_{x}}=N^{\prime}_{a}\alpha_{a}\ (1\leq a\leq m_{x}), \tag{33}\] where the integer \(\alpha_{a}\) takes \(\gcd(N,p_{a})\) distinct values, i.e., \(\alpha_{a}=0,1,\cdots,\gcd(N,p_{a})-1\). There is no constraint on the last element of \(\mathbf{\tilde{r}}\), \(\tilde{r}_{n_{x}}\), as the last diagonal entry of \(D_{x}\) is zero. This implies that \(\tilde{r}_{n_{x}}\) takes \(N\) distinct values.
Overall, with the assumption of (29), the condition (28) gives \[\tilde{\mathbf{r}}=(\underbrace{\tilde{r}_{1},\cdots,\tilde{r}_{n_{x}-1-m_{x}}}_{n_{x}-1-m_{x}},\underbrace{\tilde{r}_{n_{x}-m_{x}},\cdots,\tilde{r}_{n_{x}-1}}_{m_{x}},\tilde{r}_{n_{x}})^{T}=(\underbrace{0,\cdots,0}_{n_{x}-1-m_{x}},\underbrace{N^{\prime}_{1}\alpha_{1},\cdots,N^{\prime}_{m_{x}}\alpha_{m_{x}}}_{m_{x}},\alpha_{m_{x}+1})^{T}\mod N, \tag{34}\] where \(0\leq\alpha_{a}\leq\gcd(N,p_{a})-1\)\((1\leq a\leq m_{x})\) and \(0\leq\alpha_{m_{x}+1}\leq N-1\). Thus, the kernel of the Laplacian, which is associated with the closed loops of electric charges, is labeled by \[\mathbb{Z}_{\gcd(N,p_{1})}\times\mathbb{Z}_{\gcd(N,p_{2})}\times\cdots\times\mathbb{Z}_{\gcd(N,p_{m_{x}})}\times\mathbb{Z}_{N}=\prod_{a}\mathbb{Z}_{\gcd(N,p_{a})}\times\mathbb{Z}_{N}. \tag{35}\] Recalling \(\tilde{\mathbf{r}}:=Q_{x}^{-1}\mathbf{r}\), the form of the loop, \(\mathbf{r}\), is obtained by multiplying (34) by \(Q_{x}\) from the left. Writing the \(n_{x}\times n_{x}\) matrix \(Q_{x}\) as \[Q_{x}=(\underbrace{\mathbf{q}_{1},\cdots,\mathbf{q}_{n_{x}-1-m_{x}}}_{n_{x}-1-m_{x}},\underbrace{\mathbf{\tilde{q}}_{1},\cdots,\mathbf{\tilde{q}}_{m_{x}}}_{m_{x}},\mathbf{\tilde{q}}_{m_{x}+1}), \tag{36}\] where each column is given by an \(n_{x}\)-dimensional vector, we have \[\mathbf{r} = Q_{x}\tilde{\mathbf{r}}=\alpha_{1}N^{\prime}_{1}\tilde{\mathbf{q}}_{1}+\cdots+\alpha_{m_{x}}N^{\prime}_{m_{x}}\tilde{\mathbf{q}}_{m_{x}}+\alpha_{m_{x}+1}\tilde{\mathbf{q}}_{m_{x}+1} \tag{37}\] \[:= \alpha_{1}\mathbf{\Lambda}_{1}+\cdots+\alpha_{m_{x}}\mathbf{\Lambda}_{m_{x}}+\alpha_{m_{x}+1}\mathbf{\Lambda}_{m_{x}+1}.\] #### 4.2.2 Deformation of the closed loops - analogy to the chip-firing game After identifying the loops of electric charge in the horizontal direction, we need to count the number of distinct configurations of such loops up to deformation in the \(y\)-direction by sets of the \(P_{(v_{i}^{x},v_{j}^{y})}\). This feature is contrasted with the toric code, where the non-contractible loop in the horizontal direction is deformed so that it is shifted upward or downward. In our case, the way the loops are deformed is not as immediate as in the toric code. We will see that, to describe the deformation of the loops, the Laplacian again comes into play. For the sake of illustration, we focus for the moment on the case where the 2D lattice is \(C_{n_{x}}\boxtimes C_{n_{y}}\), and then move on to more general cases of the graph later. Here, \(C_{p}\) represents the cycle graph consisting of \(p\) vertices in a cyclic order, where adjacent vertices are connected by an edge. In particular, we set \(N=3\) and consider the case of \(C_{6}\boxtimes C_{6}\). The coordinates of the lattice are labeled by \((v^{x}_{i},v^{y}_{j})\)\((1\leq i,j\leq 6)\), where the vertices \(v^{x}_{i}\) (\(v^{y}_{j}\)) are aligned in cyclic order along the horizontal (vertical) direction. This geometry is nothing but the 2D torus. As explained in more detail in the next section (Sec. 5), the Smith normal form of the Laplacian of \(C_{6}\) reads \[D_{x}=\text{diag}(1,1,1,1,6,0),\] from which the closed loops of the electric charge at \(v^{y}_{j}\) are labeled by \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) [(35)].
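These two claims are easy to verify by computer algebra. The sketch below (ours, assuming SymPy's `smith_normal_form` is available) computes the Smith normal form of the Laplacian of \(C_{6}\) and brute-forces the solutions of the trivial-fusion condition (26) mod \(N=3\), recovering the \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) labeling of (35):

```python
from itertools import product
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n, N = 6, 3
# Laplacian (14) of the cycle graph C_6
L = Matrix(n, n, lambda i, j: 2 if i == j else
           (-1 if (i - j) % n in (1, n - 1) else 0))
print(smith_normal_form(L, domain=ZZ).diagonal())  # expected: 1, 1, 1, 1, 6, 0

# Kernel of L mod N, i.e. the solutions of Eq. (26), counted by brute force
kernel = [r for r in product(range(N), repeat=n)
          if all(sum(L[i, j] * r[j] for j in range(n)) % N == 0
                 for i in range(n))]
print(len(kernel))  # 9 = N * gcd(N, 6), matching Z_3 x Z_3 of Eq. (35)
```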
Furthermore, by evaluating \(Q_{x}\) and referring to (37), the form of the closed loop at \(v^{y}_{j}\), \(Z^{r_{1}}_{1,(v^{x}_{1},v^{y}_{j})}Z^{r_{2}}_{1,(v^{x}_{2},v^{y}_{j})}\cdots Z^{r_{n_{x}}}_{1,(v^{x}_{n_{x}},v^{y}_{j})}\), characterized by the vector \(\mathbf{r}\), is found to be \[\mathbf{r}_{v^{y}_{j}}=\alpha_{1,v^{y}_{j}}(2,1,0,2,1,0)^{T}+\alpha_{2,v^{y}_{j}}(1,1,1,1,1,1)^{T}:=\alpha_{1,v^{y}_{j}}\mathbf{\Lambda}_{1,v^{y}_{j}}+\alpha_{2,v^{y}_{j}}\mathbf{\Lambda}_{2,v^{y}_{j}},\ \ (\alpha_{1,v^{y}_{j}},\alpha_{2,v^{y}_{j}})\in\mathbb{Z}_{3}^{2} \tag{38}\] where we retrieve the subscript, emphasizing the \(v^{y}_{j}\) dependence. Defining \[W_{e1,v^{y}_{j}} = \prod_{i=1}^{n_{x}}Z^{(\mathbf{\Lambda}_{1,v^{y}_{j}})_{i}}_{1,(v^{x}_{i},v^{y}_{j})}=Z^{2}_{1,(v^{x}_{1},v^{y}_{j})}Z_{1,(v^{x}_{2},v^{y}_{j})}Z^{2}_{1,(v^{x}_{4},v^{y}_{j})}Z_{1,(v^{x}_{5},v^{y}_{j})},\] \[W_{e2,v^{y}_{j}} = \prod_{i=1}^{n_{x}}Z^{(\mathbf{\Lambda}_{2,v^{y}_{j}})_{i}}_{1,(v^{x}_{i},v^{y}_{j})}=Z_{1,(v^{x}_{1},v^{y}_{j})}Z_{1,(v^{x}_{2},v^{y}_{j})}Z_{1,(v^{x}_{3},v^{y}_{j})}Z_{1,(v^{x}_{4},v^{y}_{j})}Z_{1,(v^{x}_{5},v^{y}_{j})}Z_{1,(v^{x}_{6},v^{y}_{j})}, \tag{39}\] the closed loop of the electric charge at \(v_{j}^{y}\), \(W_{e,v_{j}^{y}}\), is generated by these two terms, i.e., \(W_{e,v_{j}^{y}}=W_{e1,v_{j}^{y}}^{\alpha_{1,v_{j}^{y}}}W_{e2,v_{j}^{y}}^{\alpha_{2,v_{j}^{y}}}\). We depict these two loops (39) in Fig. 4(a). Now we deform the loops in the vertical direction. Corresponding to the two vectors \(\mathbf{\Lambda}_{1,v_{j}^{y}}\) and \(\mathbf{\Lambda}_{2,v_{j}^{y}}\), define the following two operators \[\Gamma_{1,v_{j}^{y}}:=\prod_{i=1}^{n_{x}}P_{(v_{i}^{x},v_{j}^{y})}^{(\mathbf{\Lambda}_{1,v_{j}^{y}})_{i}},\ \Gamma_{2,v_{j}^{y}}:=\prod_{i=1}^{n_{x}}P_{(v_{i}^{x},v_{j}^{y})}^{(\mathbf{\Lambda}_{2,v_{j}^{y}})_{i}}.\] From (17), these terms are rewritten as \[\Gamma_{1,v_{j}^{y}} = Z_{1,(v_{1}^{x},v_{j+1}^{y})}^{2}Z_{1,(v_{2}^{x},v_{j+1}^{y})}Z_{1,(v_{4}^{x},v_{j+1}^{y})}^{2}Z_{1,(v_{5}^{x},v_{j+1}^{y})}\times Z_{1,(v_{1}^{x},v_{j}^{y})}^{2}Z_{1,(v_{2}^{x},v_{j}^{y})}Z_{1,(v_{4}^{x},v_{j}^{y})}^{2}Z_{1,(v_{5}^{x},v_{j}^{y})}\] \[\times Z_{1,(v_{1}^{x},v_{j-1}^{y})}^{2}Z_{1,(v_{2}^{x},v_{j-1}^{y})}Z_{1,(v_{4}^{x},v_{j-1}^{y})}^{2}Z_{1,(v_{5}^{x},v_{j-1}^{y})},\] \[\Gamma_{2,v_{j}^{y}} = (\prod_{i=1}^{6}Z_{1,(v_{i}^{x},v_{j-1}^{y})})\times(\prod_{i=1}^{6}Z_{1,(v_{i}^{x},v_{j}^{y})})\times(\prod_{i=1}^{6}Z_{1,(v_{i}^{x},v_{j+1}^{y})}), \tag{40}\] which are portrayed in Fig. 4(a). From (39) and (40), it follows that (see also Figs. 4(b) and 4(c)) \[\Gamma_{1,v_{j}^{y}}W_{e1,v_{j}^{y}}=W_{e1,v_{j+1}^{y}}W_{e1,v_{j}^{y}}^{2}W_{e1,v_{j-1}^{y}},\ \Gamma_{2,v_{j}^{y}}W_{e2,v_{j}^{y}}=W_{e2,v_{j+1}^{y}}W_{e2,v_{j}^{y}}^{2}W_{e2,v_{j-1}^{y}}. \tag{41}\] We need to evaluate the distinct configurations of the loops up to such deformations. To this end, it is useful to draw the side view of the geometry and see how such deformation of the loops is implemented. One such example, corresponding to Fig. 4(c), is shown in Fig. 5. Viewed from the side, we have \(G_{y}\), which is \(C_{6}\) in the present case. At each vertex \(v_{j}^{y}\), one can assign a \(\mathbb{Z}_{3}\) number, \(\alpha_{2,v_{j}^{y}}\in\mathbb{Z}_{3}\), corresponding to the closed loop of the electric charge, \(W_{e2,v_{j}^{y}}\).

Figure 4: Closed loops of the electric charge in the case of \(G_{x}\boxtimes G_{y}=C_{6}\boxtimes C_{6}\) and \(N=3\).
The periodic boundary condition is imposed so that the left and right edges as well as the top and bottom edges are identified. (a) (left two) Two closed loops of the electric charge in the horizontal direction at \(v_{4}^{y}\), corresponding to (39). (right two) Sets of operators \(P_{(v_{i}^{x},v_{j}^{y})}\) defined in (40) with which the closed loops are deformed. (b)(c) Deformation of the loops in accordance with (41).

In Fig. 5, the charge \(\alpha_{2,v_{4}^{y}}=1\) is located at \(v_{4}^{y}\), with the charges at the other vertices being absent. By applying \(\Gamma_{2,v_{4}^{y}}\), the loop is deformed, yielding the configuration on the right in Fig. 5: the charge located at \(v_{4}^{y}\) is decreased by two, i.e., \(1\to-1\simeq 2\ (\text{mod}\ 3)\), whereas the charge is increased by one at the adjacent vertices, \(v_{3}^{y}\) and \(v_{5}^{y}\), i.e., \(0\to 1\). What we have just described has an intimate relation with the _chip-firing game_ invented in the context of graph theory [33, 34]. In the chip-firing game, for a given graph \(G(V,E)\), a _chip_ is defined as an integer located at each vertex of the graph. Also, the process of _firing_ is defined as the movement of sending one chip from a given vertex, say \(v_{0}\), to each of its neighbors, i.e., the vertices connected with \(v_{0}\) by an edge. In the firing process, the number of chips at \(v_{0}\) is decreased by \(\deg(v_{0})\), and at each adjacent vertex the chip count is increased by one. In our context, the chip introduced at each vertex corresponds to the closed loop with electric charge labeled by \(\alpha_{2,v_{j}^{y}}\), whereas the process of firing is nothing but the deformation of the loop. An important distinction between the chip-firing game and our consideration is that while the chip is defined as an integer in the chip-firing game, in our case what corresponds to the chip is labeled by a finite group, corresponding to the charge of the fractional excitation. (In this sense, we are dealing with an "anyonic analogue of the chip-firing game".) One of the motivations of the chip-firing game is to classify the distinct configurations of the chips up to the firing processes, and to find an optimal configuration of chips. For instance, associating the chips with dollars and the vertices with money borrowers and lenders, interpreting a negative number of chips as debt, one would be interested in finding a configuration of the chips such that everyone is debt-free. (It is often referred to as the _dollar game_ in the context of graph theory [34, 35].) It turns out that distinct configurations of the chips are characterized by the cokernel of the Laplacian, a.k.a. the _Picard group_, \(Pic(G)\) [33, 34].

Figure 5: Deformation of closed loops of electric charges in the case of \(C_{6}\boxtimes C_{6}\) and \(N=3\), corresponding to Fig. 4c. (Top) The same deformation of the loop as given in Fig. 4c. (Bottom) The side view of Fig. 4c, where one assigns a \(\mathbb{Z}_{3}\) number to each vertex, corresponding to the configuration of the loops. These numbers are regarded as chips located at each vertex. By applying \(\Gamma_{2,v_{4}^{y}}\), the closed loop is deformed, which corresponds to the firing process where the chip at vertex \(v_{4}^{y}\) is transferred to the adjacent ones, \(v_{3}^{y}\) and \(v_{5}^{y}\) (red arrows).

To see this in a more formal fashion, we now turn to the generic cases of the 2D lattice given by \(G_{x}\boxtimes G_{y}\). As we have seen in Sec.
4.2.1, the closed loops of the electric charge in the horizontal direction at \(v_{j}^{y}\) are labeled by \((\alpha_{1,v_{j}^{y}},\cdots,\alpha_{m_{x}+1,v_{j}^{y}})\in\prod_{a}\mathbb{Z}_{\gcd(N,p_{a})}\times\mathbb{Z}_{N}\). At \(v_{j}^{y}\), the form of the loops of the electric charge is given by \[\mathbf{r}_{v_{j}^{y}}=\alpha_{1,v_{j}^{y}}\mathbf{\Lambda}_{1,v_{j}^{y}}+\cdots+\alpha_{m_{x},v_{j}^{y}}\mathbf{\Lambda}_{m_{x},v_{j}^{y}}+\alpha_{m_{x}+1,v_{j}^{y}}\mathbf{\Lambda}_{m_{x}+1,v_{j}^{y}}. \tag{42}\] We focus on the deformation of the loop labeled by \(\alpha_{a,v_{j}^{y}}\), which we dub the loop of type \(a\)\((1\leq a\leq m_{x}+1)\). Looking at the geometry from the side, at each vertex \(v_{j}^{y}\) of the graph \(G_{y}\) one can assign a number \(\alpha_{a,v_{j}^{y}}\) associated with the configuration of the closed loops of type \(a\). We define a vector \(\mathbf{\alpha}_{a}\) as \[\mathbf{\alpha}_{a}=(\alpha_{a,v_{1}^{y}},\cdots,\alpha_{a,v_{n_{y}}^{y}})^{T}\in[\mathbb{Z}_{\gcd(N,p_{a})}]^{n_{y}}. \tag{43}\] (For the sake of notational simplicity, we conventionally set \(p_{m_{x}+1}=0\) so that \(\mathbf{\alpha}_{m_{x}+1}\in\mathbb{Z}_{N}^{n_{y}}\).) Corresponding to the vector \(\mathbf{\Lambda}_{a,v_{j}^{y}}\)\((1\leq a\leq m_{x}+1)\), define the following composite of the operators \(P_{(v_{i}^{x},v_{j}^{y})}\) \[\Gamma_{a,v_{j}^{y}}:=\prod_{i=1}^{n_{x}}P_{(v_{i}^{x},v_{j}^{y})}^{(\mathbf{\Lambda}_{a,v_{j}^{y}})_{i}},\] which is rewritten as \[\Gamma_{a,v_{j}^{y}}=\left(\prod_{t\neq j}\prod_{i=1}^{n_{x}}Z_{1,(v_{i}^{x},v_{t}^{y})}^{l_{jt}^{y}(\mathbf{\Lambda}_{a,v_{j}^{y}})_{i}}\right)\times\left[\prod_{i=1}^{n_{x}}Z_{1,(v_{i}^{x},v_{j}^{y})}^{(\mathbf{\Lambda}_{a,v_{j}^{y}})_{i}}\right]^{-\deg^{y}(v_{j}^{y})}. \tag{44}\] Using \(\Gamma_{a,v_{j}^{y}}\), we deform the loops with configuration \(\mathbf{\alpha}_{a}\). Suppose we deform the loop by the operator \(\Gamma_{a,v_{1}^{y}}^{\sigma_{a,v_{1}^{y}}}\times\cdots\times\Gamma_{a,v_{n_{y}}^{y}}^{\sigma_{a,v_{n_{y}}^{y}}}\), characterized by the vector \(\mathbf{\sigma}_{a}=(\sigma_{a,v_{1}^{y}},\cdots,\sigma_{a,v_{n_{y}}^{y}})^{T}\in[\mathbb{Z}_{\gcd(N,p_{a})}]^{n_{y}}\). Using the Laplacian of the graph \(G_{y}\), the configuration of the deformed loop of type \(a\), \(\mathbf{\tilde{\alpha}}_{a}\), reads \[\mathbf{\tilde{\alpha}}_{a}=\mathbf{\alpha}_{a}-L_{y}\mathbf{\sigma}_{a}. \tag{45}\] The distinct configurations of the loop of type \(a\) up to the deformation are found to be \[[\mathbb{Z}_{\gcd(N,p_{a})}]^{n_{y}}/\mathrm{im}(L_{y}), \tag{46}\] which is nothing but the cokernel of the Laplacian, the Picard group. To proceed, we need to evaluate \(\mathrm{im}(L_{y})\). Recalling that the Laplacian is transformed into the Smith normal form \[P_{y}L_{y}Q_{y}=\mathrm{diag}(u_{1}^{y},\cdots,u_{n_{y}-1}^{y},0), \tag{47}\] we have \[\mathrm{im}(L_{y}) = \{L_{y}\eta,\ \forall\eta\in\mathbb{Z}_{\gcd(N,p_{a})}^{n_{y}}\} \tag{48}\] \[= \{P_{y}^{-1}D_{y}\tilde{\eta}\}\ (\tilde{\eta}:=Q_{y}^{-1}\eta)\] \[= \mathrm{span}(\pi_{1}^{\prime},\pi_{2}^{\prime},\cdots,\pi_{n_{y}}^{\prime}).\] Here, \(\pi_{j}^{\prime}\) represents the vector corresponding to the \(j\)-th column of \(P_{y}^{-1}D_{y}\). Since \(D_{y}\) is diagonal with the last entry being zero, (48) is further written as \[\mathrm{im}(L_{y})=\mathrm{span}(u_{1}^{y}\pi_{1},u_{2}^{y}\pi_{2},\cdots,u_{n_{y}-1}^{y}\pi_{n_{y}-1}), \tag{49}\] where \(\pi_{j}\) represents the vector which corresponds to the \(j\)-th column of \(P_{y}^{-1}\).
Now we write \(\mathbf{s}_{\alpha_{a}}\in\mathbb{Z}_{\gcd(p_{a},N)}^{n_{y}}/\mathrm{im}(L_{y})\) in these basis: \[\mathbf{s}_{\alpha_{a}}=\sum_{j=1}^{n_{y}}c_{a,j}\pi_{j}\left(c_{a,j}\in \mathbb{Z}_{\gcd(p_{a},N)}\right). \tag{50}\] From (49), \(c_{a,j}\) is subject to (the symbol " \(\sim\) " represents identification) \[c_{a,j}\sim c_{a,j}+u_{j}^{y}\left(1\leq j\leq n_{y}-1\right). \tag{51}\] By definition, it also must satisfy \[c_{a,j}\sim c_{a,j}+\gcd(N,p_{a})\ (1\leq j\leq n_{y}). \tag{52}\] The algebraic structure of the Picard group is determined by the number of distinct \(\boldsymbol{s}\) with the two constraints (51)(52). Assuming the Smith normal form of the Laplacian \(L_{y}\) has \(m_{y}\) invariant factors greater than one, i.e, \[D_{y}=\mathrm{diag}(\underbrace{1,\cdots,1}_{n_{y}-1-m_{y}},\underbrace{q_{1 },\cdots,q_{m_{y}}},0) \tag{53}\] then we have \[c_{a,b^{\prime}}\sim c_{a,b^{\prime}}+1\ (1\leq b^{\prime}\leq n_{y}-1-m_{y}),\] implying the coefficients of the first \(n_{y}-1-m_{y}\) basis are trivial. As for the coefficients \(c_{a,b+n_{y}-1-m_{y}}\ (1\leq b\leq m_{y})\), they satisfy following two conditions \[c_{a,b+n_{y}-1-m_{y}} \sim c_{a,b+n_{y}-1-m_{y}}+q_{b}\] \[c_{a,b+n_{y}-1-m_{y}} \sim c_{a,b+n_{y}-1-m_{y}}+\gcd(N,p_{a})\ (1\leq b\leq m_{y}),\] from which it follows that \(c_{a,b+n_{y}-1-m_{y}}\ (1\leq b\leq m_{y})\) takes \(\gcd(q_{b},\gcd(N,p_{a}))=\gcd(p_{a},q_{b},N)\) distinct values. Together with the fact that the last coefficient \(c_{a,n_{y}}\) takes \(\gcd(N,p_{a})\) distinct values, we find that \[\mathbf{c}_{a}:=(\underbrace{c_{a,1},\cdots,c_{a,n_{y}-1-m_{y}}}_{n_{y}-1-m_{ y}},\underbrace{c_{a,n_{y}-m_{y}},\cdots,c_{a,n_{y}-1}}_{m_{y}},c_{a,n_{y}})^{T}=( \underbrace{0,\cdots,0}_{n_{y}-1-m_{y}},\underbrace{\beta_{a,1},\cdots,\beta_ {a,m_{y}}}_{m_{y}},\beta_{a,m_{y}+1})^{T}\mod N \tag{54}\] with \(\beta_{a,b}\in\mathbb{Z}_{\gcd(N,p_{a},q_{b})}\), \(\beta_{a,m_{y}+1}\in\mathbb{Z}_{\gcd(N,p_{a})}\). Therefore, distinct configurations of the closed loops of the charges with type \(a\) are labeled by \[\mathbb{Z}_{\gcd(N,p_{a},q_{1})}\times\cdots\times\mathbb{Z}_{\gcd(N,p_{a},q_ {m_{y}})}\times\mathbb{Z}_{\gcd(N,p_{a})}=\prod_{b=1}^{n_{y}}\mathbb{Z}_{ \gcd(N,p_{a},q_{b})}\times\mathbb{Z}_{\gcd(N,p_{a})} \tag{55}\] Since \[\mathbf{s}_{\alpha_{a}}=\sum_{j=1}^{n_{y}}c_{a,j}\pi_{j}=P_{y}^{-1}\mathbf{c} _{a}, \tag{56}\] the explicit form of the configuration of the loops \(\mathbf{s}_{\alpha_{a}}\) is obtained by multiplying \(P_{y}^{-1}\) from the left in (54). Taking the deformation of the loops with all of the types into the consideration, distinct configurations of the closed loops are labeled by \[\prod_{a=1}^{m_{x}+1}\left[\prod_{b=1}^{n_{y}}\mathbb{Z}_{\gcd(N,p_{a},q_{b})} \times\mathbb{Z}_{\gcd(N,p_{a})}\right]=\mathbb{Z}_{N}\times\prod_{a=1}^{m_{x }}\mathbb{Z}_{\gcd(N,p_{a})}\times\prod_{b=1}^{m_{y}}\mathbb{Z}_{\gcd(N,p_{a},q_{b})}\times\prod_{a=1}^{m_{x}}\prod_{b=1}^{m_{y}}\mathbb{Z}_{\gcd(N,p_{a},q _{b})}. \tag{57}\] So far we have considered closed loops of the electric charges. Regarding the closed loops of the magnetic charges, the similar argument follows as the electric charges, thus they are labeled by the same quantum numbers (57). To recap the argument, we have considered distinct loops of electric and magnetic charges in our model placed on the 2D lattice \(G_{x}\boxtimes G_{y}\). The GSD is obtained by counting the number of such distinct loops. 
## 5 Example Our result (58) is applicable to arbitrary connected graphs, yet it is still useful to take a closer look at a simple example of the 2D lattice, the torus geometry, to see how our formula works. ### Torus geometry \(C_{n_{x}}\boxtimes C_{n_{y}}\) The cycle graph \(C_{n}\) consists of \(n\) vertices placed in a cyclic order so that adjacent vertices are connected by a single edge. We consider the 2D lattice constructed by the product of cycle graphs, \(C_{n_{x}}\boxtimes C_{n_{y}}\). We need to find the Smith normal form of the Laplacian of the cycle graph. We concentrate on the transformation of \(L_{x}\) into the Smith normal form. The Laplacian \(L_{x}\) is described by the following \(n_{x}\times n_{x}\) matrix: \[L_{x}=\begin{pmatrix}2&-1&&&-1\\ -1&2&-1&&\\ &-1&2&\ddots&\\ &&\ddots&\ddots&-1\\ -1&&&-1&2\end{pmatrix}. \tag{59}\] Adding the first \(n_{x}-1\) columns to the last one and doing the same procedure for rows, the Laplacian is transformed as \[L_{x}\rightarrow\begin{pmatrix}\bar{L}_{x}&\mathbf{0}_{n_{x}-1}\\ \mathbf{0}_{n_{x}-1}^{T}&0\end{pmatrix}, \tag{60}\] with \[\bar{L}_{x}=\begin{pmatrix}2&-1&&&\\ -1&2&-1&&\\ &-1&2&\ddots&\\ &&\ddots&\ddots&-1\\ &&&-1&2\end{pmatrix}_{n_{x}-1\times n_{x}-1}. \tag{61}\] Any Laplacian of a connected graph can be transformed into the form (60), where \(\bar{L}_{x}\) is obtained by removing the last row and column from the Laplacian \(L_{x}\). We further transform \(\bar{L}_{x}\) as \[\bar{L}_{x}\rightarrow\begin{pmatrix}1&-2&1&&\\ 2&-1&&\\ &-1&2&\ddots&\\ &&\ddots&\ddots&-1\\ &&&-1&2\end{pmatrix}\rightarrow\begin{pmatrix}1&0&0&&\\ 2&3&-2&&\\ &-1&2&\ddots&\\ &&\ddots&\ddots&-1\\ &&&-1&2\end{pmatrix}, \tag{62}\] where in the first transformation we have exchanged the first and second rows and multiplied the first row by \((-1)\), and in the second transformation we have added the first column to the second one twice and subtracted the first column from the third one.
By subtracting the first row from the second one twice, the matrix is brought to the block form \(\text{diag}(1,\bar{L}_{x}^{\prime})\), where \(\bar{L}_{x}^{\prime}\) is an \((n_{x}-2)\times(n_{x}-2)\) matrix of the same tridiagonal type. Repeating analogous row and column operations, all diagonal entries except the last two can be set to unity, and one arrives at the Smith normal form of the Laplacian of the cycle graph, \[D_{x}=\text{diag}(1,\cdots,1,n_{x},0),\] i.e., the Laplacian of \(C_{n_{x}}\) has a single invariant factor greater than one, \(p_{1}=n_{x}\) (and similarly \(q_{1}=n_{y}\) for \(L_{y}\)). Substituting these invariant factors into (58), the GSD on the torus \(C_{n_{x}}\boxtimes C_{n_{y}}\) reads \[GSD=\big{[}N\gcd(N,n_{x})\gcd(N,n_{y})\gcd(N,n_{x},n_{y})\big{]}^{2},\] which reproduces (13) in the case of \(N=2\). Furthermore, referring to (35) and (37), the closed loops of the electric charge at \(v_{j}^{y}\) are labeled by \((\alpha_{1,v_{j}^{y}},\alpha_{2,v_{j}^{y}})\in\mathbb{Z}_{\gcd(N,n_{x})}\times\mathbb{Z}_{N}\), with the form \[\mathbf{r}_{v_{j}^{y}}=\alpha_{1,v_{j}^{y}}(n_{x}-1,n_{x}-2,\cdots,1,0)^{T}+\alpha_{2,v_{j}^{y}}(1,1,\cdots,1)^{T}\mod N, \tag{67}\] generalizing (38).

Figure 6: (a)[(b)] Distinct configurations of closed loops labeled by \(\alpha_{1,v_{j}^{y}}\) [\(\alpha_{2,v_{j}^{y}}\)], corresponding to (68) [(69)], in the case of \(N=3\) and \(n_{x}=n_{y}=6\). The periodic boundary condition is imposed in such a way that the left and right edges as well as the top and bottom edges are identified. (c) Left: A closed loop of a dipole of the fractional charge, which corresponds to (72) in the case of \(N=3\) with \(n_{x}\) divisible by three. Regarding the pattern "\(2,1,0\)" as the dipole of the fractional charges, one can interpret this loop as an array of such dipoles. Right: Schematic picture of the quadrupole consisting of a pair of closed loops of dipoles.

The distinct configurations of the loops labeled by \(\alpha_{1,v_{j}^{y}}\) are described by the cokernel \(\mathbf{s}_{\alpha_{1}}\in[\mathbb{Z}_{\gcd(N,n_{x})}]^{n_{y}}/\mathrm{im}(L_{y})\). By evaluating the form of \(P_{y}^{-1}\) and referring to (56), these configurations are described by \[\mathbf{s}_{\alpha_{1}}=\beta_{1,1}\begin{pmatrix}1\\ 0\\ \vdots\\ 0\\ -1\end{pmatrix}+\beta_{1,2}\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{pmatrix}\mod N, \tag{68}\] where \(\beta_{1,1}\in\mathbb{Z}_{\gcd(N,n_{x},n_{y})}\) and \(\beta_{1,2}\in\mathbb{Z}_{\gcd(N,n_{x})}\). Note that \(\mathbf{s}_{\alpha_{1}}\) is an \(n_{y}\)-dimensional vector, indexed by the vertices of the graph \(G_{y}\), and each entry corresponds to the loops going in the horizontal direction. We portray these two configurations in Fig. 6(a) in the case of \(N=3\) and \(n_{x}=n_{y}=6\). Likewise, the distinct configurations of the loops labeled by \(\alpha_{2,v_{j}^{y}}\) are given by the cokernel \(\mathbf{s}_{\alpha_{2}}\in[\mathbb{Z}_{N}]^{n_{y}}/\mathrm{im}(L_{y})\), which is found to be \[\mathbf{s}_{\alpha_{2}}=\beta_{2,1}\begin{pmatrix}1\\ 0\\ \vdots\\ 0\\ -1\end{pmatrix}+\beta_{2,2}\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{pmatrix}\mod N \tag{69}\] with \(\beta_{2,1}\in\mathbb{Z}_{\gcd(N,n_{y})}\), \(\beta_{2,2}\in\mathbb{Z}_{N}\). These configurations are depicted in Fig. 6(b). ### Physical interpretation In this subsection, we try to interpret the physical meaning of the configurations of the loops, especially the ones given in (68) (portrayed in Fig. 6(a)), i.e., the configurations of the loops labeled by \(\alpha_{1,v_{j}^{y}}\). We warn the reader that the discussion presented in this subsection is schematic, yet it conveys the physical intuition behind these loops. For simplicity, suppose we set \(n_{x}\) so that it is divisible by \(N\), i.e., \(n_{x}=Nd\) (\(d\in\mathbb{Z}\)).
Then the form of the closed loop labeled by \(\alpha_{1,v_{j}^{y}}\), which corresponds to the first term of (67), becomes \[(n_{x}-1,n_{x}-2,\cdots,1,0)^{T}=(N-1,N-2,\cdots,1,0,N-1,N-2,\cdots,1,0,\cdots)^{T}\mod N, \tag{70}\] where on the right hand side the entries repeat the pattern "\(N-1,N-2,\cdots,0\)" \(d\) times. Renaming the vertices of the cycle graph \(C_{n_{x}}\) as \(x\) (\(1\leq x\leq n_{x}\)), we define the following vector \[\mathbf{\rho}_{x}^{f}=(\underbrace{0,\cdots,0}_{x-1},1,\underbrace{0,\cdots,0}_{n_{x}-x})^{T}, \tag{71}\] which is associated with the charge density of a single fractional excitation located at the coordinate \(x\). In terms of (71), (70) is rewritten as \[(70)=-\sum_{b=0}^{d-1}\biggl{[}\sum_{x=1}^{N}(x+Nb)\,\mathbf{\rho}_{x+Nb}^{f}\biggr{]} \tag{72}\] This form looks familiar, recalling the argument of the conservation of the dipole of charges in the higher rank Maxwell theory discussed in (4). It is tempting to regard the term inside the bracket in (72) as "the dipole of the fractional charges", as this term shows the charge monotonically decreasing as a function of \(x\), inducing a polarization (see also Fig. 6c). Since the form of the loop (72) repeats the pattern "\(N-1,N-2,\cdots,0\)" \(d\) times, one can interpret it as a loop formed by the trajectory of the dipole of the fractional charges winding around the \(x\)-direction, analogously to the fact that Wilson loops are formed by the trajectories of the anyons in topologically ordered phases. Having interpreted the form of the loop (72) as the trajectory of the dipole of the fractional charges, we now turn to the distinct configurations of such loops up to the deformation. According to (68), any configuration of the loops is generated by two configurations. One configuration is a single loop of the dipole located at a given vertex, which corresponds to the second term of (68). Another configuration, corresponding to the first term of (68), is a pair of loops of the dipole with opposite signs located adjacent to each other in the \(y\)-direction, yielding a "dipole of dipoles", which is a quadrupole of the fractional charges (Fig. 6c). In summary, depending on the kernel and cokernel of the Laplacian, the phase admits closed loops of dipoles or quadrupoles of fractional charges, which accounts for the unusual behaviour of the GSD. ## 6 Conclusion Motivated by recent interest in fracton topological phases, especially in those phases on curved geometry, in this paper we explore geometric aspects of the unusual topological phases which admit fractional excitations with mobility constraints in a new context: graph theory. Due to the second order derivative introduced in the higher rank Maxwell theory, from which our model is defined via the Higgs mechanism, the GSD of our model exhibits an unusual dependence on the lattice. Placing the phases on 2D lattices beyond the regular square one, composed of two arbitrary graphs, we demonstrate that the physical properties of the phases can be systematically studied by analyzing the Laplacians of the graphs. We show that the fusion rules of the fractional excitations are determined by the form of the Laplacian of the graph. Furthermore, we also show that the closed loops of the excitations are associated with the kernel of the Laplacian. Such loops are deformed analogously to the process of firing in the chip-firing game, studied in the context of graph theory.
By making use of such an analogy, we count the number of distinct configurations of the loops up to the deformation by evaluating the cokernel of the Laplacian. Based on this analysis, we derive a formula for the GSD of our phases on graphs, which depends on \(N\) and the invariant factors of the Laplacians. Depending on the graph, the phases admit closed loops of dipoles or quadrupoles of fractional charges, which seemingly corresponds to the fact that the dipole and quadrupole of charges are conserved in the higher rank Maxwell theory. Our study may contribute to understanding fracton topological phases in view of graph theory. \begin{table} \begin{tabular}{c|c|c|c} & Continuum \(U(1)\) theory & Higgs phase & GSD on \(G_{x}\boxtimes G_{y}\) \\ \hline conventional & Maxwell theory & \(\mathbb{Z}_{N}\) topologically ordered phase (\(\mathbb{Z}_{N}\) toric code) & \(N^{2g_{x}g_{y}}\) \\ \hline new type & higher-rank Maxwell theory & higher-rank \(\mathbb{Z}_{N}\) topological phase & (58) \\ \end{tabular} \end{table} Table 1: Digest of this paper. We consider the topological phases obtained by gapping the higher rank Maxwell theory via the Higgs mechanism on the 2D lattice \(G_{x}\boxtimes G_{y}\). If we instead place the \(\mathbb{Z}_{N}\) topologically ordered phases, obtained from the conventional Maxwell theory via the Higgs mechanism, on the same lattice, the GSD is given by \(N^{2g_{x}g_{y}}\), where \(g_{x/y}\) represents the genus of the graph \(G_{x/y}\), \(g_{x/y}:=|E_{x/y}|-|V_{x/y}|+1\). Our result is contrasted with conventional topological phases, whose GSD depends on the global topology of the lattice, i.e., the number of genus. For instance, if we introduce the \(\mathbb{Z}_{N}\) toric code, which is obtained by gapping the gauge group via the Higgs mechanism in the usual Maxwell theory, and place it on the 2D lattice \(G_{x}\boxtimes G_{y}\), the GSD depends on the total number of genus, thus \(GSD=N^{2g_{x}g_{y}}\), where \(g_{x/y}\) represents the genus of the graph \(G_{x/y}\), \(g_{x/y}:=|E_{x/y}|-|V_{x/y}|+1\). Such a comparison is summarized in Table 1. There are several future directions regarding the research presented in this paper. Firstly, it is important to address the stability of the closed loops of fractional charges in view of quantum information, as these can be utilized as logical operators. The stability can be analyzed by evaluating the invariant factors of sub-matrices of the Laplacian. It would be interesting to see whether the condition for having stable loops is associated with other quantities of the graph, such as connectivity. In this paper, we have considered Abelian higher rank topological phases. One would naturally wonder about the case of non-Abelian topological phases. To study the closed loops of non-Abelian fractional charges systematically, one would consider the "non-Abelian chip-firing game", i.e., the chip-firing game with each chip associated with a non-Abelian fractional charge, which is interesting in its own right from both the graph theoretical and the physical point of view. While intensive studies have been done in the case of bosonic fracton phases, much remains to be elucidated in fermionic theories (and even more exotic supersymmetric theories [36]). Extension of our study to the fermionic case would be another direction. One could also investigate other topological quantities of the model. For example, it would be intriguing to study the entanglement entropy of our phases on graphs and see how different it is from the case of the topologically ordered phases [37].
It is well known that in topologically ordered phases the number of superselection sectors is related to the topological entanglement entropy [38, 39], which is the sub-leading constant term of the entanglement entropy. Since the number of superselection sectors crucially depends on the geometry in our model, it is worth studying whether such a number enters the entanglement entropy for various geometries of subsystems. ## Acknowledgement The author thanks Bo Han and Masazumi Honda for helpful discussions.
2302.12892
Collisional- and photo-excitations of Ca IV including strong 3.2 $μ$m emission line
We report a detailed study of features of electron-impact excitation (EIE) of Ca IV for the first time using the relativistic Breit-Pauli R-Matrix method with a large close coupling wavefunction expansion of 54 fine structure levels belonging to n=2,3,4 complexes. Our study predicts the presence of a strong 3.2 $\mu$m emission line in the IR. The EIE collision strength ($\Omega$) shows extensive resonances with an enhanced background, resulting in an effective collision strength ($\gamma$) of 2.2 at about 10,000 K that increases to 9.66 around 300,000 K. The present results include the collision strength of all 1431 excitations among the 54 levels and the effective collision strength for a limited number of transitions of possible interest. We have found extensive resonances in the low energy region, convergence of the resonances, and convergence of the partial waves with the 54 level wavefunction. At higher energy, the collision strength decreases beyond the resonance region for forbidden transitions, is almost constant or decreases slowly for dipole-allowed transitions with low oscillator strengths, and rises with the Bethe-Coulomb behavior of ln(E) to almost a plateau for transitions with high f-values.
Sultana Nahar, Bilal Shafique
2023-02-24T20:55:46Z
http://arxiv.org/abs/2302.12892v2
# Collisional- and photo-excitations of Ca IV including strong 3.2 \(\mu\)m emission line ###### Abstract We report a detailed study of features of electron-impact excitation (EIE) of Ca IV (Ca IV + e \(\rightarrow\) Ca IV* + \(e^{\prime}\rightarrow\) Ca IV + \(\mathbf{h\nu+e^{\prime}}\)), for the first time using the relativistic Breit-Pauli R-Matrix method with a large close coupling wavefunction expansion of 54 fine structure levels belonging to n=2,3,4 complexes. Calcium lines in the infrared (IR) are expected to be observed by the high resolution James Webb Space Telescope. Our study predicts the presence of a strong 3.2 \(\mu\)m emission line in the IR, formed due to EIE of \(\mathbf{3p^{5}\ ^{2}\mathit{P}_{3/2}^{o}-3p^{5}\ ^{2}\mathit{P}_{1/2}^{o}}\) in Ca IV. The EIE collision strength (\(\mathbf{\Omega}\)) for the transition shows extensive resonances with an enhanced background, resulting in an effective collision strength (\(\mathbf{\Upsilon}\)) of 2.2 at about \(10^{4}\) K that increases to 9.66 around 3\(\mathbf{\times 10^{5}}\) K. The present results include \(\mathbf{\Omega}\) of all 1431 excitations among the 54 levels and \(\mathbf{\Upsilon}\) for a limited number of transitions of possible interest. We have found extensive resonances in the low energy region of \(\mathbf{\Omega}\), convergence of the resonances, and convergence of the partial waves with the 54 level wavefunction expansion. At high energy, \(\mathbf{\Omega}\) decreases beyond the resonance region for forbidden transitions, is almost constant or decreases slowly for dipole allowed transitions with low oscillator strengths (f-values), and rises with the Bethe-Coulomb behavior of ln(E) to almost a plateau for transitions with high f-values. The wavefunction of Ca IV was obtained from optimization of 13 configurations \(\mathbf{3s^{2}3p^{5}}\), \(\mathbf{3s3p^{6}}\), \(\mathbf{3s^{2}3p^{4}3d}\), \(\mathbf{3s^{2}3p^{4}4s}\), \(\mathbf{3s^{2}3p^{4}4p}\), \(\mathbf{3s^{2}3p^{4}4d}\), \(\mathbf{3s^{2}3p^{4}4f}\), \(\mathbf{3s^{2}3p^{4}5s}\), \(\mathbf{3s3p^{5}3d}\), \(\mathbf{3s3p^{5}4s}\), \(\mathbf{3s3p^{5}4p}\), \(\mathbf{3p^{6}3d}\), \(\mathbf{3s3p^{4}3d^{2}}\), each with the core configuration of \(\mathbf{1s^{2}2s^{2}2p^{6}}\), using the atomic structure program SUPERSTRUCTURE. They produce 387 fine structure levels. We report transition parameters - oscillator strengths, line strengths (\(\mathbf{S}\)) and \(\mathbf{A}\)-values for a total of 93296 electric dipole (E1), quadrupole (E2), octupole (E3), magnetic dipole (M1) and quadrupole (M2) transitions among these levels. Lifetimes of these levels are also presented. Collisional excitation, Photoexcitation, 3.2 micron line of Ca IV Footnote 1: Corresponding author: E-mail address autoionizing Rydberg state \[E_{x\nu l}=E^{**}(X^{+z}\nu l)=E_{x}-z^{2}/\nu^{2} \tag{2}\] before going out free. This intermediate state introduces a resonance in the collisional scattering. \(E_{x}\) is the excitation energy of the ion, and \(\nu\) and \(l\) are the effective quantum number and orbital angular momentum of the scattered electron. The present study implements a close coupling (CC) wavefunction expansion that produces the autoionizing resonances automatically. The other process, photo-excitation/de-excitation, forms a line as the ion absorbs or emits a photon \[X^{+z}+h\nu\leftrightarrow X^{+z*} \tag{3}\] This process occurs most commonly when there is a radiative source, such as a star illuminating the plasma.
Among the low ionization stages of Ca, Ca IV has been the least studied ion; no study of its electron impact excitation is found in the literature. EIE of Ca IV is the main focus of the present study. Among the past studies on energies and transitions, Sugar and Corliss [10] compiled the experimentally measured energy levels of Ca IV, which are available at the NIST website [11]. The radiative transition rates of Ca IV were reported by Naqvi [12], Varsavsky [13], Fawcett and Gabriel [14], Huang et al. [16], Wilson et al. [17], and Gabriel et al. [15], who identified several lines generated from observed UV transitions. The probability of detection of Ca IV lines has increased considerably with the high resolution observations of the James Webb Space Telescope (JWST) in the infrared region. The present work reports collisional excitation and photo-excitations for many levels of Ca IV, which include collision strengths (\(\Omega\)) and the Maxwellian averaged collision strengths, or effective collision strengths (\(\Upsilon\)), as well as the parameters \(f\)-, \(S\)- and \(A\)-values for radiative transitions. ## 2 Theoretical Approximation We give a brief outline of the theoretical background for electron impact excitation and radiative photo-excitations below as guidance for the reader. More details can be found, e.g., in Pradhan and Nahar [9]. We have treated EIE of Ca IV for collision strengths with the relativistic Breit-Pauli R-matrix (BPRM) method, as developed under the Iron Project (IP, [18, 19]). We used a wavefunction expansion in the close-coupling (CC) approximation that includes excitations to the n=2,3,4 levels in the target, and obtained collision strengths. We obtained radiative transition parameters for an extensive set of transitions using the relativistic Breit-Pauli approximation implemented in the atomic structure program SUPERSTRUCTURE (SS, [20, 21]). Although two approaches are used for collision and photo-excitation, the computations are related: BPRM calculations are initiated with the wavefunction expansion of the target ion, e.g. Ca IV, generated by the program SUPERSTRUCTURE (SS). We discuss the outline of collisional excitation first and then photo-excitations or radiative transitions. ### Breit-Pauli R-matrix (BPRM) calculations for EIE The BPRM Hamiltonian, as adopted under the Iron Project [18, 19], in atomic Rydberg units is given by \[H_{N+1}^{\rm BP}=\sum_{i=1}^{N+1}\left\{-\nabla_{i}^{2}-\frac{2Z}{r_{i}}\right\}+\sum_{j>i}^{N+1}\frac{2}{r_{ij}}+H_{N+1}^{\rm mass}+H_{N+1}^{\rm Dar}+H_{N+1}^{\rm so}. \tag{4}\] where the first three terms belong to the non-relativistic Hamiltonian and the last three terms are the one-body relativistic corrections, which are the mass, Darwin, and spin-orbit interaction terms, respectively. The BPRM codes include all of them and part of the two-body correction terms of the Breit interaction (e.g. [9]). One Rydberg (Ry) is half of a Hartree, giving the factor 2 in the terms. BPRM calculations start with the target ion wavefunction generated by SS and calculate the wavefunction of the total atomic system of the target ion and the interacting electron in the close coupling (CC) approximation.
In the CC approximation, the wavefunction of (e+ion) in a state \(SL\pi J\), where \(S\) is the total spin, \(L\) the total orbital, and \(J\) the total angular momentum, is expressed as \[\Psi_{E}(e+ion)=A\sum_{i}^{n}\chi_{i}(ion)\theta_{i}+\sum_{j}c_{j}\Phi_{j}(e+ion) \tag{5}\] In the first term \(\chi_{i}(ion)\) is the wavefunction of the target ion and \(\theta_{i}\) is that of the interacting electron, in channel \(S_{t}L_{t}\pi_{t}J_{t}k_{i}^{2}l(SL\pi J)\), where \(S_{t}L_{t}\pi_{t}J_{t}\) is the target ion state interacting with the projectile electron of energy \(k_{i}^{2}\) and orbital angular momentum \(l\). The sum represents the ground and various excited states of the target ion. \(A\) is the anti-symmetrization operator. In the second term, \(\Phi_{j}(e+ion)\) represents the (target+electron) wavefunction, basically part of the first term separated out to show the orthogonality condition of the interacting electron and the short range interaction. The close-coupling wavefunction expansion, which includes target ion excitations, produces the resonances inherently. The interference of the bound states of the target ion and the projectile electron continuum wavefunction in the transition matrix introduces the resonances. Substitution of the CC expansion in the Schrödinger equation with the Breit-Pauli Hamiltonian results in a set of coupled equations. The R-matrix method is used to solve this set of equations for the energy and wavefunctions of the (e+ion) system. The scattering matrix for transition of the target ion from state \(i\) to state \(k\) by collision, \({\bf S}_{SL\pi J}(S_{i}L_{i}J_{i}l-S_{k}L_{k}J_{k}l^{\prime})\), where \(SL\pi J\) is the (e+ion) state and \(l\) and \(l^{\prime}\) are the incident and scattered partial waves of the free electron, is derived from the reactance matrix of the incident wave (e.g. [9, 23, 24, 25]). The collision strength \(\Omega\) for electron impact excitation (EIE) is given by \[\Omega(S_{i}L_{i}J_{i}-S_{k}L_{k}J_{k})=\frac{1}{2}\sum_{SL\pi J}\sum_{l,l^{\prime}}(2J+1)|\mathbf{S}_{SL\pi J}(S_{i}L_{i}J_{i}l-S_{k}L_{k}J_{k}l^{\prime})|^{2} \tag{6}\] \(\Omega\) reveals the detailed resonant features of the collision. Plasma models use the temperature dependent quantity, the effective collision strength \(\Upsilon(T)\), which is obtained by averaging \(\Omega\) over the Maxwellian distribution function of the electrons at temperature \(T_{e}\) as \[\Upsilon_{ij}(T_{e})=\int_{0}^{\infty}\Omega_{ij}(E)\exp(-E/kT_{e})d(E/kT_{e}), \tag{7}\] where \(k\) is the Boltzmann constant and \(E\) is the energy of the projectile electron after the excitation, that is, the energy of the scattered electron. The excitation rate coefficient \(q_{ij}(T_{e})\) is related to the effective collision strength \(\Upsilon_{ij}\) as \[q_{ij}(T_{e})=\frac{8.63\times 10^{-6}}{g_{i}T_{e}^{1/2}}e^{-E_{ij}/kT_{e}}\Upsilon_{ij}(T_{e})\ cm^{3}/s, \tag{8}\] where \(g_{i}\) is the statistical weight of the initial level, \(T_{e}\) is in K, \(E_{ij}\) is the transition energy in Rydberg, and 1/kT = 157885/T. The high energy background of the collision strength shows certain general behaviors depending on the type of excitation. The background of \(\Omega\) is the smooth curve at the base of the resonant features. For forbidden transitions, \(\Omega\) decreases to almost zero with higher energy.
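Before turning to the high energy behavior for dipole allowed transitions, we illustrate Eqs. (7) and (8) with a minimal numerical sketch. This is our own illustration, not the production code used in this work (the program "ecs-omg.f" is discussed in the Computation section); all function names are ours, and \(\Omega\) is assumed to be tabulated on an ascending energy grid in Rydberg.

```python
import numpy as np

KELVIN_PER_RY = 157885.0  # 1/kT = 157885/T for E in Rydberg, as quoted in the text


def effective_collision_strength(E, omega, T):
    """Maxwellian average of Eq. (7): Upsilon(T) = int Omega(E) e^(-E/kT) d(E/kT).

    E     -- scattered-electron energies in Rydberg, ascending grid
    omega -- collision strengths Omega(E) tabulated on that grid
    T     -- electron temperature in Kelvin
    """
    x = np.asarray(E, dtype=float) * KELVIN_PER_RY / T          # E/kT
    return np.trapz(np.asarray(omega, dtype=float) * np.exp(-x), x)


def excitation_rate(T, g_i, E_ij, upsilon):
    """Rate coefficient of Eq. (8), in cm^3/s; E_ij is the transition energy in Ry."""
    return 8.63e-6 / (g_i * np.sqrt(T)) * np.exp(-E_ij * KELVIN_PER_RY / T) * upsilon
```

In practice the energy grid must be fine enough to resolve the near-threshold resonances, since they dominate \(\Upsilon\) at low temperatures.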
For dipole allowed transitions, using the Born approximation with Coulombic wavefunctions, \(\Omega\) shows a high energy limiting behavior given by the Coulomb-Bethe approximation \[\Omega_{ij}(E)=\frac{4g_{i}f_{ij}}{E_{ij}}\ln\frac{E}{E_{ij}}, \tag{9}\] where \(f_{ij}\) is the oscillator strength for a dipole allowed transition and E is the incident electron energy. In the high energy limit, \[\Omega_{ij}(E)\sim_{E\rightarrow\infty}d_{ij}\ln(E) \tag{10}\] where \(d_{ij}\) is proportional to the oscillator strength. The logarithmic function increases with increasing electron energy, but its rising trend slows down for very high values of the argument. Hence, for a low value of \(d\), \(\Omega\) may hardly change, since the logarithm is multiplied by a small number. For a high value of \(d\), \(\Omega\) will increase but will lead toward a plateau.

### Atomic structure calculations for radiative transitions

Theoretical details for obtaining radiative transition parameters through atomic structure calculations using program SUPERSTRUCTURE can be found, for example, in [9, 21]. The Hamiltonian includes the relativistic mass, Darwin and spin-orbit interaction correction terms, the full 2-body Breit interaction, and some additional two body terms. The interacting electron and core ion potential, implemented in program SS, is represented by the Thomas-Fermi-Amaldi-Dirac potential. The program uses a configuration interaction wavefunction expansion, which for a symmetry \(J\pi\) can be expressed as \[\Psi(J\pi)=\sum_{i=1}^{N}a_{i}\psi[C_{i}(J\pi)] \tag{11}\] where the \(a_{i}\) are the amplitudes or mixing coefficients of the wavefunctions \(\psi[C_{i}(J\pi)]\) of configurations \(C_{i}\) with symmetry \(J\pi\), and the sum is over all \(N\) configurations that can produce a level of symmetry \(J\pi\). The Hamiltonian matrix then yields \(N\) eigenvalues, each corresponding to the energy of one level of the symmetry. The accuracy of a level energy may depend on the size of the expansion, and its identification on the value of the mixing coefficient (e.g. [9]). The transition matrix element, for example for an electric dipole allowed transition (E1), is given by \(<\Psi_{B}||\mathbf{D}||\Psi_{B^{\prime}}>\) where \(\Psi_{B}\) and \(\Psi_{B^{\prime}}\) are the initial and final state bound wavefunctions, and \(\mathbf{D}=\sum_{i}\mathbf{r}_{i}\) is the dipole operator, where the sum is over the number of electrons. The line strength \(\mathbf{S}\) is obtained from the mod squared of the transition matrix, \[\mathbf{S}=|\left\langle\Psi_{\mathbf{f}}|\sum_{\mathbf{j=1}}^{\mathbf{N+1}}\mathbf{r_{j}}|\Psi_{\mathbf{i}}\right\rangle|^{\mathbf{2}} \tag{12}\] where \(\Psi_{i}\) and \(\Psi_{f}\) are the initial and final wavefunctions. The transition parameters, oscillator strength (\(f_{ij}\)) and radiative decay rate (\(A\)), can be obtained from the line strength as \[f_{ij}=\frac{E_{ji}}{3g_{i}}\mathbf{S},\ \ A_{ji}(sec^{-1})=\left[0.8032\times 10^{10}\frac{E_{ji}^{3}}{3g_{j}}\right]\mathbf{S} \tag{13}\] Transition probabilities for electric quadrupole (E2), magnetic dipole (M1), electric octupole (E3) and magnetic quadrupole (M2) transitions can be obtained from their respective line strengths (e.g. [9, 21]).
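The E1 relations of Eq. (13) above, together with the lifetime sum quoted just below (Eq. 14), are simple enough to sketch directly. The snippet is our own illustration (not code from SUPERSTRUCTURE), with hypothetical function names; energies are in Rydberg and the line strength \(S\) in atomic units.

```python
def f_value(E_ji, g_i, S):
    """Oscillator strength from line strength S, Eq. (13); E_ji in Ry."""
    return E_ji * S / (3.0 * g_i)


def a_value(E_ji, g_j, S):
    """E1 radiative decay rate A_ji in s^-1, Eq. (13); E_ji in Ry."""
    return 0.8032e10 * E_ji**3 * S / (3.0 * g_j)


def lifetime_seconds(a_values):
    """Lifetime of an excited level, Eq. (14): inverse of the summed A-values (s^-1).
    E.g., the summed A-value 6.541E-01 s^-1 of level 2 in Table 5 gives 1.529 s."""
    return 1.0 / sum(a_values)
```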
Lifetime of an excited level can be obtained from the inverse of the sum of all transition probabilities to lower levels, \[\tau_{i}(s)=1/[\sum_{j}A_{ji}(s^{-1})] \tag{14}\] In the atomic unit of time \(\tau_{0}=2.4191\times 10^{-17}\)s, the transition probabilities or radiative decay rates can be expressed as \(A_{ji}(s^{-1})=A_{ji}(a.u.)/\tau_{0}\).

## 3 Computation

The R-matrix calculations start with the target wavefunctions as an input. These wavefunctions are obtained from atomic structure calculations, mainly using program SUPERSTRUCTURE (SS) [20, 21]. The Ca IV wavefunctions, energies and the relevant radiative transition parameters were obtained from an optimized set of 13 configurations of the ion, \(3s^{2}3p^{5}\), \(3s3p^{6}\), \(3s^{2}3p^{4}3d\), \(3s^{2}3p^{4}4s\), \(3s^{2}3p^{4}4p\), \(3s^{2}3p^{4}4d\), \(3s^{2}3p^{4}4f\), \(3s^{2}3p^{4}5s\), \(3s3p^{5}3d\), \(3s3p^{5}4s\), \(3s3p^{5}4p\), \(3p^{6}3d\), \(3s3p^{4}3d^{2}\), with the same core configuration \(1s^{2}2s^{2}2p^{6}\) for each, using SS. The optimized Thomas-Fermi orbital scaling parameters are 1.26865(1s), 1.0395(2s), 1.04288(2p), 1.1(3s), 1.1(3p), 1.1(3d), 1.083(4s), 1.079(4p), 1.031(4d), 1.1(4f), 1.1(5s) respectively. The configuration set provided 387 fine structure levels, 54 of which were considered for the collisional excitation in the present study and hence used in the wavefunction expansion of Ca IV. Selecting a set of levels of the target or core ion from a large set is standard practice for an R-matrix calculation. Computations of collision strengths, which depend on the size of the wavefunction expansion, needed several hundreds of CPU hours on the high performance computers at the Ohio Supercomputer Center. The purpose of having an optimized set of configurations, which produced a large set of levels, is to ensure that contributions from configuration interaction are included in the wavefunctions of the levels, and thus to provide a set of levels, starting from the ground level, of higher accuracy for the CC wavefunction expansion. Inclusion of all 387 levels in the calculations would require considerably more computational time and is expected to be computationally prohibitive, as is very commonly the case, and hence impractical. The set of excited levels considered for an R-matrix calculation is typically based on the expected physics to be revealed, mainly the resonant features. No new physics was expected from levels beyond the 54th, since the resonances belonging to the higher levels would no longer be strong but would have weakened and converged to the background. This was also our finding, as demonstrated in the Results section. Table 1 presents the 54 fine structure levels of Ca IV included in the wavefunction expansion and compares the calculated energies from SS with experimental values tabulated by the National Institute of Standards and Technology (NIST) (www.nist.gov). The comparison shows that the present computed energies are generally within a few percent of the observed values. The accuracy of a level energy depends on how well the wavefunction expansion represents the level through interaction of the given set of configurations. For more precise energy positions of the resonances in the EIE collision strengths, we have replaced the calculated energies with the available observed energies in the BPRM calculations.
[Table 1 — columns: K, Configuration, Term, E\({}_{present}\) (Ry), E\({}_{NIST}\) (Ry), % diff — listing the 54 fine structure levels of Ca IV included in the wavefunction expansion; the body of the table is missing from the extracted source.]

It is important to ensure convergence of the contributions to the collision strengths \(\Omega\) from the number of partial waves and (e+ion) symmetries. We computed \(\Omega\) several times by varying the sets of partial waves \(l\), going up to 22, and (e+ion) symmetries \(J\leq 22\) of even and odd parities. We found that (i) the \(\Omega\) background is converged with the highest values \(l\) = 20 and \(J\pi\) = 11, and (ii) larger numbers of \(l\) and \(J\pi\) than these introduced computational instability, which gives NaN (Not a Number) for the collision strengths at various electron energies. Getting "NaN" for \(\Omega\) values is a known problem for an R-matrix calculation. NaNs are introduced when the computation passes through very small numbers. Usually the NaN points are deleted from the sets of \(\Omega\) values. However, we made attempts to get approximate values of \(\Omega\) at the NaN energies obtained with large \(l\) and \(J\) values by extrapolating or interpolating the neighboring \(\Omega\) points. A smooth background can be extrapolated, but resonances are not reproduced in \(\Omega\), as can be seen from the blue (with extrapolation) and red (no extrapolation) curves in Figure 1, which represent the same excitation of the ion. The resonant peaks of the blue curve, in which many NaN points were replaced with real numbers by interpolation, are lower than those of the red curve. Figure 1 also demonstrates the test of convergence of \(\Omega\) for a few sets of highest values of \(l\) and \(J\) for the excitations a) \({}^{2}P^{o}_{3/2}-^{2}P^{o}_{1/2}\) and b) \({}^{2}P^{o}_{1/2}-^{2}S_{1/2}\), computed on a coarse energy mesh. The red curves correspond to the use of \(l\) = 0 - 20 and \(J\pi\) = 0 - 11, the blue ones to \(l\) = 0 - 22 and \(J\pi\) = 0 - 21, and the magenta ones to \(l\) = 0 - 11 and \(J\pi\) = 0 - 10. \(\Omega\) in magenta (\(l\) = 0 - 11 and \(J\pi\) = 0 - 10) is considerably lower than the other curves, indicating that convergence of the contributions from 12 partial waves has not been reached. The blue and the red curves have about the same background, indicating that convergence in the contributions of partial waves has been reached. Hence the specifications of the red curve, \(l\) = 0 - 20 and \(J\pi\) = 0 - 11, have been used for the computation of \(\Omega\). Final \(\Omega\) values were obtained on fine energy meshes. We used a very fine energy mesh of \(\Delta E<10^{-6}\) Rydberg to resolve the near-threshold resonances. The above discussion concerns partial waves whose impact is noticeable. Beyond them, a top-up that gives some additional contributions is added. As top-up contributions, we included contributions of higher multipole potentials to \(\Omega\) using the option ipert=1 in STGF of the R-matrix codes. The other top-up contribution can come from higher partial waves, beyond the specified ones, using the option ipert=2 in STGF. The approximation incorporated in the R-matrix code STGF is most probably based on the treatment of higher partial waves by Burgess and Tully [22]. However, computation of the contributions of the higher partial waves not only takes much longer, it also is not done for most cases, as it stops due to numerical issues except for a few excitations and at some energies.
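Returning to the NaN patching described above, a minimal sketch of the interpolation step is given below. This is our own illustration of the idea, not the authors' actual procedure; as noted, it can only recover the smooth background, not resonances that fell on the NaN points.

```python
import numpy as np


def patch_nan_omega(E, omega):
    """Replace NaN collision-strength points by linear interpolation of the
    neighboring computed points (assumes an ascending energy grid E)."""
    E = np.asarray(E, dtype=float)
    omega = np.asarray(omega, dtype=float).copy()
    bad = np.isnan(omega)
    omega[bad] = np.interp(E[bad], E[~bad], omega[~bad])
    return omega
```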
We computed contributions of higher partial waves when it is possible to test them and found negligible contributions in increasing the \(\Omega\) values to affect the average collision strengths \(\Upsilon\). The problem is compensated, as done in the present case, by including larger number of partial waves without the approximation. Program "ecs-omg.f" [26] was used to calculate the effective collision strengths \(\Upsilon\), Eq.(7), at various temperatures where \(\Omega\) is integrated over the energy of the scattered electron from zero to a high value. The high energy limit is chosen to a value at which \(\Omega\) has diminished to a near zero value or has reached a plateau and the exponential factor of \(\Upsilon\) has approached a near zero value. \(\Omega\) points between the highest electron energy computed by the BPRM codes to the highest energy limit of the \(\Upsilon\) integral are obtained using the logarithmic behavior of Coulomb-Bethe approximation of Eq.(10). The radiative data of \(f-\) and \(A-\)values for dipole allowed photo-excitations (E1) have been reprocessed with experimental energies using code PRCSS (e.g. [2]). This allows to obtain the transition parameters at observed wavelengths. For the reprocessing, the transition energies were obtained from the experimental level energies and then multiplied, following Eq. 13. to the calculated line strengths from code SUPERSTRUCTURE. For the levels for which no observed or measured values are available, calculated energies were used. Figure 1: Demonstration of convergence of contributions of partial waves with various \(l\) and \(J\) values to \(\Omega\). The x-axis corresponds to the energy of the scattered electron after the excitation which starts with zero energy. Red curves indicate the best convergent condition. ## 4 Results and Discussions We present atomic parameters for electron impact excitation of (e + Ca IV \(\rightarrow\) Ca IV\({}^{*}\) + e' \(\rightarrow\) Ca IV + h\(\nu\) + e') and photo-excitation of (Ca IV + h\(\nu\)\(\leftrightarrow\) Ca IV\({}^{*}\)). The results for the collisional excitation are reported for the first time, as indicated by literature search. They are described below first followed by those for photoexcitation. ### Collisional excitation of Ca IV We discuss the characteristic features of collisional excitation with illustrative examples. The first excitation of the target is usually of particular interest because of its high probability through EIE and the emitted photon, typically of low energy, can travel for a long time without being absorbed. If the corresponding emission line is strong, it can be detected easily in low density plasmas and be used for identification of the ion and environmental diagnostics. The first excitation in Ca IV, \(3p^{5}\)\({}^{2}P_{3/2}^{o}-3p^{5}\)\({}^{2}P_{1/2}^{o}\), within the ground configuration is of particular importance since the wavelength of the emission line, 3.207 \(\mu\)m, is well within the high resolution IR wavelength detection range, 0.6 - 28.3 \(\mu\)m, of JWST, and could be used for diagnostics, abundances (e.g. [6]). Figure 2 upper panel presents collision strength for the first excitation \(\Omega(^{2}P_{3/2}^{o}-^{2}P_{1/2}^{o})\) of the target ion Ca IV with respect to scattered electron energy after the excitation. The electron energy is relative to the excitation threshold and hence starts at zero. 
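Before examining these resonance features, we sketch the Coulomb-Bethe tail extension used above when completing the \(\Upsilon\) integral to its high energy limit. This is a minimal illustration under stated assumptions, with our own function names, not the procedure implemented in "ecs-omg.f".

```python
import numpy as np


def extend_with_bethe_tail(E, omega, E_max, n_tail=200):
    """Extend a computed Omega(E) grid to E_max with the Coulomb-Bethe form of
    Eq. (10), Omega ~ d*ln(E), fixing d at the last computed point.

    Assumes the last point lies on the smooth high-energy background and that
    E is in Rydberg with E[-1] > 1 Ry (so ln(E) > 0).
    """
    E = np.asarray(E, dtype=float)
    omega = np.asarray(omega, dtype=float)
    d = omega[-1] / np.log(E[-1])
    E_tail = np.linspace(E[-1], E_max, n_tail)[1:]
    return np.concatenate([E, E_tail]), np.concatenate([omega, d * np.log(E_tail)])
```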
We note that the \(\Omega\) for this collisional excitation is quite strong, as it shows extensive resonances with enhanced background in the energy region between the first excited level \({}^{2}P_{1/2}^{o}\) and the next one, \({}^{2}S_{1/2}\) (indicated by arrows), and continues to be strong beyond it up to the \({}^{2}D_{3/2}\) level, the one before the last dipole allowed transition in the 54 level wavefunction expansion. Beyond \({}^{2}D_{3/2}\), the transitions are forbidden except one, and the resonances become weaker. There are 29 dipole allowed levels in total in this energy range. Each excitation of the target ion corresponds to a Rydberg series of resonances. Typically, the resonances corresponding to a dipole allowed excitation are visible while others are suppressed. The resonances belonging to very high energy levels are found to become weaker. Such a weakening trend indicates convergence of the resonant contributions to the collisional parameters. The lower panel of Figure 2 presents the effective collision strength (\(\Upsilon\)) for the \({}^{2}P_{3/2}^{o}-^{2}P_{1/2}^{o}\) excitation. Starting low at lower temperature, \(\Upsilon\) forms a shoulder bump at 10\({}^{4}\) K. Then it rises relatively quickly, reaching the high peak value of 9.66 at about 3\(\times\)10\({}^{5}\) K. The peak indicates the existence of a strong line of the transition in Ca IV which can be detected. The intensity of the line will depend on the plasma environment.

**Fig. 2** Upper panel: EIE collision strength (\(\Omega\)) of Ca IV for the first excitation \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-3s^{2}3p^{5}\ {}^{2}P^{o}_{1/2}\) of the ground level with respect to scattered electron energy in Ry units. Extensive resonances with enhanced background can be noted within the energy region of dipole allowed transitions from the ground level \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}\) up to \({}^{2}D^{o}_{1/2}\). Lower panel: Effective collision strength for the \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-3s^{2}3p^{5}\ {}^{2}P^{o}_{1/2}\) excitation in Ca IV showing a high peak value at about 3\(\times 10^{5}\) K, indicating a high probability of detection of the line by JWST.

We present illustrative examples of forbidden transitions in Ca IV. Figure 3 presents collision strengths for two forbidden excitations in Ca IV, a) \(\Omega(3s^{2}3p^{5}\ ^{2}P_{3/2}^{o}-3s^{2}3p^{4}4s\ ^{4}D_{7/2})\) and b) \(\Omega(3s^{2}3p^{5}\ ^{2}P_{3/2}^{o}-3s^{2}3p^{4}4s\ ^{4}F_{9/2})\). Both transitions show the presence of strong resonances in the lower energy region. The resonances become weaker in the higher energy region, indicating that the contribution of resonances is converging. This is the typical trend of \(\Omega\) for forbidden transitions. Both of these transitions lie in the extreme ultraviolet region, with wavelengths of 496 and 458 \(\AA\) respectively. Collision strengths for dipole allowed transitions may show a different trend in the high energy background from those of forbidden transitions. The dipole in the target can affect the partial waves of the incident electron and contribute to the collision strength. The contribution depends on the oscillator strength of the dipole transition. For stronger transitions, inclusion of a larger number of partial waves is important for converged contributions. Figure 4 presents features of \(\Omega\) (EIE) for two dipole allowed transitions to low lying excited levels, a) \(3s^{2}3p^{5}\ ^{2}P_{1/2}^{o}-3s3p^{6}\ ^{2}S_{1/2}\) (levels 2-3) at 670 \(\AA\) and b) \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-3s^{2}3p^{4}3d\ ^{2}D_{5/2}\) (levels 1-18) at 435 \(\AA\).
\(\Omega\) for these transitions shows the presence of prominent resonances in the lower energy region, which become weaker, converging to a smooth background at higher energy. It can be noted that the high energy background is decreasing very slowly or remaining almost constant. These transitions are much weaker compared to others, as indicated by their smaller values of the oscillator strengths and \(A\)-values presented in Table 2.

Figure 4: Features of \(\Omega(EIE)\) for two weaker dipole allowed transitions (oscillator strengths are given in Table 2): a) \(3s^{2}3p^{5}\ {}^{2}P^{o}_{1/2}-3s3p^{6}\ {}^{2}S_{1/2}\) (levels 2-3) and b) the relatively stronger transition \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-3s^{2}3p^{4}3d\ ^{2}D_{5/2}\) (levels 1-18). Resonances have converged to a smooth background at high energy, where the background remains almost constant or decreases slowly with increase of energy, typical for weak transitions.

Figure 5 presents \(\Omega\) for the dipole allowed excitations a) \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-(3s^{2}3p^{4}3d)^{2}D_{5/2}\) at 340 \(\AA\) and b) \(3s^{2}3p^{5}\ {}^{2}P^{o}_{3/2}-(3s^{2}3p^{4}4s)\ ^{2}P_{3/2}\) at 332 \(\AA\) in Ca IV. The high peak resonances are converging near 1 Ry. Although many more core ion excitations are included in the wavefunction expansion, the Rydberg series of resonances belonging to them have become weak, with almost no contribution to \(\Omega\). This indicates convergence of the wavefunction expansion for generating resonant features. However, the background of \(\Omega\) is rising with energy toward forming a plateau. This trend is in agreement with the expected Coulomb-Bethe behavior \(\Omega_{if}(E)\sim_{E\rightarrow\infty}d\ln(E)\) (Eq. 10) at high energy for transitions with stronger or larger values of the oscillator strength or \(A\)-values. The \(f\)- and \(A\)-values for the two transitions are given in Table 2. As discussed above, for these transitions a larger number of partial waves needs to be considered in calculating \(\Omega\). The rising trend toward a plateau may not emerge with an inadequate number of partial waves. In such cases, \(\Omega\) is often extrapolated with the Coulomb-Bethe form. Astrophysical models require the temperature dependent effective collision strength (\(\Upsilon\)). Table 3 presents \(\Upsilon\) for some excitations to levels that could be of importance for astrophysical applications. \(\Upsilon\) for any other excitation of the 1431 transitions can be computed by averaging \(\Omega\) over the Maxwellian distribution function at any temperature. The \(\Omega\) values, and \(\Upsilon\) for a number of excitations in addition to the ones presented here, are available electronically at NORAD-Atomic-Data [27].

### Energy levels and radiative transition parameters of Ca IV

We present about 93000 radiative transition rates for Ca IV obtained from atomic structure calculations in the Breit-Pauli approximation implemented in code SUPERSTRUCTURE. The 13 configurations of Ca IV, listed in the Computation section, resulted in 387 fine structure energy levels and 93,296 radiative transitions of the allowed (E1) and forbidden (E2, E3, M1, M2) types. Our calculated energies for Ca IV have been compared with the measured values in Table 1 in the Computation section. They are in good agreement with the observed energies available in the NIST table.
As seen in Table 1, the differences between calculated and measured energies are within 5% for most of the levels, except for the highest lying \({}^{4}D^{o}\) and \({}^{4}F^{o}\) states, where the differences are about 8%. The A-values have been benchmarked against the limited number of available data. The present transition probabilities are compared, in Table 4, with the only 3 available A-values in the NIST compilation, and with those available from other sources. The transition \(3s^{2}3p^{5}(^{2}P^{o}_{3/2})\to 3s^{2}3p^{5}(^{2}P^{o}_{1/2})\) can be of both M1 and E2 type. The present A-value for the M1 transition, 0.545, is in excellent agreement with 0.543 of Naqvi [12]. The present value is also in good agreement with 0.379 of Huang et al [16], given the typical magnitude of A-values of the order of \(10^{8}\) s\({}^{-1}\) or higher for E1 transitions. Similarly, for the very weak E2 transition, the present \(A\)-value, 3.82e-05 sec\({}^{-1}\), is also in very good agreement with 1.906E-05 sec\({}^{-1}\) of Huang et al [16]. For the first dipole allowed transitions \(3s^{2}3p^{5}(^{2}P^{o}_{3/2,1/2})\to 3s3p^{6}(^{2}S_{1/2})\), the present A-values agree very well with those of Wilson et al [17], but differ from those of Huang et al and Varsavsky [13], who also differ from each other and from Wilson et al. As Table 4 shows, the present results are in agreement with those of Wilson et al for other transitions. We provide two calculated A-values for the dipole allowed transitions in order to compare the accuracy: the first A-value has been obtained using the experimental transition energy and the second one, below it, using the calculated energy. We can see that they do not differ significantly from each other. Some differences among the results are expected due to the use of different optimizations of the configurations included in each calculation and to the accuracy of the methods used to calculate the transition parameters. The present work provides lifetimes of all 386 excited levels in a file that is available electronically at NORAD-Atomic-Data [27]. The lifetime of an excited level can be calculated if the A-values, or the radiative decay rates of the level to the lower levels, are known. Lifetimes are measurable at experimental set-ups. Table 5 presents lifetimes of a few levels to illustrate their values. For each excited level in the table, the level number, configuration number, spectroscopic notation and energy are given. This line is followed by the A-values of the level decaying to lower levels.

\begin{table} \begin{tabular}{l c c c c c c} \hline Figure & i - j & \(SL\pi J\) & & \(\lambda_{ij}\) & \(f_{ij}\) & \(A_{ji}\)(s\({}^{-1}\)) \\ & & \(i\) & \(j\) & (\(\AA\)) & & \\ \hline 4a & 2 - 3 & \({}^{2}P^{o}_{1/2}\) & \({}^{2}S_{1/2}\) & 670 & 1.55E-02 & 2.01E+08 \\ 4b & 1 -18 & \({}^{2}P^{o}_{3/2}\) & \({}^{2}D_{5/2}\) & 435 & 8.14E-03 & 1.98E+08 \\ 5a & 1 -33 & \({}^{2}P^{o}_{3/2}\) & \({}^{2}D_{5/2}\) & 319 & 2.60E+00 & 1.14E+11 \\ 5b & 1 -30 & \({}^{2}P^{o}_{3/2}\) & \({}^{2}P_{3/2}\) & 332 & 1.09E+00 & 7.10E+10 \\ \hline \end{tabular} \end{table} Table 2: Dipole allowed transitions between levels \(i\) and \(j\), their \(f\)- and A-values, and transition wavelength \(\lambda_{ij}\) in Angstrom units for the illustrated examples of \(\Omega\) of Ca IV in Figures 4 and 5. The level indices \(i\) and \(j\) correspond to those in Table 1.

[Table 3 — effective collision strengths \(\Upsilon\) of Ca IV for the excitations 1-2, 1-3, 2-3, 1-4 and 1-5 at temperatures logT; the body of this part of the table is missing from the extracted source.]
The A-values are added and the sum is inverted to obtain the lifetime. Levels decaying through the forbidden transitions (E2, E3, M1, M2) have longer lifetimes than those decaying through dipole allowed transitions (same-spin E1d and intercombination E1i).

[Table 3 (continued) — effective collision strengths \(\Upsilon\) for the excitations 1-6, 1-7, 1-12, 1-13 and 1-18 at temperatures logT; the body of this part of the table is missing from the extracted source.]

No lifetimes for Ca IV levels were found in the literature for comparison. However, their accuracies are related to those of the A-values, which have been discussed for Table 4.

## 5 Conclusion

We have studied collision strengths of Ca IV using a 54-level close coupling wavefunction expansion that corresponds to target ion excitations to high lying levels. This ensures inclusion of converged contributions of resonances generated by all the levels. We have demonstrated the effect of the number of partial waves on the collision strength \(\Omega\) and shown convergence of the partial waves contributing to the collision strengths. Features of \(\Omega\) show resonances in the low energy region, but they converge to the background well before reaching the highest, 54th, excitation of Ca IV. We find that \(\Omega\) of the 3.2 \(\mu\)m emission line, due to collisional excitation of \({}^{2}P_{3/2}^{o}-^{2}P_{1/2}^{o}\) of the ground configuration \(3s^{2}3p^{5}\), has extensive resonances with enhanced background in the low energy region. This has resulted in a strong effective collision strength \(\Upsilon\) with a peak around 3 \(\times 10^{5}\) K, indicating a distinct presence of an emission line when the environmental plasma effects are low. The 3.2 \(\mu\)m line is within the wavelength range of JWST. The present \(\Omega\) has shown the expected features at high electron energy, such as a decaying background for the forbidden transitions, slow decay or almost constant values for weak dipole transitions, and a rising trend of Coulomb-Bethe ln(E) behavior toward a plateau for strong dipole allowed transitions. We present a set of over 93,000 radiative transitions among 387 energy levels with orbitals going up to 5s in Ca IV. Results include lifetimes of all 386 excited levels. The present results are expected to be accurate and extensive enough for the two processes to provide complete astrophysical modeling for all practical purposes. All atomic data for energies, radiative transitions, collisional excitations, and effective collision strengths of a set of transitions are available online at the NORAD-Atomic-Data database at the Ohio State University at: [https://norad.astronomy.osu.edu/](https://norad.astronomy.osu.edu/) All computations were carried out on the high performance computers of the Ohio Supercomputer Center. BS acknowledges the IRSIP fellowship from the Government of Pakistan to carry out the research at the Ohio State University.

\begin{table} \begin{tabular}{c c c c c c} \hline K & KP & \(\lambda\) & Transition & A(Present) & A(Others) \\ & & (\(\AA\)) & & (sec\({}^{-1}\)) & (sec\({}^{-1}\)) \\ \hline 1 & 2 & & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{5}(^{2}P_{1/2}^{o})\) & M1:0.545 & M1:0.541[12],0.3786[16] \\ 1 & 2 & & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{5}(^{2}P_{1/2}^{o})\) & E2:3.82e-5 & E2:1.906E-5[16] \\ 1 & 3 & 656 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s3p^{6}(^{2}S_{1/2})\) & 5.11E+8 & 7.425E+8[17],1.20E+10[13], \\ & & & & 4.22E+8 & 1.09e+10[16] \\ 2 & 3 & 669.7 & \(3s^{2}3p^{5}(^{2}P_{1/2}^{o})\to 3s3p^{6}(^{2}S_{1/2})\) & 2.466E+8 & 3.529E+8[17], 5.4E+9[13] \\ & & & & 2.01E+8 & \\ 1 & 10 & 454.6 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{4}F_{5/2})\) & 1.84E+6 & 1.692E+6[17] \\ 1 & 12 & 543 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{2}P_{1/2})\) & 9.696E+6 & 7.889E+6[17] \\ & & & & 9.57E+6 & \\ 2 & 12 & 459.5 & \(3s^{2}3p^{5}(^{2}P_{1/2}^{o})\to 3s^{2}3p^{4}3d(^{2}P_{1/2})\) & 3.930E+7 & 3.503E+7[17] \\ & & & & 3.87E+7 & \\ 1 & 14 & 440 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{4}P_{1/2})\) & 1.350E+7 & 1.243E+7[17] \\ & & & & 1.31E+7 & \\ 1 & 15 & 439 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{4}P_{3/2})\) & 3.90E+6 & 5.367E+6[17] \\ & & & & 3.77E+6 & \\ 1 & 16 & 437.3 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{4}P_{5/2})\) & 2.740E+6 & 2.754E+6[17] \\ & & & & 2.66E+6 & \\ 1 & 17 & 437.8 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{2}D_{3/2})\) & 6.530E+7 & 5.201E+7[17] \\ & & & & 6.45E+7 & \\ 1 & 18 & 434.6 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{2}D_{5/2})\) & 1.920E+8 & 1.576E+8[17] \\ & & & & 1.90E+8 & \\ 1 & 27 & 341.3 & \(3s^{2}3p^{5}(^{2}P_{3/2}^{o})\to 3s^{2}3p^{4}3d(^{2}S_{1/2})\) & 6.23E+10 & 4.264E+10[17],6.543E+10[17] \\ & & & & 6.43E+10 & \\ \hline \end{tabular} \end{table} Table 4: Comparison of present A-values for Ca IV with those available in the literature. For the E1 transitions, the first A-value from the present work represents use of the experimental transition energy, while the second one (below it) uses the calculated energy. K and KP are the initial and final transition level indices (as given in Table 1), and \(\lambda\) is the transition wavelength in \(\AA\) units. The references are given in brackets.
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline Type & LSi & Ci & gi & lvi & LSj & Ci & gj & lvj & f(E1)/S(E2, & Aji & Eij \\ & & & & & & & & E3,M1,M2) & (s\({}^{-1}\)) & (\(\AA\)) \\ \hline lifetime: sslevel j= & 2, Cf= & 1, \({}^{2}\)P\({}^{o}_{1/2}\) & [E= 3.019E-02 Ryz & 3.3134852E+03 /cm] & & & & & \\ E2 & \({}^{2}P^{o}\) & 1 & 4 & 1 & \({}^{2}P^{o}\) & 1 & 2 & 2 & 1.54E+00 & 5.180E-05 & 3.0180E+04 \\ M1 & \({}^{2}P^{o}\) & 1 & 4 & 1 & \({}^{2}P^{o}\) & 1 & 2 & 1.33E+00 & 6.540E-01 & 3.0180E+04 \\ Summed A-values: Af (forbidden)= & 6.541E-01, Aa (allowed)= & 0.000E+00 s-1 & & & & & & \\ Total Sum(Af+Aa)= Afji (2 transitions) to the level= & 6.541E-01 s-1 & & & & & & & \\ Lifetime (=1/Aji)= & 1.529E+00 s & & & & & & & & \\ lifetime: sslevel j= & 7, cf= & 3, \({}^{4}\)D\({}_{1/2}\) & [E= 1.826E+00 Ry= 2.0041544E+05 /cm] & & & & & & \\ E1i & \({}^{2}P^{o}\) & 1 & 4 & 1 & \({}^{4}D\) & 3 & 2 & 7 & 1.01E-06 & 5.420E+04 & 4.9896E+02 \\ E1i & \({}^{2}P^{o}\) & 1 & 2 & 2 & 4\({}^{4}D\) & 3 & 2 & 7 & 3.96E-06 & 1.030E+05 & 5.0735E+02 \\ E2 & \({}^{2}S\) & 2 & 2 & 3 & \({}^{4}D\) & 3 & 2 & 7 & 0.00E+00 & 0.000E+00 & 1.7417E+03 \\ M1 & \({}^{2}S\) & 2 & 2 & 3 & \({}^{4}D\) & 3 & 2 & 7 & 3.18E-02 & 8.120E-04 & 1.7417E+03 \\ E2 & \({}^{4}D\) & 3 & 6 & 5 & \({}^{4}D\) & 3 & 2 & 7 & 1.15E-03 & 2.890E-12 & 2.0196E+05 \\ E2 & \({}^{4}D\) & 3 & 6 & 5 & \({}^{4}D\) & 3 & 2 & 7 & 1.15E-03 & 2.890E-12 & 2.0196E+05 \\ E2 & \({}^{4}D\) & 3 & 4 & 6 & \({}^{4}D\) & 3 & 2 & 7 & 2.69E-03 & 1.120E-13 & 4.5789E+05 \\ M1 & \({}^{4}D\) & 3 & 4 & 6 & \({}^{4}D\) & 3 & 2 & 7 & 5.97E+00 & 8.380E-04 & 4.5789E+05 \\ Summed A-values: Af (forbidden)= & 1.650E-03, Aa (allowed)= & 1.572E+05 s-1 & & & & & & \\ Total Sum(Af+Aa)= Afji(8 transitions) to the level= & 1.572E+05 s-1 & & & & & & & & \\ Lifetime (=1/Aji)= & 6.361E-06 s & & & & & & & & & \\ \hline \end{tabular} \end{table} Table 5: Lifetimes of a few levels illustrating the complete table of lifetimes of all 386 levels. Radiative decay rates of level j to various lower levels \(j\to i\) are given. Notation C is for configuration, g for statistical weight factor, lv for level, f for f-value for an E1 transition or S-values for E2, E3, M1, M2 transitions, A for A-value and E for transition energy. ## Declarations Both authors, S.N. Nahar and B. Shafique, contributed equally on the contents of the paper. While SNN trained BS, set up the project, wrote necessary program, and remained engaged in studying the project, BS picked up all aspects of computations, carried out computations, and was engaged in the analysis.
2301.05666
Beyond MP2 initialization for unitary coupled cluster quantum circuits
The unitary coupled cluster (UCC) ansatz is a promising tool for achieving high-precision results using the variational quantum eigensolver (VQE) algorithm in the NISQ era. However, results on quantum hardware are thus far very limited and simulations have only accessed small system sizes. We advance the state of the art of UCC simulations by utilizing an efficient sparse wavefunction circuit solver and studying systems up to 64 qubits. Here we report results obtained using this solver that demonstrate the power of the UCC ansatz and address pressing questions about optimal initial parameterizations and circuit construction, among others. Our approach enables meaningful benchmarking of the UCC ansatz, a crucial step in assessing the utility of VQE for achieving quantum advantage.
Mark R. Hirsbrunner, Diana Chamaki, J. Wayne Mullinax, Norm M. Tubman
2023-01-13T17:06:50Z
http://arxiv.org/abs/2301.05666v3
# Beyond MP2 initialization for unitary coupled cluster quantum circuits ###### Abstract The unitary coupled cluster singles and doubles (UCCSD) ansatz is a promising approach to prepare accurate quantum states for use in quantum algorithms. In this paper, we compared the performance of two methods for generating the initial UCCSD circuit parameters: CCSD and MP2. Our results, obtained through an efficient sparse wavefunction circuit solver, show that UCCSD circuits with CCSD parameterizations significantly outperformed those with MP2 parameterizations for systems of up to 64 qubits. These findings suggest that CCSD should be the preferred choice for generating initial parameters. _Introduction._-- Simulating many-body fermionic systems is a promising future application of quantum computing [1, 2, 3]. While it is not yet clear that quantum advantage can generically be achieved in this area [4], it is believed that phase estimation can solve ground state problems for molecular systems that are beyond the reach of classical computers. However, it remains an open question whether or not other approaches can achieve quantum advantage with fewer resources [5, 6, 7, 8]. Phase estimation and other algorithms benefit from, or even require, significant overlap between the trial quantum state and the true solution. Single Slater determinants, such as Hartree-Fock states [9], are often used as the trial state when solving for ground states, as they are assumed to produce a sufficiently large overlap with the ground state wavefunction in many cases [10, 11, 12]. Yet such single determinant states may not be sufficient for arbitrarily large system sizes [10, 13, 14]. Improving quantum state preparation techniques is a key step toward advancing quantum computing for quantum chemistry and other Hamiltonian simulation applications since many algorithms require high-quality initial quantum states. Noise and decoherence present another central difficulty of achieving quantum advantage in the current noisy intermediate-scale quantum (NISQ) era of quantum hardware [15]. The variational quantum eigensolver (VQE) is a quantum-classical hybrid algorithm that is particularly well-suited for NISQ devices [16, 17, 18]. While VQE does not provide exact ground state solutions like quantum phase estimation, the approximate wavefunctions produced by VQE are often sufficiently accurate to provide meaningful physical insights. Furthermore, these approximate solutions are well-suited for quantum state preparation for use in more accurate algorithms [19]. Despite its current popularity, VQE possesses a number of drawbacks. In particular, the classical optimization of circuit parameters presents many challenges, including barren plateaus (i.e., exponentially vanishing gradients in high dimensions), local minima, and saddle points [20, 21, 22, 23, 24]. Many approaches exist for minimizing the computational burden of classical optimization for VQE, with some proposals eschewing optimization entirely [25, 26, 27, 28]. The crux of several of these strategies is a focus on choosing high-quality initial parameters, shifting some of the computational burden from optimization to initialization. In this work we compare the utility of different initialization strategies for a particular VQE ansatz that is often employed in quantum chemistry problems, the unitary coupled cluster (UCC) ansatz [16, 19, 29, 30, 31, 32]. 
There are several proposed strategies for generating the initial parameters for the UCC ansatz [33, 34, 35, 31], including applications in which no optimization is performed on quantum hardware [28]. The most widely employed strategy generates parameters using classical second-order Møller-Plesset perturbation theory (MP2) [33, 34, 17, 31]. Another, less thoroughly studied approach is the use of the coupled cluster singles and doubles (CCSD) classical simulation method to generate initial parameters [36, 37]. The CCSD technique generally produces more accurate ground state energies than MP2 calculations, yet CCSD is rarely employed to initialize VQE circuits. This is curious, considering that neither technique is computationally burdensome for all but the largest of problems. This raises the question: Which technique produces superior initial parameters for UCC ansatzes, MP2 or CCSD? In this paper, we provide the first numerical study (to our knowledge) comparing the performance of UCC ansatzes prepared using parameters generated via MP2 and via CCSD. We employ an algorithm for the factorized form of UCC implemented using our state-of-the-art sparse wavefunction circuit solver, enabling us to study problems of up to 64 qubits [38, 39]. By calculating the ground state energy of a wide range of molecules using both MP2 and CCSD parameters in the UCC ansatz, we show conclusively that CCSD parameters outperform MP2, generating significantly more accurate ground state energies. _Technique._-- The UCC ansatz is an exponential operator acting on the Hartree-Fock reference wavefunction, defined as \[\ket{\Psi_{\text{UCC}}}=\exp\Bigl{(}\hat{T}-\hat{T}^{\dagger}\Bigr{)}\ket{\Psi_{0}} \tag{1}\] where the coupled cluster operator \(\hat{T}\) is \[\hat{T}=\sum_{i}^{\text{occ}}\sum_{a}^{\text{vir}}\theta_{i}^{a}\hat{a}_{a}^{\dagger}\hat{a}_{i}+\sum_{ij}^{\text{occ}}\sum_{ab}^{\text{vir}}\theta_{ij}^{ab}\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{j}\hat{a}_{i}+\cdots \tag{2}\] The \(\hat{a}^{\dagger}\) and \(\hat{a}\) operators are the second-quantized creation and annihilation operators, respectively, acting on the occupied molecular orbitals in the reference wavefunction, indexed by \(i,j,\ldots\), or the virtual orbitals, indexed by \(a,b,\ldots\). The parameters of the UCC ansatz are indicated by \(\theta\). We employ the factorized form of the UCC ansatz, which is given by \[\ket{\Psi_{\text{UCC}}}=\prod_{ij\cdots ab\cdots}^{\text{occ}}\hat{U}_{ij\cdots}^{ab\cdots}\ket{\Psi_{0}}, \tag{3}\] where the individual UCC exponential factors are defined as \[\hat{U}_{ij\cdots}^{ab\cdots}=\exp\Bigl{(}\theta_{ij\cdots}^{ab\cdots}(\hat{a}_{ij\cdots}^{ab\cdots}-\hat{a}_{ab\cdots}^{ij\cdots})\Bigr{)}. \tag{4}\] We only include single excitations (\(\hat{a}_{i}^{a}=\hat{a}_{a}^{\dagger}\hat{a}_{i}\)) and double excitations (\(\hat{a}_{ij}^{ab}=\hat{a}_{a}^{\dagger}\hat{a}_{b}^{\dagger}\hat{a}_{j}\hat{a}_{i}\)) in the ansatz, along with the conjugate deexcitation operators \(\hat{a}_{a}^{i}\) and \(\hat{a}_{ab}^{ij}\), an approximation to the full UCC ansatz known as the unitary coupled cluster singles and doubles (UCCSD) ansatz. We utilize a specific representation of the UCCSD ansatz that exploits the fact that each UCC factor \(\hat{U}_{ij\cdots}^{ab\cdots}\) can be expressed in terms of sines and cosines of the parameters, which can be efficiently computed on classical hardware [38].
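To illustrate the sine/cosine structure of a single factorized UCC factor, here is a minimal sketch in the spirit of Eq. (4). It is our own structural illustration, not the production solver of Refs. [38, 39]: on the pair of determinants connected by the excitation, the factor acts as a Givens rotation by \(\theta\), and as the identity elsewhere. Fermionic sign bookkeeping is deliberately omitted.

```python
import math


def apply_ucc_factor(wf, occ, vir, theta):
    """Apply one factorized UCC factor exp(theta*(excitation - deexcitation)) to a
    sparse wavefunction stored as {frozenset(occupied spin-orbitals): amplitude}.

    occ, vir -- spin-orbital indices removed/added by the excitation.
    NOTE: fermionic phases are omitted for brevity (structural sketch only).
    """
    occ, vir = frozenset(occ), frozenset(vir)
    c, s = math.cos(theta), math.sin(theta)
    out = {}
    for det, amp in wf.items():
        if occ <= det and not (vir & det):        # excitation applies: |D> -> |D'>
            new = (det - occ) | vir
            out[det] = out.get(det, 0.0) + c * amp
            out[new] = out.get(new, 0.0) + s * amp
        elif vir <= det and not (occ & det):      # de-excitation applies
            new = (det - vir) | occ
            out[det] = out.get(det, 0.0) + c * amp
            out[new] = out.get(new, 0.0) - s * amp
        else:                                     # factor acts as the identity
            out[det] = out.get(det, 0.0) + amp
    return out


# Example: a double excitation (0,1) -> (2,3) acting on the Hartree-Fock state {0,1}
wf = apply_ucc_factor({frozenset({0, 1}): 1.0}, (0, 1), (2, 3), 0.1)
```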
The order of the individual UCCSD factors is not strictly defined [40], and we chose to order them based on the magnitude of the parameter values (\(|\theta|\)), placing the factor \(\hat{U}_{ij\cdots}^{ab\cdots}\) that contains the largest parameter to the right in Equation (3). We refer to this as the "magnitude" ordering. We generate the conventional MP2 and CCSD UCCSD parameters using PySCF, noting that the MP2 parameterization does not include any single excitation operators [41]. We use a computationally efficient sparse wavefunction approach, limiting the number of determinants included in the wavefunction after each UCC factor is applied [39]. We do this by checking the number of determinants \(N\) in the wavefunction after applying each UCC factor. If \(N\) is greater than the desired number of determinants, \(N_{\text{WF}}\), we sort the amplitudes by magnitude and discard the determinants with the smallest amplitudes such that only \(N_{\text{WF}}\) determinants remain in the wavefunction.

_Results.--_ Here we report the correlation energies obtained from UCCSD circuits parameterized using MP2 and CCSD parameters for a wide range of molecules. For the molecules LiH, HF, NH\({}_{3}\), CH\({}_{4}\), H\({}_{2}\)O, N\({}_{2}\), F\({}_{2}\), and CH\({}_{2}\)O we use experimental geometries from the CCSDB database and employ the cc-pCVDZ basis set [42]. We also study the linear hydrogen chains H\({}_{8}\), H\({}_{10}\), H\({}_{12}\), and H\({}_{14}\), for which we use an interatomic distance of 1 Å, and a stretched geometry of H\({}_{10}\) with an interatomic distance of 1.5 Å, for all of which we employ the STO-6G basis set. Our sparse wavefunction circuit solver is limited to 64 qubits, so we include only the 32 lowest-energy molecular orbitals in each molecule [43]. Our approach has similar scaling to a time-dependent selected configuration interaction approach, which some of us have applied to larger systems in other contexts [25; 26]. Because we limit the number of determinants retained in the wavefunction to \(N_{\rm WF}\), we must study the dependence of the correlation energies on \(N_{\rm WF}\) and extrapolate to the large-\(N_{\rm WF}\) limit to obtain a converged result. Specifically, we calculate the correlation energy as a function of \(N_{\rm WF}\) up to \(N_{\rm WF}\) = 100,000 for each molecule, as shown in Fig. 1a for CH\({}_{2}\)O. We extrapolate to the large-\(N_{\rm WF}\) limit via a quadratic fitting of the data as a function of \(N_{\rm WF}^{-1}\), \[E=aN_{\rm WF}^{-2}+bN_{\rm WF}^{-1}+c, \tag{5}\] as shown in Fig. 1b.

Figure 1: (a) The UCC(MP2) (light blue solid line) and UCC(CCSD) (light purple dashed line) correlation energies of CH\({}_{2}\)O as functions of \(N_{\text{WF}}\). The dark purple dot-dashed and black dotted lines denote the CCSD and CCSD(T) correlation energies, respectively, and the black solid line marks the ASCI energy calculated using \(10^{6}\) determinants. (b) The UCC(MP2) (circles) and UCC(CCSD) (pentagons) correlation energies as a fraction of the CCSD(T) correlation energy plotted versus \(1/N_{\text{WF}}\). The solid lines are quadratic fits to the UCC(CCSD) and UCC(MP2) data, fitted using the twenty data points with the largest \(N_{\text{WF}}\). The dashed lines mark the \(y\)-intercepts of the fits. The dot-dashed and dotted lines indicate the ratio of the CCSD and CCSD(T) correlation energies to the ASCI correlation energy.
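As an aside before discussing the fits, the determinant truncation and "magnitude" factor ordering described above can be sketched as follows. This is our own minimal illustration; the actual solver of Refs. [38, 39] is more involved.

```python
def prune(wf, n_wf):
    """Keep only the n_wf determinants with the largest |amplitude|; applied
    after each UCC factor, mirroring the truncation described in the text."""
    if len(wf) <= n_wf:
        return wf
    kept = sorted(wf.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n_wf]
    return dict(kept)


def magnitude_ordering(factors):
    """'Magnitude' ordering: the factor with the largest |theta| sits rightmost
    in Eq. (3), i.e. it is applied to the reference first. `factors` is a list
    of (excitation, theta) pairs; the returned list is in application order."""
    return sorted(factors, key=lambda ft: abs(ft[1]), reverse=True)
```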
The fit uses the twenty data points at the largest values of \(N_{\rm WF}\) to produce the fit parameters. The \(y\)-intercept of the quadratic fit is the extrapolated correlation energy that would be obtained if we pruned no determinants during the calculation. Thus this is a prediction of the energy that would be produced on perfect quantum hardware. We refer to these extrapolated energies as the UCC(MP2) and UCC(CCSD) correlation energies, depending on the initial parameters used in the circuit. We report the CCSD(T), CCSD, UCC(MP2), UCC(CCSD), and full configuration interaction (FCI) correlation energies for the hydrogen chains and LiH in Table 1. Calculating the FCI energy for the remaining molecules is impractical, so we instead report the adaptive sampling configuration interaction (ASCI) correlation energies [44] for these molecules in Table 2, along with the CCSD(T), CCSD, UCC(MP2), and UCC(CCSD) correlation energies [45]. We also plot these energies as fractions of the best reference energy, either FCI or ASCI, for each molecule in Fig. 2. The UCC(CCSD) energy outperforms the UCC(MP2) energy by a wide margin in all cases, with a difference of approximately 15% of the reference energy for the hydrogen chains (including stretched H\({}_{10}\)) and differences ranging from 1.3% to 9.6% for the remaining molecules. Because the individual terms in the factorized form of the UCCSD ansatz do not necessarily commute, the ordering of the operators can have a significant impact on the accuracy of the ansatz [40]. To address this concern, we calculate the correlation energy of the molecules we study in this work using 100 random orderings of the UCCSD factors. We find that the standard deviation of the correlation energy is less than 0.1 mHa for molecules with equilibrium geometries, with the exception of F\({}_{2}\) and C\({}_{2}\), the standard deviations of which are 0.1 mHa and 0.4 mHa, respectively. Only the strongly correlated stretched geometry of H\({}_{10}\) has a significant standard deviation of 2.4 mHa. We set \(N_{\rm WF}\) to 10,000 for these calculations to reduce the computational burden, which likely artificially inflates the standard deviations. We conclude that factor ordering is significant only for strongly correlated molecules, in agreement with previous studies [40]. Importantly, we find that the magnitude ordering obtains energies close to the minimum energy produced by random orderings for all molecules besides CH\({}_{2}\)O, for which the magnitude ordering produced an energy approximately 0.15 mHa above the minimum.

Figure 2: The CCSD(T), CCSD, UCC(CCSD), and UCC(MP2) correlation energies as percentages of the best available reference energy (FCI for hydrogen chains and LiH, ASCI otherwise).

The UCC(CCSD) energy closely matches the CCSD energy for all molecules studied, with the exception of stretched H\({}_{10}\), and produces energies _lower_ than the CCSD energy for HF, H\({}_{2}\)O, and C\({}_{2}\). However, the MP2 and UCC(MP2) energies do not exhibit the remarkably good agreement obtained by the CCSD and UCC(CCSD) energies. Excluding the results for stretched H\({}_{10}\), the differences between the CCSD and UCC(CCSD) energies range between 0.0% and 2.1% with an average of 0.4% for the molecules we study, while for MP2 and UCC(MP2) they range between 0.4% and 19.7% with an average of 9.2%.
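For completeness, the extrapolation of Eq. (5) that underlies all of these comparisons amounts to the following sketch (our own illustration; function name is hypothetical):

```python
import numpy as np


def extrapolate_energy(n_wf, energies, n_fit=20):
    """Quadratic-in-1/N_WF extrapolation of Eq. (5): E = a/N^2 + b/N + c.

    Fits the n_fit points with the largest N_WF and returns c, the
    y-intercept, i.e. the predicted energy with no determinant pruning.
    """
    order = np.argsort(n_wf)[-n_fit:]                # largest-N_WF points
    x = 1.0 / np.asarray(n_wf, dtype=float)[order]
    y = np.asarray(energies, dtype=float)[order]
    a, b, c = np.polyfit(x, y, 2)                    # coefficients of x^2, x, 1
    return c
```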
These statistics show a clear advantage for UCC(CCSD). As such, the CCSD parameterization is likely better suited than MP2 for use in recent proposals for no-optimization strategies to obtain quantum advantage [28]. Classical coupled cluster techniques have well-known failure modes in which the obtained energies are not variational (dropping below the FCI results) or, even worse, diverge away from the physical ground state result. One such failure scenario can be seen in the chemistry of bond breaking, which we investigate here using the H\({}_{10}\) molecule with a stretched interatomic distance of 1.5 Å. The CCSD and CCSD(T) energies of this molecule are lower than the FCI energy, representing a well-known problem of classical coupled cluster techniques. Despite the failure of CCSD to produce an accurate energy for this molecule, the UCC circuit parameterized with CCSD must produce a variational energy, because the VQE approach is a wavefunction technique, whereas classical coupled cluster approaches in general are not. The UCC(CCSD) energy for stretched H\({}_{10}\) is 12.2% higher than the FCI energy, compared to 1.7% higher for the equilibrium geometry. These results show that the UCC ansatz parameterized with CCSD is robust to failures of the classical theory, but with some loss of accuracy. Regardless, our results show a close correspondence between the UCC(CCSD) and CCSD theories, and further study of this can help us understand the power of coupled cluster approaches on quantum hardware.

_Discussion_.-- In this paper we demonstrated through extensive calculations that CCSD parameterizations of the UCC ansatz consistently outperform their MP2 counterparts. As such, it is important to compare the computational costs of obtaining the CCSD and MP2 parameterizations. Although MP2 is faster and, in fact, often used as a starting point for coupled cluster simulations, CCSD nevertheless requires only reasonable classical computation resources for even moderately sized systems. For example, the CCSD calculations presented in this work and others run in minutes or less on a laptop [28; 35]. MP2 and CCSD runtimes scale as O(\(N^{5}\)) and O(\(N^{6}\)), respectively, making these prohibitively expensive algorithms in the large-\(N\) qubit limit, but it is unlikely that NISQ era quantum computers will exceed classically-accessible simulations of CCSD in the next few years. Classical coupled cluster simulations can be accelerated in various ways [46; 47], indicating that simulations involving hundreds of qubits to parameterize circuits are within reach.

\begin{table} \begin{tabular}{c c c c c c c} Mol & FCI & CCSD(T) & CCSD & UCC(CCSD) & MP2 & UCC(MP2) \\ \hline H\({}_{8}\) & 134.68 & 134.65 & 133.60 & 133.00 & 85.19 & 111.69 \\ H\({}_{10}\) & 167.78 & 167.64 & 165.77 & 164.86 & 107.62 & 139.08 \\ H\({}_{10}^{*}\) & 403.81 & 434.55 & 426.50 & 354.74 & 208.45 & 293.67 \\ H\({}_{12}\) & 200.90 & 200.62 & 197.81 & 196.62 & 130.27 & 166.44 \\ H\({}_{14}\) & 234.05 & 233.60 & 229.75 & 228.92 & 153.11 & 194.45 \\ LiH & 64.75 & 64.74 & 64.69 & 64.69 & 51.80 & 61.28 \\ \end{tabular} \end{table} Table 1: The FCI, CCSD(T), CCSD, UCC(CCSD), MP2, and UCC(MP2) correlation energies of the hydrogen chains and LiH. The UCC energies are obtained via the fitting procedure shown in Fig. 1. The row labeled H\({}_{10}^{*}\) uses a stretched geometry with an interatomic distance of 1.5 Å. All energies are reported as absolute values and in units of milliHartrees.

\begin{table} \begin{tabular}{c c c c c c c} Mol & ASCI & CCSD(T) & CCSD & UCC(CCSD) & MP2 & UCC(MP2) \\ \hline HF & 251.24 & 250.75 & 248.61 & 249.03 & 242.84 & 245.76 \\ NH\({}_{3}\) & 239.02 & 238.42 & 234.42 & 233.12 & 217.13 & 228.29 \\ CH\({}_{4}\) & 183.28 & 182.72 & 179.58 & 178.98 & 156.41 & 173.38 \\ H\({}_{2}\)O & 255.71 & 255.09 & 251.79 & 252.14 & 241.37 & 247.68 \\ N\({}_{2}\) & 365.24 & 363.53 & 351.26 & 351.13 & 347.65 & 338.73 \\ F\({}_{2}\) & 456.66 & 454.75 & 445.42 & 443.28 & 436.34 & 430.58 \\ CH\({}_{2}\)O & 284.90 & 283.59 & 275.63 & 269.73 & 261.75 & 260.67 \\ C\({}_{2}\) & 382.23 & 380.23 & 351.91 & 352.21 & 350.27 & 315.59 \\ \end{tabular} \end{table} Table 2: The ASCI, CCSD(T), CCSD, UCC(CCSD), MP2, and UCC(MP2) correlation energies of the larger molecules for which FCI is impractical. The UCC energies are obtained via the fitting procedure shown in Fig. 1. All energies are reported as absolute values and in units of milliHartrees.

Considering this, as well as the small prefactors of these runtime scalings and the efficiency of modern implementations of these techniques, CCSD is poised to remain an accessible and highly accurate method of UCC parameterization for the foreseeable future of the NISQ era. As such, our results suggest that CCSD should replace MP2 as the standard approach to classically parameterizing UCC circuits. Our results also display the power of our sparse wavefunction circuit solver, which enables us to perform UCC simulations at system sizes that have not been previously explored. Because our solver is capable of handling up to 64-qubit problems with its current implementation, we are able to access a regime in which it is possible to meaningfully test and differentiate VQE results. In this case, the ability to access large system sizes enabled us to explore a widely used parameterization for UCC circuits and challenge conventionally held ideas. There are a number of directions for future research. Testing our approach with higher order coupled cluster techniques on both the classical [48] and quantum side [35] is one such direction. The correspondence we identified between CCSD and UCC(CCSD) is weakened when classical CCSD breaks down, as seen for strongly correlated molecules like stretched H\({}_{10}\). These results motivate the study of more advanced classical approaches to parameterize UCC-type circuits. Establishing the correspondence between higher order classical coupled cluster theories and their UCC analogues, such as a UCC(CCSDT) circuit [35], would elucidate the full potential of the UCC ansatz.

_Acknowledgements.--_ We are grateful for support from NASA Ames Research Center. We acknowledge funding from the NASA ARMD Transformational Tools and Technology (TTT) Project. Part of this work is funded by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-Design Center for Quantum Advantage under Contract No. DE-SC0012704. Calculations were performed as part of the XSEDE computational Project No. TG-MCA93S030 on Bridges-2 at the Pittsburgh Supercomputing Center. M.H. and D.C. were supported by NASA Academic Mission Services, Contract No. NNA16BD14C. M.H. and D.C. participated in the Feynman Quantum Academy internship program.
2308.09557
Spaces not distinguishing ideal pointwise and $σ$-uniform convergence
We examine topological spaces not distinguishing ideal pointwise and ideal $\sigma$-uniform convergence of sequences of real-valued continuous functions defined on them. For instance, we introduce a purely combinatorial cardinal characteristic (a sort of the bounding number $\mathfrak{b}$) and prove that it describes the minimal cardinality of topological spaces which distinguish ideal pointwise and ideal $\sigma$-uniform convergence. Moreover, we provide examples of topological spaces (focusing on subsets of reals) that do or do not distinguish the considered convergences. Since similar investigations for ideal quasi-normal convergence instead of ideal $\sigma$-uniform convergence have been performed in literature, we also study spaces not distinguishing ideal quasi-normal and ideal $\sigma$-uniform convergence of sequences of real-valued continuous functions defined on them.
Rafał Filipów, Adam Kwela
2023-08-18T13:40:51Z
http://arxiv.org/abs/2308.09557v1
# Spaces not distinguishing ideal pointwise and \(\sigma\)-uniform convergence

###### Abstract.

We examine topological spaces not distinguishing ideal pointwise and ideal \(\sigma\)-uniform convergence of sequences of real-valued continuous functions defined on them. For instance, we introduce a purely combinatorial cardinal characteristic (a sort of the bounding number \(\mathfrak{b}\)) and prove that it describes the minimal cardinality of topological spaces which distinguish ideal pointwise and ideal \(\sigma\)-uniform convergence. Moreover, we provide examples of topological spaces (focusing on subsets of reals) that do or do not distinguish the considered convergences. Since similar investigations for ideal quasi-normal convergence instead of ideal \(\sigma\)-uniform convergence have been performed in the literature, we also study spaces not distinguishing ideal quasi-normal and ideal \(\sigma\)-uniform convergence of sequences of real-valued continuous functions defined on them.

Key words and phrases: ideal, filter, ideal convergence, statistical convergence, filter convergence, convergence of a sequence of functions, sigma-uniform convergence, quasi-normal convergence, pointwise convergence, bounding number, QN-spaces

## 1. Introduction

QN-spaces were introduced by Recław and Repický [7] and were thoroughly examined in the following years [3, 4, 6, 7, 8, 29, 30, 33, 36]. A notion of convergence (such as pointwise or quasi-normal convergence of sequences of functions) can often be generalized using ideals on the set of natural numbers. For instance, the ordinary convergence of sequences of reals generalized with the aid of the ideal of sets of asymptotic density zero is known as the statistical convergence [16, 20, 35]. It is known [13, Theorem 5.1] (see also [2, Theorem 1.2]) that quasi-normal convergence is equivalent to \(\sigma\)-uniform convergence. Thus, QN-spaces are in fact topological spaces not distinguishing pointwise and \(\sigma\)-uniform convergence of sequences of real-valued continuous functions defined on them.

The research on ideal analogues of QN-spaces, initiated by Das and Chandra [14] and continued by others [5, 27, 31, 32, 38, 39], has so far concentrated only on spaces not distinguishing ideal pointwise and ideal quasi-normal convergence of sequences of continuous functions. However, it is known [34] that ideal quasi-normal and ideal \(\sigma\)-uniform convergence are not the same for a large class of ideals. What is more, \(\sigma\)-uniform convergence seems to be better known than quasi-normal convergence, and the ideal analogue of \(\sigma\)-uniform convergence seems more natural than the ideal analogue of quasi-normal convergence (the latter was even initially introduced in two different ways [14, 19]). It seems that the research on ideal QN-spaces would be incomplete without studying spaces not distinguishing ideal pointwise and ideal \(\sigma\)-uniform convergence of sequences of real-valued continuous functions defined on them. Our paper is an attempt to fill this gap, and it is organized in the following way.

In Section 3, we show (Corollary 3.5) that every infinite space distinguishes between ideal uniform convergence and the other considered convergences (i.e. pointwise, \(\sigma\)-uniform and quasi-normal). Moreover, we show (Corollary 3.6) that a space does not distinguish ideal pointwise and \(\sigma\)-uniform convergence if and only if it simultaneously does not distinguish ideal pointwise and quasi-normal convergence and does not distinguish ideal quasi-normal and \(\sigma\)-uniform convergence.
In Section 4, we prove the main result of the paper (Corollary 4.6) which provides a purely combinatorial characterization of the minimal cardinality of a topological space which distinguishes ideal pointwise and ideal \(\sigma\)-uniform convergence of sequences of continuous functions. In Section 5, we examine various properties of the combinatorial cardinal characteristics introduced in the preceding section (some of these properties are used in the following sections). In Section 6, we show (Corollary 6.5) that the property of "not distinguishing ideal pointwise and \(\sigma\)-uniform convergence of continuous functions" is of a topological rather than set-theoretic nature. We also provide (Theorem 6.6), under CH, an example of an uncountable subspace of the reals revealing the above phenomenon. In Section 7, we show (Theorem 7.3) that the combinatorial cardinal characteristics introduced in Section 4 can be described in a uniform manner as the bounding numbers of binary relations. These descriptions are crucial for the results obtained in the following section. In Section 8, we construct (Theorem 8.2) a subset of the reals of the minimal size which distinguishes ideal pointwise and \(\sigma\)-uniform convergence. Finally, in Section 9, we show (Proposition 9.1) that consistently there exists a space which does not distinguish ordinary pointwise convergence and ordinary \(\sigma\)-uniform convergence but does distinguish statistical pointwise convergence and statistical \(\sigma\)-uniform convergence.

## 2. Preliminaries

By \(\omega\) we denote the set of all natural numbers. We identify a natural number \(n\) with the set \(\{0,1,\ldots,n-1\}\). We write \(A\subseteq^{*}B\) if \(A\setminus B\) is finite. For a set \(A\) and a cardinal number \(\kappa\), we write \([A]^{\kappa}=\{B\subseteq A:|B|=\kappa\}\), where \(|B|\) denotes the cardinality of \(B\). If \(A\) and \(B\) are two sets, then by \(A^{B}\) we denote the family of all functions \(f:B\to A\). If \(f\in A^{B}\) and \(C\subseteq B\), then \(f\upharpoonright C:C\to A\) is the restriction of \(f\) to \(C\). In the case \(B=\omega\), an element of \(A^{\omega}\) will sometimes be denoted by \((a_{n})\); by this we mean \(f:\omega\to A\) given by \(f(n)=a_{n}\) for all \(n\). For \(A\subseteq X\), we write \(\mathbf{1}_{A}\) to denote the characteristic function of \(A\), i.e. \(\mathbf{1}_{A}(x)=1\) for \(x\in A\) and \(\mathbf{1}_{A}(x)=0\) for \(x\in X\setminus A\). By \(\omega\), \(\omega_{1}\) and \(\mathfrak{c}\) we denote the first infinite cardinal, the first uncountable cardinal and the cardinality of \(\mathbb{R}\), respectively. By \(\mathrm{cf}(\kappa)\) we denote the cofinality of a cardinal \(\kappa\).

### Ideals

An _ideal on a set_ \(X\) is a family \(\mathcal{I}\subseteq\mathcal{P}(X)\) that satisfies the following properties:

1. if \(A,B\in\mathcal{I}\) then \(A\cup B\in\mathcal{I}\),
2. if \(A\subseteq B\) and \(B\in\mathcal{I}\) then \(A\in\mathcal{I}\),
3. \(\mathcal{I}\) contains all finite subsets of \(X\),
4. \(X\notin\mathcal{I}\).

An ideal \(\mathcal{I}\) on \(X\) is _tall_ if for every infinite \(A\subseteq X\) there is an infinite \(B\in\mathcal{I}\) such that \(B\subset A\). An ideal \(\mathcal{I}\) on \(X\) is a _P-ideal_ if for any countable family \(\mathcal{A}\subseteq\mathcal{I}\) there is \(B\in\mathcal{I}\) such that \(A\setminus B\) is finite for every \(A\in\mathcal{A}\).
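For instance, the summable ideal \(\mathcal{I}_{1/n}\) defined in Example 2.1 below is a P-ideal; we sketch the standard argument for the reader's convenience. Given \(A_{k}\in\mathcal{I}_{1/n}\) for \(k\in\omega\), choose \(m_{k}\in\omega\) with \(\sum_{n\in A_{k}\setminus m_{k}}\frac{1}{n+1}<2^{-k}\) and put \(B=\bigcup_{k\in\omega}(A_{k}\setminus m_{k})\). Then

\[\sum_{n\in B}\frac{1}{n+1}\leq\sum_{k\in\omega}\,\sum_{n\in A_{k}\setminus m_{k}}\frac{1}{n+1}<\sum_{k\in\omega}2^{-k}=2<+\infty,\]

so \(B\in\mathcal{I}_{1/n}\), while \(A_{k}\setminus B\subseteq A_{k}\cap m_{k}\) is finite for every \(k\in\omega\).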
An ideal \(\mathcal{I}\) on \(X\) is _countably generated_ if there is a countable family \(\mathcal{B}\subseteq\mathcal{I}\) such that for every \(A\in\mathcal{I}\) there is \(B\in\mathcal{B}\) with \(A\subseteq B\).

The vertical section of a set \(A\subseteq X\times Y\) at a point \(x\in X\) is defined by \((A)_{x}=\{y\in Y:(x,y)\in A\}\). For ideals \(\mathcal{I},\mathcal{J}\) on \(X\) and \(Y\), respectively, we define the following new ideals:

1. \(\mathcal{I}\otimes\mathcal{J}=\{A\subseteq X\times Y:\{x\in X:(A)_{x}\notin\mathcal{J}\}\in\mathcal{I}\}\),
2. \(\mathcal{I}\otimes\{\emptyset\}=\{A\subseteq X\times\omega:\{x\in X:(A)_{x}\neq\emptyset\}\in\mathcal{I}\}\),
3. \(\{\emptyset\}\otimes\mathcal{J}=\{A\subseteq\omega\times Y:(A)_{x}\in\mathcal{J}\text{ for all }x\in\omega\}\).

The following specific ideals will be considered in the paper (see e.g. [23] for these and many more examples).

**Example 2.1**.:

* \(\mathrm{Fin}=\{A\subseteq\omega:|A|<\omega\}\) is the ideal of all finite subsets of \(\omega\). It is a non-tall P-ideal.
* \(\mathrm{Fin}\otimes\{\emptyset\}\) is an ideal that is not tall and not a P-ideal.
* \(\{\emptyset\}\otimes\mathrm{Fin}\) is a non-tall P-ideal.
* \(\mathrm{Fin}\otimes\mathrm{Fin}\) is a tall non-P-ideal.
* \(\mathcal{I}_{1/n}=\{A\subseteq\omega:\sum_{n\in A}\frac{1}{n+1}<+\infty\}\) is a tall P-ideal called _the summable ideal_.
* \(\mathcal{I}_{d}=\{A\subseteq\omega:\lim_{n\to\infty}\frac{|A\cap n|}{n+1}=0\}\) is a tall P-ideal called the _ideal of sets of asymptotic density zero_.
* Let \(\Omega\) be the set of all clopen subsets of the Cantor space \(2^{\omega}\) having Lebesgue measure \(1/2\) (note that \(\Omega\) is countable). Then _Solecki's ideal_, denoted by \(\mathcal{S}\), is the collection of all subsets of \(\Omega\) that can be covered by finitely many sets of the form \(G_{x}=\{A\in\Omega:x\in A\}\) for \(x\in 2^{\omega}\). \(\mathcal{S}\) is a tall non-P-ideal.

### Ideal convergence

Let \(\mathcal{I}\) be an ideal on \(\omega\). A sequence \((x_{n})\) of reals is \(\mathcal{I}\)_-convergent to zero_ \((x_{n}\xrightarrow{\mathcal{I}}0)\) if

\[\{n\in\omega:|x_{n}|\geq\varepsilon\}\in\mathcal{I}\text{ for each }\varepsilon>0.\]

A sequence \((f_{n})\) of real-valued functions defined on a set \(X\) is

* \(\mathcal{I}\)_-pointwise convergent to zero_ \((f_{n}\xrightarrow{\mathcal{I}\text{-p}}0)\) if \(f_{n}(x)\xrightarrow{\mathcal{I}}0\) for all \(x\in X\), i.e.
\[\{n\in\omega:|f_{n}(x)|\geq\varepsilon\}\in\mathcal{I}\text{ for each }x\in X\text{ and }\varepsilon>0;\]

* \(\mathcal{I}\)_-uniformly convergent to zero_ \((f_{n}\xrightarrow{\mathcal{I}\text{-u}}0)\) if \[\{n\in\omega:\exists x\in X\,(|f_{n}(x)|\geq\varepsilon)\}\in\mathcal{I}\text{ for each }\varepsilon>0;\]
* \(\mathcal{I}\)_-\(\sigma\)-uniformly convergent to zero_ \((f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0)\) if there is a family \(\{X_{k}:k\in\omega\}\) of subsets of \(X\) such that \[\bigcup_{k\in\omega}X_{k}=X\text{ and }f_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{I}\text{-u}}0\text{ for every }k\in\omega;\]
* \(\mathcal{I}\)_-quasi-normally convergent to zero_ \((f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0)\) if there is a sequence \((\varepsilon_{n})\) of positive reals which is \(\mathcal{I}\)-convergent to zero and \[\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\}\in\mathcal{I}\text{ for each }x\in X.\]

### Classes of spaces

By \(\mathcal{C}(X)\) we denote the family of all continuous functions \(f:X\to\mathbb{R}\). For ideals \(\mathcal{I},\mathcal{J}\) on \(\omega\) and convergences \(c_{1},c_{2}\in\{\text{p},\text{qn},\sigma\text{-u},\text{u}\}\), we write \(X\in(\mathcal{I}\text{-}c_{1},\mathcal{J}\text{-}c_{2})\) if \(X\) is normal and \(f_{n}\xrightarrow{\mathcal{I}\text{-}c_{1}}0\iff f_{n}\xrightarrow{\mathcal{J}\text{-}c_{2}}0\) for any sequence \((f_{n})\) in \(\mathcal{C}(X)\); by \(\operatorname{non}(\mathcal{I}\text{-}c_{1},\mathcal{J}\text{-}c_{2})\) we denote the minimal cardinality of a normal space which is not in the class \((\mathcal{I}\text{-}c_{1},\mathcal{J}\text{-}c_{2})\). For instance, we write \(X\in(\mathcal{I}\text{-p},\mathcal{I}\text{-u})\) if \(X\) is normal and

\[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\iff f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\]

for any sequence \((f_{n})\) of continuous real-valued functions defined on \(X\).

## 3. Spaces not distinguishing uniform convergence

**Proposition 3.1**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\). Let \(X\) be a nonempty topological space. Let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\)._

1. \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\implies f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\)_._
2. _If_ \(\mathcal{I}\subseteq\mathcal{J}\)_, then_
   1. \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\)_,_
   2. \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\)_,_
   3. \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\)_,_
   4. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\)_._

Proof.: (1) The first implication is obvious, the second is proved in [14, Theorem 2.1 along with Note 2.1], whereas the third one is shown in [18, Proposition 4.4]. (2) Straightforward.
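None of the implications in item (1) can be reversed in general; a standard example for \(\mathcal{I}=\mathrm{Fin}\) already separates uniform from \(\sigma\)-uniform convergence. Take \(X=[0,1)\) and \(f_{n}(x)=x^{n}\). Then

\[\{n\in\omega:\exists x\in X\,(|f_{n}(x)|\geq 1/2)\}=\omega\notin\mathrm{Fin},\]

so \(f_{n}\xrightarrow{\mathrm{Fin}\text{-u}}0\) fails, whereas for the cover \(X_{k}=[0,1-\frac{1}{k+2}]\) we have \(\sup_{x\in X_{k}}|f_{n}(x)|=(1-\frac{1}{k+2})^{n}\to 0\), so \(f_{n}\upharpoonright X_{k}\xrightarrow{\mathrm{Fin}\text{-u}}0\) for every \(k\in\omega\) and hence \(f_{n}\xrightarrow{\mathrm{Fin}\text{-}\sigma\text{-u}}0\).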
**Proposition 3.2**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\omega\). Let \(X\) be a nonempty topological space. The following conditions are equivalent._

1. \(\mathcal{I}\subseteq\mathcal{J}\)_._
2. \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
3. \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
4. \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
5. \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
6. \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
7. \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._

_The above characterizations are presented graphically on Figure 1._

Proof.: First, we see that it is enough to prove the following chains of implications:

* \((1)\implies(2)\implies(3)\implies(6)\implies(1)\),
* \((1)\implies(4)\implies(5)\implies(6)\implies(1)\),
* \((1)\implies(7)\implies(1)\).

Second, we observe that the following implications easily follow from Proposition 3.1:

* \((1)\implies(2)\), \((2)\implies(3)\), \((3)\implies(6)\),
* \((1)\implies(4)\), \((4)\implies(5)\), \((5)\implies(6)\),
* \((1)\implies(7)\).

Third, we prove the remaining two implications: \((6)\implies(1)\) and \((7)\implies(1)\) simultaneously. Let \(A\in\mathcal{I}\). We define \(f_{n}:X\to\mathbb{R}\) by \(f_{n}(x)=\mathbf{1}_{A}(n)\) for every \(x\in X\). Then \(f_{n}\) are constant so continuous. Since \(A\in\mathcal{I}\), we have \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\) and \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\), so condition (6) (resp. (7)) yields \(f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\). Let \(x_{0}\in X\). Then \(A=\{n\in\omega:|f_{n}(x_{0})|\geq 1/2\}\in\mathcal{J}\). Hence \(\mathcal{I}\subseteq\mathcal{J}\).

**Proposition 3.3**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\omega\). Let \(X\) be a nonempty normal space. The following conditions are equivalent._

1. \(|X|<\omega\) _and_ \(\mathcal{I}\subseteq\mathcal{J}\)_._
2. \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._

Proof.: \((1)\implies(2)\) If \(\{X_{k}:k\in\omega\}\) is a cover of a finite space \(X\) witnessing \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\), then finitely many of the sets \(X_{k}\) already cover \(X\), and a finite union of sets from \(\mathcal{I}\) belongs to \(\mathcal{I}\), so \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\) and, since \(\mathcal{I}\subseteq\mathcal{J}\), also \(f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\).

\((2)\implies(1)\) First, we show that \(\mathcal{I}\subseteq\mathcal{J}\). Let \(A\in\mathcal{I}\). We define \(f_{n}:X\to\mathbb{R}\) by \(f_{n}(x)=\mathbf{1}_{A}(n)\) for every \(x\in X\). Then \(f_{n}\) are constant so continuous and \(f_{n}\xrightarrow{\mathcal{I}\text{-u}}0\), hence also \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\). Thus \(f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\), and consequently \(A=\{n\in\omega:\exists x\in X\,(|f_{n}(x)|\geq 1/2)\}\in\mathcal{J}\).

Second, we show that \(X\) is finite. Suppose, for sake of contradiction, that \(X\) is infinite. Since \(X\) is an infinite Hausdorff space, it is not difficult to show that there is an infinite sequence \((U_{n}:n\in\omega)\) of pairwise disjoint nonempty open subsets of \(X\) (see e.g. [22, Theorem 12.1, p. 45]). For each \(n\in\omega\), we pick \(x_{n}\in U_{n}\). Since \(X\) is a normal space, we can use Urysohn's Lemma to obtain that for every \(n\) there is a continuous function \(f_{n}:X\to[0,1]\) such that \(f_{n}(x_{n})=1\) and \(f_{n}(x)=0\) for every \(x\in X\setminus U_{n}\). If we show that \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) holds but \(f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) does not hold, we obtain a contradiction and the proof will be finished. Let us show \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\).
We put \(X_{0}=X\setminus\bigcup\{U_{k}:k<\omega\}\) and \(X_{k+1}=U_{k}\) for every \(k\in\omega\). Then \(X\) is covered by \(\{X_{k}:k\in\omega\}\). Since \(f_{n}\upharpoonright X_{0}\) is a constant function with value zero for every \(n\), \(f_{n}\upharpoonright X_{0}\xrightarrow{\mathcal{I}\text{-u}}0\). For \(k\in\omega\), \(f_{n}\upharpoonright X_{k+1}\) is a constant function with value zero for every \(n\neq k\), so \(f_{n}\upharpoonright X_{k+1}\xrightarrow{\mathcal{I}\text{-u}}0\). To show that \(f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) does not hold, it is enough to see that \(\{n\in\omega:\exists x\in X\,(|f_{n}(x)|>1/2)\}\supseteq\{n\in\omega:f_{n}(x_{n})=1\}=\omega\notin\mathcal{J}\).

**Corollary 3.4**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\omega\). Let \(X\) be a nonempty normal space. The following conditions are equivalent._

1. \(|X|<\omega\) _and_ \(\mathcal{I}=\mathcal{J}\)_._
2. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\iff f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
3. \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\iff f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
4. \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\iff f_{n}\xrightarrow{\mathcal{J}\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._

Proof.: It follows from Propositions 3.2 and 3.3.

**Corollary 3.5**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\omega\). Let \(X\) be a normal space._

1. _If_ \(\mathcal{I}\neq\mathcal{J}\)_, then_ \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{J}\text{-u})=\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{J}\text{-u})=\operatorname{non}(\mathcal{I}\text{-}\sigma\text{-u},\mathcal{J}\text{-u})=1\)_._
2. \(X\in(\mathcal{I}\text{-p},\mathcal{I}\text{-u})\iff X\in(\mathcal{I}\text{-qn},\mathcal{I}\text{-u})\iff X\in(\mathcal{I}\text{-}\sigma\text{-u},\mathcal{I}\text{-u})\iff|X|<\omega\)_._
3. \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-u})=\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{I}\text{-u})=\operatorname{non}(\mathcal{I}\text{-}\sigma\text{-u},\mathcal{I}\text{-u})=\omega\)_._
4. _There is no infinite normal space in the classes_ \((\mathcal{I}\text{-p},\mathcal{I}\text{-u})\)_,_ \((\mathcal{I}\text{-qn},\mathcal{I}\text{-u})\)_,_ \((\mathcal{I}\text{-}\sigma\text{-u},\mathcal{I}\text{-u})\)_._

Proof.: It follows from Corollary 3.4.

**Corollary 3.6**.: _Let \(\mathcal{I}\) be an ideal on \(\omega\). Let \(X\) be a normal space._

1. \(X\in(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\iff X\in(\mathcal{I}\text{-p},\mathcal{I}\text{-qn})\) _and_ \(X\in(\mathcal{I}\text{-qn},\mathcal{I}\text{-}\sigma\text{-u})\)_._
2. \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})=\min\{\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-qn}),\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{I}\text{-}\sigma\text{-u})\}\)_._

Proof.: (1) Since the implication "\(\Longleftarrow\)" is obvious, we only show the reversed one. Assume that \(X\in(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\). First we will show that \(X\) is in the class \((\mathcal{I}\text{-p},\mathcal{I}\text{-qn})\).
By Proposition 3.1, if \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\) then \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\), for every sequence \((f_{n})\) in \(\mathcal{C}(X)\). On the other hand, if \((f_{n})\) is a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\), then \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) (as \(X\) is in the class \((\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\)), so also \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\) (by Proposition 3.1).

Now we show that \(X\) is in the class \((\mathcal{I}\text{-qn},\mathcal{I}\text{-}\sigma\text{-u})\). By Proposition 3.1, if \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) then \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\), for every sequence \((f_{n})\) in \(\mathcal{C}(X)\). On the other hand, if \((f_{n})\) is a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\), then \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\) (by Proposition 3.1), so also \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) (as \(X\) is in the class \((\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\)).

(2) It follows from item (1).

## 4. Spaces not distinguishing \(\sigma\)-uniform convergence

In the sequel, we use the convention that \(\min\emptyset=\infty\) and \(\kappa<\infty\) for every cardinal \(\kappa\).

**Notation.** Let \(\mathcal{I}\) be an ideal on \(\omega\).

1. \(\widehat{\mathcal{P}}_{\mathcal{I}}=\{(A_{n})\in\mathcal{I}^{\omega}:A_{n}\cap A_{k}=\emptyset\text{ for all distinct }n,k\}\).
2. \(\mathcal{P}_{\mathcal{I}}=\{(A_{n})\in\widehat{\mathcal{P}}_{\mathcal{I}}:\bigcup\{A_{n}:n\in\omega\}=\omega\}\).
3. \(\mathcal{M}_{\mathcal{I}}=\{(E_{k})\in\mathcal{I}^{\omega}:\forall k\in\omega\,(E_{k}\subseteq E_{k+1})\}\).

**Definition 4.1**.: Let \(\mathcal{I},\mathcal{J},\mathcal{K}\) be ideals on \(\omega\).

1. \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})=\min\{|\mathcal{E}|:\mathcal{E}\subseteq\widehat{\mathcal{P}}_{\mathcal{K}}\wedge\forall(A_{n})\in\mathcal{P}_{\mathcal{J}}\,\exists(E_{n})\in\mathcal{E}\,(\bigcup_{n\in\omega}(A_{n+1}\cap\bigcup_{i\leq n}E_{i})\notin\mathcal{I})\}\).
2. \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})=\min\{|\mathcal{E}|:\mathcal{E}\subseteq\mathcal{M}_{\mathcal{I}}\wedge\forall(A_{n})\in\mathcal{M}_{\mathcal{J}}\,\exists(E_{n})\in\mathcal{E}\,\exists^{\infty}n\,(E_{n}\not\subseteq A_{n})\}\).
3. \(\mathrm{add}_{\omega}(\mathcal{I},\mathcal{J})=\min\{|\mathcal{A}|:\mathcal{A}\subseteq\mathcal{I}\wedge\forall(B_{n})\in\mathcal{J}^{\omega}\,\exists A\in\mathcal{A}\,\forall n\in\omega\,(A\not\subseteq B_{n})\}\).

In the sequel, we will use the following shorthands: \(\mathfrak{b}_{s}(\mathcal{I})=\mathfrak{b}_{s}(\mathcal{I},\mathcal{I},\mathcal{I})\), \(\mathfrak{b}_{\sigma}(\mathcal{I})=\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{I})\), \(\mathrm{add}_{\omega}(\mathcal{I})=\mathrm{add}_{\omega}(\mathcal{I},\mathcal{I})\).
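For a concrete instance of the convention \(\min\emptyset=\infty\), consider \(\mathrm{add}_{\omega}(\mathrm{Fin})\): no family \(\mathcal{A}\subseteq\mathrm{Fin}\) can be a witness, since for the sequence \((B_{n})\in\mathrm{Fin}^{\omega}\) given by \(B_{n}=n=\{0,\ldots,n-1\}\) we have

\[\forall A\in\mathcal{A}\,\exists n\in\omega\,(A\subseteq B_{n}),\]

so \(\mathrm{add}_{\omega}(\mathrm{Fin})=\min\emptyset=\infty\) (compare Proposition 5.3(2c) below: \(\mathrm{Fin}\) is countably generated).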
The cardinal \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})\) was introduced by Staniszewski [34, p. 1184] to characterize the smallest size of a space which is not \((\mathcal{I},\mathcal{J},\mathcal{K})\)-QN. Later Repický [31, 32], among others, characterized the same class of spaces in terms of another cardinal. In [39], Šupina introduced the cardinal \(\kappa(\mathcal{I},\mathcal{J})\), which is equal to \(\mathfrak{b}_{s}(\mathcal{J},\mathcal{J},\mathcal{I})\). In the case of maximal ideals, \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{I},\mathcal{I})\) and \(\mathfrak{b}_{s}(\mathcal{I},\mathrm{Fin},\mathrm{Fin})\) were studied by Canjar [11, 9, 10]. In the case of Borel ideals, \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{I},\mathcal{I})\) and \(\mathfrak{b}_{s}(\mathcal{I},\mathrm{Fin},\mathrm{Fin})\) were extensively studied in [17]. The cardinals \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) and \(\mathrm{add}_{\omega}(\mathcal{I},\mathcal{J})\) are introduced here, but the latter cardinal appeared, in a sense, in [34], where the author introduced the notion of \(\kappa\)-P\((\mathrm{Fin},\mathcal{J})\)-ideals, because it is not difficult to see that \(\mathrm{add}_{\omega}(\mathcal{I},\mathcal{J})=\min\{\kappa:\mathcal{I}\text{ is not }\kappa\text{-P}(\mathrm{Fin},\mathcal{J})\}\).

**Theorem 4.2**.: _Let \(\mathcal{I},\mathcal{J},\mathcal{K}\) be ideals on \(\omega\). Let \(X\) be a nonempty topological space._

1. _In the following list of conditions, each implies the next._
   1. \(|X|<\mathfrak{b}_{s}(\mathcal{J},\mathcal{J},\mathcal{I})\)_._
   2. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   3. \(\mathcal{I}\subseteq\mathcal{J}\)_._
2. _In the following list of conditions, each implies the next._
   1. \(|X|<\mathrm{add}_{\omega}(\mathcal{J},\mathcal{K})\)_._
   2. \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\implies f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   3. \(\mathcal{J}\subseteq\mathcal{K}\)_._
3. _In the following list of conditions, each implies the next._
   1. \(|X|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{K})\)_._
   2. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\) _for every sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   3. \(\mathcal{I}\subseteq\mathcal{K}\)_._

_The above implications are presented graphically on Figure 3._

Proof.: \((1a)\implies(1b)\) It follows from [39, Theorems 5.1 and 6.2].

\((1b)\implies(1c)\) Let \(A\in\mathcal{I}\). We define \(f_{n}:X\to\mathbb{R}\) by \(f_{n}(x)=\mathbf{1}_{A}(n)\) for every \(x\in X\). Then \(f_{n}\) are constant so continuous and \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\). Thus \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\). Then there exists a sequence \((\varepsilon_{n})\) of positive reals which is \(\mathcal{J}\)-convergent to zero and \(\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\}\in\mathcal{J}\) for every \(x\in X\). Let \(x_{0}\in X\). Then \(A=\{n\in\omega:|f_{n}(x_{0})|>1/2\}\subseteq\{n\in\omega:|f_{n}(x_{0})|>\varepsilon_{n}\wedge\varepsilon_{n}<1/2\}\cup\{n\in\omega:\varepsilon_{n}\geq 1/2\}\subseteq\{n\in\omega:|f_{n}(x_{0})|>\varepsilon_{n}\}\cup\{n\in\omega:\varepsilon_{n}\geq 1/2\}\in\mathcal{J}\).

\((2a)\implies(2b)\) If \(\mathcal{J}\not\subseteq\mathcal{K}\), then it is easy to see that \(\mathrm{add}_{\omega}(\mathcal{J},\mathcal{K})=1\). (Indeed, let \(E\in\mathcal{J}\setminus\mathcal{K}\) and \(\mathcal{A}=\{E\}\). Take any \((B_{n})\in\mathcal{K}^{\omega}\). Then \(E\not\subseteq B_{n}\) for every \(n\in\omega\).) Hence, there is nothing to prove in that case. Below we assume that \(\mathcal{J}\subseteq\mathcal{K}\). Suppose that \(|X|<\mathrm{add}_{\omega}(\mathcal{J},\mathcal{K})\) and let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\).
Then there exists a sequence \((\varepsilon_{n})\) of positive reals which is \(\mathcal{J}\)-convergent to zero and \(\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\}\in\mathcal{J}\) for every \(x\in X\). We define \(E^{x}=\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\}\) for every \(x\in X\). Since \(\{E^{x}:x\in X\}\subseteq\mathcal{J}\) and \(|X|<\mathrm{add}_{\omega}(\mathcal{J},\mathcal{K})\), there is \(\mathcal{B}=\{B_{k}:k\in\omega\}\subseteq\mathcal{K}\) such that for each \(x\in X\) there is \(k\in\omega\) with \(E^{x}\subseteq B_{k}\). We define \(X_{k}=\{x\in X:E^{x}\subseteq B_{k}\}\) for each \(k\in\omega\). It is easy to see that \(X=\bigcup\{X_{k}:k\in\omega\}\), and we show that \(f_{n}\upharpoonright X_{k}\) converges \(\mathcal{K}\)-uniformly to \(0\) for every \(k\in\omega\). Fix any \(k\in\omega\) and \(\varepsilon>0\). Since \(\mathcal{J}\subseteq\mathcal{K}\) and \(\varepsilon_{n}\xrightarrow{\mathcal{J}}0\), the set \(C_{\varepsilon}=\{n\in\omega:\varepsilon_{n}\geq\varepsilon\}\in\mathcal{K}\). For every \(x\in X_{k}\), we have \(\{n\in\omega:|f_{n}(x)|\geq\varepsilon\}\subseteq\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\wedge\varepsilon>\varepsilon_{n}\}\cup\{n\in\omega:\varepsilon_{n}\geq\varepsilon\}\subseteq E^{x}\cup C_{\varepsilon}\subseteq B_{k}\cup C_{\varepsilon}\). Consequently, \(\{n\in\omega:\exists x\in X_{k}\,(|f_{n}(x)|\geq\varepsilon)\}\subseteq B_{k}\cup C_{\varepsilon}\in\mathcal{K}\).

\((2b)\implies(2c)\) Let \(A\in\mathcal{J}\). We define \(f_{n}:X\to\mathbb{R}\) by \(f_{n}(x)=\mathbf{1}_{A}(n)\) for every \(x\in X\). Then \(f_{n}\) are constant so continuous and \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\). Thus \(f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\). Then there exists a cover \(\{X_{k}:k\in\omega\}\) of \(X\) such that \(f_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{K}\text{-u}}0\) for every \(k\in\omega\). Let \(x_{0}\in X\) and \(k_{0}\in\omega\) be such that \(x_{0}\in X_{k_{0}}\). Then \(A=\{n\in\omega:|f_{n}(x_{0})|>1/2\}\subseteq\{n\in\omega:\exists x\in X_{k_{0}}\,(|f_{n}(x)|>1/2)\}\in\mathcal{K}\).

\((3a)\implies(3b)\) Suppose that \(|X|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{K})\) and let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\). For every \(x\in X\) and \(k\in\omega\) define:

\[E^{x}_{k}=\left\{n\in\omega:|f_{n}(x)|\geq\frac{1}{k+1}\right\}.\]

Observe that \(E^{x}_{k}\in\mathcal{I}\) and \(E^{x}_{k}\subseteq E^{x}_{k+1}\) for all \(x\in X\) and \(k\in\omega\), i.e., \((E^{x}_{k})\in\mathcal{M}_{\mathcal{I}}\) for all \(x\in X\). Since the family \(\mathcal{E}=\{(E_{k}^{x}):x\in X\}\) has cardinality \(|X|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{K})\), there is \((A_{k})\in\mathcal{M}_{\mathcal{K}}\) such that for each \(x\in X\) there is \(m_{x}\in\omega\) such that \(E_{k}^{x}\subseteq A_{k}\) for all \(k\geq m_{x}\). Define \(X_{m}=\{x\in X:m=m_{x}\}\) and note that \(\bigcup_{m\in\omega}X_{m}=X\). We claim that \(f_{n}\upharpoonright X_{m}\) converges \(\mathcal{K}\)-uniformly to \(0\) for every \(m\in\omega\). Fix any \(m\in\omega\) and \(\varepsilon>0\). Let \(k\in\omega\) be such that \(k\geq m\) and \(\frac{1}{k+1}<\varepsilon\). Since \(A_{k}\in\mathcal{K}\), to finish the proof it suffices to show that \(|f_{n}(x)|<\varepsilon\) for every \(x\in X_{m}\) and \(n\in\omega\setminus A_{k}\). Fix \(x\in X_{m}\) and \(n\in\omega\setminus A_{k}\). Since \(k\geq m=m_{x}\), we have \(E_{k}^{x}\subseteq A_{k}\).
Hence, \(\omega\setminus E_{k}^{x}\supseteq\omega\setminus A_{k}\ni n\). Thus, \(|f_{n}(x)|<\frac{1}{k+1}<\varepsilon\) and we are done.

\((3b)\implies(3c)\) It follows from item (1), because \(f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{K}\text{-qn}}0\) by Proposition 3.1.

**Corollary 4.3**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be ideals on \(\omega\). If \(\mathcal{I}\neq\mathcal{J}\), then \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{J}\text{-}\sigma\text{-u})=\operatorname{non}(\mathcal{I}\text{-p},\mathcal{J}\text{-qn})=\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{J}\text{-}\sigma\text{-u})=1\)._

Proof.: It follows from Proposition 3.2 and Theorem 4.2.

**Proposition 4.4**.: _Let \(\mathcal{I}\) be an ideal on \(\omega\). Let \(X\) be a topological space and suppose that \(X=\bigcup\{X_{\alpha}:\alpha<\kappa\}\). Let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\)._

1. _If_ \(\kappa<\mathfrak{b}_{s}(\mathcal{I})\) _and_ \(f_{n}\upharpoonright X_{\alpha}\xrightarrow{\mathcal{I}\text{-qn}}0\) _for every_ \(\alpha<\kappa\)_, then_ \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\)_._
2. _If_ \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{I})\) _and_ \(f_{n}\upharpoonright X_{\alpha}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) _for every_ \(\alpha<\kappa\)_, then_ \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\)_._

Proof.: (1) For each \(\alpha<\kappa\), there is a sequence \((\varepsilon_{n}^{\alpha})\) of positive reals which is \(\mathcal{I}\)-convergent to zero and \(A_{x,\alpha}=\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}^{\alpha}\}\in\mathcal{I}\) for every \(x\in X_{\alpha}\). For each \(n\in\omega\), we define \(\phi_{n}:\kappa\to\mathbb{R}\) by \(\phi_{n}(\alpha)=\varepsilon_{n}^{\alpha}\) for each \(\alpha\in\kappa\). Equipping \(\kappa\) with the discrete topology, the functions \(\phi_{n}\) are continuous. Since \(\phi_{n}\xrightarrow{\mathcal{I}\text{-p}}0\) and \(\kappa<\mathfrak{b}_{s}(\mathcal{I})\), we obtain that \(\phi_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\) (by Theorem 4.2(1)). Thus, there is a sequence \((\varepsilon_{n})\) of positive reals which is \(\mathcal{I}\)-convergent to zero and \(B_{\alpha}=\{n\in\omega:|\phi_{n}(\alpha)|\geq\varepsilon_{n}\}\in\mathcal{I}\) for every \(\alpha\in\kappa\). We claim that the sequence \((\varepsilon_{n})\) also witnesses \(f_{n}\xrightarrow{\mathcal{I}\text{-qn}}0\). Take any \(x\in X\). There is \(\alpha<\kappa\) with \(x\in X_{\alpha}\). Then \(\{n\in\omega:|f_{n}(x)|\geq\varepsilon_{n}\}\subseteq\{n:|f_{n}(x)|\geq\varepsilon_{n}^{\alpha}\wedge\varepsilon_{n}^{\alpha}<\varepsilon_{n}\}\cup\{n\in\omega:\varepsilon_{n}^{\alpha}\geq\varepsilon_{n}\}\subseteq A_{x,\alpha}\cup B_{\alpha}\in\mathcal{I}\).

(2) If \(\kappa\) is finite, then the result is obvious. If \(\kappa\) is infinite, then \(\kappa\cdot\omega=\kappa\), so without loss of generality we can assume that \(f_{n}\upharpoonright X_{\alpha}\xrightarrow{\mathcal{I}\text{-u}}0\) for every \(\alpha<\kappa\). Now, we define \(A_{k}^{\alpha}=\{n\in\omega:\exists x\in X_{\alpha}\,(|f_{n}(x)|>\frac{1}{k+1})\}\) for every \(\alpha<\kappa\) and \(k\in\omega\).
Since \((A_{k}^{\alpha})\in\mathcal{M}_{\mathcal{I}}\) for every \(\alpha<\kappa\) and \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{I})\), there is \((B_{n})\in\mathcal{M}_{\mathcal{I}}\) such that for each \(\alpha<\kappa\) there is \(k_{\alpha}\in\omega\) such that \(A_{k}^{\alpha}\subseteq B_{k}\) for every \(k\geq k_{\alpha}\). For each \(k\in\omega\), we define \(Y_{k}=\bigcup\{X_{\alpha}:k_{\alpha}=k\}\). Then \(X=\bigcup\{Y_{k}:k\in\omega\}\), and once we show that \(f_{n}\upharpoonright Y_{k}\xrightarrow{\mathcal{I}\text{-u}}0\) for each \(k\in\omega\), the proof will be finished. Take any \(k\in\omega\) and \(\varepsilon>0\). Let \(i\in\omega\) be such that \(\varepsilon>\frac{1}{i+1}\) and \(i\geq k\). Then \(\{n\in\omega:\exists x\in Y_{k}\,(|f_{n}(x)|\geq\varepsilon)\}\subseteq\{n\in\omega:\exists x\in Y_{k}\,(|f_{n}(x)|\geq\frac{1}{i+1})\}\subseteq\{n\in\omega:\exists\alpha<\kappa\,\exists x\in X_{\alpha}\,(k_{\alpha}=k\wedge|f_{n}(x)|\geq\frac{1}{i+1})\}\subseteq B_{i}\in\mathcal{I}\).

**Theorem 4.5**.: _Let \(\mathcal{I},\mathcal{J},\mathcal{K}\) be ideals on \(\omega\). Let \(X\) be a discrete topological space._

1. _The following conditions are equivalent._
   1. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\) _for any sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   2. \(|X|<\mathfrak{b}_{s}(\mathcal{J},\mathcal{J},\mathcal{I})\)_._
2. _The following conditions are equivalent._
   1. \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\) _for any sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   2. \(|X|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{K})\)_._
3. _The following conditions are equivalent._
   1. \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\implies f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\) _for any sequence_ \((f_{n})\) _in_ \(\mathcal{C}(X)\)_._
   2. \(|X|<\operatorname{add}_{\omega}(\mathcal{J},\mathcal{K})\)_._

Proof.: (1) It follows from [39, Theorems 5.1 and 6.2] and [34, Theorem 4.9(1)], as the property \(W(\mathcal{J},\mathcal{J},\mathcal{I})\) from [34] is equivalent to \(\mathcal{J}\) being a "weak \(\operatorname{P}(\mathcal{I})\)-ideal" from [39].

\((2a)\implies(2b)\) Enumerate \(X=\{x_{\alpha}:\alpha<|X|\}\) and fix any \(\mathcal{E}=\{(E_{k}^{\alpha}):\alpha<|X|\}\subseteq\mathcal{M}_{\mathcal{I}}\). We need to show that \(\mathcal{E}\) is not a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{K})\), i.e. there is \((A_{k})\in\mathcal{M}_{\mathcal{K}}\) such that for each \(\alpha<|X|\) there is \(m\in\omega\) such that \(E_{k}^{\alpha}\subseteq A_{k}\) for all \(k\geq m\). Define functions \(f_{n}:X\to\mathbb{R}\) by:

\[f_{n}(x_{\alpha})=\begin{cases}\frac{1}{k+1},&\text{if }n\in E_{k}^{\alpha}\setminus E_{k-1}^{\alpha},\\ 0,&\text{otherwise}\end{cases}\]

for every \(\alpha<|X|\) (here we put \(E_{-1}^{\alpha}=\emptyset\)). Since \(X\) is _discrete_, the functions \(f_{n}\) are continuous for every \(n\). Observe that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\), since for each \(x\in X\) and \(k\in\omega\) we have:

\[\left\{n\in\omega:|f_{n}(x)|\geq\frac{1}{k+1}\right\}=E_{k}^{\alpha}\in\mathcal{I},\]

where \(\alpha<|X|\) is given by \(x=x_{\alpha}\). By our assumption, \(f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\).
Thus, there is a sequence \((X_{m})\) of subsets of \(X\) such that \(\bigcup_{m}X_{m}=X\) and \(f_{n}\upharpoonright X_{m}\xrightarrow{\mathcal{K}\text{-u}}0\) for all \(m\in\omega\), i.e.,

\[B_{m,k}=\left\{n\in\omega:\exists x\in X_{m}\,\left(|f_{n}(x)|\geq\frac{1}{k+1}\right)\right\}\in\mathcal{K}\]

for every \(k,m\in\omega\). Define \(A_{k}=B_{0,k}\cup B_{1,k}\cup\ldots\cup B_{k,k}\in\mathcal{K}\) for all \(k\in\omega\). Note that \(A_{k}\subseteq B_{0,k+1}\cup B_{1,k+1}\cup\ldots\cup B_{k,k+1}\subseteq A_{k+1}\) for every \(k\in\omega\). We claim that \((A_{k})\in\mathcal{M}_{\mathcal{K}}\) is as needed, i.e., for each \(\alpha<|X|\) there is \(m\in\omega\) such that \(E_{k}^{\alpha}\subseteq A_{k}\) for all \(k\geq m\). Fix \(\alpha<|X|\) and let \(m\in\omega\) be such that \(x_{\alpha}\in X_{m}\). Fix any \(k\geq m\) and \(n\in E_{k}^{\alpha}\). Then \(f_{n}(x_{\alpha})\geq\frac{1}{k+1}\). Since \(x_{\alpha}\in X_{m}\) and \(k\geq m\), \(n\in B_{m,k}\subseteq B_{0,k}\cup B_{1,k}\cup\ldots\cup B_{k,k}=A_{k}\). As \(n\) was arbitrary, we can conclude that \(E_{k}^{\alpha}\subseteq A_{k}\). This finishes the proof.

\((2b)\implies(2a)\) It follows from Theorem 4.2(3).

\((3a)\implies(3b)\) Enumerate \(X=\{x_{\alpha}:\alpha<|X|\}\) and fix any \(\mathcal{A}=\{A_{\alpha}:\alpha<|X|\}\subseteq\mathcal{J}\). We need to show that \(\mathcal{A}\) is not a witness for \(\operatorname{add}_{\omega}(\mathcal{J},\mathcal{K})\), i.e. there is \(\{B_{k}:k\in\omega\}\subseteq\mathcal{K}\) such that for each \(\alpha<|X|\) there is \(k\in\omega\) such that \(A_{\alpha}\subseteq B_{k}\). We define functions \(f_{n}:X\to\mathbb{R}\) by

\[f_{n}(x_{\alpha})=\mathbf{1}_{A_{\alpha}}(n)\]

for every \(\alpha<|X|\). Since \(X\) is _discrete_, the functions \(f_{n}\) are continuous for every \(n\). Observe that \(f_{n}\xrightarrow{\mathcal{J}\text{-qn}}0\). Indeed, if we take any sequence \((\varepsilon_{n})\) of positive reals which is convergent to zero in the ordinary sense, then for each \(x\in X\) there is \(\alpha\) with \(x=x_{\alpha}\) and \(\{n\in\omega:|f_{n}(x_{\alpha})|\geq\varepsilon_{n}\}=\{n\in A_{\alpha}:|f_{n}(x_{\alpha})|\geq\varepsilon_{n}\}\cup\{n\in\omega\setminus A_{\alpha}:|f_{n}(x_{\alpha})|\geq\varepsilon_{n}\}=\{n\in A_{\alpha}:1\geq\varepsilon_{n}\}\cup\{n\in\omega\setminus A_{\alpha}:0\geq\varepsilon_{n}\}\subseteq A_{\alpha}\cup\emptyset\in\mathcal{J}\). By our assumption, \(f_{n}\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\). Thus, there is a covering \(\{X_{k}:k\in\omega\}\) of \(X\) such that \(f_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{K}\text{-u}}0\) for all \(k\in\omega\). For each \(k\in\omega\), we define

\[B_{k}=\left\{n\in\omega:\exists x\in X_{k}\,\left(|f_{n}(x)|>\frac{1}{2}\right)\right\}.\]

We see that \(B_{k}\in\mathcal{K}\) for each \(k\in\omega\), and we claim that for every \(A\in\mathcal{A}\) there is \(k\) with \(A\subseteq B_{k}\). Indeed, let \(A\in\mathcal{A}\). Let \(\alpha\) be such that \(A=A_{\alpha}\). Then there is \(k\in\omega\) such that \(x_{\alpha}\in X_{k}\). Let \(n\in A_{\alpha}\). Then \(f_{n}(x_{\alpha})=1>1/2\), so \(n\in B_{k}\).

\((3b)\implies(3a)\) It follows from Theorem 4.2(2).

In [7], the authors proved that \(\operatorname{non}(\mathrm{Fin}\text{-p},\mathrm{Fin}\text{-qn})=\mathfrak{b}\), i.e. the smallest size of a non-QN-space equals \(\mathfrak{b}\).
The following corollary is a counterpart of the above result which gives a purely combinatorial characterization of the topological cardinal characteristics \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-qn})\), \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\), \(\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{I}\text{-}\sigma\text{-u})\) with the aid of other bounding-like numbers.

**Corollary 4.6**.: _Let \(\mathcal{I}\) be an ideal on \(\omega\)._

1. \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})=\mathfrak{b}_{\sigma}(\mathcal{I})\)_._
2. \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-qn})=\mathfrak{b}_{s}(\mathcal{I})\)_._
3. \(\operatorname{non}(\mathcal{I}\text{-qn},\mathcal{I}\text{-}\sigma\text{-u})=\operatorname{add}_{\omega}(\mathcal{I})\)_._

Proof.: (1) The inequality \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\geq\mathfrak{b}_{\sigma}(\mathcal{I})\) follows from Proposition 3.2 and Theorem 4.2. On the other hand, if \(X\) is a discrete topological space of cardinality \(\mathfrak{b}_{\sigma}(\mathcal{I})\), then by Theorem 4.5, \(X\) is not in \((\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\). Consequently, \(\operatorname{non}(\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\leq\mathfrak{b}_{\sigma}(\mathcal{I})\). Items (2) and (3) can be proved in the same way.

In Section 6, we show that we _cannot_ add an item: "there is no space of cardinality \(\mathfrak{b}_{\sigma}(\mathcal{I})\) in \((\mathcal{I}\text{-p},\mathcal{I}\text{-}\sigma\text{-u})\)" in Corollary 4.6 (in contrast with Corollary 3.5).
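For instance, for \(\mathcal{I}=\mathrm{Fin}\), Corollary 4.6 combined with Theorem 5.2 and Proposition 5.3 from the next section recovers the result of [7] quoted above (recall that quasi-normal and \(\sigma\)-uniform convergence coincide in this case):

\[\operatorname{non}(\mathrm{Fin}\text{-p},\mathrm{Fin}\text{-}\sigma\text{-u})=\mathfrak{b}_{\sigma}(\mathrm{Fin})=\min\{\mathfrak{b}_{s}(\mathrm{Fin}),\operatorname{add}_{\omega}(\mathrm{Fin})\}=\min\{\mathfrak{b},\infty\}=\mathfrak{b}.\]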
## 5. Properties of cardinals describing minimal size of spaces distinguishing convergence

In this section, we take a closer look at the cardinals \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})\), \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) and \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\). The following easy proposition shows that these cardinals are coordinate-wise monotone (increasing or decreasing depending on the coordinate).

**Proposition 5.1**.: _Let \(\mathcal{I},\mathcal{I}^{\prime},\mathcal{J},\mathcal{J}^{\prime},\mathcal{K},\mathcal{K}^{\prime}\) be ideals on \(\omega\)._

1. _If_ \(\mathcal{I}\subseteq\mathcal{I}^{\prime}\)_, then_ \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})\leq\mathfrak{b}_{s}(\mathcal{I}^{\prime},\mathcal{J},\mathcal{K})\)_,_ \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\geq\mathfrak{b}_{\sigma}(\mathcal{I}^{\prime},\mathcal{J})\) _and_ \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\geq\operatorname{add}_{\omega}(\mathcal{I}^{\prime},\mathcal{J})\)_._
2. _If_ \(\mathcal{J}\subseteq\mathcal{J}^{\prime}\)_, then_ \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})\leq\mathfrak{b}_{s}(\mathcal{I},\mathcal{J}^{\prime},\mathcal{K})\)_,_ \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J}^{\prime})\) _and_ \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\leq\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J}^{\prime})\)_._
3. _If_ \(\mathcal{K}\subseteq\mathcal{K}^{\prime}\)_, then_ \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})\geq\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K}^{\prime})\)_._

The following theorem reveals the relationship between the considered cardinals.

**Theorem 5.2**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\)._

1. \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})=\min\{\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\}\)_._
2. \(\mathfrak{b}_{\sigma}(\mathcal{I})=\min\{\mathfrak{b}_{s}(\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{I})\}\)_._

Proof.: (1, \(\leq\)) First, we show \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\). Let \(\mathcal{E}=\{(E_{n}^{\alpha}:n\in\omega):\alpha<\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\}\) be a "witness" for \(\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\), i.e. \((E_{n}^{\alpha}:n\in\omega)\in\widehat{\mathcal{P}}_{\mathcal{I}}\) for every \(\alpha\) and for every \((A_{n})\in\mathcal{P}_{\mathcal{J}}\) there is \(\alpha\) with \(\bigcup_{n\in\omega}(A_{n+1}\cap\bigcup_{i\leq n}E_{i}^{\alpha})\notin\mathcal{I}\cap\mathcal{J}\). For every \(\alpha<\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\) and \(n\in\omega\), we define \(F_{n}^{\alpha}=\bigcup_{i\leq n}E_{i}^{\alpha}\). Then \((F_{n}^{\alpha}:n\in\omega)\in\mathcal{M}_{\mathcal{I}}\), and we claim that \(\{(F_{n}^{\alpha}):\alpha<\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\}\) is a "witness" for \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\), i.e. for every \((A_{n})\in\mathcal{M}_{\mathcal{J}}\) there is \(\alpha\) such that \(F_{n}^{\alpha}\not\subseteq A_{n}\) for infinitely many \(n\). Indeed, take any \((A_{n})\in\mathcal{M}_{\mathcal{J}}\). Without loss of generality, we can assume that \(n\in A_{n}\) for every \(n\in\omega\). We define \(B_{0}=A_{0}\) and \(B_{n}=A_{n}\setminus A_{n-1}\) for \(n\geq 1\). Then \((B_{n})\in\mathcal{P}_{\mathcal{J}}\), so there is \(\alpha\) with \(\bigcup_{n\in\omega}(B_{n+1}\cap\bigcup_{i\leq n}E_{i}^{\alpha})\notin\mathcal{I}\cap\mathcal{J}\). Now, suppose for sake of contradiction that \(F_{n}^{\alpha}\subseteq A_{n}\) for almost all \(n\in\omega\), say for all \(n>n_{0}\). Then \(B_{n+1}\cap F_{n}^{\alpha}=\emptyset\) for every \(n>n_{0}\). Consequently, \(B_{n+1}\cap\bigcup_{i\leq n}E_{i}^{\alpha}=\emptyset\) for every \(n>n_{0}\). Thus, \(\bigcup_{n\in\omega}(B_{n+1}\cap\bigcup_{i\leq n}E_{i}^{\alpha})\subseteq\bigcup_{n\leq n_{0}}(B_{n+1}\cap\bigcup_{i\leq n}E_{i}^{\alpha})\in\mathcal{I}\cap\mathcal{J}\), a contradiction.

Second, we show \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\). Let \(\mathcal{A}=\{A_{\alpha}:\alpha<\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\}\) be a "witness" for \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\), i.e. \(A_{\alpha}\in\mathcal{I}\) for every \(\alpha\) and for every \(\{B_{n}:n\in\omega\}\subseteq\mathcal{J}\) there is \(\alpha\) such that \(A_{\alpha}\not\subseteq B_{n}\) for every \(n\in\omega\). For every \(\alpha<\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\) and \(n\in\omega\), we define \(E_{n}^{\alpha}=A_{\alpha}\). Then \((E_{n}^{\alpha}:n\in\omega)\in\mathcal{M}_{\mathcal{I}}\), and we claim that \(\{(E_{n}^{\alpha}:n\in\omega):\alpha<\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\}\) is a "witness" for \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\), i.e. for every \((B_{n})\in\mathcal{M}_{\mathcal{J}}\) there is \(\alpha\) such that \(E_{n}^{\alpha}\not\subseteq B_{n}\) for infinitely many \(n\). Indeed, take any \((B_{n})\in\mathcal{M}_{\mathcal{J}}\); then \(\{B_{n}:n\in\omega\}\subseteq\mathcal{J}\), so there is \(\alpha\) such that \(A_{\alpha}\not\subseteq B_{n}\) for every \(n\in\omega\).
Since \(E_{n}^{\alpha}=A_{\alpha}\) for every \(n\), we obtain \(E_{n}^{\alpha}\not\subseteq B_{n}\) for every \(n\in\omega\).

(1, \(\geq\)) Let \(\kappa<\min\{\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\}\). If we show that \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\), the proof will be finished. We take any \(\mathcal{E}=\{(E_{n}^{\alpha}:n\in\omega):\alpha<\kappa\}\subseteq\mathcal{M}_{\mathcal{I}}\) and need to find \((A_{n})\in\mathcal{M}_{\mathcal{J}}\) such that for every \(\alpha<\kappa\) we have \(E_{n}^{\alpha}\subseteq A_{n}\) for all but finitely many \(n\in\omega\). For every \(\alpha<\kappa\) and \(n\in\omega\), we define \(F_{n}^{\alpha}=E_{n}^{\alpha}\setminus\bigcup_{i<n}E_{i}^{\alpha}\). Since \((F_{n}^{\alpha}:n\in\omega)\in\widehat{\mathcal{P}}_{\mathcal{I}}\) for every \(\alpha<\kappa\) and \(\kappa<\mathfrak{b}_{s}(\mathcal{I}\cap\mathcal{J},\mathcal{J},\mathcal{I})\), we obtain \((B_{n}:n\in\omega)\in\mathcal{P}_{\mathcal{J}}\) such that \(G_{\alpha}=\bigcup_{n<\omega}(B_{n+1}\cap E_{n}^{\alpha})=\bigcup_{n<\omega}(B_{n+1}\cap\bigcup_{i\leq n}F_{i}^{\alpha})\in\mathcal{I}\cap\mathcal{J}\) for every \(\alpha\). Since \(G_{\alpha}\in\mathcal{I}\) for every \(\alpha<\kappa\) and \(\kappa<\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\), we obtain \((C_{n}:n\in\omega)\in\mathcal{J}^{\omega}\) such that for every \(\alpha<\kappa\) there is \(n_{\alpha}\in\omega\) with \(G_{\alpha}\subseteq C_{n_{\alpha}}\). For every \(n\in\omega\), we define \(A_{n}=\bigcup_{i\leq n}(B_{i}\cup C_{i})\). Then \((A_{n}:n\in\omega)\in\mathcal{M}_{\mathcal{J}}\) and we claim that for every \(\alpha<\kappa\) we have \(E_{n}^{\alpha}\subseteq A_{n}\) for all but finitely many \(n\in\omega\). Indeed, take any \(\alpha<\kappa\) and notice that

\[E_{n}^{\alpha}\subseteq\bigcup_{i\leq n}B_{i}\cup\bigcup_{k\geq n}(B_{k+1}\cap E_{k}^{\alpha})\subseteq\bigcup_{i\leq n}B_{i}\cup G_{\alpha}\subseteq\bigcup_{i\leq n}B_{i}\cup\bigcup_{i\leq n}C_{i}=A_{n}\]

for every \(n\geq n_{\alpha}\).

(2) It follows from item (1), but one could also show it "topologically" by using Corollaries 3.6(2) and 4.6.

The following proposition reveals some bounds for the considered cardinals. In this proposition, we use some known cardinals considered in the literature so far, which we define first. For any ideal \(\mathcal{I}\), we define

\[\operatorname{add}^{*}(\mathcal{I})=\min\{|\mathcal{A}|:\mathcal{A}\subseteq\mathcal{I}\wedge\forall B\in\mathcal{I}\,\exists A\in\mathcal{A}\,(|A\setminus B|=\omega)\}.\]

For \(f,g\in\omega^{\omega}\) we write \(f\leq^{*}g\) if \(f(n)\leq g(n)\) for all but finitely many \(n\in\omega\). The _bounding number_ \(\mathfrak{b}\) is the smallest size of a \(\leq^{*}\)-unbounded subset of \(\omega^{\omega}\):

\[\mathfrak{b}=\min\{|\mathcal{F}|:\mathcal{F}\subseteq\omega^{\omega}\wedge\neg\exists g\in\omega^{\omega}\,\forall f\in\mathcal{F}\,(f\leq^{*}g)\}.\]
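Note that \(\mathfrak{b}\) is uncountable, by the standard diagonal argument: given \(\{f_{k}:k\in\omega\}\subseteq\omega^{\omega}\), the function

\[g(n)=\max\{f_{k}(n):k\leq n\}\]

satisfies \(f_{k}(n)\leq g(n)\) for all \(n\geq k\), so \(f_{k}\leq^{*}g\) for every \(k\in\omega\); hence \(\omega_{1}\leq\mathfrak{b}\).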
_If_ \(\mathcal{I}\not\subseteq\mathcal{J}\)_, then_ \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})=1\)_._ (2b) _If_ \(\mathcal{I}\subseteq\mathcal{J}\)_, then_ \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\geq\max\{\omega_{1},\operatorname{add}^{\star}(\mathcal{I})\}\)_._ (2c) \(\operatorname{add}_{\omega}(\mathcal{I})<\infty\iff\mathcal{I}\) _is not countably generated._ (3a) \(\mathfrak{b}_{\sigma}(\operatorname{Fin},\mathcal{J})=\mathfrak{b}\)_._ (3b) _If_ \(\mathcal{I}\not\subseteq\mathcal{J}\) _then_ \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})=1\)_._ (3c) _If_ \(\mathcal{I}\subseteq\mathcal{J}\) _then_ \(\omega_{1}\leq\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\mathfrak{b}\)_._ (3d) _If_ \(\mathcal{I}\subseteq\mathcal{J}\) _then_ \(\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J}))\geq\omega_{1}\)_._ (4) \(\mathfrak{b}_{\sigma}(\mathcal{I})\geq\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})=\min\{\mathfrak{b},\operatorname{add}^{\star}(\mathcal{I})\}\)_._ Proof.: (1) See [17, Proposition 3.13 and Theorem 4.2]. (2a) Let \(E\in\mathcal{I}\setminus\mathcal{J}\). Let \(\mathcal{E}=\{E\}\) and take any \((A_{n})\in\mathcal{M}_{\mathcal{J}}\). Then \(E\not\subseteq A_{n}\) for every \(n\in\omega\) (otherwise, \(E\subseteq A_{n}\in\mathcal{J}\) would imply \(E\in\mathcal{J}\)). Thus, \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\leq 1\). (2b) The inequality \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\geq\omega_{1}\) will follow from item (3c) and Theorem 5.2. To show that \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\geq\operatorname{add}^{\star}(\mathcal{I})\), let \(\mathcal{A}\subseteq\mathcal{I}\) be a witness for \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})\). We claim that \(\mathcal{A}\) is also a witness for \(\operatorname{add}^{\star}(\mathcal{I})\). Indeed, take any \(B\in\mathcal{I}\). Let \(\operatorname{Fin}=\{F_{n}:n\in\omega\}\) and define \(B_{n}=B\cup F_{n}\) for every \(n\in\omega\). Since \(\mathcal{I}\subseteq\mathcal{J}\), we have \((B_{n})\in[\mathcal{J}]^{\omega}\). Consequently, there is \(A\in\mathcal{A}\) such that \(A\not\subseteq B_{n}=B\cup F_{n}\) for any \(n\in\omega\). Thus, \(|A\setminus B|=\omega\). (2c) Straightforward. (3a) The inequality \(\mathfrak{b}_{\sigma}(\operatorname{Fin},\mathcal{J})\leq\mathfrak{b}\) follows from item (1d) and Theorem 5.2. Below we show \(\mathfrak{b}\leq\mathfrak{b}_{\sigma}(\operatorname{Fin},\mathcal{J})\). Using Proposition 5.1, we see that it is enough to show \(\mathfrak{b}\leq\mathfrak{b}_{\sigma}(\operatorname{Fin})\). Fix any \(\mathcal{E}=\{(E_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\operatorname{Fin})\}\subseteq\mathcal{M}_{\operatorname{Fin}}\) which is a witness for \(\mathfrak{b}_{\sigma}(\operatorname{Fin})\). For each \(\alpha<\mathfrak{b}_{\sigma}(\operatorname{Fin})\), we define a function \(f_{\alpha}\in\omega^{\omega}\) by \(f_{\alpha}(k)=\max E_{k}^{\alpha}\). We claim that \(\{f_{\alpha}:\alpha<\mathfrak{b}_{\sigma}(\operatorname{Fin})\}\) is a \(\leq^{\star}\)-unbounded subset of \(\omega^{\omega}\). Fix any \(g\in\omega^{\omega}\). We want to find \(\alpha<\mathfrak{b}_{\sigma}(\operatorname{Fin})\) such that \(f_{\alpha}\not\leq^{\star}g\). Without loss of generality we may assume that \(g\) is increasing. Define \(A_{k}=\{i\in\omega:i\leq g(k)\}\) for all \(k\in\omega\). Then \((A_{k})\in\mathcal{M}_{\operatorname{Fin}}\).
Since \(\mathcal{E}\) is a witness for \(\mathfrak{b}_{\sigma}(\operatorname{Fin})\), there is \(\alpha<\mathfrak{b}_{\sigma}(\operatorname{Fin})\) such that \(E_{k}^{\alpha}\not\subseteq A_{k}\) for infinitely many \(k\in\omega\). Observe that \(E_{k}^{\alpha}\not\subseteq A_{k}\) implies \(g(k)<f_{\alpha}(k)\). Hence, \(g(k)<f_{\alpha}(k)\) for infinitely many \(k\in\omega\), which means that \(f_{\alpha}\not\leq^{\star}g\). (3b) It follows from item (2a) and Theorem 5.2. (3c) The inequality \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\mathfrak{b}\) follows from item (3a) and Proposition 5.1. Below we show \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\geq\omega_{1}\). Fix any \(\{(E_{k}^{n}):n\in\omega\}\subseteq\mathcal{M}_{\mathcal{I}}\). We will find \((A_{k})\in\mathcal{M}_{\mathcal{J}}\) such that \(\{k\in\omega:E_{k}^{n}\not\subseteq A_{k}\}\in\operatorname{Fin}\) for all \(n\in\omega\). Define \(A_{k}=E_{k}^{0}\cup E_{k}^{1}\cup\ldots\cup E_{k}^{k}\) for all \(k\in\omega\). Then \(A_{k}\in\mathcal{I}\subseteq\mathcal{J}\) and \(A_{k}\subseteq E_{k+1}^{0}\cup E_{k+1}^{1}\cup\ldots\cup E_{k+1}^{k}\subseteq A_{k+1}\) as \((E_{k}^{n})\in\mathcal{M}_{\mathcal{I}}\) for each \(n\in\omega\). Moreover, for each \(n\in\omega\) and \(k\geq n\) we have \(E_{k}^{n}\subseteq A_{k}\). Hence, \((A_{k})\in\mathcal{M}_{\mathcal{J}}\) is as needed. (3d) Let \(\mathcal{E}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) i.e. \(|\mathcal{E}|=\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\), \(\mathcal{E}\subseteq\mathcal{M}_{\mathcal{I}}\) and for every \((A_{n})\in\mathcal{M}_{\mathcal{J}}\) there is \((E_{n})\in\mathcal{E}\) such that \(E_{n}\not\subseteq A_{n}\) for infinitely many \(n\in\omega\). Now, suppose for the sake of contradiction that \(\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J}))=\omega\). Using the properties of cofinality, we know that \(\mathcal{E}\) can be decomposed into the union of countably many subfamilies \(\mathcal{E}_{k}\) of cardinalities less than \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\). Since \(|\mathcal{E}_{k}|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\), there is \((A_{n}^{k})\in\mathcal{M}_{\mathcal{J}}\) such that for every \((E_{n})\in\mathcal{E}_{k}\) we have \(E_{n}\subseteq A_{n}^{k}\) for all but finitely many \(n\in\omega\). Then \(\mathcal{A}=\{(A_{n}^{k}):k\in\omega\}\subseteq\mathcal{M}_{\mathcal{J}}\) and \(|\mathcal{A}|\leq\omega<\mathfrak{b}_{\sigma}(\mathcal{J})\) (by item (3c)), so there is \((B_{n})\in\mathcal{M}_{\mathcal{J}}\) such that for every \(k\in\omega\) we have \(A_{n}^{k}\subseteq B_{n}\) for all but finitely many \(n\in\omega\). Consequently, for every \((E_{n})\in\mathcal{E}\) we have \(E_{n}\subseteq B_{n}\) for all but finitely many \(n\in\omega\), a contradiction with the choice of the family \(\mathcal{E}\). (4) The equality \(\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})=\min\{\mathfrak{b},\operatorname{add}^{\star}(\mathcal{I})\}\) is shown in [17, Theorem 4.8]. Below we show that \(\mathfrak{b}_{\sigma}(\mathcal{I})\geq\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})\). Let \(\mathcal{E}=\{(E_{n}^{\alpha}:n\in\omega):\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\}\subseteq\mathcal{M}_{\mathcal{I}}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I})\). We define \(F_{0}^{\alpha}=E_{0}^{\alpha}\) and \(F_{n}^{\alpha}=E_{n}^{\alpha}\setminus E_{n-1}^{\alpha}\) for every \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\) and \(n\geq 1\).
Then \(\mathcal{F}=\{(F_{n}^{\alpha}:n\in\omega):\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\}\subseteq\widehat{\mathcal{P}}_{\mathcal{I}}\), and we claim that \(\mathcal{F}\) is a witness for \(\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})\). Indeed, take any \((A_{n})\in\mathcal{P}_{\mathcal{I}}\). For every \(n\in\omega\), we define \(B_{n}=\bigcup_{i\leq n}A_{i}\). Then \((B_{n})\in\mathcal{M}_{\mathcal{I}}\), so there exists \(\alpha\) such that \(E_{n}^{\alpha}\not\subseteq B_{n}\) for infinitely many \(n\). Let \((k_{n})\) be a strictly increasing sequence such that \(E_{k_{n}}^{\alpha}\not\subseteq B_{k_{n}}\) for every \(n\in\omega\). Thus, for every \(n\in\omega\) there is \(l_{n}>k_{n}\) and \(a_{n}\in A_{l_{n}}\cap E_{k_{n}}^{\alpha}\). Then \(A=\{a_{n}:n\in\omega\}\) is infinite (indeed, if \(A\) were finite, then some \(a\in A\) would satisfy \(a\notin B_{k_{n}}\) for infinitely many \(n\), which is impossible as the sets \(B_{n}\) increase to \(\omega\)). If we show that \(A\subseteq\bigcup_{n<\omega}(A_{n+1}\cap\bigcup_{i\leq n}F_{i}^{\alpha})\), the proof will be finished. Take any \(a_{n}\in A\). Then \(a_{n}\in A_{l_{n}}\cap E_{k_{n}}^{\alpha}=A_{l_{n}}\cap\bigcup_{i\leq k_{n}}F_{i}^{\alpha}\subseteq A_{l_{n}}\cap\bigcup_{i<l_{n}}F_{i}^{\alpha}=A_{(l_{n}-1)+1}\cap\bigcup_{i\leq l_{n}-1}F_{i}^{\alpha}\).

**Corollary 5.4**.: _For every ideal \(\mathcal{I}\) on \(\omega\) we have_ \[\omega_{1}\leq\mathfrak{b}_{\sigma}(\mathcal{I})=\min\{\mathfrak{b}_{s}(\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{I})\}\leq\mathfrak{b}.\] Proof.: It follows from Theorem 5.2 and Proposition 5.3(3c).

**Corollary 5.5**.: _The cardinals \(\mathfrak{b}_{s}(\mathcal{I})\), \(\mathfrak{b}_{\sigma}(\mathcal{I})\) and \(\operatorname{add}_{\omega}(\mathcal{I})\) are regular for every ideal \(\mathcal{I}\)._ Proof.: The regularity of \(\mathfrak{b}_{s}(\mathcal{I})\) is shown in [17, Corollary 3.12] (however, one could also show it using a similar "topological" argument as for \(\mathfrak{b}_{\sigma}(\mathcal{I})\) presented below). We will present two proofs of regularity of \(\mathfrak{b}_{\sigma}(\mathcal{I})\): one "topological" and one "purely combinatorial". We start with the "topological" proof. Suppose for the sake of contradiction that \(\mathfrak{b}_{\sigma}(\mathcal{I})=\bigcup\{A_{\alpha}:\alpha<\kappa\}\) where \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{I})\) and \(|A_{\alpha}|<\mathfrak{b}_{\sigma}(\mathcal{I})\) for every \(\alpha<\kappa\). Let \(X\) be a normal space such that \(X\notin(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u) and \(|X|=\mathfrak{b}_{\sigma}(\mathcal{I})\) (which exists by Corollary 4.6(1)). Then we can write \(X=\bigcup\{X_{\alpha}:\alpha<\kappa\}\) with \(|X_{\alpha}|=|A_{\alpha}|\) for each \(\alpha<\kappa\). Take a sequence \((f_{n})\) in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\) but \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) does not hold. Since \(f_{n}\upharpoonright X_{\alpha}\xrightarrow{\mathcal{I}\text{-p}}0\) and \(|X_{\alpha}|<\mathfrak{b}_{\sigma}(\mathcal{I})\) for every \(\alpha<\kappa\), we can use Theorem 4.2(3) to obtain that \(f_{n}\upharpoonright X_{\alpha}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\) for every \(\alpha<\kappa\). Now, Proposition 4.4(2) implies that \(f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\), a contradiction. Now we present the "purely combinatorial" proof of regularity of \(\mathfrak{b}_{\sigma}(\mathcal{I})\). Let \(\mathcal{E}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I})\) i.e.
\(|\mathcal{E}|=\mathfrak{b}_{\sigma}(\mathcal{I})\), \(\mathcal{E}\subseteq\mathcal{M}_{\mathcal{I}}\) and for every \((A_{n})\in\mathcal{M}_{\mathcal{I}}\) there is \((E_{n})\in\mathcal{E}\) such that \(E_{n}\not\subseteq A_{n}\) for infinitely many \(n\in\omega\). Using the properties of cofinality, we know that \(\mathcal{E}\) can be decomposed into the union of \(\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I}))\) subfamilies \(\mathcal{E}_{\alpha}\) of cardinalities less than \(\mathfrak{b}_{\sigma}(\mathcal{I})\). Since \(|\mathcal{E}_{\alpha}|<\mathfrak{b}_{\sigma}(\mathcal{I})\), there is \((A_{n}^{\alpha})\in\mathcal{M}_{\mathcal{I}}\) such that for every \((E_{n})\in\mathcal{E}_{\alpha}\) we have \(E_{n}\subseteq A_{n}^{\alpha}\) for all but finitely many \(n\in\omega\). Now, suppose for the sake of contradiction that \(\mathfrak{b}_{\sigma}(\mathcal{I})\) is not regular i.e. \(\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I}))<\mathfrak{b}_{\sigma}(\mathcal{I})\). Then \(\mathcal{A}=\{(A_{n}^{\alpha}):\alpha<\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I}))\}\subseteq\mathcal{M}_{\mathcal{I}}\) and \(|\mathcal{A}|<\mathfrak{b}_{\sigma}(\mathcal{I})\), so there is \((B_{n})\in\mathcal{M}_{\mathcal{I}}\) such that for every \(\alpha<\operatorname{cf}(\mathfrak{b}_{\sigma}(\mathcal{I}))\) we have \(A_{n}^{\alpha}\subseteq B_{n}\) for all but finitely many \(n\in\omega\). Consequently, for every \((E_{n})\in\mathcal{E}\) we have \(E_{n}\subseteq B_{n}\) for all but finitely many \(n\in\omega\), a contradiction with the choice of the family \(\mathcal{E}\). Finally, we show the regularity of \(\operatorname{add}_{\omega}(\mathcal{I})\). Suppose for the sake of contradiction that \(\operatorname{add}_{\omega}(\mathcal{I})=\bigcup\{A_{\alpha}:\alpha<\kappa\}\) where \(\kappa<\operatorname{add}_{\omega}(\mathcal{I})\) and \(|A_{\alpha}|<\operatorname{add}_{\omega}(\mathcal{I})\) for every \(\alpha<\kappa\). Let \(\mathcal{B}\subseteq\mathcal{I}\) be such that \(|\mathcal{B}|=\operatorname{add}_{\omega}(\mathcal{I})\) and for every \((D_{k})\in\mathcal{I}^{\omega}\) there is \(B\in\mathcal{B}\) with \(B\not\subseteq D_{k}\) for any \(k<\omega\). Then we can write \(\mathcal{B}=\bigcup\{\mathcal{B}_{\alpha}:\alpha<\kappa\}\) with \(|\mathcal{B}_{\alpha}|=|A_{\alpha}|\) for every \(\alpha<\kappa\). Since \(|\mathcal{B}_{\alpha}|<\operatorname{add}_{\omega}(\mathcal{I})\) and \(\mathcal{B}_{\alpha}\subseteq\mathcal{I}\) for every \(\alpha<\kappa\), we can find \((C_{n}^{\alpha})\in\mathcal{I}^{\omega}\) such that for every \(B\in\mathcal{B}_{\alpha}\) there is \(n\in\omega\) with \(B\subseteq C_{n}^{\alpha}\). Let \(\mathcal{C}=\{C_{n}^{\alpha}:\alpha<\kappa,n<\omega\}\). Then \(\mathcal{C}\subseteq\mathcal{I}\) and \(|\mathcal{C}|\leq\kappa\cdot\omega<\operatorname{add}_{\omega}(\mathcal{I})\) (by Proposition 5.3(2b)), so there is \((D_{k})\in\mathcal{I}^{\omega}\) such that for every \(\alpha<\kappa\) and \(n<\omega\) there is \(k<\omega\) with \(C_{n}^{\alpha}\subseteq D_{k}\). Thus, for every \(B\in\mathcal{B}\) we can find \(k\) with \(B\subseteq D_{k}\), a contradiction.

### P-ideals

An ideal \(\mathcal{I}\) is a _P-ideal_ if for every countable family \(\mathcal{A}\subseteq\mathcal{I}\) there exists a set \(B\in\mathcal{I}\) such that \(A\setminus B\) is finite for every \(A\in\mathcal{A}\). It is easy to see that \(\operatorname{add}^{*}(\mathcal{I})\geq\omega_{1}\) for P-ideals and \(\operatorname{add}^{*}(\mathcal{I})=\omega\) for non-P-ideals.
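For instance, the summable ideal \(\mathcal{I}_{1/n}\) appearing in Theorem 5.13(4) below is a P-ideal; we sketch the standard argument, assuming the usual definition \(\mathcal{I}_{1/n}=\{A\subseteq\omega:\sum_{n\in A}\frac{1}{n+1}<\infty\}\). Given a countable family \(\{A_{k}:k\in\omega\}\subseteq\mathcal{I}_{1/n}\), choose finite sets \(F_{k}\subseteq A_{k}\) with \(\sum_{n\in A_{k}\setminus F_{k}}\frac{1}{n+1}<2^{-k}\) and put \(B=\bigcup_{k\in\omega}(A_{k}\setminus F_{k})\). Then \[\sum_{n\in B}\frac{1}{n+1}\leq\sum_{k\in\omega}2^{-k}<\infty,\] so \(B\in\mathcal{I}_{1/n}\), while \(A_{k}\setminus B\subseteq F_{k}\) is finite for every \(k\in\omega\). On the other hand, \(\operatorname{Fin}\otimes\operatorname{Fin}\) is not a P-ideal: every column \(\{n\}\times\omega\) belongs to \(\operatorname{Fin}\otimes\operatorname{Fin}\), but any \(B\) which almost contains all columns has every vertical section infinite, so \(B\notin\operatorname{Fin}\otimes\operatorname{Fin}\).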
_Remark_.: The inequality from Proposition 5.3(4) is interesting, in a sense, only for P-ideals. Indeed, by Proposition 5.3(3c) and (4), we have \(\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})=\operatorname{add}^{*}(\mathcal{I})=\omega<\omega_{1}\leq\mathfrak{b}_{\sigma}(\mathcal{I})\) in the case of non-P-ideals.

**Proposition 5.6**.: _If \(\mathcal{I}\) is a P-ideal on \(\omega\), then_ \[\operatorname{add}_{\omega}(\mathcal{I})=\operatorname{add}^{*}(\mathcal{I}).\] Proof.: From Proposition 5.3(2b) it follows that we only need to show \(\operatorname{add}_{\omega}(\mathcal{I})\leq\operatorname{add}^{*}(\mathcal{I})\). Let \(\mathcal{A}\subseteq\mathcal{I}\) be a witness for \(\operatorname{add}^{*}(\mathcal{I})\). We claim that \(\mathcal{A}\) is also a witness for \(\operatorname{add}_{\omega}(\mathcal{I})\). Indeed, take any \((B_{n})\in[\mathcal{I}]^{\omega}\). Since \(\mathcal{I}\) is a P-ideal, there is \(B\in\mathcal{I}\) such that \(|B_{n}\setminus B|<\omega\) for every \(n\in\omega\). Since \(B\in\mathcal{I}\), we find \(A\in\mathcal{A}\) such that \(A\setminus B\) is infinite. Consequently, \(A\setminus B_{n}\) is infinite for every \(n\in\omega\). Thus, \(A\not\subseteq B_{n}\) for any \(n\in\omega\).

_Remark_.: The cardinal \(\operatorname{add}^{*}(\mathcal{I})\) has been extensively studied so far (see e.g. a very good survey by Hrusak [23]). However, this cardinal is useless for non-P-ideals (because its value is \(\omega\) for non-P-ideals). On the other hand, the cardinal \(\operatorname{add}_{\omega}(\mathcal{I})\) coincides with \(\operatorname{add}^{*}(\mathcal{I})\) for P-ideals (as shown in Proposition 5.6) and it can distinguish non-P-ideals (as shown in Theorem 5.13). Thus, the cardinal \(\operatorname{add}_{\omega}(\mathcal{I})\) is, in a sense, a more sensitive variant of \(\operatorname{add}^{*}(\mathcal{I})\), and maybe it will turn out to be more useful than \(\operatorname{add}^{*}(\mathcal{I})\) in future research.

**Corollary 5.7**.: _If \(\mathcal{I}\) is a P-ideal on \(\omega\) then_ \[\mathfrak{b}_{\sigma}(\mathcal{I})=\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})=\min\{\mathfrak{b},\operatorname{add}^{*}(\mathcal{I})\}\leq\operatorname{add}_{\omega}(\mathcal{I}).\] Proof.: It is enough to note that \(\mathfrak{b}_{\sigma}(\mathcal{I})\geq\mathfrak{b}_{s}(\operatorname{Fin},\mathcal{I},\mathcal{I})=\min\{\mathfrak{b},\operatorname{add}^{*}(\mathcal{I})\}\) follows from Proposition 5.3(4), \(\mathfrak{b}_{\sigma}(\mathcal{I})\leq\mathfrak{b}\) follows from Proposition 5.3(3c), \(\mathfrak{b}_{\sigma}(\mathcal{I})\leq\operatorname{add}^{*}(\mathcal{I})\) follows from Theorem 5.2 and Proposition 5.6 and \(\min\{\mathfrak{b},\operatorname{add}^{*}(\mathcal{I})\}\leq\operatorname{add}_{\omega}(\mathcal{I})\) follows from Proposition 5.6.

### Fubini products

**Lemma 5.8**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\)._ 1. \(\mathfrak{b}_{\sigma}(\mathcal{I}\otimes\mathcal{J})\leq\mathfrak{b}_{\sigma}(\mathcal{I})\)_._ 2. \(\operatorname{add}_{\omega}(\mathcal{I}\otimes\mathcal{J})\leq\operatorname{add}_{\omega}(\mathcal{I})\)_._ Proof.: (1) Let \(\{(E_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\}\subseteq\mathcal{M}_{\mathcal{I}}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I})\). Define \(D_{k}^{\alpha}=E_{k}^{\alpha}\times\omega\) for all \(k\in\omega\) and \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\).
Then \(\{(D_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\}\subseteq\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\). Fix any \((B_{k})\in\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\). Define \(A_{k}=\{n\in\omega:(B_{k})_{(n)}\notin\mathcal{J}\}\) for all \(k\in\omega\). Then \((A_{k})\in\mathcal{M}_{\mathcal{I}}\), so there is \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{I})\) such that \(Z=\{k\in\omega:E_{k}^{\alpha}\not\subseteq A_{k}\}\notin\operatorname{Fin}\). For each \(k\in Z\), we pick \(n_{k},m_{k}\in\omega\) such that \(n_{k}\in E_{k}^{\alpha}\setminus A_{k}\) and \(m_{k}\in\omega\setminus(B_{k})_{(n_{k})}\) (which is possible as \(n_{k}\notin A_{k}\) implies \((B_{k})_{(n_{k})}\in\mathcal{J}\)). Then \((n_{k},m_{k})\in D_{k}^{\alpha}\setminus B_{k}\) for each \(k\in Z\), so \(D_{k}^{\alpha}\not\subseteq B_{k}\) for infinitely many \(k\in\omega\). (2) This is an easy modification of the proof of item (1).

**Lemma 5.9**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\)._ 1. \(\mathfrak{b}_{\sigma}(\mathcal{I}\otimes\mathcal{J})\leq\mathfrak{b}_{\sigma}(\mathcal{J})\)_._ 2. \(\operatorname{add}_{\omega}(\mathcal{I}\otimes\mathcal{J})\leq\operatorname{add}_{\omega}(\mathcal{J})\)_._ Proof.: (1) Let \(\{(E_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\mathcal{J})\}\subseteq\mathcal{M}_{\mathcal{J}}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{J})\). Define \(D_{k}^{\alpha}=\omega\times E_{k}^{\alpha}\) for all \(k\in\omega\) and \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{J})\). Then \(\{(D_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\mathcal{J})\}\subseteq\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\). Fix any \((B_{k})\in\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\). Define \(i_{k}=\min\{n\in\omega:(B_{k})_{(n)}\in\mathcal{J}\}\) and \(A_{k}=(B_{k})_{(i_{k})}\) for all \(k\in\omega\) (note that \(i_{k}\) is well defined as \(\{n\in\omega:(B_{k})_{(n)}\notin\mathcal{J}\}\in\mathcal{I}\)). For every \(k\in\omega\), we define \(C_{k}=\bigcup_{j\leq k}A_{j}\). Then \((C_{k})\in\mathcal{M}_{\mathcal{J}}\), so there is \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{J})\) such that \(Z=\{k\in\omega:E_{k}^{\alpha}\not\subseteq C_{k}\}\notin\operatorname{Fin}\). For each \(k\in Z\), we pick \(m_{k}\in\omega\) such that \(m_{k}\in E_{k}^{\alpha}\setminus C_{k}\). Then for each \(k\in Z\) we have \((i_{k},m_{k})\in D_{k}^{\alpha}\setminus B_{k}\) (as \((i_{k},m_{k})\in B_{k}\) would imply \(m_{k}\in(B_{k})_{(i_{k})}=A_{k}\subseteq C_{k}\)), so \(D_{k}^{\alpha}\not\subseteq B_{k}\) for infinitely many \(k\in\omega\). (2) This is an easy modification of the proof of item (1).

**Lemma 5.10**.: \(\mathfrak{b}_{\sigma}(\mathcal{I}\otimes\mathcal{J})\geq\min\{\mathfrak{b}_{\sigma}(\mathcal{I}),\mathfrak{b}_{\sigma}(\mathcal{J})\}\) _for all ideals \(\mathcal{I},\mathcal{J}\) on \(\omega\)._ Proof.: Suppose that \(\kappa<\min\{\mathfrak{b}_{\sigma}(\mathcal{I}),\mathfrak{b}_{\sigma}(\mathcal{J})\}\) and fix any \(\{(E_{k}^{\alpha}:k\in\omega):\alpha<\kappa\}\subseteq\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\). We want to define \((A_{k})\in\mathcal{M}_{\mathcal{I}\otimes\mathcal{J}}\) such that for each \(\alpha<\kappa\) we have \(E_{k}^{\alpha}\not\subseteq A_{k}\) only for finitely many \(k\in\omega\).
For each \(\alpha<\kappa\) and \(k,n\in\omega\) put: \[D_{k}^{\alpha}=\{m\in\omega:(E_{k}^{\alpha})_{(m)}\notin\mathcal{J}\},\] \[C_{k,n}^{\alpha}=\begin{cases}(E_{k}^{\alpha})_{(n)},&\text{if }n\in\omega\setminus D_{k}^{\alpha},\\ \emptyset,&\text{otherwise}.\end{cases}\] Then \(\{(D_{k}^{\alpha}):\alpha<\kappa\}\subseteq\mathcal{M}_{\mathcal{I}}\). Since \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{I})\), there is \((B_{k})\in\mathcal{M}_{\mathcal{I}}\) such that for each \(\alpha<\kappa\) we have \(\{k\in\omega:D_{k}^{\alpha}\not\subseteq B_{k}\}\in\operatorname{Fin}\). Moreover, for each \(n\in\omega\) the family \(\{(\bigcup_{i\leq k}C_{i,n}^{\alpha}:k\in\omega):\alpha<\kappa\}\subseteq\mathcal{M}_{\mathcal{J}}\), so there is \((B_{k}^{n})\in\mathcal{M}_{\mathcal{J}}\) such that \(\{k\in\omega:\bigcup_{i\leq k}C_{i,n}^{\alpha}\not\subseteq B_{k}^{n}\}\in\operatorname{Fin}\) for each \(\alpha<\kappa\) (as \(\kappa<\mathfrak{b}_{\sigma}(\mathcal{J})\)). For every \(\alpha<\kappa\) define \(f_{\alpha}\in\omega^{\omega}\) by: \[f_{\alpha}(n)=\max\left\{k\in\omega:\bigcup_{i\leq k}C_{i,n}^{\alpha}\not\subseteq B_{k}^{n}\right\}\] (here and below we use the convention \(\max\emptyset=0\); note that \(f_{\alpha}(n)\) is well defined since the above set is finite). By Proposition 5.3(3c), \(\kappa<\mathfrak{b}\), so there is \(g\in\omega^{\omega}\) such that \(f_{\alpha}+1\leq^{\star}g\) for all \(\alpha<\kappa\). Define: \[A_{k}=(B_{k}\times\omega)\cup\bigcup_{n\in\omega}\left(\{n\}\times\left(B_{k}^{n}\cup B_{g(n)}^{n}\right)\right).\] Fix \(\alpha<\kappa\). We want to find \(m\in\omega\) such that \(E_{k}^{\alpha}\subseteq A_{k}\) for each \(k>m\). Define \(n_{0}=\max\{n\in\omega:f_{\alpha}(n)+1>g(n)\}\) (\(n_{0}\) is well defined, with the same convention, as \(f_{\alpha}+1\leq^{\star}g\)) and: \[m=\max\left(\{n_{0}\}\cup\{f_{\alpha}(n):n\leq n_{0}\}\cup\{k\in\omega:D_{k}^{\alpha}\not\subseteq B_{k}\}\right)\] (\(m\) is well defined as \(\{k\in\omega:D_{k}^{\alpha}\not\subseteq B_{k}\}\in\operatorname{Fin}\)). Fix \(k>m\) and any \((x,y)\in E_{k}^{\alpha}\). We will show that \((x,y)\in A_{k}\). There are four possible cases: * if \(x\in D_{k}^{\alpha}\) then \(x\in B_{k}\) (as \(k>m\geq\max\{k^{\prime}\in\omega:D_{k^{\prime}}^{\alpha}\not\subseteq B_{k^{\prime}}\}\)), so \((x,y)\in B_{k}\times\omega\subseteq A_{k}\); * if \(x\notin D_{k}^{\alpha}\) and \(f_{\alpha}(x)<k\) then \((x,y)\in E_{k}^{\alpha}\) implies \(y\in(E_{k}^{\alpha})_{(x)}=C_{k,x}^{\alpha}\subseteq\bigcup_{i\leq k}C_{i,x}^{\alpha}\subseteq B_{k}^{x}\), so \((x,y)\in\{x\}\times B_{k}^{x}\subseteq A_{k}\); * if \(x\notin D_{k}^{\alpha}\) and \(x\leq n_{0}\) then \(k>m\geq\max\{f_{\alpha}(n):n\leq n_{0}\}\geq f_{\alpha}(x)\), so this case is covered by the previous one; * if \(x\notin D_{k}^{\alpha}\), \(f_{\alpha}(x)\geq k\) and \(x>n_{0}\) then \(k\leq f_{\alpha}(x)<g(x)\) (by \(x>n_{0}\)), so \((x,y)\in E_{k}^{\alpha}\) implies \(y\in(E_{k}^{\alpha})_{(x)}=C_{k,x}^{\alpha}\subseteq\bigcup_{i\leq g(x)}C_{i,x}^{\alpha}\subseteq B_{g(x)}^{x}\) (as \(g(x)>f_{\alpha}(x)\)), so \((x,y)\in\{x\}\times B_{g(x)}^{x}\subseteq A_{k}\). This finishes the entire proof.

**Theorem 5.11**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\)._ 1. \(\mathfrak{b}_{s}(\mathcal{I}\otimes\mathcal{J})=\mathfrak{b}_{s}(\mathcal{I})\)_._ 2. \(\mathfrak{b}_{\sigma}(\mathcal{I}\otimes\mathcal{J})=\min\{\mathfrak{b}_{\sigma}(\mathcal{I}),\mathfrak{b}_{\sigma}(\mathcal{J})\}\)_._ 3. \(\operatorname{add}_{\omega}(\mathcal{I}\otimes\mathcal{J})\leq\min\{\operatorname{add}_{\omega}(\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{J})\}\)_._ Proof.: (1) See [17, Theorem 5.13].
(2) and (3) follow from Lemmas 5.8, 5.9 and 5.10. The following example shows that, in general, there is no way to calculate \(\operatorname{add}_{\omega}(\mathcal{I}\otimes\mathcal{J})\) using only the values \(\operatorname{add}_{\omega}(\mathcal{I})\) and \(\operatorname{add}_{\omega}(\mathcal{J})\).

**Example 5.12**.: \(\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\operatorname{Fin})=\mathfrak{b}\), but \(\operatorname{add}_{\omega}(\operatorname{Fin})=\infty\). Proof.: The equality \(\operatorname{add}_{\omega}(\operatorname{Fin})=\infty\) follows from Proposition 5.3(2c) as \(\operatorname{Fin}\) is countably generated. Now, we show \(\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\operatorname{Fin})\leq\mathfrak{b}\). Let \(\{f_{\alpha}:\alpha<\mathfrak{b}\}\) be an \(\leq^{*}\)-unbounded set in \(\omega^{\omega}\). For each \(\alpha\), we define \(A_{\alpha}=\{(n,k)\in\omega^{2}:k\leq f_{\alpha}(n)\}\). Then \(\{A_{\alpha}:\alpha<\mathfrak{b}\}\subseteq\operatorname{Fin}\otimes\operatorname{Fin}\), and we claim that for every \((B_{n})\in(\operatorname{Fin}\otimes\operatorname{Fin})^{\omega}\) there is \(\alpha\) with \(A_{\alpha}\not\subseteq B_{n}\) for any \(n\in\omega\). Indeed, take any \((B_{n})\in(\operatorname{Fin}\otimes\operatorname{Fin})^{\omega}\) and suppose, for the sake of contradiction, that for every \(\alpha\) there is \(n\in\omega\) with \(A_{\alpha}\subseteq B_{n}\). Since \(B_{n}\in\operatorname{Fin}\otimes\operatorname{Fin}\), for every \(n\in\omega\) there are \(g_{n}\in\omega^{\omega}\) and \(k_{n}\in\omega\) with \(\max((B_{n})_{(k)})\leq g_{n}(k)\) for every \(k\geq k_{n}\). Let \(g\in\omega^{\omega}\) be such that \(g_{n}\leq^{*}g\) for every \(n\in\omega\) (we can find \(g\) because \(\mathfrak{b}\geq\omega_{1}\)). Consequently, \(f_{\alpha}\leq^{*}g\) for every \(\alpha<\mathfrak{b}\), a contradiction. Finally, we show that \(\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\operatorname{Fin})\geq\mathfrak{b}\). Let \(\mathcal{A}\subseteq\operatorname{Fin}\otimes\operatorname{Fin}\) with \(|\mathcal{A}|<\mathfrak{b}\). If we find \((B_{n})\in(\operatorname{Fin}\otimes\operatorname{Fin})^{\omega}\) such that for every \(A\in\mathcal{A}\) there is \(n\in\omega\) with \(A\subseteq B_{n}\), then \(\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\operatorname{Fin})\geq\mathfrak{b}\), and the proof will be finished. For every \(A\in\mathcal{A}\) there are \(f_{A}\in\omega^{\omega}\) and \(n_{A}\in\omega\) such that \(\max(A_{(n)})\leq f_{A}(n)\) for every \(n\geq n_{A}\). Since \(|\mathcal{A}|<\mathfrak{b}\), there is \(g\in\omega^{\omega}\) such that \(f_{A}\leq^{*}g\) for every \(A\in\mathcal{A}\). Hence, for each \(A\in\mathcal{A}\) there is \(k_{A}\in\omega\) such that \(f_{A}(n)\leq g(n)\) for all \(n>k_{A}\). For every \(n\in\omega\), we define \(B_{n}=(n\times\omega)\cup\{(i,k)\in\omega^{2}:k\leq g(i)\}\). Then \(B_{n}\in\operatorname{Fin}\otimes\operatorname{Fin}\) and \(A\subseteq B_{\max(n_{A},k_{A})}\) for every \(A\in\mathcal{A}\).

### Some examples and comparisons

Denote by \(\mathcal{N}\) the \(\sigma\)-ideal of Lebesgue null subsets of \(\mathbb{R}\) and recall the definition of _additivity_ of \(\mathcal{N}\): \[\operatorname{add}(\mathcal{N})=\min\left\{|\mathcal{A}|:\mathcal{A}\subseteq\mathcal{N}\ \wedge\ \bigcup\mathcal{A}\notin\mathcal{N}\right\}.\] It is known that \(\omega_{1}\leq\operatorname{add}(\mathcal{N})\leq\mathfrak{b}\leq\mathfrak{c}\) (see e.g. [1]). **Theorem 5.13**.: 1.
\(\mathfrak{b}_{\sigma}(\operatorname{Fin})=\mathfrak{b}_{s}(\operatorname{Fin})=\mathfrak{b}<\infty=\operatorname{add}_{\omega}(\operatorname{Fin})\)_._ 2. \(\mathfrak{b}_{\sigma}(\operatorname{Fin}\otimes\{\emptyset\})=\mathfrak{b}_{s}(\operatorname{Fin}\otimes\{\emptyset\})=\mathfrak{b}<\infty=\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\{\emptyset\})\). 3. \(\mathfrak{b}_{\sigma}(\mathcal{I}_{d})=\operatorname{add}_{\omega}(\mathcal{I}_{d})=\operatorname{add}(\mathcal{N})\leq\mathfrak{b}=\mathfrak{b}_{s}(\mathcal{I}_{d})\). 4. \(\mathfrak{b}_{\sigma}(\mathcal{I}_{1/n})=\operatorname{add}_{\omega}(\mathcal{I}_{1/n})=\operatorname{add}(\mathcal{N})\leq\mathfrak{b}=\mathfrak{b}_{s}(\mathcal{I}_{1/n})\). 5. \(\mathfrak{b}_{\sigma}(\operatorname{Fin}\otimes\operatorname{Fin})=\mathfrak{b}_{s}(\operatorname{Fin}\otimes\operatorname{Fin})=\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\operatorname{Fin})=\mathfrak{b}\). 6. \(\mathfrak{b}_{\sigma}(\{\emptyset\}\otimes\operatorname{Fin})=\mathfrak{b}_{s}(\{\emptyset\}\otimes\operatorname{Fin})=\operatorname{add}_{\omega}(\{\emptyset\}\otimes\operatorname{Fin})=\mathfrak{b}\). 7. \(\mathfrak{b}_{\sigma}(\mathcal{S})=\mathfrak{b}_{s}(\mathcal{S})=\operatorname{add}_{\omega}(\mathcal{S})=\omega_{1}\). Proof.: (1) It follows from Proposition 5.3(3a) and 5.3(1d) and Example 5.12. (2) The equality \(\operatorname{add}_{\omega}(\operatorname{Fin}\otimes\{\emptyset\})=\infty\) follows from Proposition 5.3(2c) as \(\operatorname{Fin}\otimes\{\emptyset\}\) is countably generated. The equality \(\mathfrak{b}_{s}(\operatorname{Fin}\otimes\{\emptyset\})=\mathfrak{b}\) follows from [17, Example 5.15] and \(\mathfrak{b}_{\sigma}(\operatorname{Fin}\otimes\{\emptyset\})=\mathfrak{b}\) follows from Theorem 5.2. (3) and (4) It is known that \(\operatorname{add}^{\star}(\mathcal{I}_{d})=\operatorname{add}^{\star}(\mathcal{I}_{1/n})=\operatorname{add}(\mathcal{N})\) (see e.g. [23]) and \(\mathfrak{b}_{s}(\mathcal{I}_{d})=\mathfrak{b}_{s}(\mathcal{I}_{1/n})=\mathfrak{b}\) (see [17, Corollary 6.4]). Thus, the remaining inequalities follow from Proposition 5.6 and Corollary 5.7. (5) It follows from item (1), Theorem 5.11(1)(2) and Example 5.12. (6) It is known that \(\operatorname{add}^{\star}(\{\emptyset\}\otimes\operatorname{Fin})=\mathfrak{b}\) (see e.g. [23]) and \(\mathfrak{b}_{s}(\{\emptyset\}\otimes\operatorname{Fin})=\mathfrak{b}\) (see [17, Theorem 5.13]). Thus, the remaining inequalities follow from Proposition 5.6 and Corollary 5.7. (7) It is known that \(\mathfrak{b}_{s}(\mathcal{S})=\omega_{1}\) (see [17, Theorem 7.4]). Then, using Proposition 5.3(3c) and Theorem 5.2, we obtain \(\mathfrak{b}_{\sigma}(\mathcal{S})=\omega_{1}\). Below we show that \(\operatorname{add}_{\omega}(\mathcal{S})=\omega_{1}\). Let \(Y\subseteq 2^{\omega}\) be any set of cardinality \(\omega_{1}\). We claim that \(\mathcal{A}=\{G_{y}:y\in Y\}\), where \(G_{y}=\{A\in\Omega:y\in A\}\), witnesses \(\operatorname{add}_{\omega}(\mathcal{S})\leq\omega_{1}\) (the reverse inequality holds by Proposition 5.3(2b)). Let \((B_{n})\in\mathcal{S}^{\omega}\). Then for each \(n\in\omega\) there are \(k_{n}\in\omega\) and \(x_{0}^{n},\ldots,x_{k_{n}}^{n}\in 2^{\omega}\) such that \(B_{n}\subseteq\bigcup_{i\leq k_{n}}G_{x_{i}^{n}}\). Since \(|Y|=\omega_{1}\), we can find \(y\in Y\setminus\{x_{i}^{n}:n\in\omega,i\leq k_{n}\}\). We will show that \(G_{y}\not\subseteq B_{n}\) for all \(n\). Let \(n\in\omega\).
There is \(k\in\omega\) such that \(2^{k}>2k_{n}\) and \(y\upharpoonright k\neq x_{i}^{n}\upharpoonright k\) for all \(i\leq k_{n}\). Since \(2^{k}>2k_{n}\), we can find pairwise distinct \(y_{j}\in 2^{k}\), for \(j<2^{k-1}-1\), such that \(y\upharpoonright k\neq y_{j}\) and \(x_{i}^{n}\upharpoonright k\neq y_{j}\) for all \(i\leq k_{n}\). Then \[X=\{x\in 2^{\omega}:x\upharpoonright k=y\upharpoonright k\text{ or }x\upharpoonright k=y_{j}\text{ for some }j<2^{k-1}-1\}\in\Omega\] and \(X\in G_{y}\setminus B_{n}\). By Theorem 5.2 we know that \(\mathfrak{b}_{\sigma}(\mathcal{I})=\min\{\mathfrak{b}_{s}(\mathcal{I}),\operatorname{add}_{\omega}(\mathcal{I})\}\) for every ideal \(\mathcal{I}\). The above result shows that \[\mathfrak{b}_{\sigma}(\mathcal{I})=\mathfrak{b}_{s}(\mathcal{I})<\operatorname{add}_{\omega}(\mathcal{I})\] for some P-ideal (item (1)) as well as for some non-P-ideal (item (2)). Since \(\operatorname{add}(\mathcal{N})<\mathfrak{b}\) is consistent (see e.g. [1]), we obtain that it is consistent that \[\mathfrak{b}_{\sigma}(\mathcal{I})=\operatorname{add}_{\omega}(\mathcal{I})<\mathfrak{b}_{s}(\mathcal{I})\] for some P-ideals (items (3) and (4)). The next example shows that the latter is consistent also for some non-P-ideal.

**Example 5.14**.: Consider the ideal \(\mathcal{I}=\operatorname{Fin}\otimes\mathcal{S}\), which is not a P-ideal. By Theorems 5.11 and 5.13 and Corollary 5.4 we have \(\mathfrak{b}_{\sigma}(\mathcal{I})=\mathfrak{b}_{\sigma}(\mathcal{S})=\omega_{1}\) and \(\operatorname{add}_{\omega}(\mathcal{I})=\omega_{1}\). On the other hand, \(\mathfrak{b}_{s}(\mathcal{I})=\mathfrak{b}_{s}(\operatorname{Fin})=\mathfrak{b}\) (by [17, Theorems 4.2 and 5.13]). It is known that \(\omega_{1}<\mathfrak{b}\) is consistent (see e.g. [1]). Thus, consistently \(\mathfrak{b}_{\sigma}(\mathcal{I})=\operatorname{add}_{\omega}(\mathcal{I})<\mathfrak{b}_{s}(\mathcal{I})\) also for non-P-ideals.

## 6. Spaces not distinguishing convergence can be of arbitrary cardinality

In this section, we show (see e.g. Corollary 6.5) that the properties "\(X\in(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u)", "\(X\in(\mathcal{I}\)-p,\(\mathcal{I}\)-qn)" and "\(X\in(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u)" are of a topological nature rather than a set-theoretic one.

**Lemma 6.1**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\) such that \(\mathcal{I}\subseteq\mathcal{J}\). Let \(X\) be a topological space such that for each \(f\in\mathcal{C}(X)\) there is a set \(Y\subseteq X\) such that \(|Y|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) and \(f\upharpoonright(X\setminus Y)\) is constant. Then_ \[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] Proof.: Let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\). For each \(n\in\omega\) there is a set \(Y_{n}\subseteq X\) such that \(|Y_{n}|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) and \(f_{n}\upharpoonright(X\setminus Y_{n})\) is constant. Let \(Y=\bigcup\{Y_{n}:n\in\omega\}\) and put \(Z=X\setminus Y\). Since \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\) and \(\mathcal{I}\subseteq\mathcal{J}\), we have \(f_{n}\xrightarrow{\mathcal{J}\text{-p}}0\). Since each \(f_{n}\upharpoonright Z\) is constant and \(f_{n}\upharpoonright Z\xrightarrow{\mathcal{J}\text{-p}}0\), we obtain \(f_{n}\upharpoonright Z\xrightarrow{\mathcal{J}\text{-u}}0\).
Since \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) has uncountable cofinality (by Proposition 5.3(3d)), we obtain \(|Y|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\). Thus, we can use Theorem 4.2 to obtain \(f_{n}\upharpoonright Y\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\). Since \(X=Y\cup Z\), we obtain \(f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\).

**Lemma 6.2**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\) such that \(\mathcal{I}\subseteq\mathcal{J}\). Let \(X\) be a topological space such that there exists a point \(p\in X\) with the property that \(|X\setminus N|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) for each neighborhood \(N\) of \(p\). Then_ \[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] Proof.: Let \((f_{n})\) be a sequence in \(\mathcal{C}(X)\) such that \(f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\). We will show that we can apply Lemma 6.1 to the space \(X\). Let \(f:X\to\mathbb{R}\) be continuous. Using continuity of \(f\) only at the point \(p\), for each \(n\in\omega\) we find a neighborhood \(N_{n}\) of \(p\) such that \(|f(p)-f(x)|<1/n\) for each \(x\in N_{n}\). Let \(Y=X\setminus\bigcap\{N_{n}:n\in\omega\}\). Since \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) has uncountable cofinality (by Proposition 5.3(3d)), we obtain \(|Y|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\). Then \(|f(p)-f(x)|<1/n\) for each \(x\in X\setminus Y\) and each \(n\in\omega\). Consequently, \(f\upharpoonright(X\setminus Y)\) is constant with the value \(f(p)\). The following theorem shows that one cannot strengthen Theorem 4.5 to all normal spaces.

**Theorem 6.3**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\) such that \(\mathcal{I}\subseteq\mathcal{J}\). There exists a Hausdorff compact (hence normal) space \(X\) of arbitrary cardinality such that_ \[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] Proof.: Obviously every finite space \(X\) has the required property. Let \(D\) be an infinite (of arbitrary cardinality) discrete space. Then \(D\) is a Hausdorff and locally compact space but not a compact space. Thus, the Alexandroff one-point compactification \(X=D\cup\{\infty\}\) of \(D\) is a Hausdorff compact space. In particular, \(X\) is a normal space (see e.g. [15, Theorem 3.1.9]). We will show that we can apply Lemma 6.2 to the space \(X\). Recall that open neighborhoods of the point \(\infty\) are of the form \(N=(D\setminus K)\cup\{\infty\}\) where \(K\) is a compact subset of \(D\) (see e.g. [15, Theorem 3.5.11]). Since every compact subset of \(D\) is finite, we have that \(X\setminus N\) is finite for every neighborhood \(N\) of the point \(\infty\). In particular, \(|X\setminus N|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) (by Proposition 5.3(3c)). In the above theorem, all but one point are isolated in the constructed spaces. Below, we show that there also are required spaces (at least of cardinality up to the cardinality of the continuum) in which only countably many points are isolated.

**Theorem 6.4**.: _Let \(\mathcal{I},\mathcal{J}\) be ideals on \(\omega\) such that \(\mathcal{I}\subseteq\mathcal{J}\).
There exists a Hausdorff separable, sequentially compact, compact (hence normal) space \(X\) of arbitrary cardinality up to \(\mathfrak{c}\) such that only countably many points of \(X\) are isolated and_ \[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{J}\text{-}\sigma\text{-u}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] Proof.: Obviously every finite space \(X\) has the required property. Let \(\mathcal{A}\) be an infinite (of arbitrary cardinality up to \(\mathfrak{c}\)) almost disjoint family of infinite subsets of \(\omega\) (see e.g. [25, Lemma 9.21]). Let \(\Psi(\mathcal{A})=\omega\cup\mathcal{A}\) and introduce a topology on \(\Psi(\mathcal{A})\) as follows: the points of \(\omega\) are isolated and a basic neighborhood of \(A\in\mathcal{A}\) has the form \(\{A\}\cup(A\setminus F)\) with \(F\) finite. Let \(\Phi(\mathcal{A})=\Psi(\mathcal{A})\cup\{\infty\}\) be the Alexandroff one-point compactification of \(\Psi(\mathcal{A})\). It is known (see e.g. [21]) that \(\Phi(\mathcal{A})\) is Hausdorff, compact, sequentially compact and separable. We will show that we can apply Lemma 6.2 to the space \(\Phi(\mathcal{A})\). Recall that open neighborhoods of the point \(\infty\) are of the form \(U=(\Psi(\mathcal{A})\setminus K)\cup\{\infty\}\) where \(K\) is a compact subset of \(\Psi(\mathcal{A})\) (see e.g. [15, Theorem 3.5.11]). Since for every compact subset \(K\) of \(\Psi(\mathcal{A})\), both sets \(K\cap\mathcal{A}\) and \((K\cap\omega)\setminus\bigcup\{A:A\in K\cap\mathcal{A}\}\) are finite (see e.g. [21]), we obtain that \(\Phi(\mathcal{A})\setminus N\) is countable for every neighborhood \(N\) of the point \(\infty\). In particular, \(|\Phi(\mathcal{A})\setminus N|<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) (by Proposition 5.3(3c)).

**Corollary 6.5**.: _For every ideal \(\mathcal{I}\) the classes (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u), (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u) contain spaces of arbitrary cardinality._ Proof.: Let \(\mathcal{I}\) be an ideal and \(X\) be a space from Theorem 6.3. Then \[f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\implies f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] On the other hand, by Proposition 3.1 we have \[f_{n}\xrightarrow{\mathcal{I}\text{-}\sigma\text{-u}}0\implies f_{n}\xrightarrow{\mathcal{I}\text{-p}}0\text{ for any sequence }(f_{n})\text{ in }\mathcal{C}(X).\] Thus, \(X\in\) (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u). Now, Corollary 3.6 implies that \(X\in\) (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and \(X\in\) (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u).

### Subsets of reals not distinguishing convergence

Obviously, countable subspaces of \(\mathbb{R}\) are in the classes (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u), (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u). Uncountable spaces constructed in the proof of Corollary 6.5 are not homeomorphic to any subspace of \(\mathbb{R}\) as those spaces contain uncountable discrete subspaces. Below we show that consistently there is an uncountable subspace of \(\mathbb{R}\) in the considered classes at least for the ideal \(\mathcal{I}=\{\emptyset\}\otimes\operatorname{Fin}\). Recall that an uncountable set \(S\subseteq\mathbb{R}\) is called a _Sierpinski set_ if \(S\cap N\) is countable for every Lebesgue null set \(N\subseteq\mathbb{R}\).
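For the reader's convenience, we sketch the classical construction of a Sierpinski set under the Continuum Hypothesis (see e.g. [28] for details). Under CH the family of all \(G_{\delta}\) Lebesgue null sets has cardinality \(\mathfrak{c}=\omega_{1}\), so we can enumerate it as \((N_{\alpha})_{\alpha<\omega_{1}}\) and recursively pick pairwise distinct points \[x_{\alpha}\in\mathbb{R}\setminus\left(\bigcup_{\beta\leq\alpha}N_{\beta}\cup\{x_{\beta}:\beta<\alpha\}\right),\] which is possible since a union of countably many null sets is null, hence not all of \(\mathbb{R}\). Then \(S=\{x_{\alpha}:\alpha<\omega_{1}\}\) is uncountable and it is a Sierpinski set: every null set is contained in some \(N_{\alpha}\), and \(S\cap N_{\alpha}\subseteq\{x_{\beta}:\beta<\alpha\}\) is countable.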
**Theorem 6.6**.: _Let \(\mathcal{I}=\{\emptyset\}\otimes\operatorname{Fin}\)._ 1. _Every Sierpinski set belongs to the classes (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u), (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u)._ 2. _Consistently (e.g. under the Continuum Hypothesis), there exists an uncountable subspace of \(\mathbb{R}\) which belongs to the classes (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u), (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u)._ Proof.: (1) Let \(S\subseteq\mathbb{R}\) be a Sierpinski set. Without loss of generality we can assume that \(S\subseteq[0,1]\). By Corollary 3.6, it is enough to show that \(S\in\) (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u). Let \((f_{n})\) be a sequence in \(\mathcal{C}(S)\) which is \(\mathcal{I}\)-pointwise convergent to zero. By [24, Theorem 5], there is a set \(A\in\mathcal{I}\) such that the subsequence \((f_{n}:n\in\omega\setminus A)\) is Fin-pointwise convergent to zero. There are a \(G_{\delta}\) set \(G\subseteq[0,1]\) and continuous functions \(g_{n}:G\to\mathbb{R}\) such that \(S\subseteq G\) and \(f_{n}=g_{n}\upharpoonright S\) for every \(n\in\omega\setminus A\) (see e.g. [26, Theorem 3.8]). It is not difficult to see that the set \(B=\{x\in G\,:\,(g_{n}(x):n\in\omega\setminus A)\) is Fin-convergent to zero\(\}\) is Borel and \(S\subseteq B\). Repeatedly applying Egorov's theorem (see e.g. [12, Proposition 3.1.4]) to the sequence \((g_{n}\upharpoonright B:n\in\omega\setminus A)\), we find a sequence of pairwise disjoint Borel sets \(\{C_{k}:k\in\omega\}\) such that \((g_{n}\upharpoonright C_{k}:n\in\omega\setminus A)\) is uniformly convergent to zero and \(N=B\setminus\bigcup\{C_{k}:k\in\omega\}\) is Lebesgue null. Then \(S\cap N\) is countable, so \((f_{n}\upharpoonright(S\cap N):n\in\omega\setminus A)\) is \(\sigma\)-uniformly convergent to zero. Consequently, \((f_{n}:n\in\omega\setminus A)\) is \(\sigma\)-uniformly convergent to zero. Since \(A\in\mathcal{I}\), we obtain that \((f_{n}:n\in\omega)\) is \(\mathcal{I}\)-\(\sigma\)-uniformly convergent to zero. (2) It follows from item (1) as under the Continuum Hypothesis there is a Sierpinski set (see e.g. [28, Theorem 2.2]).

**Question 6.7**.: Let \(\mathcal{I}\) be an arbitrary ideal. Do the classes (\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u), (\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn) and (\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u) contain an uncountable subspace of \(\mathbb{R}\)?

## 7. Bounding numbers of binary relations

If \(R\) is a binary relation, then by \(\operatorname{dom}(R)\) and \(\operatorname{ran}(R)\) we denote the domain and range of \(R\), respectively, i.e. \(\operatorname{dom}(R)=\{x:\exists y\,((x,y)\in R)\}\) and \(\operatorname{ran}(R)=\{y:\exists x\,((x,y)\in R)\}\). A set \(B\subseteq\operatorname{dom}(R)\) is called _\(R\)-unbounded_ if for every \(y\in\operatorname{ran}(R)\) there is \(x\in B\) with \((x,y)\notin R\). Following Vojtas [37], for a binary relation \(R\) we define \[\mathfrak{b}(R)=\min\{|B|:B\text{ is an }R\text{-unbounded set}\}.\] It is easy to see that the bounding number \(\mathfrak{b}\) is equal to the bounding number of the relation \(\leq^{*}\) on \(\omega^{\omega}\) i.e. \(\mathfrak{b}=\mathfrak{b}(\leq^{*})\). **Definition 7.1**.: 1.
The binary relation \(\succeq\) is defined by \(\operatorname{dom}(\succeq)=\operatorname{ran}(\succeq)=\omega^{\omega}\) and \[x\succeq y\iff\{m\in\omega:\exists k\in\omega\,(x(k)\leq m<y(k))\}\in\operatorname{Fin}.\] 2. The binary relation \(\leq^{\omega}\) is defined by \(\operatorname{dom}(\leq^{\omega})=2^{\omega}\), \(\operatorname{ran}(\leq^{\omega})=(2^{\omega})^{\omega}\) and \[x\leq^{\omega}(y_{k})\iff\exists k\in\omega\,\forall n\in\omega\,(x(n)\leq y_{k}(n)).\] 3. For an ideal \(\mathcal{I}\) on \(\omega\), the binary relation \(\leq_{\mathcal{I}}\) is defined by \(\operatorname{dom}(\leq_{\mathcal{I}})=\omega^{\omega}\), \(\operatorname{ran}(\leq_{\mathcal{I}})=\omega^{\omega}\) and \[x\leq_{\mathcal{I}}y\iff\{n\in\omega:x(n)>y(n)\}\in\mathcal{I}.\] In a similar manner we define \(<_{\mathcal{I}}\), \(\geq_{\mathcal{I}}\) and \(>_{\mathcal{I}}\).

**Proposition 7.2**.: _The relation \(\succeq\) is a preorder on \(\omega^{\omega}\) i.e. the relation \(\succeq\) is reflexive and transitive._ Proof.: Since reflexivity is obvious, we show only transitivity. If \(f\succeq g\) and \(g\succeq h\), then put: \(n=\max(\{m\in\omega:\exists k\in\omega\,(f(k)\leq m<g(k))\}\cup\{m\in\omega:\exists k\in\omega\,(g(k)\leq m<h(k))\})\). Fix any \(m>n\). Then for each \(k\in\omega\), if \(m<h(k)\) then also \(m<g(k)\), and consequently \(m<f(k)\). Hence, \(\{m\in\omega:\exists k\in\omega\,(f(k)\leq m<h(k))\}\subseteq\{i\in\omega:i\leq n\}\in\operatorname{Fin}\).

_Notation_.: For an ideal \(\mathcal{I}\), we define \[\mathcal{C}_{\mathcal{I}}=\{x\in 2^{\omega}:x^{-1}[\{1\}]\in\mathcal{I}\}=\{\mathbf{1}_{A}:A\in\mathcal{I}\},\] \[\mathcal{D}_{\mathcal{I}}=\{x\in\omega^{\omega}:x^{-1}[\{n\}]\in\mathcal{I}\text{ for every }n\in\omega\}.\]

**Theorem 7.3**.: _Let \(\mathcal{I},\mathcal{J},\mathcal{K}\) be ideals on \(\omega\)._ 1. \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})=\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\)_._ 2. \(\operatorname{add}_{\omega}(\mathcal{I},\mathcal{J})=\mathfrak{b}(\leq^{\omega}\cap(\mathcal{C}_{\mathcal{I}}\times(\mathcal{C}_{\mathcal{J}})^{\omega}))\)_._ 3. \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})=\mathfrak{b}(\geq_{\mathcal{I}}\cap(\mathcal{D}_{\mathcal{K}}\times\mathcal{D}_{\mathcal{J}}))\)_. If_ \(\mathcal{J}\cap\mathcal{K}\subseteq\mathcal{I}\)_, then_ \(\mathfrak{b}_{s}(\mathcal{I},\mathcal{J},\mathcal{K})=\mathfrak{b}(>_{\mathcal{I}}\cap(\mathcal{D}_{\mathcal{K}}\times\mathcal{D}_{\mathcal{J}}))\)_._ Proof.: (1) First, we show \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\leq\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Let \(\{f_{\alpha}:\alpha<\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\}\) be unbounded in \((\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Define \(E_{k}^{\alpha}=f_{\alpha}^{-1}[[0,k]]\) for each \(k\in\omega\) and \(\alpha<\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Then \(\mathcal{E}=\{(E_{k}^{\alpha}):\alpha<\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\}\subseteq\mathcal{M}_{\mathcal{I}}\) as each \(f_{\alpha}\) is in \(\mathcal{D}_{\mathcal{I}}\). We claim that \(\mathcal{E}\) witnesses \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\). Fix \((A_{k})\in\mathcal{M}_{\mathcal{J}}\) and define \(B_{k}=(A_{k}\cup\{k\})\setminus\bigcup_{i<k}B_{i}\).
Then \((B_{k})\) is a partition of \(\omega\) into sets belonging to \(\mathcal{J}\). Define a function \(g\in\omega^{\omega}\) by \[g(n)=k\ \Leftrightarrow\ n\in B_{k}.\] Then \(g\in\mathcal{D}_{\mathcal{J}}\), so there is \(\alpha<\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\) such that \(f_{\alpha}\not\succeq g\). Hence, there are infinitely many \(m\in\omega\) such that \(f_{\alpha}(n_{m})\leq m<g(n_{m})\) for some \(n_{m}\in\omega\). Observe that in this case we have \(n_{m}\in E_{m}^{\alpha}\) and \(n_{m}\notin A_{m}\) (as \(n_{m}\in A_{m}\) would imply \(n_{m}\in\bigcup_{i\leq m}B_{i}\) and consequently \(g(n_{m})\leq m\)). Second, we show \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\geq\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Let \(\{(E_{k}^{\alpha}):\alpha<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\}\subseteq\mathcal{M}_{\mathcal{I}}\) be a witness for \(\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\). For each \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) define \(f_{\alpha}\in\omega^{\omega}\) by: \[f_{\alpha}(n)=k\ \Leftrightarrow\ n\in B_{k}^{\alpha},\] where \(B_{k}^{\alpha}=(E_{k}^{\alpha}\cup\{k\})\setminus\bigcup_{i<k}B_{i}^{\alpha}\). Note that each \(f_{\alpha}\) is well defined and belongs to \(\mathcal{D}_{\mathcal{I}}\) as \((B_{k}^{\alpha})\) is a partition of \(\omega\) into sets belonging to \(\mathcal{I}\). We claim that \(\{f_{\alpha}:\alpha<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\}\) is unbounded in \((\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Fix any \(g\in\mathcal{D}_{\mathcal{J}}\) and define \(A_{k}=g^{-1}[[0,k]]\). Then \((A_{k})\in\mathcal{M}_{\mathcal{J}}\), so there is \(\alpha<\mathfrak{b}_{\sigma}(\mathcal{I},\mathcal{J})\) such that \(E_{k}^{\alpha}\not\subseteq A_{k}\) for infinitely many \(k\in\omega\). Note that if \(n\in E_{k}^{\alpha}\setminus A_{k}\) for some \(k\in\omega\), then \(f_{\alpha}(n)\leq k\) (as \(n\in E_{k}^{\alpha}\subseteq\bigcup_{i\leq k}B_{i}^{\alpha}\)) and \(k<g(n)\). Thus, there are infinitely many \(k\in\omega\) such that \(f_{\alpha}(n)\leq k<g(n)\) for some \(n\in\omega\), i.e. \(f_{\alpha}\not\succeq g\). (2) It easily follows from the fact that \(A\subseteq B\iff\mathbf{1}_{A}(n)\leq\mathbf{1}_{B}(n)\) for every \(n\in\omega\). (3) See [17, Theorem 3.10].

## 8. Subsets of reals distinguishing convergence

In this section, we show (Theorem 8.2) that, in a sense, the connection between the cardinals \(\mathfrak{b}_{\sigma}(\mathcal{I})\) (\(\mathfrak{b}_{s}(\mathcal{I})\), \(\operatorname{add}_{\omega}(\mathcal{I})\), resp.) and non(\(\mathcal{I}\)-p,\(\mathcal{I}\)-\(\sigma\)-u) (non(\(\mathcal{I}\)-p,\(\mathcal{I}\)-qn), non(\(\mathcal{I}\)-qn,\(\mathcal{I}\)-\(\sigma\)-u), resp.) is even deeper than that following from the proof of Corollary 4.6, as here we obtain subspaces of \(\mathbb{R}\) which realize the minimal cardinality of a space distinguishing the considered convergences.

**Lemma 8.1**.: _Let \(\mathcal{I},\mathcal{J},\mathcal{K}\) be ideals on \(\omega\)._ 1. _For each_ \(n\in\omega\)_, let_ \(f_{n}:\omega^{\omega}\to\mathbb{R}\) _be given by_ \(f_{n}(x)=\frac{1}{x(n)+1}\) _for all_ \(x\in\omega^{\omega}\)_. Then_ (a) \(\forall x\in\omega^{\omega}\left(f_{n}(x)\xrightarrow{\mathcal{I}}0\iff x\in\mathcal{D}_{\mathcal{I}}\right)\)_,_ (b)
\(\forall X\subseteq\mathcal{D}_{\mathcal{I}}\left(f_{n}\upharpoonright X\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\iff X\text{ is bounded in }(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{K}}))\right)\)_._ 2. _For each_ \(n\in\omega\)_, we define_ \(g_{n}:2^{\omega}\to\mathbb{R}\) _by_ \(g_{n}(x)=x(n)\) _for all_ \(x\in 2^{\omega}\)_. Then_ (a) \(\forall X\subseteq 2^{\omega}\left(g_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-qn}}0\iff X\subseteq\mathcal{C}_{\mathcal{J}}\right)\)_,_ (b) \(\forall X\subseteq\mathcal{C}_{\mathcal{J}}\left(g_{n}\upharpoonright X\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\iff X\text{ is bounded in }(\leq^{\omega}\cap(\mathcal{C}_{\mathcal{J}}\times(\mathcal{C}_{\mathcal{K}})^{\omega}))\right)\)_._ 3. _For each_ \(n\in\omega\)_, we define_ \(h_{n}:\omega^{\omega}\to\mathbb{R}\) _by_ \(h_{n}(x)=\frac{1}{x(n)+1}\) _for all_ \(x\in\omega^{\omega}\)_. Then_ (a) \(\forall x\in\omega^{\omega}\left(h_{n}(x)\xrightarrow{\mathcal{I}}0\iff x\in\mathcal{D}_{\mathcal{I}}\right)\)_,_ (b) \(\forall X\subseteq\mathcal{D}_{\mathcal{I}}\left(h_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-qn}}0\iff X\text{ is bounded in }(\geq_{\mathcal{J}}\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\right)\)_._ Proof.: (1a) If \(x\in\mathcal{D}_{\mathcal{I}}\) and \(\varepsilon>0\) then find \(k\in\omega\) such that \(\varepsilon\geq\frac{1}{k+1}\) and observe that: \[\left\{n\in\omega:f_{n}(x)\geq\varepsilon\right\}\subseteq\left\{n\in\omega:\frac{1}{x(n)+1}\geq\frac{1}{k+1}\right\}=x^{-1}[[0,k]]\in\mathcal{I}.\] On the other hand, if \(x\notin\mathcal{D}_{\mathcal{I}}\) then there is \(k\in\omega\) such that \(x^{-1}[\{k\}]\notin\mathcal{I}\). Then \(\left\{n\in\omega:f_{n}(x)\geq\frac{1}{k+1}\right\}=x^{-1}[[0,k]]\supseteq x^{-1}[\{k\}]\notin\mathcal{I}\). (1b) If \(X\subseteq\mathcal{D}_{\mathcal{I}}\) is bounded in \((\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{K}}))\) by some \(g\in\mathcal{D}_{\mathcal{K}}\) then for each \(x\in X\) denote \(m_{x}=\max\{m\in\omega:\exists k\in\omega\,(x(k)\leq m<g(k))\}\) (recall that this set is finite since \(x\succeq g\)). Define \(X_{m}=\{x\in X:m_{x}=m\}\) for each \(m\in\omega\). Then \(X=\bigcup_{m\in\omega}X_{m}\). We claim that \(f_{n}\upharpoonright X_{m}\xrightarrow{\mathcal{K}\text{-u}}0\) for each \(m\in\omega\). Fix \(m\in\omega\) and \(\varepsilon>0\). Find \(k\in\omega\) such that \(\varepsilon\geq\frac{1}{k+1}\). Since \(g\in\mathcal{D}_{\mathcal{K}}\), \(g^{-1}[[0,\max\{m+1,k\}]]\in\mathcal{K}\). Fix \(n\in\omega\setminus g^{-1}[[0,\max\{m+1,k\}]]\) and \(x\in X_{m}\). Then \(g(n)>m+1\), so \(x(n)\geq g(n)\) (otherwise we would have \(x(n)\leq g(n)-1<g(n)\), which contradicts the choice of \(m_{x}\) as \(g(n)-1>m=m_{x}\)). Thus, we have: \[\varepsilon\geq\frac{1}{k+1}>\frac{1}{g(n)+1}\geq\frac{1}{x(n)+1}=f_{n}(x)\] (as \(g(n)>k\)). Assume now that \(X\subseteq\mathcal{D}_{\mathcal{I}}\) is unbounded in \((\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{K}}))\). Suppose to the contrary that \(X=\bigcup_{m\in\omega}X_{m}\) for some sets \(X_{m}\) such that \(f_{n}\upharpoonright X_{m}\xrightarrow{\mathcal{K}\text{-u}}0\) for each \(m\in\omega\). Then for each \(m,k\in\omega\) we can find \(A_{k}^{m}\in\mathcal{K}\) such that \(f_{n}(x)<\frac{1}{k+1}\) for all \(n\in\omega\setminus A_{k}^{m}\) and \(x\in X_{m}\).
Define \(A_{k}=\bigcup_{i\leq k}A_{k}^{i}\) (observe that if \(n\in\omega\setminus A_{k}\) and \(x\in\bigcup_{i\leq k}X_{i}\) then \(f_{n}(x)<\frac{1}{k+1}\)). Define \(B_{k}=(A_{k}\cup\{k\})\setminus\bigcup_{i<k}B_{i}\), for all \(k\in\omega\), and \(g\in\mathcal{D}_{\mathcal{K}}\) by: \[g(n)=k\ \Leftrightarrow\ n\in B_{k}\] (\(g\) is well defined as \((B_{k})\in\mathcal{P}_{\mathcal{K}}\)). Since \(X\) is unbounded, there is \(x\in X\) such that \(x\not\succeq g\). Let \(m\in\omega\) be such that \(x\in X_{m}\). Then there is \(m^{\prime}>m\) such that \(x(n)\leq m^{\prime}<g(n)\) for some \(n\in\omega\). Since \(m^{\prime}<g(n)\), \(n\notin A_{m^{\prime}}\), so \(f_{n}(x)<\frac{1}{m^{\prime}+1}\) (by \(x\in X_{m}\subseteq\bigcup_{i\leq m^{\prime}}X_{i}\)). On the other hand, \(f_{n}(x)=\frac{1}{x(n)+1}\geq\frac{1}{m^{\prime}+1}\), since \(x(n)\leq m^{\prime}\). Thus, we obtained a contradiction, which proves that \(f_{n}\upharpoonright X\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\) does not hold. (2a, \(\implies\)) Let \(X\subseteq 2^{\omega}\) be such that \(g_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-qn}}0\). Then there exists a \(\mathcal{J}\)-convergent to zero sequence \((\varepsilon_{n})\) of positive reals such that \(\{n\in\omega:|g_{n}(x)|\geq\varepsilon_{n}\}\in\mathcal{J}\) for every \(x\in X\). Let \(A=\{n\in\omega:\varepsilon_{n}>1/2\}\). Then \(A\in\mathcal{J}\) and \(\{n\in\omega:x(n)=1\}=\{n\in\omega:|g_{n}(x)|>1/2\}\subseteq\{n\in\omega:|g_{n}(x)|\geq\varepsilon_{n}\}\cup A\in\mathcal{J}\) for every \(x\in X\). Thus, \(x\in\mathcal{C}_{\mathcal{J}}\) for every \(x\in X\), and consequently \(X\subseteq\mathcal{C}_{\mathcal{J}}\). (2a, \(\impliedby\)) Let \(X\subseteq\mathcal{C}_{\mathcal{J}}\). We claim that any sequence \((\varepsilon_{n})\) of positive reals which \(\mathcal{J}\)-converges to zero witnesses that \(g_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-qn}}0\). Indeed, take any sequence \((\varepsilon_{n})\) of positive reals which \(\mathcal{J}\)-converges to zero and fix \(x\in X\). Then \(A=\{n\in\omega:\varepsilon_{n}>1/2\}\in\mathcal{J}\) and \(\{n\in\omega:|g_{n}(x)|\geq\varepsilon_{n}\}=\{n\in\omega:x(n)\geq\varepsilon_{n}\}\subseteq\{n\in\omega:x(n)\geq 1/2\}\cup\{n\in\omega:\varepsilon_{n}>1/2\}=x^{-1}[\{1\}]\cup A\in\mathcal{J}\). (2b, \(\implies\)) Let \(X\subseteq\mathcal{C}_{\mathcal{J}}\) and assume that \(g_{n}\upharpoonright X\xrightarrow{\mathcal{K}\text{-}\sigma\text{-u}}0\). Then there exists a cover \(\{X_{k}:k\in\omega\}\) of \(X\) such that \(g_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{K}\text{-u}}0\) for every \(k\in\omega\). For every \(k\in\omega\), we define \(A_{k}=\{n\in\omega:\exists x\in X_{k}\left(|g_{n}(x)|>1/2\right)\}\) and \(y_{k}=\mathbf{1}_{A_{k}}\). Since \(A_{k}\in\mathcal{K}\) for every \(k\in\omega\) (apply \(g_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{K}\text{-u}}0\) with \(\varepsilon=1/2\)), we have \((y_{k})\in(\mathcal{C}_{\mathcal{K}})^{\omega}\). If we show that \(x\leq^{\omega}(y_{k})\) for every \(x\in X\), the proof will be finished. Take any \(x\in X\). Then there is \(k\in\omega\) with \(x\in X_{k}\). If \(n\in A_{k}\), then \(x(n)\leq 1=y_{k}(n)\), and if \(n\in\omega\setminus A_{k}\), then \(x(n)=g_{n}(x)\leq 1/2\), so \(x(n)=0\) and consequently \(x(n)=0\leq y_{k}(n)\). All in all, \(x\leq^{\omega}(y_{k})\).
(2b, \(\impliedby\)) Let \(X\subseteq\mathcal{C}_{\mathcal{J}}\) be bounded in \((\leq^{\omega}\cap(\mathcal{C}_{\mathcal{J}}\times(\mathcal{C}_{\mathcal{K}})^{\omega}))\). Then there is \((y_{k})\in(\mathcal{C}_{\mathcal{K}})^{\omega}\) such that for every \(x\in X\) there is \(k\in\omega\) with \(x(n)\leq y_{k}(n)\) for every \(n\in\omega\). For every \(k\in\omega\), we define \(X_{k}=\{x\in X:x(n)\leq y_{k}(n)\) for every \(n\in\omega\}\). Then \(\{X_{k}:k\in\omega\}\) is a cover of \(X\). If we show that \(g_{n}\upharpoonright X_{k}\xrightarrow{\mathcal{K}\text{-}u}0\) for every \(k\in\omega\), the proof will be finished. Take any \(k\in\omega\) and \(\varepsilon>0\). Then \(\{n\in\omega:\exists x\in X_{k}\left(|g_{n}(x)|\geq\varepsilon\right)\}=\{n\in\omega:\exists x\in X_{k}\left(x(n)\geq\varepsilon\right)\}\subseteq\{n\in\omega:y_{k}(n)\geq\varepsilon\}\subseteq y_{k}^{-1}[\{1\}]\in\mathcal{K}\).

(3a) This is item (1a), as \(f_{n}=h_{n}\) for all \(n\in\omega\).

(3b, \(\implies\)) Let \(X\subseteq\mathcal{D}_{\mathcal{I}}\) be such that \(h_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-}qn}0\). Then there exists a \(\mathcal{J}\)-convergent to zero sequence \((\varepsilon_{n})\) of positive reals such that \(\{n\in\omega:|h_{n}(x)|\geq\varepsilon_{n}\}\in\mathcal{J}\) for every \(x\in X\). We define \(y\in\omega^{\omega}\) by \(y(n)=\max\{0,[1/\varepsilon_{n}-1]\}\) for every \(n\in\omega\) (here \([r]\) means the integer part of \(r\)). We claim that \(y\in\mathcal{D}_{\mathcal{J}}\) and \(y\) is a \(\geq_{\mathcal{J}}\)-bound of the set \(X\). To see that \(y\in\mathcal{D}_{\mathcal{J}}\), we fix \(k\in\omega\) and notice \(\{n\in\omega:y(n)\leq k\}=\{n\in\omega:1/\varepsilon_{n}-1<k+1\}=\{n\in\omega:\varepsilon_{n}>1/(k+2)\}\in\mathcal{J}\) as \((\varepsilon_{n})\) is \(\mathcal{J}\)-convergent to zero. To see that \(y\) is a \(\geq_{\mathcal{J}}\)-bound of the set \(X\), we fix \(x\in X\) and notice \(\{n\in\omega:x(n)<y(n)\}\subseteq\{n\in\omega:x(n)<1/\varepsilon_{n}-1\}=\{n\in\omega:\frac{1}{x(n)+1}>\varepsilon_{n}\}=\{n\in\omega:|h_{n}(x)|>\varepsilon_{n}\}\in\mathcal{J}\) as the sequence \((\varepsilon_{n})\) witnesses \(h_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-}qn}0\).

(3b, \(\impliedby\)) Let \(X\subseteq\mathcal{D}_{\mathcal{I}}\) be bounded in \((\geq_{\mathcal{J}}\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{J}}))\). Then there exists \(y\in\mathcal{D}_{\mathcal{J}}\) such that \(\{n\in\omega:x(n)<y(n)\}\in\mathcal{J}\) for every \(x\in X\). We define a sequence \((\varepsilon_{n})\) by \(\varepsilon_{n}=1/(y(n)+1)\) for every \(n\in\omega\). We claim that \((\varepsilon_{n})\) is a witness for \(h_{n}\upharpoonright X\xrightarrow{\mathcal{J}\text{-}qn}0\). To see that \((\varepsilon_{n})\) is \(\mathcal{J}\)-convergent to zero, we fix \(\varepsilon>0\) and notice \(\{n\in\omega:\varepsilon_{n}\geq\varepsilon\}=\{n\in\omega:y(n)\leq 1/\varepsilon-1\}\in\mathcal{J}\) as \(y\in\mathcal{D}_{\mathcal{J}}\).
Now, we fix \(x\in X\) and notice that \(\{n\in\omega:|h_{n}(x)|\geq\varepsilon_{n}\}=\{n\in\omega:x(n)\leq 1/\varepsilon_{n}-1\}\subseteq\{n\in\omega:x(n)<y(n)\}\cup\{n\in\omega:x(n)\leq 1/\varepsilon_{n}-1\wedge x(n)\geq y(n)\}\subseteq\{n\in\omega:x(n)<y(n)\}\cup\{n\in\omega:y(n)\leq 1/\varepsilon_{n}-1\}\in\mathcal{J}\) as \(y\in\mathcal{D}_{\mathcal{J}}\).

**Theorem 8.2**.: _Let \(\mathcal{I}\) be an ideal on \(\omega\)._

1. _There is_ \(X\subseteq\omega^{\omega}\) _such that_ \(|X|=\mathrm{non}(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\) _and_ \(X\notin(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\)_._
2. _If_ \(\mathcal{I}\) _is not countably generated then there is_ \(X\subseteq 2^{\omega}\) _such that_ \(|X|=\mathrm{non}(\mathcal{I}\text{-}qn,\mathcal{I}\text{-}\sigma\text{-}u)\) _and_ \(X\notin(\mathcal{I}\text{-}qn,\mathcal{I}\text{-}\sigma\text{-}u)\)_._
3. _There is_ \(X\subseteq\omega^{\omega}\) _such that_ \(|X|=\mathrm{non}(\mathcal{I}\text{-}p,\mathcal{I}\text{-}qn)\) _and_ \(X\notin(\mathcal{I}\text{-}p,\mathcal{I}\text{-}qn)\)_._

Proof.: (1) Since \(\mathfrak{b}_{\sigma}(\mathcal{I})=\mathfrak{b}(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{I}}))<\infty\) (by Theorem 7.3(1) and Proposition 5.3(3c)), there is a set \(X\subseteq\mathcal{D}_{\mathcal{I}}\) which is unbounded in \(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{I}})\) and \(|X|=\mathfrak{b}_{\sigma}(\mathcal{I})\). By Corollary 4.6(1), \(|X|=\mathrm{non}(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\) and by Lemma 8.1(1) we obtain \(X\notin(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\).

(2) Since \(\mathrm{add}_{\omega}(\mathcal{I})=\mathfrak{b}(\leq^{\omega}\cap(\mathcal{C}_{\mathcal{I}}\times(\mathcal{C}_{\mathcal{I}})^{\omega}))<\infty\) (by Theorem 7.3(2) and Proposition 5.3(2c)), there is a set \(X\subseteq\mathcal{C}_{\mathcal{I}}\) which is unbounded in \((\leq^{\omega}\cap(\mathcal{C}_{\mathcal{I}}\times(\mathcal{C}_{\mathcal{I}})^{\omega}))\) and \(|X|=\mathrm{add}_{\omega}(\mathcal{I})\). By Corollary 4.6(3), \(|X|=\mathrm{non}(\mathcal{I}\text{-}qn,\mathcal{I}\text{-}\sigma\text{-}u)\) and by Lemma 8.1(2) we obtain \(X\notin(\mathcal{I}\text{-}qn,\mathcal{I}\text{-}\sigma\text{-}u)\).

(3) Since \(\mathfrak{b}_{s}(\mathcal{I})=\mathfrak{b}(\geq_{\mathcal{I}}\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{I}}))<\infty\) (by Theorem 7.3(3) and Proposition 5.3(1c)), there is a set \(X\subseteq\mathcal{D}_{\mathcal{I}}\) which is unbounded in \(\geq_{\mathcal{I}}\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{I}})\) and \(|X|=\mathfrak{b}_{s}(\mathcal{I})\). By Corollary 4.6(2), \(|X|=\mathrm{non}(\mathcal{I}\text{-}p,\mathcal{I}\text{-}qn)\) and by Lemma 8.1(3) we obtain \(X\notin(\mathcal{I}\text{-}p,\mathcal{I}\text{-}qn)\).

_Remark_.: Since \(\omega^{\omega}\) is homeomorphic with \(\mathbb{R}\setminus\mathbb{Q}\) and \(2^{\omega}\) is homeomorphic with the Cantor ternary subset of \(\mathbb{R}\) (see e.g. [26]), we can write "\(X\subseteq\mathbb{R}\)" instead of "\(X\subseteq\omega^{\omega}\)" and "\(X\subseteq 2^{\omega}\)" in Theorem 8.2.

_Remark_.: We know that \(\mathrm{non}(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)=\mathfrak{b}_{\sigma}(\mathcal{I})\leq\mathfrak{b}\) (by Corollary 4.6 and Proposition 5.3(3c)) and it is known that \(\mathfrak{b}<\mathfrak{c}\) is consistent (see e.g. [1]).
Consequently, a subset of the reals which distinguishes the considered convergences, as constructed in the proof of Theorem 8.2, can have cardinality strictly less than the cardinality of the continuum. On the other hand, the whole set \(\mathcal{D}_{\mathcal{I}}\) is a subset of the reals of cardinality continuum which distinguishes between \(\mathcal{I}\)-pointwise and \(\mathcal{I}\)-\(\sigma\)-uniform convergences (by Lemma 8.1(1), as \(\mathcal{D}_{\mathcal{I}}\) is unbounded in \(\succeq\cap(\mathcal{D}_{\mathcal{I}}\times\mathcal{D}_{\mathcal{I}})\)). Similar reasoning can be performed in the case of the classes (\(\mathcal{I}\text{-}qn,\mathcal{I}\text{-}\sigma\text{-}u\)) (provided that \(\mathcal{I}\) is not countably generated) and (\(\mathcal{I}\text{-}p,\mathcal{I}\text{-}qn\)).

## 9. Distinguishing between spaces not distinguishing convergences

If \(\mathfrak{b}_{\sigma}(\mathcal{J})<\mathfrak{b}_{\sigma}(\mathcal{I})\), then using Corollary 4.6(1) we see that there exists a space \(X\in(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\) such that \(X\notin(\mathcal{J}\text{-}p,\mathcal{J}\text{-}\sigma\text{-}u)\), and using Theorem 8.2(1), one can even find \(X\subseteq\mathbb{R}\) with the above property (and similarly for the other types of considered convergences). As an application of this method we have:

**Proposition 9.1**.:
1. _The following statements are consistent with ZFC._
 (a) _There is_ \(X\subseteq\mathbb{R}\) _such that_ \(X\in(\text{Fin-}p,\text{Fin-}\sigma\text{-}u)\) _and_ \(X\notin(\mathcal{I}_{d}\text{-}p,\mathcal{I}_{d}\text{-}\sigma\text{-}u)\)_._
 (b) _There is_ \(X\subseteq\mathbb{R}\) _such that_ \(X\in(\text{Fin-}p,\text{Fin-}qn)\) _and_ \(X\notin(\mathcal{S}\text{-}p,\mathcal{S}\text{-}qn)\)_._
2. _There is_ \(X\subseteq\mathbb{R}\) _such that_ \(X\in(\text{Fin-}qn,\text{Fin-}\sigma\text{-}u)\) _and_ \(X\notin(\mathcal{I}_{d}\text{-}qn,\mathcal{I}_{d}\text{-}\sigma\text{-}u)\)_._

Proof.: (1a) By Theorem 5.13, we have \(\mathfrak{b}_{\sigma}(\text{Fin})=\mathfrak{b}\) and \(\mathfrak{b}_{\sigma}(\mathcal{I}_{d})=\mathrm{add}(\mathcal{N})\), and it is known (see e.g. [1]) that \(\mathrm{add}(\mathcal{N})<\mathfrak{b}\) is consistent with ZFC.

(1b) By Theorem 5.13, we have \(\mathfrak{b}_{\sigma}(\text{Fin})=\mathfrak{b}\) and \(\mathfrak{b}_{\sigma}(\mathcal{S})=\omega_{1}\), and it is known (see e.g. [1]) that \(\omega_{1}<\mathfrak{b}\) is consistent with ZFC.

(2) By Theorem 5.13, we have \(\mathrm{add}_{\omega}(\text{Fin})=\infty>\mathrm{add}(\mathcal{N})=\mathrm{add}_{\omega}(\mathcal{I}_{d})\).

However, if \(\mathfrak{b}_{\sigma}(\mathcal{J})=\mathfrak{b}\) (so it has the largest possible value, as shown in Proposition 5.3(3c)), then the above described method is useless for distinguishing between spaces not distinguishing the considered convergences. In particular, this is the case for \(\mathcal{J}=\text{Fin}\) (by Proposition 5.3(3a)).

**Question 9.2**.: Do there exist a space \(X\) and an ideal \(\mathcal{I}\) such that \(X\in(\mathcal{I}\text{-}p,\mathcal{I}\text{-}\sigma\text{-}u)\) but \(X\notin(\text{Fin-}p,\text{Fin-}\sigma\text{-}u)\)?
2305.17795
Kohn-Sham computation and the bivariate view of density functional theory
Informed by an abstraction of Kohn-Sham computation called a KS machine, a functional analytic perspective is developed on mathematical aspects of density functional theory. A natural semantics for the machine is bivariate, consisting of a sequence of potentials paired with a ground density. Although the question of when the KS machine can converge to a solution (where the potential component matches a designated target) is not resolved here, a number of related ones are. For instance: Can the machine progress toward a solution? Barring presumably exceptional circumstances, yes in an energetic sense, but using a potential-mixing scheme rather than the usual density-mixing variety. Are energetic and function space distance notions of proximity-to-solution commensurate? Yes, to a significant degree. If the potential components of a sequence of ground pairs converge to a target potential, do the density components cluster on ground densities thereof? Yes, barring particle number drifting to infinity.
Paul E. Lammert
2023-05-28T18:50:36Z
http://arxiv.org/abs/2305.17795v2
# Kohn-Sham computation and the bivariate view of density functional theory

###### Abstract

Informed by an abstraction of Kohn-Sham computation called a KS machine, a functional analytic perspective is developed on mathematical aspects of density functional theory. A natural semantics for the machine is bivariate, consisting of a sequence of potentials paired with a ground density. Although the question of when the KS machine can converge to a solution (where the potential component matches a designated target) is not resolved here, a number of related ones are. For instance: Can the machine progress toward a solution? Barring presumably exceptional circumstances, yes in an energetic sense, but using a potential-mixing scheme rather than the usual density-mixing variety. Are energetic and function space distance notions of proximity-to-solution commensurate? Yes, to a significant degree. If the potential components of a sequence of ground pairs converge to a target potential, do the density components cluster on ground densities thereof? Yes, barring particle number drifting to infinity.

## I Introduction

Density functional theory (DFT) has developed into a ubiquitous tool in physics, chemistry, materials science, and beyond[1; 2; 3; 4; 5; 6], overwhelmingly in the specific form of Kohn-Sham[7] (KS) computation. The two distinguishing features of KS computation are (i) a splitting of the intrinsic energy functional into noninteracting, Hartree, and exchange-correlation contributions, and (ii) an idiosyncratic procedure of iterating to so-called self-consistency. Meanwhile, the functional analytic approach initiated by Lieb[8] has had little[9; 10; 11; 12; 13] to say about these things. Working in that functional analytic tradition, this paper aims both at filling that gap, and at developing a more physical interpretation of KS computation. Pursuit of these goals is synergetic, as the following sketch of major themes shows.

### Appetizer

What is the physical interpretation of intermediate stages of a KS computation, i.e., before self-consistency is achieved? The course of the computation can be cast (sec. 4.2) as a sequence of _ground pairs_ -- pairs consisting of a potential and a corresponding (interacting) ground density. This transparent framing is a promising basis for both theoretical analysis and algorithmic development. Thinking of potential and density as simultaneously variable, we have moved into a bivariate perspective. The action takes place in the _product_ of potential and density space. For an iterative procedure to find a ground density of a given (_target_) potential, it first needs to _make progress_ toward that goal from one iteration to the next. A scheme using just the usual Kohn-Sham computational resources is described (sec. 5.5) which makes progress in the sense of finding a density with lower energy in the target potential, barring exceptional circumstances (hitting a potential with a degenerate ground state or none at all, lack of exchange-correlation potential). The proposed scheme involves potential mixing, in contrast to the usual density-mixing ones. Such progress is far short of convergence. However, as already noted, we can cast all KS computations as sequences of ground pairs. Suppose, optimistically, that we have such a sequence \((v_{n},\rho_{n})\) for which \(v_{n}\) converges to the target potential. Does it follow that the densities \(\rho_{n}\) converge to a target ground density? The pleasant answer (Sec.
15) is that the sequence of densities \((\rho_{n})\) clusters at target ground densities (with respect to \(L^{1}\) metric), as long as it does not have nonzero particle number drifting to infinity. If the target ground density is unique, this means the sequence converges. Paraphrasing, look after the potential and the density will take care of itself. With the finding about a potential-mixing scheme, this supports the idea that current ways of thinking are too density-centric. It is known, since Lieb's seminal work[8], that the intrinsic energy functional \(F\) (a.k.a. Levy or Levy-Lieb functional, see section 2.3) is not continuous with respect to natural topologies. Should the practitioner be worried about that? A surprisingly encouraging answer emerges (Secs. 9 and 13). Restricted to the set of ground pairs, \(F\) is continuous (with respect to _product_ topology, that is). Living on this subset of ground pairs within the product space, KS computation is, in a sense, insulated from the discontinuity.

### Outline

Section 2 traces the reduction of quantum mechanics to density functional theory, characterizing DFT as an observable/state theory. This primitive physical framework must be the touchstone for all mathematical, in particular topological, refinement. Section 3 presents a version of unilateral functional differentiation for real-valued functions. To avoid explicitly introducing topological considerations at this stage, derivatives are defined relative to a dual pair. Section 4 analyzes the basic operations of KS computation and the ways they can be combined, and abstracts these resources in the form of a _Kohn-Sham machine_. The bivariate view and the _excess energy_ (\(\Delta\)) make their appearance here. For an exact functional, \(\Delta(v,\rho)\) is the lowest energy achievable by a quantum state of density \(\rho\) in potential \(v\), relative to the ground energy, thus quantifying the (for our purposes) mismatch of \(v\) and \(\rho\). Section 5 examines the possibility of guaranteed progress in the sense of reducing \(\Delta(v^{\odot},\rho)\), where \(v^{\odot}\) is the target potential, verifiable by the resources of a KS machine. A proposed scheme is argued to usually (_vide infra_ for the meaning of this) be able to progress. It is a potential-mixing scheme in contrast to the usual density-mixing schemes, which we are unable to meaningfully analyze. A general discussion of semimetrics and metrics in Section 7 prepares the way to bring topology into consideration. This is essential for discussing convergence and approximation in density-potential space. Observable-state duality is the basis for the development here. Sections 9 and 13 examine when a sequence of ground (or nearly ground) pairs converges, and when the limit is also a ground pair, and uncover an unanticipated "almost continuity" of the intrinsic energy \(F\). Sections 10 and 11 examine how proximity in density-potential space to a ground pair compares to small excess energy, showing that a pair of low excess energy is close to a ground pair, and that slightly perturbing the potential component of a ground pair increases the excess energy only a little. Finally, section 15 shows that convergence to \(v\) of the potential components of a sequence of ground pairs guarantees that the density components accumulate on ground densities of \(v\) as long as particle number does not drift to infinity. Throughout the paper, axioms are introduced one-by-one, verified on the standard interpretation, and their consequences traced.
This helps to make their specific significance clearer. Interludes serve to motivate steps of the development. The reader should not hesitate to skip proofs and demonstrations on a first reading.

### Notational notes

Parentheses are used for pairs, e.g., \((v,\rho)\), and also for sequences, e.g., \((x_{n})_{n\in\mathbb{N}}\). However, the index is usually obvious, so we can write \(((v_{n},\rho_{n}))\), for a sequence of potential-density pairs, or even just \((v_{n},\rho_{n})\) since sequences of this specific type are ubiquitous here. Limit inferior is denoted \(\liminf\) or \(\varliminf\); correspondingly, limit superior by \(\limsup\) or \(\varlimsup\). The abbreviation "iff" is used for "if and only if".

### From QM to DFT

This section sketches a view of DFT as a common sort of state/observable theory. Classical as well as quantum theories can be framed this way. However, we develop the theme only to an extent which can reasonably ground the subsequent development, obtaining DFT proper by a contraction of the full quantum mechanical description of an \(\mathcal{N}\)-particle system. A motto for this section is: density is not an observable. The viewpoint of this section on the relation between DFT and QM is analogous to that between thermodynamics and statistical mechanics. Thermodynamics is \(a\) theory, in the sense of giving an autonomous description of certain aspects of the world and having its own proper vocabulary and concepts. It is weak in the sense that it does not have the resources to compute equations of state or free energy functions. For that, one relies on statistical mechanics. However, thermodynamics proper imposes constraints, for instance, a free energy must be convex in certain of its variables, and concave in the others. One of the aims is to formulate DFT as a theory in an analogous way.

### General quantum mechanics

The setting for general quantum mechanics is a Hilbert space \(\mathcal{H}\). Observables (\(\mathsf{Obs}\)) are represented by bounded hermitian linear operators on \(\mathcal{H}\): \[\mathsf{Obs}=\mathcal{B}_{sa}(\mathcal{H}). \tag{1}\] States (\(\mathsf{Sts}\)) are represented by normalized, positive, trace-class operators: \[\mathsf{Sts}=\mathcal{B}_{+}^{1}(\mathcal{H})\subset\mathcal{B}^{1}(\mathcal{H}). \tag{2}\] Finally, there is a canonical pairing between observables and states given by \[\left\langle A\,,\,\Gamma\right\rangle=\operatorname{Tr}\Gamma A. \tag{3}\] This represents the expectation value of observable \(A\) in the state \(\Gamma\). The notation on the LHS may seem gratuitous; however, it represents a general idea of pairing observables and states which may have different operational formulas (RHS) in different contexts. This occurs in particular for DFT. Pointy brackets are also a common notation in functional analysis for dual pairings of vector spaces. \(\mathsf{Obs}\) is a vector space over \(\mathbb{R}\). \(\mathsf{Sts}\) is not a vector space, but it is identified as a subset of the vector space of trace-class operators. Moreover, \(\mathcal{B}^{1}(\mathcal{H})\) is the linear span of \(\mathsf{Sts}\), denoted \(\operatorname{span}\mathsf{Sts}\). The pairing naturally extends from a mapping \(\mathsf{Obs}\times\mathsf{Sts}\to\mathbb{R}\) to a mapping \(\mathsf{Obs}\times\operatorname{span}\mathsf{Sts}\to\mathbb{R}\), which is _bilinear_. In this sense, we can say that our observables are linear.
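To make the pairing (3) concrete, here is a minimal numerical sketch (an illustration supplied for this presentation, not from the paper), with finite-dimensional matrices standing in for the operators: a state is a positive, unit-trace matrix, an observable is a hermitian matrix, and the pairing is a trace, linear in each slot.

```python
import numpy as np

# Finite-dimensional stand-in for the pairing <A, Gamma> = Tr(Gamma A) of eq. (3).
rng = np.random.default_rng(0)
dim = 4

# A hermitian "observable".
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
A = (M + M.conj().T) / 2

def random_state():
    # Positive semidefinite with unit trace: a valid (mixed) state.
    B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    G = B @ B.conj().T
    return G / np.trace(G).real

def pair(A, G):
    # Expectation value of the observable A in the state G.
    return np.trace(G @ A).real

G1, G2 = random_state(), random_state()
s = 0.3
mix = s * G1 + (1 - s) * G2  # a convex mixture of states is again a state

# Linearity of the pairing in the state slot, as noted in the text:
assert abs(pair(A, mix) - (s * pair(A, G1) + (1 - s) * pair(A, G2))) < 1e-12
print(pair(A, G1), pair(A, G2), pair(A, mix))
```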
The more specific context that interests us is a system of \(\mathcal{N}\) identical particles in three-dimensional space \(\mathbb{R}^{3}\) or more generally, a three-dimensional riemannian manifold \({\cal M}\). The case of a three-torus shows that the more general situation is of genuine interest. For a single particle on \({\cal M}\), the relevant Hilbert space is \({\cal H}_{1}=L^{2}({\cal M})\) if it is spinless, \({\cal H}_{1}=L^{2}({\cal M})\otimes{\mathbb{C}}^{2}\) if spin-\(1/2\). For the \({\cal N}\)-particle system, \({\cal H}\) is the symmetrized (bosons) or antisymmetrized (fermions) \({\cal N}\)-fold tensor product of \({\cal H}_{1}\). Everything we do will be valid for both cases.

### Function spaces defined by integrability conditions

This subsection is not called on until (9), but placed here to minimize disruption of the flow. For measurable functions, we use the following standard notation for \(1\leq p<\infty\): \[\|f\|_{p}\,:=\,\,\left(\int|f|^{p}\,dx\right)^{1/p}\in[0,\infty]. \tag{4}\] Actually, to be accurate, \(f\) above should be considered an equivalence class of functions, any two of which differ only on a set of measure zero. However, it is common to gloss over the distinction, and we will follow that custom. In addition, we define \(\|f\|_{\infty}\) in \([0,\infty]\) to be the smallest number such that \(\{x\,:\,|f(x)|>\|f\|_{\infty}\}\) has measure zero. For a bounded continuous function, this is just the maximum, but more generally we again must accommodate measure-zero exceptional sets. Now, we define the _vector spaces_ \[{\cal L}^{p}({\cal M})=\{\mbox{measurable }f\;:\;\|f\|_{p}<\infty\}\,. \tag{5}\] \(L^{p}({\cal M})\) is \({\cal L}^{p}({\cal M})\), equipped with \(\|\cdot\|_{p}\) as a norm. At this stage of the development, we are using \(\|\cdot\|_{p}\) only as a selection mechanism. That is, there is no defined distance between members of \({\cal L}^{p}({\cal M})\). Topological considerations (norms, seminorms and so forth) are deferred to Section 7. We need spaces a little more complicated than the pure \({\cal L}^{p}({\cal M})\). The intersection \({\cal L}^{p}({\cal M})\cap{\cal L}^{q}({\cal M})\) of the two spaces \({\cal L}^{p}({\cal M})\) and \({\cal L}^{q}({\cal M})\) is again a vector space, as is the sum \({\cal L}^{p}({\cal M})+{\cal L}^{q}({\cal M})\), consisting of all sums of a function from each of the summand spaces.

### Dft

Our development of DFT begins with a contraction of the general QM observables, although we shall later expand to a set which is neither subset nor superset of \({\cal B}({\cal H})\).

#### ii.3.1 Contracting QM

We put subscripts on \({\sf Obs}\) and \({\sf Sts}\) to help avoid confusion, as there will be more than one set. Start with \[{\sf Obs}_{0}\;:=\;\,\{\mbox{Num}(U)\;:\;U\;\mbox{open in }{\cal M}\}\,. \tag{6}\] Here, \(\mbox{Num}(U)\) is the operator reporting the number of particles in the set \(U\). (Starting specifically with open sets is somewhat arbitrary.) Appealing to well-known facts about QM, the map \[U\mapsto\langle\mbox{Num}(U)\,,\,\Gamma\rangle\] extends to a Borel measure, which is, moreover, absolutely continuous with respect to Lebesgue measure (Fubini is helpful here). This implies that there is an integrable function \(\rho\colon{\cal M}\to{\mathbb{R}}\) such that for any Lebesgue-measurable set \(A\), \[\langle\mbox{Num}(A)\,,\,\Gamma\rangle=\int_{A}\rho(x)\,dx. \tag{7}\] The measure theory just deployed is no cause for anxiety.
The main point is that, while in some other contexts (e.g., classical statistical mechanical) we might want to consider Dirac measures, the underlying QM precludes that here. We give the QM-state-to-density mapping the name \({\sf dens}\). Then, \(\rho\) in the preceding integral is \({\sf dens}\,\Gamma\). Where there is a measure, there is an integral. Indeed, we can write the preceding formula as an integral of the indicator function \(1(A)\), equal to one on \(A\), zero elsewhere: \[\langle\mbox{Num}(A)\,,\,\Gamma\rangle=\int 1(A)({\sf dens}\,\Gamma)(x)\,dx.\] This extends to _some_ measurable functions as \[\langle\mbox{Num}(f)\,,\,\Gamma\rangle=\int f(x)({\sf dens}\,\Gamma)(x)\,dx \tag{8}\] in the usual way. However, which functions \(f\) are legitimate here? If we are dealing with the densities associated with general quantum mechanical states, the answer is _bounded_ ones, denoted \({\cal L}^{\infty}({\cal M})\), because otherwise we are not assured that the integral in (8) exists. Thus, we pass to a second stage with \[{\sf Obs}_{1}\,:=\,{\cal L}^{\infty}({\cal M}). \tag{9}\] Relative to this class of observables, the state simply _is_ a density, specifically, \({\sf dens}\,\Gamma\) in (8). The states are now \[{\sf Sts}_{1}\,:=\,{\cal L}^{1}({\cal M})_{+,{\cal N}}, \tag{10}\] non-negative integrable functions with total integral \({\cal N}\), and the observable-state pairing is \[\langle f\,,\,\rho\rangle=\int_{\cal M}f(x)\rho(x)\,dx, \tag{11}\] which satisfies \[\langle f\,,\,\rho\rangle=\langle\mbox{Num}(f)\,,\,\Gamma\rangle\,,\] whenever \(\Gamma\in{\sf dens}^{-1}\rho\). The pairings on the LHS and RHS are not literally the same thing, but are realizations of the same abstract idea in two different settings. For DFT, we want to modify this structure somewhat, by restricting to densities coming from states of finite kinetic energy, and considering nonlinear observables.

#### ii.1.2 Finite kinetic energy

In the general QM context, a single-particle state which is, for example, a nonzero constant over a cubical region and zero outside, is legitimate, but we want to insist on finite kinetic energy. This entails a state space smaller than \({\cal L}^{1}({\cal M})_{+,{\cal N}}\). We will denote this set of densities by \({\mathscr{D}}\); Lieb[8] calls it \({\mathscr{I}}_{\cal N}\). Correspondingly, the space of observables can be expanded. In fact, the integral \(\int v\rho\,dx\) is well-defined for every \(\rho\in{\mathscr{D}}\) not only when \(v\) is essentially bounded, but also when \(|v|^{3/2}\) is integrable. Thus, \[{\sf Obs}_{2}\,:=\,{\cal L}^{\infty}({\cal M})+{\cal L}^{3/2}({\cal M}). \tag{12}\]

#### ii.1.3 Nonlinear observables and intrinsic energy

Now suppose \(A\) is any bounded operator. We can certainly associate the set \(\{{\rm Tr}\,\Gamma A\,:\,{\sf dens}\,\Gamma=\rho\}\) with \(\rho\). In case \(A\) represents an energy, it is physically well-motivated to associate the infimum of this set to \(\rho\). That works even if \(A\) is only bounded below, like kinetic energy. Define, therefore, \[F_{0}(\rho):=\inf\left\{{\rm Tr}\,\Gamma\hat{T}\;:\;{\sf dens}\,\Gamma=\rho\right\}. \tag{13}\] This makes sense for all densities. For some \(\rho\), \(F_{0}(\rho)=+\infty\) by this definition. The set of densities for which \(F_{0}\) is less than \(+\infty\) is called the _effective domain_, denoted \({\rm dom}\,F_{0}\). It is exactly \({\mathscr{D}}\).
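As an aside that may help fix ideas (an illustration added here, not from the paper): for a single spinless particle, the constrained search in (13) can be carried out in closed form. The minimizing wavefunction of density \(\rho\) is \(\psi=\sqrt{\rho}\), so \(F_{0}(\rho)=\frac{1}{8}\int|\nabla\rho|^{2}/\rho\,dx\), the von Weizsacker functional (in units where the kinetic energy of \(\psi\) is \(\frac{1}{2}\int|\nabla\psi|^{2}\,dx\)). A quick one-dimensional numerical check against a Gaussian density:

```python
import numpy as np

# For one particle, the constrained search (13) gives the von Weizsacker form
#   F_0(rho) = (1/8) * integral |rho'(x)|^2 / rho(x) dx,
# which equals 1/(8 sigma^2) for a unit-normalized Gaussian of width sigma.
sigma = 1.3
x = np.linspace(-12.0, 12.0, 20001)
rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

drho = np.gradient(rho, x)               # finite-difference derivative
F0 = 0.125 * np.trapz(drho**2 / rho, x)  # numerical constrained-search value

print(f"numerical F_0 = {F0:.6f}")
print(f"analytic  F_0 = {1 / (8 * sigma**2):.6f}")
```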
Because \({\sf dens}\) is a linear map, it follows from (13) that \(F_{0}\) is convex: \[0\leq s\leq 1\Rightarrow\\ F_{0}((1-s)\rho+s\rho^{\prime})\leq(1-s)F_{0}(\rho)+sF_{0}(\rho^{\prime}). \tag{14}\] Because \(F_{0}\) is bounded below, it does not matter whether \(\rho\) and \(\rho^{\prime}\) are in \({\rm dom}\,F_{0}\) (\(\infty+a=\infty\) if \(-\infty<a\)). \(F_{0}\) is the noninteracting _intrinsic energy_ (functional). If \(\hat{W}\) is an interaction between the particles, then we can analogously define an interacting intrinsic energy with \(\hat{T}\) in (13) replaced by \(\hat{T}+\hat{W}\). Assuming \(\hat{W}\) is relatively bounded with respect to \(\hat{T}\), e.g., Coulomb interaction, \({\rm dom}\,F={\rm dom}\,F_{0}\).

#### ii.1.4 Constrained search and Legendre-Fenchel transform

Now, suppose \(v\) is some external one-body potential. The minimum energy of states with density \(\rho\) in presence of \(v\) is \(F(\rho)+\int v\rho\,dx\). Thus, if there is a ground state, the ground state energy is \[E(v)=\min\left\{F(\rho)+\int v\rho\,dx\;:\;\rho\in{\mathscr{D}}\right\} \tag{15}\] This embodies the central, appealing, idea of the constrained-search formulation[8; 14]. The minimum will not exist, and \(E(v)\) will not be defined, if there are no ground states. This is not an exotic possibility; it occurs for a constant potential on \({\mathbb{R}}^{3}\). That problem is easily fixed by replacing \(\min\) by \(\inf\). Even so, what is the domain of \(E\)? Consider that the integral \(\int v\rho\,dx\) is well-defined for a trap potential, i.e., one bounded below and satisfying \(v(x)\to\infty\) as \(|x|\to\infty\). For some densities, the integral has the value \(+\infty\), for others it is finite. We will rule these out, however, by requiring the integral to be finite for every \(\rho\). One might regard this as a valid physical requirement as it stands. Another reason, discussed below, is that potentials play the role of derivatives of \(F\). They should therefore be unambiguously integrable against differences of densities. Thus, we arrive at the conclusion that the sensible space of potentials is precisely \({\sf Obs}_{2}\) (12), times a unit of energy. For notational simplicity (and forgetting about the energy unit), we give this space a new name: \[{\mathscr{V}}\;:=\,{\cal L}^{\infty}({\cal M})+{\cal L}^{3/2}({\cal M}). \tag{16}\] In a more abstract context, we continue to use the symbol \({\mathscr{V}}\) to represent whatever vector space plays this role. The integral in (15) is thus our previously introduced pairing, giving us the final form

**Definition 2.1** (ground energy).: The _ground energy_ of \(v\in{\mathscr{V}}\) is \[E(v)=\inf\left\{F(\rho)+\langle v\,,\,\rho\rangle\;:\;\rho\in{\mathscr{D}}\right\}. \tag{17}\]

As the infimum of a collection of affine functionals, the ground energy is automatically _concave_, i.e., \(-E\) is convex. For a concave functional, the effective domain is defined oppositely from that for a convex functional (i.e., where it is greater than \(-\infty\)). Although not obvious on its face, \(E(v)>-\infty\) for _every_ \(v\in{\mathscr{V}}\). So, \({\rm dom}\,E={\mathscr{V}}\). Now, in case there is a ground state for \(v\), a basic idea of calculus suggests that the minimum of the RHS of (17) should have a differential characterization, e.g. \[{\sf D}F(\rho)+v\stackrel{?}{=}0\] for some kind of derivative \({\sf D}\). Therefore, we turn next to the problem of differentiation of functions in the context of a dual pair of vector spaces.
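Before doing so, a toy numerical check of the concavity just noted may be helpful (an illustration with a made-up convex \(F\) on two "sites", not the paper's functional): the ground energy, as an infimum of affine functions of \(v\), is concave no matter which convex \(F\) is used.

```python
import numpy as np

# Toy model of E(v) = inf_rho { F(rho) + <v, rho> } with one particle on two
# "sites", so densities are rho = (p, 1-p) with 0 <= p <= 1.
def F(p):
    # A strictly convex stand-in for the intrinsic energy (an assumption).
    q = 1.0 - p
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(p > 0, p * np.log(p), 0.0) + np.where(q > 0, q * np.log(q), 0.0)

def E(v):
    # Brute-force constrained search over the density simplex.
    p = np.linspace(0.0, 1.0, 10001)
    rho = np.stack([p, 1.0 - p])
    return np.min(F(p) + v @ rho)

v1, v2, t = np.array([0.0, 1.0]), np.array([2.0, -1.0]), 0.5
vm = t * v1 + (1 - t) * v2

# Concavity: E(t*v1 + (1-t)*v2) >= t*E(v1) + (1-t)*E(v2).
assert E(vm) >= t * E(v1) + (1 - t) * E(v2) - 1e-12
print(E(v1), E(v2), E(vm))
```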
Although \({\mathscr{D}}\) is not a vector space, if we are interested not only in densities, but also in arbitrary multiples of differences of densities, the vector space generated by \({\mathscr{D}}\), denoted \({\rm Vec}\,{\mathscr{D}}\), comes in naturally. Then, we also need to extend the intrinsic energy to \({\rm Vec}\,{\mathscr{D}}\). Some elements of \({\rm Vec}\,{\mathscr{D}}\) which are not in \({\mathscr{D}}\) are densities for which \(F\) and \(F_{0}\) are already defined as \(+\infty\). Now we give that value to all the others, as well. This maintains convexity of those functionals.

## III A unilateral derivative

Starting in this section, we begin to work in a relatively abstract way. For instance, instead of the specific vector spaces \(\operatorname{Vec}\mathscr{D}\) and \(\mathscr{V}\) in (16), we say simply that we have a pair of vector spaces \(\mathscr{V}\) and \(\mathscr{X}\) with a nondegenerate pairing \(\langle\cdot\,,\,\cdot\rangle\) (See Def. 3.1).

### Motivation

We pursue here the idea that the essence of _derivative_ is some sort of (local) linear approximation, what kind of approximation being open to discussion. For example, the derivative of a smooth function \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) is packaged as a linear functional through its gradient. The graph of the affine function \(x^{\prime}\mapsto f(x)+\nabla f(x)\cdot(x^{\prime}-x)\) is tangent to the graph of \(f\) at \(x\), and in that sense constitutes the best affine approximation to \(f\) near \(x\). The dot product is a pairing of \(\mathbb{R}^{2}\) with itself, \(\langle x\,,\,y\rangle=x\cdot y\), so we might also write this as \(f(x)+\langle\nabla f(x)\,,\,x^{\prime}-x\rangle\). Now, suppose that \(f\) is not smooth, for example, \(f(x)=|x|\), at the origin. If our interest is in minimization, a _one-sided_ kind of approximation can be perfectly suitable. If \(|n|\leq 1\), the graph of the linear functional \(x\mapsto\langle x\,,\,n\rangle\) touches that of \(f\) at \(x=0\) and is nowhere above it. In that weak sense, it is a kind of linear approximation. Because there is not just a single \(n\) which works here, we see that when we relax our notion of approximation in this way, we can end up with derivatives which are _set-valued_.

### Lower and upper semiderivatives

We will define derivatives for dual systems.

**Definition 3.1** (dual system).: A _dual system_ consists of a pair of vector spaces \(\mathscr{V}\) and \(\mathscr{X}\) and a map \(\langle\cdot\,,\,\cdot\rangle\colon\mathscr{V}\times\mathscr{X}\to\mathbb{R}\) which is linear in each variable with the other held fixed, and such that for every \(x\in\mathscr{X}\), there is \(v\in\mathscr{V}\) (and for every \(v\in\mathscr{V}\) there is \(x\in\mathscr{X}\)) such that \(\langle v\,,\,x\rangle\neq 0\). For a compact notation, we denote this dual system by \(\langle\mathscr{V}\,,\,\mathscr{X}\rangle\).

Nondegeneracy is the new concept in this definition. Essentially it means neither space involved is "too small", since it says that \(x\in\mathscr{X}\) can be unambiguously identified by the values of \(\langle v\,,\,x\rangle\) as \(v\) ranges over \(\mathscr{V}\), and vice versa. Now we can define a unilateral notion of derivative relative to a dual system. Putting a bar above or below '\(\mathbb{R}\)' indicates augmentation by \(+\infty\) or \(-\infty\), for instance, \(\overline{\underline{\mathbb{R}}}=\mathbb{R}\cup\{-\infty,+\infty\}\), and \(\varliminf\) denotes limit inferior (\(\liminf\)).
**Definition 3.2**.: The _lower semiderivative_ of \(f\colon\mathscr{X}\to\overline{\underline{\mathbb{R}}}\) at \(x\) [with respect to the pairing \(\langle\cdot\,,\,\cdot\rangle\colon\mathscr{V}\times\mathscr{X}\to\mathbb{R}\)] is the _set_ of \(v\in\mathscr{V}\) such that \[\varliminf_{s\downarrow 0}\frac{f(x+su)-f(x)}{s}\geq\langle v\,,\,u\rangle\,,\text{ for all }u\in\mathscr{X}. \tag{18}\] The lower semiderivative is denoted \(\underline{\mathsf{D}}f(x)\). The _upper semiderivative_, \(\overline{\mathsf{D}}f(x)\), is defined by an analogous equation with \(\varliminf\) replaced by \(\varlimsup\), and \(\geq\) by \(\leq\). Similarly, exchanging the roles of \(\mathscr{X}\) and \(\mathscr{V}\), we obtain semiderivatives of functions on \(\mathscr{V}\) with respect to the same pairing.

### Discussion

1. If \(v\) is in both \(\underline{\mathsf{D}}f(x)\) and \(\overline{\mathsf{D}}f(x)\), then \(\lim_{s\to 0}s^{-1}[f(x+su)-f(x)]=\langle v\,,\,u\rangle\).

2. For a _convex_ function \(f\colon\mathscr{X}\to\overline{\mathbb{R}}\), \(\underline{\mathsf{D}}f\) has a much simpler characterization, and it does not involve limits at all. \(v\in\underline{\mathsf{D}}f(x)\) precisely when \[f(y)\geq f(x)+\langle v\,,\,y-x\rangle\,.\] (19) For application to DFT, this would suffice for \(F\), and its concave-function counterpart for \(E\). However, we cannot assume that \(\Phi\) is either convex or concave.

## 4 Kohn-Sham machines

The top level of a Kohn-Sham computation involves densities and potentials alone, with no explicit reference to quantum mechanics. This Section abstracts that top level perspective as a _Kohn-Sham machine_, offering a limited menu of operations on potentials and densities, provided by modules which are regarded as black boxes. The following Section then analyzes the question, given an external potential \(v^{\odot}\), how can those operations be harnessed to make progress toward finding an interacting ground density for \(v^{\odot}\)? This will be given an abstract phrasing, and we will have to find an appropriate sense of _progress_ to deal with it.

### Postulates

We abstract the situation described in the preceding section in the form of the following assumptions.

A1. \(\mathscr{D}\subset\operatorname{Vec}\mathscr{D}\) is convex.

A2. \(F\colon\mathscr{D}\to\mathbb{R}\) is a convex function, bounded below.

A3. \(\langle\cdot\,,\,\cdot\rangle\colon\mathscr{V}\times\operatorname{Vec}\mathscr{D}\to\mathbb{R}\) is a nondegenerate bilinear pairing of a second real vector space \(\mathscr{V}\) with \(\operatorname{Vec}\mathscr{D}\).

These are just the beginning. Additional axioms refining the set-up will accumulate in later sections as their desirability emerges. In all cases, they will reflect properties of exact functionals in the standard interpretation. Postulates A1, A2, and A3 are _descriptive_. In this section and the next, we assume that there is a second function \(F_{0}\) satisfying A2. Moreover, we will have _computational/procedural_ assumptions on \(F_{0}\) and \(\Phi\::=\:F-F_{0}\) as specified in section 4.4. Those will have no direct relevance for the development following section 4.4. All three functions, \(F_{0}\), \(F\), and \(\Phi\) are extended to all of \(\mathrm{Vec}\,\mathscr{D}\) by setting them equal to \(+\infty\) off of \(\mathscr{D}\). This maintains convexity of \(F\) and \(F_{0}\), creates no barriers to lower semi-differentiability, and maintains the equality \[F=F_{0}+\Phi.
\tag{20}\]

### Standard interpretation

The intended interpretation is that \(\mathscr{D}\) is the set of densities of finite intrinsic energy, i.e., \(\mathscr{D}=\mathrm{dom}\,F_{0}=\mathrm{dom}\,F\), while \(\,\mathscr{V}=\mathcal{L}^{\infty}(\mathcal{M})+\mathcal{L}^{3/2}(\mathcal{M})\). \(F_{0}\) is the noninteracting, and \(F\) the interacting intrinsic energy. \(\Phi\) is a _model_ Hartree-exchange-correlation energy functional. It is not assumed to be exact. For the time being, it is not assumed to have any special properties except to make \(F\), given by (20), convex. Later, additional properties that an exact functional ought to have will be required of \(F\). Ground energy \(E\) is defined from \(F\) by (17). In these abstract terms, the Standard Problem of finding a ground density for potential \(v^{\odot}\) can be phrased as: find \(\rho\in\overline{\mathsf{D}}E(v^{\odot})\). We will find a different formulation more useful and enlightening.

### Excess energy

Intrinsic energy \(F\) is a function of density, ground energy \(E\) of potential. Underlying our approach is the idea that it is fruitful to think in terms of both density and potential simultaneously. This means that we mostly think of things as functions on the product space \(\mathscr{V}\times\mathscr{D}\), and package \(F\) and \(E\) together into the _excess energy_ \[\Delta(v,\rho)\;:=\,F(\rho)+\langle v\,,\,\rho\rangle-E(v)\geq 0. \tag{21}\] \(\Delta(v,\rho)\) answers the question, "how close to the ground energy \(E(v)\) can one get with states of density \(\rho\)?" and is convex in each variable, holding the other fixed. The zero set \(\mathscr{Z}=\{\Delta=0\}\) contains all possible solutions of all possible ground density problems. If \((v,\rho)\) is in \(\mathscr{Z}\), we call it a _ground pair_. Noninteracting versions, \(E_{0}\), \(\Delta_{0}\), and \(\mathscr{Z}_{0}\) are defined from \(F_{0}\) in the same way as \(E\), \(\Delta\) and \(\mathscr{Z}\) from \(F\). In distinguishing between the two, we prefer the more neutral designations _reference/perturbed_ to _noninteracting/interacting_. Fig. 1 depicts, in a cartoon way, the zero sets in the product space \(\mathscr{V}\times\mathscr{D}\).

### Primitive operations and feasibility

Some of the functions of the theory listed above, e.g., \(F\), are not provided in modular form by ordinary DFT software. This is the reason why it is interesting to ask about strategies to solve the basic problem. The menu of primitive operations consists of: solution of the noninteracting problem, computation of HXC energy and potential, and calculation of the integral \(\int v(x)\rho(x)\,dx\). In our more neutral language, they are given in Table 1. Computations obtained by combining the primitive operations are called _feasible_. We will be interested in demonstrating feasibility. Demonstrating infeasibility would require a much more formalized setup. Also, nothing we say about feasibility has any bearing on the possibilities of computation by wholly different means such as quantum Monte Carlo.

### Generating ground pairs

From the primitive operations (Table 1), we will now synthesize some new feasible operations which allow us to generate reference and perturbed ground pairs, and which may be useful in solving the Standard Problem.
\begin{table} \begin{tabular}{l c} operation & standard interpretation \\ \hline \(E_{0}\) & noninteracting ground energy \\ \([\overline{\mathsf{D}}E_{0}]\) & (one) noninteracting ground density \\ \(\Phi\) & HXC energy \\ \([\underline{\mathsf{D}}\Phi]\) & (one) HXC potential \\ \(\langle\cdot\,,\,\cdot\rangle\) & potential-density pairing \\ \end{tabular} \end{table} Table 1: Primitive feasible operations of the Kohn-Sham machine. Lower and upper semiderivatives are set-valued. The notation \([\cdot]\) means that we can find out if the set is nonempty, and get (at least) one member if so.

They are listed in Table 2 and some are illustrated in a schematic way in Fig. 1. Let us consider these operations. \(Z_{0}\) is a trivial rephrasing of \(\left[\overline{\mathsf{D}}E_{0}\right]\); it merely pairs a potential with a corresponding reference system ground density. The point is that it is not a map into densities, but into the subset \(\mathscr{Z}_{0}\) of the product space \(\mathscr{V}\times\mathscr{D}\). KS puts \(\underline{\mathsf{D}}\Phi\) to work, and is more interesting: By definition, \(F=F_{0}+\Phi\), so \[(v,\rho)\in\mathscr{Z}_{0}\Rightarrow-v\in\underline{\mathsf{D}}F_{0}(\rho)\Rightarrow-v+\underline{\mathsf{D}}\Phi(\rho)\in\underline{\mathsf{D}}F(\rho)\] \[\Rightarrow(v-\underline{\mathsf{D}}\Phi(\rho),\rho)\in\mathscr{Z}\] Summing up, given \(v\in\mathscr{V}\), \(Z_{0}v\) is a reference ground pair, and \(\widehat{Z}_{0}v=\mathsf{KS}\circ\!Z_{0}\,v=\mathsf{KS}(Z_{0}\,v)\) is a perturbed ground pair, and \(v\mapsto\widehat{v}\) projects out the \(\mathscr{V}\) component. To see the point of this last, remember that our Standard Problem is to find a point on \(\mathscr{Z}\) with specified first component \(v^{\odot}\), so we are naturally interested in how close \(\widehat{v}\) is to \(v^{\odot}\). The map \(\mathsf{R}_{v^{\odot}}\) supplies that information. Usually, we will suppress \(v^{\odot}\) for notational simplicity. These functions are all _partial_, which is why the Table contains '\(\rightharpoonup\)' rather than '\(\to\)' in the type column. Certainly, some potentials have no ground density. For example, the uniformly zero potential in \(\mathbb{R}^{3}\). Given that partiality, there is no benefit to assuming that \([\underline{\mathsf{D}}\Phi]\) is total, even if all standard XC functionals are so. Computationally, our assumption is that an exception, rather than garbage, is returned in case there is no value. The perspective revealed here is different from the usual one. Ground pairs are the only points in \(\mathscr{V}\times\mathscr{D}\) which are usefully accessible. Reference ground pairs can be feasibly selected by their first component, but perturbed ground pairs only in a distorted kind of way. The common talk of "self-consistency" seems inappropriate from this perspective. Points on \(\mathscr{Z}\) generated by using the basic operations are certainly not _inconsistent_ in any sense. Their only possible defect is not being one that we want. The question then, is how to use the expanded stock of basic operations in Table 2 to find a suitable pair, that is, one solving the Standard Problem. The next section takes up the question of how to make progress toward that goal. First, we discuss the last row of the table.

### HK maps

The table entries discussed to this point use only \([\overline{\mathsf{D}}E_{0}]\) and \([\underline{\mathsf{D}}\Phi]\) from the primitives (Table 1).
The other three, \(\Phi\), \(E_{0}\) and \(\langle\cdot\,,\,\cdot\rangle\) are needed for the last two, \(F^{\textsc{HK}}\) and \(E^{\textsc{HK}}\). If \((v,\rho)\in\mathscr{Z}_{0}\), then \(0=\Delta_{0}(v,\rho)=F_{0}(\rho)+\langle v\,,\,\rho\rangle-E_{0}(v)\), and therefore \[F(\rho)=F^{\textsc{HK}}(v,\rho)=E_{0}(v)-\langle v\,,\,\rho\rangle+\Phi(\rho).\] The superscript \(\textsc{HK}\), standing for 'Hohenberg-Kohn', is there because this is much closer to the original[1] intrinsic energy ("universal functional") definition of Hohenberg and Kohn than the later constrained-search formulation[14; 15]. The point is that auxiliary data consisting of a potential partner in the reference system is needed to obtain \(F(\rho)\). Since \((\widehat{v},\rho)\) is a perturbed ground pair, once we have \(F(\rho)\), \(E(\widehat{v})\) follows as \(E(\widehat{v})=F(\rho)+\langle\widehat{v}\,,\,\rho\rangle\).

### Reduced KS-machine

Generally, the term _Kohn-Sham machine_ refers to any collection of feasible operations, such as those in Table 2. It is easier to focus on the essentials, though, if we consider a _reduced KS-machine_ offering the single operation \[\text{input: }v\;\longmapsto\;\text{output: }(\widehat{v},\rho)\in\mathscr{Z},\ E(\widehat{v}),\ F(\rho). \tag{22}\] This is straightforwardly constructed from those in Table 2. \(E(\widehat{v})\) and \(F(\rho)\) come from the HK maps. One use of the reduced KS-machine gives us a perturbed ground pair in \(\mathscr{Z}\), and its essential characteristics. The only problem is that it is unclear how to control _either_ its potential or its density component.

## 5 Verifiable progress

Essentially, the only feasible access to \(\mathscr{Z}\) is via \(\widehat{Z}_{0}\). The picture of the previous section suggests the following approach to the Standard Problem. Pick a potential \(v\) (somehow), obtain \(\widehat{Z}_{0}v\), and compare its first component to \(v^{\odot}\); if the difference \(\mathsf{R}\,v\) is not satisfactorily small, choose a new input to \(\widehat{Z}_{0}\) based on the experience. Repeat until satisfied. This section is concerned with how to make that choice of next input so that some form of progress is assured.

### Progress

Suppose we generate a sequence of points \((v_{n},\rho_{n})\) on \(\mathscr{Z}\). How would we ascertain that we were making progress toward the solution to the Standard Problem?

\begin{table} \begin{tabular}{l c l} \hline name & definition & type \\ \hline \(Z_{0}\) & \(v\mapsto(v,\left[\overline{\mathsf{D}}E_{0}(v)\right])\) & \(\mathscr{V}\rightharpoonup\mathscr{Z}_{0}\) \\ \(\mathsf{KS}\) & \((v,\rho)\mapsto(v-\left[\underline{\mathsf{D}}\Phi(\rho)\right],\rho)\) & \(\mathscr{Z}_{0}\rightharpoonup\mathscr{Z}\) \\ \(\widehat{Z}_{0}\) & \(\mathsf{KS}\circ\!Z_{0}\) & \(\mathscr{V}\rightharpoonup\mathscr{Z}\) \\ (\(\widehat{\cdot}\)) & \(\pi_{\mathscr{V}}\circ\widehat{Z}_{0}\) & \(\mathscr{V}\rightharpoonup\mathscr{V}\) \\ \(R_{v^{\odot}}\) & \(v\mapsto v^{\odot}-\widehat{v}\) & \(\mathscr{V}\rightharpoonup\mathscr{V}\) \\ \(F^{\textsc{HK}}\) & \((v,\rho)\mapsto E_{0}(v)-\langle v\,,\,\rho\rangle+\Phi(\rho)\) & \(\mathscr{Z}_{0}\rightharpoonup\mathbb{R}\) \\ \(E^{\textsc{HK}}\) & \((v,\rho)\mapsto F^{\textsc{HK}}(v,\rho)+\langle\widehat{v}\,,\,\rho\rangle\) & \(\mathscr{Z}_{0}\rightharpoonup\mathbb{R}\) \\ \hline \end{tabular} \end{table} Table 2: Basic feasible functions/operations, described in the text.
\(\circ\) is the composition operator, \(\pi_{\mathscr{V}}\) extracts the \(\mathscr{V}\) component, and \(\rightharpoonup\) indicates a partial (not everywhere defined) function.

One interpretation would be that \(v_{n}\) and \(\rho_{n}\) are converging to the target potential and density. However, the latter is unknown. We could ask if \(v^{\odot}-v_{n}\) is becoming small, but that requires a quantitative measure of the "size" of a potential difference. We defer such topological considerations to the following sections, in order to see what can be done without them. Fortunately, the basic feasible operations in hand already provide the means to assess whether one density is _energetically_ better than another, provided we have them in the form of components of points on \(\mathscr{Z}_{0}\) or \(\mathscr{Z}\). The energetic measure of how close \(\rho\) is to a ground density for \(v^{\odot}\) is \(\Delta(v^{\odot},\rho)\). So, define \[\operatorname{inc}(v^{\odot};\rho,\rho^{\prime}) := \Delta(v^{\odot},\rho)-\Delta(v^{\odot},\rho^{\prime}) \tag{23}\] \[= F(\rho)-F(\rho^{\prime})+\langle v^{\odot}\,,\,\rho-\rho^{\prime}\rangle\,.\] If this is less than zero, \(\rho\) is a "better" density than \(\rho^{\prime}\), indicating that going from \(\rho^{\prime}\) to \(\rho\) is _progress_ of a sort. The important point is that \[(v,\rho),(v^{\prime},\rho^{\prime})\in\mathscr{Z}_{0}\Rightarrow \tag{24}\] \[\operatorname{inc}(v^{\odot};\rho,\rho^{\prime})= F^{\textsc{HK}}(v,\rho)-F^{\textsc{HK}}(v^{\prime},\rho^{\prime})+\langle v^{\odot}\,,\,\rho-\rho^{\prime}\rangle\,.\] Evidently, this is feasible. It is the measure of progress we will use in this section.

### Conventional fixed-point formulation

Given \(v_{0}\) as input, the KS-machine produces (barring exceptions) a reference ground pair \((v_{0},\rho_{0})=Z_{0}v_{0}\) and a perturbed ground pair \((\widehat{v_{0}},\rho_{0})=\widehat{Z}_{0}v_{0}\) as output. For purposes of comparing with the usual formulation of KS iteration, it may be helpful to refer to \(\rho_{0}\) and \(\widehat{v_{0}}\) as the _output density_ and _output potential_, respectively. Now, in that situation, a simple idea for the next input is \[v_{1}=v_{0}+\mathsf{R}\,v_{0}. \tag{25}\] The pattern can be continued to an entire sequence \((v_{n},\rho_{n},\widehat{v_{n}})_{n}\), with \(v_{n}+\mathsf{R}\,v_{n}=v_{n+1}\), \(\widehat{v_{n}}+\mathsf{R}\,v_{n}=v^{\odot}\). Unpacking definitions shows that this is equivalent to \[v_{n+1}=v^{\odot}+\left[\underline{\mathsf{D}}\Phi(\rho_{n})\right], \tag{26}\] and (25) is thereby revealed to be the usual naive iteration step. This is labelled "naive" because it is a well-known empirical fact that this scheme is subject to problems, so-called charge-sloshing in particular, which is ameliorated by _mixing_. In the bivariate perspective being built here, that would be expressed as the idea that (25) is a good "direction" in which to shift the input potential, but that maybe a more cautious step is advisable: \[v_{1}=v_{0}+\lambda\,\mathsf{R}\,v_{0},\quad 0<\lambda\leq 1. \tag{27}\] Conventionally, the same rough idea is implemented differently. An auxiliary ingredient, an _input density_ is introduced to parametrize the input potential, as \[v_{n+1}=v^{\odot}+\left[\underline{\mathsf{D}}\Phi(\rho_{n+1}^{\textsc{in}})\right], \tag{28}\] and mixing is done on the auxiliary quantity: \[\rho_{n+1}^{\textsc{in}}=\lambda\rho_{n}+(1-\lambda)\rho_{n}^{\textsc{in}}.
\tag{29}\] This kind of parameterization gives rise to the apparently common view that Kohn-Sham theory _intrinsically_ involves a fixed-point problem, i.e., of the map \(\rho_{n}^{\textsc{in}}\mapsto\rho_{n}\). From the bivariate perspective, that is entirely incidental. It is unclear what advantages it may have over working directly with potentials as in (27). Most importantly for the present work, I am unable to prove anything about such schemes, whereas favorable results will be obtained for something like (27).

### Utilities

We collect some useful identities, proven by straightforward manipulation, which will be used in Sections 5.4 and 5.5. Items 1–3 hold for either the reference system (in which case subscripts \(0\) should be attached) or the perturbed system. They are entirely elementary and depend only on convexity properties of \(F\) and \(E\). Recall the definition of excess energy: \[\Delta(v,\rho)=F(\rho)+\langle v\,,\,\rho\rangle-E(v).\]

1. Cross-difference identity: \[\Delta(v,\rho)+\Delta(v^{\prime},\rho^{\prime})=\Delta(v,\rho^{\prime})+\Delta(v^{\prime},\rho)+\langle v-v^{\prime}\,,\,\rho-\rho^{\prime}\rangle\,. \tag{30}\] Each of \(v\), \(v^{\prime}\), \(\rho\), and \(\rho^{\prime}\) appears once in a \(\Delta\) on either side. Upon substituting the definition of \(\Delta\), all \(F\)'s and \(E\)'s cancel out, leaving only potential-density pairings.

2. Monotonicity: \[(v,\rho),(v^{\prime},\rho^{\prime})\in\mathscr{Z}\;\Rightarrow\;\langle v-v^{\prime}\,,\,\rho-\rho^{\prime}\rangle\leq 0. \tag{31}\] If either \((v^{\prime},\rho)\) or \((v,\rho^{\prime})\) fails to be a ground pair, then the inequality is strict. This monotonicity inequality[10; 16; 17] is an immediate specialization of the cross-difference identity (30). It generalizes an inequality previously derived in a specifically DFT context[18; 19].

3. \[(v,\rho)\in\mathscr{Z}\;\Rightarrow\;\Delta(v^{\prime},\rho)=E(v)-E(v^{\prime})+\langle v^{\prime}-v\,,\,\rho\rangle\,. \tag{32}\] Expand \(\Delta(v^{\prime},\rho)-\Delta(v,\rho)\) using the definition of \(\Delta\).

4. Assuming \(\rho\in\operatorname{dom}\underline{\mathsf{D}}\Phi\), and with \(\underline{\mathsf{D}}_{\rho}\) denoting the subdifferential with respect to \(\rho\) at fixed \(v\), \[(v,\rho)\in\mathscr{Z}_{0}\;\Rightarrow\;\mathsf{R}\,v\in\underline{\mathsf{D}}_{\rho}\Delta(v^{\odot},\rho). \tag{33}\] According to the definition of excess energy, \(\underline{\mathsf{D}}_{\rho}\Delta(v^{\odot},\rho)=\underline{\mathsf{D}}F(\rho)+v^{\odot}\). Since \((v,\rho)\in\mathscr{Z}_{0}\) implies that \(-\widehat{v}=\mathsf{R}\,v-v^{\odot}\in\underline{\mathsf{D}}F(\rho)\), the conclusion follows.

### An infeasible strategy

Given \(v_{0}\), \(v_{1}\) is defined as in (25). Corresponding densities are defined by the conditions \[(v_{0},\rho_{0}),(v_{1},\rho_{1})\in\mathscr{Z}_{0}. \tag{34}\] Now, we consider two ideas for interpolation. The first is defined by a linear interpolation in density: \[\rho_{\lambda}=(1-\lambda)\rho_{0}+\lambda\rho_{1} \tag{35}\] for \(0\leq\lambda\leq 1\). We will show that

**Proposition 5.1**.: _Whenever the derivative exists,_ \[\frac{d}{d\lambda}\Delta(v^{\odot},\rho_{\lambda})\Big{|}_{\lambda=0}<0. \tag{36}\] This result appears to have been first given by Wagner _et al.[19]_, later corrected and rigorized by Laestadius _et al.[10]_.

Proof of Prop. 5.1.: Apply monotonicity (31) of \(\Delta_{0}\) to the two points \((v,\rho),(v+\mathsf{R}\,v,\rho^{\prime})\in\mathscr{Z}_{0}\) (as illustrated in Fig. 1) to obtain \[\left\langle\rho^{\prime}-\rho\,,\,\mathsf{R}\,v\right\rangle<0.
\tag{37}\] The inequality is strict because \((v+\mathsf{R}\,v,\rho)\not\in\mathscr{Z}_{0}\). For, if both \((v,\rho)\) and \((v+\mathsf{R}\,v,\rho)\) are in \(\mathscr{Z}_{0}\), it follows that \((v^{\odot},\rho)\in\mathscr{Z}\), contrary to assumption. Combining (33) and (37) shows that \(\left\langle\rho^{\prime}-\rho\,,\,\underline{\mathsf{D}}_{\rho}\Delta(v^{\odot},\rho)\right\rangle\) contains a negative number. Hence, if the derivative exists, \[\left\langle\rho^{\prime}-\rho\,,\,w\right\rangle=\frac{d}{d\lambda}\Delta(v^{\odot},\rho+\lambda[\rho^{\prime}-\rho])\Big{|}_{\lambda=0} \tag{38}\] for every \(w\in\underline{\mathsf{D}}_{\rho}\Delta(v^{\odot},\rho)\).

Although this shows that the density interpolation (35) initially decreases the excess energy, unfortunately there is a serious problem with it as a basis of a strategy. To be able to use it in a non-blind way, we must be able to test the value of \(\Delta(v^{\odot},\rho_{\lambda})-\Delta(v^{\odot},\rho_{0})\). As previously discussed, the only evident feasible way to do that is to obtain \(\rho_{\lambda}\) as the second component of a point on \(\mathscr{Z}_{0}\), which means we need to know a potential having \(\rho_{\lambda}\) as a ground density.

### A feasible strategy

A second attempt to find a method of feasibly making progress involves linear interpolation of the potential according to: \[v_{\lambda}=(1-\lambda)v_{0}+\lambda v_{1}=v_{0}+\lambda\,\mathsf{R}\,v_{0}. \tag{39}\] Corresponding densities \(\rho_{\lambda}\) are defined implicitly via \[(v_{\lambda},\rho_{\lambda})=Z_{0}v_{\lambda}. \tag{40}\] We are recycling notation here: Although the \(\rho_{\lambda}\) defined here interpolate between \(\rho_{0}\) and \(\rho_{1}\), unlike in (35), this interpolation is generally nonlinear.

**Proposition 5.2**.: \[\operatorname{inc}(v^{\odot};\rho_{\lambda},\rho_{0})=\Delta(\widehat{v_{0}},\rho_{\lambda})-\frac{1}{\lambda}\Big{[}\Delta_{0}(v_{\lambda},\rho_{0})+\Delta_{0}(v_{0},\rho_{\lambda})\Big{]}.\] (41) _This is bounded above by either of the following:_ \[\lambda^{-1}\left\langle v_{\lambda}-v_{0}\,,\,\rho_{\lambda}-\rho_{0}\right\rangle-\left\langle\widehat{v_{\lambda}}-\widehat{v_{0}}\,,\,\rho_{\lambda}-\rho_{0}\right\rangle, \tag{42a}\] \[\left\langle(1-\lambda)\,\mathsf{R}\,v_{0}+[\underline{\mathsf{D}}\Phi(\rho_{0})]-[\underline{\mathsf{D}}\Phi(\rho_{\lambda})]\,,\,\rho_{\lambda}-\rho_{0}\right\rangle. \tag{42b}\]

**Corollary 5.3**.: _With the preceding notation, assuming \(\rho_{\lambda}\) exists and \(\mathsf{R}\,v_{0}\neq 0\),_ \[\operatorname{inc}(v^{\odot};\rho_{\lambda},\rho_{0})<\Delta(\widehat{v_{0}},\rho_{\lambda})-\frac{1}{\lambda}\Delta_{0}(v_{0},\rho_{\lambda}). \tag{43}\] Recall that \(\Delta_{0}\) and \(\Delta\) are everywhere non-negative. The remarkable, and encouraging, aspect of the inequality (43) is the extra factor \(\lambda^{-1}\) in the negative term; more about this in the next subsection.

Proof of Prop. 5.2.: Apply the identity (32) to the expression \[\Delta(v^{\odot},\rho_{\lambda})-\Delta(v^{\odot},\rho_{0})+\Delta(\widehat{v_{\lambda}},\rho_{0})\] three times, replacing \((v,\rho,v^{\prime})\) successively by \((\widehat{v_{\lambda}},\rho_{\lambda},v^{\odot})\), \((\widehat{v_{0}},\rho_{0},v^{\odot})\), and \((\widehat{v_{0}},\rho_{0},\widehat{v_{\lambda}})\).
In the resulting expression, each of \(E(\widehat{v_{\lambda}})\), \(E(v^{\otimes})\), and \(E(\widehat{v_{0}})\) occurs once with a plus and once with a minus sign, cancelling to leave \[\Delta(v^{\otimes},\rho_{\lambda})-\Delta(v^{\otimes},\rho_{0})=-\Delta(\widehat{v_{\lambda}},\rho_{0})+\left\langle v^{\otimes}-\widehat{v_{\lambda}}\,,\,\rho_{\lambda}-\rho_{0}\right\rangle. \tag{44}\] Now, \(v^{\otimes}=\widehat{v_{0}}+\mathsf{R}\,v_{0}=\widehat{v_{0}}+\frac{1}{\lambda}(v_{\lambda}-v_{0})\), by (39). Substitute for \(v^{\otimes}\), to find the RHS of (44) equal to \[-\Delta(\widehat{v_{\lambda}},\rho_{0})+\frac{1}{\lambda}\left\langle v_{\lambda}-v_{0}\,,\,\rho_{\lambda}-\rho_{0}\right\rangle-\left\langle\widehat{v_{\lambda}}-\widehat{v_{0}}\,,\,\rho_{\lambda}-\rho_{0}\right\rangle.\] Dropping the negative first term here immediately yields the upper bound (42a). The second form (42b) of the upper bound follows upon the substitution \(\widehat{v_{\lambda}}-\widehat{v_{0}}=v_{\lambda}-v_{0}+[\underline{\mathsf{D}}\Phi(\rho_{\lambda})]-[\underline{\mathsf{D}}\Phi(\rho_{0})]\). Returning to the previous display, use the cross-difference identity (30) once for the reference system and once for the perturbed system to rewrite it as \[-\frac{1}{\lambda}\Big{[}\Delta_{0}(v_{\lambda},\rho_{0})+\Delta_{0}(v_{0},\rho_{\lambda})\Big{]}+\Delta(\widehat{v_{0}},\rho_{\lambda}).\] Equating to the LHS of (44) yields (41). ### Analyticity The question now is, under what circumstances is the RHS of the inequality (43) negative for some range of \(\lambda\)? If both \(\Delta(\widehat{v_{0}},\rho_{\lambda})\) and \(\Delta_{0}(v_{0},\rho_{\lambda})\) are \(\mathcal{O}(\lambda^{2})\), that would be more than enough. Recall that \(v_{\lambda}=(1-\lambda)v_{0}+\lambda v_{1}\) and \(\rho_{\lambda}\) is a noninteracting ground density for \(v_{\lambda}\). Both \(\Delta(\widehat{v_{0}},\rho_{\lambda})\) and \(\Delta_{0}(v_{0},\rho_{\lambda})\) certainly have a minimum (zero) at \(\lambda=0\). If \(\rho_{\lambda}\) varies at all smoothly, we would expect both these excess energies to be quadratic in \(\lambda\) near the minimum, exactly as needed. Supposing \(F\) is an exact functional, so that \(\Delta\) comes from a well-defined quantum mechanical problem, the following can be proved[20]: If the noninteracting problem for \(v_{0}\), and the interacting problem for \(\widehat{v_{0}}\), each has a nondegenerate ground state with nonzero spectral gap, then both these excess energies are not just \(\mathcal{O}(\lambda^{2})\), but _analytic_ at \(\lambda=0\). On the other hand, if the nondegeneracy and gap conditions are not satisfied, we should not be at all surprised if the excess energies behave in a way which dashes our hopes. The strategy of section 5.5 is therefore conditionally vindicated.
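To make the role of the \(\lambda^{-1}\) factor in (43) explicit, here is a minimal Taylor-expansion sketch; twice-differentiability at \(\lambda=0\) is an assumption (licensed, per the preceding paragraph, in the nondegenerate gapped case), and the coefficients \(a\), \(b\) are labels introduced only for this illustration. Writing \(g(\lambda):=\Delta(\widehat{v_{0}},\rho_{\lambda})\) and \(g_{0}(\lambda):=\Delta_{0}(v_{0},\rho_{\lambda})\), both functions are non-negative and vanish at \(\lambda=0\), so their first derivatives vanish there and \[g(\lambda)=a\lambda^{2}+o(\lambda^{2}),\qquad g_{0}(\lambda)=b\lambda^{2}+o(\lambda^{2}),\qquad a,b\geq 0.\] The RHS of (43) then becomes \[g(\lambda)-\frac{1}{\lambda}\,g_{0}(\lambda)=-b\lambda+a\lambda^{2}+o(\lambda),\] which is strictly negative for all sufficiently small \(\lambda>0\) provided \(b>0\), i.e., provided the reference-system excess energy is nondegenerately quadratic at \(\lambda=0\).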
Moreover, barring exceptions, the computation can be done such that \(\Delta(v^{\otimes},\rho_{n+1})<\Delta(v^{\otimes},\rho_{n})\). This we called "progress", but perhaps we should call it \(\Delta\)-progress, as there may be other sorts. Since we do not know any ground densities of \(v^{\otimes}\) (else we would not be doing the computation), deciding whether \(\rho_{n+1}\) is closer to such than is \(\rho_{n}\) certainly cannot be done, at least not directly. Surely, though, we could see whether \(v_{n+1}\) is closer to \(v^{\otimes}\) than \(v_{n}\)? Only if we know what "closer" means. If we had a metric \(d^{\prime}\) on \(\mathscr{V}\), that would provide one answer, and we could speak of "\(d^{\prime}\)-progress". This brings us to the issue of topologies on our function spaces, which we have so far deliberately avoided. The next section contains a review of relevant ideas, tailored to our needs. For us, equipping \(\mathscr{V}\) and \(\mathscr{D}\) with topologies is not merely a matter of mathematical convenience, but has physical significance, and will be done based on the considerations of section 2.3. After all, how do we distinguish one state (i.e., density) from another? By finding an observable which takes differing values for them. Thus arises the most physically grounded notion of _neighborhood_ of a density. ## 7 Topological Notions This section reviews some important topological concepts and relates them to the physical state-observable duality. Because of the latter, readers already comfortable with all the mathematics may wish to skim it rather than skip it entirely. By _topology_, I refer to the classical idea of defining neighborhoods of points in a point set, closely related to approximation. Actually, we do not deal with general topologies, but only with metrics and semimetrics. One may wonder whether even that is excessive. For that reason, it bears emphasizing at the outset that we will do this in order to ground the mathematics physically. Following the development in section 2, the fundamental means at our disposal to distinguish densities and define neighborhoods is via the observables. There are infinitely many of these, and they naturally give a system of _seminorms_. If we choose to work with a norm, for convenience, it is desirable that it have some justification tracing back to the observables. ### Metrics, norms, semimetrics, seminorms A metric on a set \(X\) is a map \(d\colon X\times X\to[0,\infty)\) (distance function) satisfying \[d(x,y) =d(y,x)\] (symmetry) \[d(x,z) \leq d(x,y)+d(y,z)\] (triangle inequality) \[d(x,y) >0\;\Rightarrow\;x\neq y\] \[d(x,y) =0\;\Rightarrow\;x=y\] The set together with the metric, \((X,d)\), is a _metric space_. The _open ball_ of radius \(r\) about \(x\in X\) is the set \[B(r;x)\;:=\;\{y\in X\;:\;d(x,y)<r\}\] of points at distance less than \(r\) from \(x\). If \(d\) and \(d^{\prime}\) are two metrics on the same space, \(d^{\prime}\) is _stronger_ than \(d\), written \(d\precsim d^{\prime}\), or \(d^{\prime}\succsim d\), if for every \(r>0\), there is \(r^{\prime}>0\) such that \(B^{\prime}(r^{\prime},x)\subseteq B(r,x)\) for _every_ \(x\). Here, \(B^{\prime}\) denotes an open ball for \(d^{\prime}\). \(d\) and \(d^{\prime}\) are _equivalent_, \(d\sim d^{\prime}\), in case both \(d\precsim d^{\prime}\) and \(d^{\prime}\precsim d\). These comparisons are significant for convergence of sequences.
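A finite-dimensional illustration of these comparisons, included only to fix intuition: on \(\mathbb{R}^{n}\), let \(d_{2}(x,y)=\big(\sum_{i}|x_{i}-y_{i}|^{2}\big)^{1/2}\) and \(d_{\infty}(x,y)=\max_{i}|x_{i}-y_{i}|\). Then \[d_{\infty}(x,y)\leq d_{2}(x,y)\leq\sqrt{n}\,d_{\infty}(x,y),\] so every \(d_{2}\)-ball \(B_{2}(r;x)\) is contained in \(B_{\infty}(r;x)\), and \(B_{\infty}(r/\sqrt{n};x)\subseteq B_{2}(r;x)\). By the definition above, \(d_{\infty}\precsim d_{2}\) and \(d_{2}\precsim d_{\infty}\), i.e., \(d_{2}\sim d_{\infty}\): the two metrics have exactly the same convergent sequences.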
Three ways to express the same thing are: sequence \((x_{n})\) converges to \(x\) with respect to \(d\), \(\lim_{n\to\infty}d(x,x_{n})=0\), and, for any \(r>0\), some tail of the sequence is inside \(B(r,x)\). Hence, \(d\precsim d^{\prime}\) implies that every \(d^{\prime}\)-convergent sequence is \(d\)-convergent. If \(X\) is a vector space, metrics which are compatible with the linear structure are of most interest. This means \(d(x+z,y+z)=d(x,y)\) (translation invariance) and, for \(c\in\mathbb{R}\), \(d(cx,cy)=|c|d(x,y)\) (homogeneity). A corresponding _norm_ can then be defined as the distance \(\|x\|=d(0,x)\) from the origin. Such a metric is recovered from the corresponding norm as \(d(x,y)=\|x-y\|\). The last two listed defining conditions for a metric pertain to its role in distinguishing points. The third condition shows how it does that, and the fourth, _separation_, says that the metric can distinguish any distinct points. Dropping the separation condition yields the definition of a _semimetric_. A single semimetric may fail to separate points, but a collection \(\{d_{i}\,:\,i\in\mathcal{I}\}\) of semimetrics can _collectively_ separate, even if none does so individually. That is, for each \(x\neq y\), there is some \(i\in\mathcal{I}\) such that \(d_{i}(x,y)>0\). A sequence \((x_{n})\) converges to \(x\) with respect to the system of semimetrics \(\{d_{i}\,:\,i\in\mathcal{I}\}\) if and only if \(d_{i}(x_{n},x)\to 0\) for each \(i\). Extending our comparison \((\precsim)\) to systems of semimetrics has a slight subtlety. One way to proceed is to use open balls again. The "size" of the open ball \(B(J,r;x)=\{y\,:\,d_{i}(y,x)<r,\forall i\in J\}\) is parameterized not only by a radius, but also by a selection \(J\subset\mathcal{I}\) of a _finite_ number of semimetrics. Then, \(\{d_{i}\,:\,i\in\mathcal{I}\}\precsim\{d^{\prime}_{j}\,:\,j\in\mathcal{I}^{\prime}\}\) if and only if for any \(d\) size \((J,r)\), there is a \(d^{\prime}\) size \((J^{\prime},r^{\prime})\) such that \(B^{\prime}(J^{\prime},r^{\prime};x)\subseteq B(J,r;x)\) for every \(x\). As concerns convergence, a collectively separating _finite_ system \(\{d_{1},\ldots,d_{n}\}\) can be replaced by the single metric \(d_{*}(x,y)=\max(d_{1}(x,y),\ldots,d_{n}(x,y))\). Hence, only infinite systems of semimetrics are really of interest. Just as for the passage from metric to norm, to make a semimetric respect the linear structure of a vector space, one imposes translation invariance and homogeneity; the function \(x\mapsto d(0,x)\) associated with such a compatible semimetric is a _seminorm_. (Terminological note: _seminorm_ is standard. Accepting that, _semimetric_ seems natural. However, what we are calling _semimetric_ is called _pseudometric_ by some.) ### Seminorms and dual pairs Seminorms have been lurking all along in our pairing maps. Suppose \(\mathscr{X}\) and \(\mathscr{V}\) form a dual system (Def. 3.1). Each \(v\in\mathscr{V}\) defines a seminorm \(p_{v}\) on \(\mathscr{X}\), given by \[x\mapsto p_{v}(x)\,:=\,|\langle v,x\rangle|. \tag{45}\] Similarly, each \(x\in\mathscr{X}\) defines a seminorm on \(\mathscr{V}\). No single \(p_{v}\) separates, but the entire system of seminorms separates collectively. For \(\mathscr{X}=\operatorname{Vec}\mathscr{D}\) and \(\mathscr{V}\) our spaces of states and observables, respectively, this is something we should insist on. If two states cannot be distinguished by _any_ observable, on what grounds would we say they are distinct?
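For the record, the seminorm properties of \(p_{v}\), and the way separation fails, are immediate from linearity of the pairing (an elementary verification, included for completeness): \[p_{v}(x+y)=|\langle v\,,\,x\rangle+\langle v\,,\,y\rangle|\leq p_{v}(x)+p_{v}(y),\qquad p_{v}(cx)=|c|\,p_{v}(x),\] while \(p_{v}(x)=0\) says only that \(x\) is invisible to the single observable \(v\). Any \(x\neq 0\) with \(\langle v\,,\,x\rangle=0\) shows that \(p_{v}\) alone cannot separate; that is exactly why the whole family \(\{p_{v}\,:\,v\in\mathscr{V}\}\) is called upon to separate collectively.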
Relatedly, if we admit the physical meaningfulness of a set of observables, it is very unclear on what grounds we could reject the physical meaningfulness of the corresponding seminorms and the topology which they generate. Since systems of seminorms arising this way are of great importance to us, we introduce a notation. **Definition 7.1**.: For a dual pair \((\mathscr{V}\,,\,\mathscr{X})\), the system \(\{p_{v}\,:\,v\in\mathscr{V}\}\) of seminorms defined in (45) is denoted \(\sigma(\mathscr{X},\mathscr{V})\). Swapping the roles of \(\mathscr{X}\) and \(\mathscr{V}\) gives the system \(\sigma(\mathscr{V},\mathscr{X})\) on \(\mathscr{V}\). If \(\mathscr{D}\) is a subset of \(\mathscr{X}\), we write \(\sigma(\mathscr{D},\mathscr{V})\) for the system of semimetrics induced from \(\sigma(\mathscr{X},\mathscr{V})\). ### Norm compatibility with a dual system Once \(\mathscr{X}\) has a topology, we have a new criterion with which to single out linear functionals, namely, those which are continuous. It turns out that the linear functionals on \(\mathscr{X}\) continuous with respect to \(\sigma(\mathscr{X},\mathscr{V})\) are exactly those of the form \(x\mapsto\langle v\,,\,x\rangle\) for \(v\in\mathscr{V}\). Physically, this makes sense: the linear observables ought to be exactly the continuous linear functionals on states, or something has been chosen incorrectly. It is not as easy to work with a seminorm system such as \(\sigma(\mathscr{X},\mathscr{V})\) as with a simple norm, at either the level of general results or that of specific spaces. This motivates us to equip \(\mathscr{X}\) with a norm, but it also raises the question of potential grounds for considering a norm to be "physical". I propose a principle based on the observation of the previous paragraph. A topology \(\tau\) on \(\mathscr{X}\), defined by seminorms, is said to be _compatible_ with the duality \(\langle\mathscr{V}\,,\,\mathscr{X}\rangle\) if the linear functionals on \(\mathscr{X}\) which are continuous with respect to \(\tau\) are exactly those of the form \(x\mapsto\langle v\,,\,x\rangle\) for \(v\in\mathscr{V}\). Then, the principle is that, to the extent that the choice \(\mathscr{V}\) of observables is physical, topologies compatible with the duality \(\langle\mathscr{V}\,,\,\mathscr{X}\rangle\) are the "more physical" ones. This matter of topologies compatible with a given duality is a standard chapter of the theory of locally convex spaces. (Often literally, e.g., Chapter III of Horvath's book[21].) We list some relevant facts. Not only do all topologies compatible with a given duality have the same continuous linear functionals, but also (i) the same lower semicontinuous convex functions into \(\overline{\mathbb{R}}\), (ii) the same closed convex subsets of \(\mathscr{X}\), (iii) the same bounded sets. An important observation is that, if there is a norm on \(\mathscr{X}\) compatible with the duality, it is essentially unique, and defined by the weakest seminorm dominating all the \(p_{v}\) for \(v\in\mathscr{V}\). Fortunately, we have such a case. With the norm \[\|x\|=\|x\|_{1\cap 3}\;:=\;\|x\|_{1}+\|x\|_{3}, \tag{46}\] \(\operatorname{Vec}\mathscr{D}\) still has \(\mathscr{V}=\mathcal{L}^{\infty}(\mathcal{M})+\mathcal{L}^{3/2}(\mathcal{M})\) as its dual. With the canonical norm \[\|v\|^{\prime}\;:=\;\sup\left\{|\,\langle v\,,\,x\rangle\,|\;:\;x\in\operatorname{Vec}\mathscr{D},\,\|x\|=1\right\}, \tag{47}\] \(\mathscr{V}\) becomes a Banach space.
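That every \(v\in\mathcal{L}^{\infty}(\mathcal{M})+\mathcal{L}^{3/2}(\mathcal{M})\) pairs continuously with \(\|\cdot\|_{1\cap 3}\) comes down to the Hölder inequality; here is the one-line check (the full duality statement itself is the standard fact being invoked). For any splitting \(v=v_{1}+v_{2}\) with \(v_{1}\in\mathcal{L}^{\infty}(\mathcal{M})\) and \(v_{2}\in\mathcal{L}^{3/2}(\mathcal{M})\), \[|\langle v\,,\,x\rangle|\leq\|v_{1}\|_{\infty}\|x\|_{1}+\|v_{2}\|_{3/2}\|x\|_{3}\leq\big(\|v_{1}\|_{\infty}+\|v_{2}\|_{3/2}\big)\,\|x\|_{1\cap 3},\] using \(\tfrac{2}{3}+\tfrac{1}{3}=1\) in the second term. Taking the infimum over splittings bounds the canonical norm (47) by the explicit norm introduced next.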
A norm which is equivalent to this canonical one, and possibly more convenient, or at least more explicit, is \[\|v\|_{\infty+\frac{3}{2}}\;:=\;\inf\left\{\|v_{1}\|_{\infty}+\|v_{2}\|_{\frac{3}{2}}\;:\;v=v_{1}+v_{2}\right\}. \tag{48}\] However, we will not actually make any use of this concrete form. \(\operatorname{Vec}\mathscr{D}\) is _not_ a Banach space under the norm \(\|\cdot\|_{1\cap 3}\). Its _completion_ (see section 7.5) under this norm is the Banach space \(L^{1}(\mathcal{M})\cap L^{3}(\mathcal{M})\). ### Variations on continuity We collect some variations on the concept of continuity for metric spaces. Recall that a function \(f\colon\mathscr{X}\to\mathscr{Y}\) between metric spaces is continuous at \(x\in\mathscr{X}\) iff, given \(\delta>0\), there is an \(\epsilon>0\) such that \(f\) carries the ball of radius \(\epsilon\) centered at \(x\) _into_ the ball of radius \(\delta\) centered at \(f(x)\). In section 9, we shall use a slightly stronger form of continuity, as follows. **Definition 7.2** (locally Lipschitz continuous).: For a metric space \(\mathscr{X}\), a function \(f\colon\mathscr{X}\to\mathbb{R}\) is _locally Lipschitz continuous_ (for short, _locally L-continuous_) iff for each point \(x\in\mathscr{X}\), there is a neighborhood \(U\ni x\) and \(L>0\) such that \[y,z\in U\ \Rightarrow\ |f(y)-f(z)|\leq L\,d(y,z). \tag{49}\] Example: the function \(x\mapsto\sqrt{|x|}\) is locally L-continuous on \(\mathbb{R}\setminus\{0\}\), but only continuous at zero. Just as for the unilateral forms of derivative introduced in Def. 3.2, a unilateral form of continuity is relevant in optimization situations. **Definition 7.3** (lower/upper semicontinuity).: A function \(f\colon\mathscr{X}\to\overline{\mathbb{R}}\) on a topological space \(\mathscr{X}\) is _lower semicontinuous_ (lsc) at \(x\in\mathscr{X}\) when, for any tolerance \(\epsilon>0\), there is a neighborhood \(U\) of \(x\) such that \[y\in U\ \Rightarrow\ f(y)>f(x)-\epsilon.\] _Lower semicontinuous_ without qualifier means lsc everywhere. \(f\) is _upper semicontinuous_ (usc) if \(-f\) is lsc. This is equivalent to replacing \(>\) and \(-\) in the above display with \(<\) and \(+\). For a convergent sequence \(x_{n}\to x\), lower semicontinuity of \(f\) implies that \(\liminf_{n\to\infty}f(x_{n})\geq f(x)\). The value of \(f\) at the limit point \(x\) might be "smaller than anticipated", but not "larger than anticipated". A real-valued function is continuous at a point iff it is both lsc and usc there. The concept of lower semicontinuity is very important for us because \(F\) is lsc, but not usc (see section 8.3). If \(S\) is a set of lsc functions, then their pointwise supremum, \(f(x)\ :=\ \sup\left\{g(x)\ :\ g\in S\right\}\), is also lsc. In particular, if \(S\) consists of continuous functions, then the supremum is lsc, though there is no reason, in general, to suppose it continuous. For an important example, consider \(E\) defined on \(\mathscr{V}\) with a system of seminorms at least as strong as \(\sigma(\mathscr{V},\mathscr{D})\). Then, it is automatic from the definition of \(E\) (an infimum of continuous affine functions of \(v\)) that it is usc. Finally, we introduce a weakening of continuity which will be useful because it allows us to bound how discontinuous \(F\) can be in certain circumstances. **Definition 7.4** (almost continuous).: \(f\colon\mathscr{X}\to\mathbb{R}\) is 1. \(\epsilon\)_-almost continuous at_ \(x\) precisely if: for any \(\epsilon^{\prime}>0\), there is \(\delta\) such that \[d(y,x)<\delta\ \Rightarrow\ |f(x)-f(y)|<\epsilon+\epsilon^{\prime} \tag{50}\] 2.
\(\epsilon\)-almost continuous _on_ \(\mathscr{X}\) precisely if: \(f\) is \(\epsilon\)-almost continuous at \(x\) for every \(x\in\mathscr{X}\). 3. \(g\)-almost continuous, where \(g\colon\mathscr{X}\to[0,\infty)\), precisely if: for every \(\epsilon>0\), \(f\) is \(\epsilon\)-almost continuous on \(\{g\leq\epsilon\}\). ### Complete metric spaces Due to its importance in the investigation, we conclude this section with a brief review of the concept of completeness for metric spaces. Roughly, a metric space is _complete_ if a sequence actually has a limit whenever it "appears to be converging" in the following sense. **Definition 7.5** (Cauchy).: The sequence \((x_{n})\subseteq\mathscr{X}\) is _Cauchy_ iff \[\operatorname{diam}\left\{x_{n}\ :\ n\geq N\right\}\to 0\text{ as }N\to\infty. \tag{51}\] The _diameter_ of a set \(A\), \(\operatorname{diam}A\), is \(\sup\left\{d(x,y)\ :\ x,y\in A\right\}\). **Definition 7.6** (Complete).: A metric space is _complete_ if every Cauchy sequence has a limit. For a familiar example of a metric space which is _not_ complete, consider the rational numbers \(\mathbb{Q}\) with the ordinary distance function. If \(x_{n}\) is \(\sqrt{2}\) to \(n\) decimal places, then \((x_{n})\) is Cauchy, but does not converge in \(\mathbb{Q}\), since \(\sqrt{2}\) is not in \(\mathbb{Q}\). A Banach space is a complete normed space. There is a canonical, abstract way to complete any normed space \(V\). The completion is a Banach space and \(V\) is dense in it. Then, any Cauchy sequence has a limit in the completion. This seems very convenient, but is not always appropriate. Later we will be interested in the metric \(d_{1}\) on the space of densities \(\mathscr{D}\) which derives from the \(L^{1}\) norm. We will not use a completion because we will want to know that limits are in \(\mathscr{D}\) itself. Our partial order on metrics, \(\precsim\), behaves well with respect to completeness. Namely, if \((X,d)\) is complete and \(d\precsim d^{\prime}\), then \((X,d^{\prime})\) is also complete. ## 8 Interlude: some general strategy Suppose we have found a well-motivated metric on \(\mathscr{V}\times\mathscr{D}\), and return to the sequence \(((v_{n},\rho_{n}))\) of ground pairs. Questions which naturally arise are: Does it converge if it is Cauchy (see section 7.5 for this notion)? If it does, is the limit a ground pair? The following sections put together a topological perspective on DFT. We consider regularity properties of \(E\) and \(F\), the relation between the energetic version of nearly a ground pair (small excess energy) and distance to \(\mathscr{Z}_{0}\) or \(\mathscr{Z}\), and convergence of sequences of ground pairs. ### Room for error Our analysis of Kohn-Sham machines assumed that they produce points exactly on \(\mathscr{Z}\), i.e., with zero excess energy \(\Delta\). Assuming only that \(\Delta(v_{n},\rho_{n})<\epsilon\) is an idealized model of a certain kind of error. In the following sections, therefore, we will be interested not only in \(\mathscr{Z}\), but also in sets of bounded \(\Delta\) in \(\mathscr{V}\times\mathscr{D}\), in order to understand to what extent the conclusions are robust against such error. ### The axiomatic approach Additional axioms will be added to A1 - A3, already announced. They will be _motivated_ by what we can deduce about the exact quantum mechanical situation, but are expressed at the DFT level.
This style of working allows us to keep track of exactly what we have used from the underlying QM (not a lot), and gives room for the results to apply to model functionals. There will only ever be a single \(F\) involved. However, it need not be an exact functional, clearly traceable to a quantum mechanical Hamiltonian. Any \(F\) which satisfies the axioms will do, so it could be \(F_{0}\), an exact \(F\), or \(F_{0}+\Phi\) for a model HXC energy. The axioms reflect properties of exact functionals, but are not particularly constraining. Final results are funnelled through the axioms, so to speak. There is work to be done both in proving that the axioms are satisfied in standard interpretation, and in getting from them to claims formally stated as theorems. This is not always the most efficient approach. Two later axioms will supersede earlier ones. The choice of axioms aims for mathematical simplicity, physical transparency, and generality (hence flexibility in application). ### \(F\) is not continuous In the physics literature, it is often implicitly assumed that the intrinsic energy \(F\) is well-behaved, continuous at least, and possibly smooth. This is not only unjustified, but incorrect. With respect to the norm \(\|\cdot\|\) already mentioned, and dealt with in the next section, the exact functional \(F\) is lsc, but _not_ usc. In fact, \(F_{0}\) already has this problem. To see this, consider a density \(\rho\), and select a region \(U\) and \(\epsilon>0\). By adding oscillations of bounded amplitude but increasingly small wavelength to \(\rho\) in the region \(U\), we can produce a sequence of densities \(\rho_{n}\) such that \(\|\rho_{n}-\rho\|<\epsilon\), but \(F_{0}(\rho_{n})>n\). Hence, \(F_{0}\) is _unbounded above on every neighborhood_. The excess energy \(\Delta\) inevitably inherits this problem. This is worth emphasizing because some of what follows, though by no means all, would be somewhat trivial if \(F\) were continuous. In addition, we also have \(E\) and \(\Delta\) to worry about. ## 9 Structure and regularity I ### New postulates In addition to A1 - A3 from section 4.1, we now assume A4. \(\operatorname{dom}E\supseteq\mathscr{V}\). T1. \(\mathscr{V}\) is the topological dual of \((\operatorname{Vec}\mathscr{D},\|\cdot\|)\) with respect to the pairing \(\langle\,,\,\rangle\). T2. \(F\), extended to the completion of \((\operatorname{Vec}\mathscr{D},\|\cdot\|)\) by \(F\equiv+\infty\) off \(\mathscr{D}\), is lower semicontinuous. Recall that '\(\operatorname{dom}\)' indicates the set on which a function takes a proper (noninfinite) value. Thus, with A4, all our functions \(F\), \(E\), and \(\Delta\) take proper values over all of \(\mathscr{V}\times\mathscr{D}\). Together with the pairing \(\langle\cdot\,,\,\cdot\rangle\), the norm \(\|\cdot\|\) on \(\operatorname{Vec}\mathscr{D}\) induces a canonical norm (47) \(\|\cdot\|^{\prime}\) on \(\mathscr{V}\), under which it is a Banach space. The norms \(\|\cdot\|\) on \(\operatorname{Vec}\mathscr{D}\) and \(\|\cdot\|^{\prime}\) on \(\mathscr{V}\) induce metrics as discussed in section 7.1. When convenient, we refer to these as \(d\) and \(d^{\prime}\), respectively. ### Standard interpretation The interpretation is [See Eqs.
(46) and (48)] \[\|x\| := \|x\|_{1\cap 3}\] \[\|v\|^{\prime} := \|v\|_{\infty+\frac{3}{2}},\] \[\langle v\,,\,x\rangle = \int_{\mathbb{R}^{3}}v(r)x(r)\,dr.\] The pairing was already defined on a bigger set than \(\mathscr{V}\times\mathscr{D}\), so the extension described is not really necessary, but it is worth noting that the extension recovers the original pairing on the bigger set. ### Structure theorem In this section, we equip \(\mathscr{V}\times\mathscr{D}\) with the metric \[(d^{\prime}+d)((v,\rho),(v^{\prime},\rho^{\prime}))\,:=\,d^{\prime}(v,v^{\prime})+d(\rho,\rho^{\prime}). \tag{52}\] Until further notice, convergence will be considered with respect to \(d^{\prime}\), \(d\), and \(d^{\prime}+d\) in \(\mathscr{V}\), \(\mathscr{D}\), and \(\mathscr{V}\times\mathscr{D}\), respectively. We abbreviate \(\{(v,\rho)\,:\,\Delta(v,\rho)\leq\epsilon\}\) by \(\{\Delta\leq\epsilon\}\). A subset of \(\mathscr{V}\times\mathscr{D}\) over which \(\Delta\) is bounded is called a \(\Delta\)_-bounded set_. Later we will be interested in \(F\)_-bounded_ sets, which are defined similarly. In that case, however, there is an ambiguity: \(\{F\leq M\}\) could be a subset of \(\mathscr{D}\), or a subset of \(\mathscr{V}\times\mathscr{D}\) with unrestricted \(\mathscr{V}\) coordinate. Context will make clear which is intended. \(F\) is lsc by assumption (T2), while \(E\) is usc by construction (see section 7.4). \(\Delta\) is then the sum of lower semicontinuous functions of density, \(F(\rho)\), of potential, \(-E(v)\), and a separately continuous function \((v,\rho)\mapsto\langle v\,,\,\rho\rangle\). \(\Delta\) is therefore separately lsc. Just as for continuity, _joint_ lower semicontinuity (i.e., as a function on \(\mathscr{V}\times\mathscr{D}\)) is not in general a consequence of separate lower semicontinuity. Much of the force of the following Proposition 9.1 is in showing that the situation is actually better than just observed. The improvement is clear as regards \(E\) (conclusion 2). Conclusion 1, although stated in a somewhat raw form, implies that \(\Delta\) is lsc on \(\mathscr{V}\times\mathscr{D}\), as is thoroughly explained in Section 13. Conclusions 3 and 4 show that \(F\) is better behaved in restriction to subsets of small excess energy. Beware of misinterpretation. Conclusion 3 does _not_ mean that \(F\) is continuous with respect to \(d\) on the set of v-representable densities. Rather, we can rephrase it as: \(F(\rho^{\prime})\) is close to \(F(\rho)\) if \(\rho^{\prime}\) is close to \(\rho\) _and_ a realizing potential for \(\rho^{\prime}\) is close to one for \(\rho\). The relevance of considering \(\mathscr{Z}\) is that Kohn-Sham computation delivers points on \(\mathscr{Z}\), or, in a less-idealized version, on \(\{\Delta\leq\epsilon\}\). **Proposition 9.1**.: _Assume A1 - A4, T1, T2. Then, on \((\mathscr{V}\times\mathscr{D},d^{\prime}+d)\),_ 1. _For_ \(\epsilon<\infty\)_,_ \(\{\Delta\leq\epsilon\}\) _is complete._ 2. \(E\) _is locally L-continuous._ 3. \(F\) _is locally L-continuous on_ \(\mathscr{Z}\)_._ 4. \(F\) _is_ \(\Delta\)_-almost continuous._ In more concrete terms directly related to Kohn-Sham computation, Prop. 9.1 has the following immediate consequence. Suppose * \(((v_{n},\rho_{n}))\subset\mathscr{Z}\) * \((v_{n},\rho_{n})\to(v,\rho)\) Then, * \((v,\rho)\in\mathscr{Z}\) * \(F(\rho)=\lim F(\rho_{n})\) * \(E(v)=\lim E(v_{n})\) ### A4, T1, T2 hold in standard interpretation Proof of A4.: See Ref. 8. (We will later establish an axiom, A5, which implies A4.)
Proof of T1.: \(\operatorname{Vec}\mathscr{D}\) is dense in the Banach space \(L^{1}\cap L^{3}\), the topological dual of which is the Banach space \(L^{\infty}+L^{3/2}\). Refer to the discussion in section 7.3. Proof of T2.: See Ref. 8. ### Proof of Proposition 9.1 A. \(E\) is locally L-continuous. Proof.: This follows from A4, using 1.4.1 and 1.7.4 of Schirotzek[22]. B. \((v,\rho)\mapsto\langle v\,,\,\rho\rangle\) is locally L-continuous. Proof.: From \[\langle v\,,\,\rho\rangle-\langle\tilde{v}\,,\,\tilde{\rho}\rangle=\langle v-\tilde{v}\,,\,\rho-\tilde{\rho}\rangle+\langle v-\tilde{v}\,,\,\tilde{\rho}\rangle+\langle\tilde{v}\,,\,\rho-\tilde{\rho}\rangle\] deduce \[|\langle v\,,\,\rho\rangle-\langle\tilde{v}\,,\,\tilde{\rho}\rangle|\leq\|v-\tilde{v}\|^{\prime}\,\|\rho-\tilde{\rho}\|+\|v-\tilde{v}\|^{\prime}\,\|\tilde{\rho}\|+\|\tilde{v}\|^{\prime}\,\|\rho-\tilde{\rho}\|.\] Considering, for example, the open set \(\|v\|^{\prime}<c\), \(\|\rho\|<c\), the RHS above can be bounded by \(3c[d^{\prime}(v,\tilde{v})+d(\rho,\tilde{\rho})]\). C. \(\{\Delta\leq\epsilon\}\) is complete. Proof.: The Cauchy sequence \(((v_{n},\rho_{n}))\subset\{\Delta\leq\epsilon\}\) has a limit \((v,\rho)\), with \(v\in\mathscr{V}\) and \(\rho\) in the completion of \((\operatorname{Vec}\mathscr{D},\|\cdot\|)\), since both spaces are complete under \(d^{\prime}\) and \(d\), respectively. Now, \(\Delta(v,\rho)=F(\rho)+\langle v\,,\,\rho\rangle-E(v)\), and we need to show that \(\Delta(v,\rho)\leq\liminf\Delta(v_{n},\rho_{n})\). This follows because \(F\) is lsc (T2), while \(E\) and \(\langle\,,\,\rangle\) are continuous, as shown in the preceding two items. In particular, \(F(\rho)\leq\epsilon+E(v)-\langle v\,,\,\rho\rangle<\infty\), so \(\rho\in\mathscr{D}\). D. \(F\) is locally L-continuous on \(\mathscr{Z}\). Proof.: Given the preceding, the proof is simple. \(F(\rho)=E(v)-\langle v\,,\,\rho\rangle+\Delta(v,\rho)\). The last term on the RHS is identically zero on \(\mathscr{Z}\), while the first and second are locally L-continuous by items A and B, respectively. E. \(F\) is \(\epsilon\)-almost continuous on \(\{\Delta\leq\epsilon\}\). Proof.: \(F(\rho)=E(v)-\langle v\,,\,\rho\rangle+\Delta(v,\rho)\), but the first two terms on the RHS are continuous functions of \((v,\rho)\) by preceding results, while the final term is in the interval \([0,\epsilon]\). ## 10 Nearly-a-ground-pair versus near-a-ground-pair ### Energetic and metric progress We now have two notions of progress available: the energetic notion of \(\Delta\)-progress from section 5, and what we might call \(d^{\prime}\)-progress, meaning we find a \((v_{n+1},\rho_{n+1})\) with \(v_{n+1}\) closer (with respect to \(d^{\prime}\)) to our target \(v^{\otimes}\) than \(v_{n}\) is. The density cannot enter in any useful notion of progress since in the standard problem, we do not know the target density. The natural question now is whether there is any kind of commensuration between these two kinds of progress. For instance, if our sequence \(((v_{n},\rho_{n}))\subset\mathscr{Z}\) and \(v_{n}\to v\), does \(\Delta(v,\rho_{n})\to 0\)? Put in plain English terms, a small value of \(\Delta(v,\rho)\) means that \((v,\rho)\) is _nearly a ground pair_. In contrast, if \((v,\rho)\) is close in function space to \((v^{\prime},\rho^{\prime})\in\mathscr{Z}\), then \((v,\rho)\) is _near a ground pair_. ### Main results This section takes all the axioms announced so far, A1 - A4, T1, T2, as background. Concerning the comparison alluded to by the section title, there is an uncomplicated and satisfying answer in one direction.
It says that the low (excess) energy part of the product space \(\mathscr{V}\times\mathscr{D}\) is metrically close to the ground pairs \(\mathscr{Z}\). **Proposition 10.1** (Nearly-a-ground-pair implies near-a-ground-pair).: \[\Delta(v,\rho)<\epsilon\ \Rightarrow\ (d^{\prime}+d)\Big{(}(v,\rho)\,,\,\mathscr{Z}\Big{)}<2\sqrt{\epsilon}. \tag{53}\] Proof.: This is a consequence of the Ekeland variational principle[23]. See Cor. I.6.1 of Ref. 24 or Cor. 5.3.6 of Ref. 17. The other direction is not so straightforward. The following proposition can be paraphrased as saying that \(\Delta\) is locally Lipschitz in \(\mathscr{V}\) on \(F\)-bounded sets. If, starting from \((v,\rho)\), a slight change of the potential -- holding the density fixed -- hits \(\mathscr{Z}\) (so, a restricted form of near-a-ground-pair), then \(\Delta(v,\rho)\) is small. "Slight" here depends on \(\rho\). A similar statement involving slight shifts of the density _is not true_. **Proposition 10.2**.: _If \(U\) is a neighborhood on which \(E\) has Lipschitz constant \(L\), then_ \[v,v^{\prime}\in U\ \Rightarrow\ \left|\Delta(v,\rho)-\Delta(v^{\prime},\rho)\right|\leq\Big{(}L+\|\rho\|\Big{)}\|v-v^{\prime}\|^{\prime}\leq\Big{[}c+c^{\prime}F(\rho)\Big{]}\|v-v^{\prime}\|^{\prime}. \tag{54}\] Proof.: By definition, \[\Delta(v,\rho)-\Delta(v^{\prime},\rho)=\left\langle v-v^{\prime}\,,\,\rho\right\rangle+E(v^{\prime})-E(v).\] Since L-continuity of \(E\) means that \(\left|E(v^{\prime})-E(v)\right|\leq L\|v^{\prime}-v\|^{\prime}\), the first inequality in (54) follows immediately. For the second, appeal to the following Lemma. **Lemma 10.3**.: _Assume A1 - A4, T1. Then, \(F\) is coercive, that is, \(F(\rho)\to\infty\) as \(\|\rho\|\to\infty\). More precisely, there is a linear bound of the form_ \[F(\rho)\geq c+c^{\prime}\|\rho\|. \tag{55}\] Proof.: This is a consequence of A4, as follows. For every \(v,\rho\), \(F(\rho)\geq E(v)-\left\langle v\,,\,\rho\right\rangle\). By A4, \(E\geq c\) on a ball \(B(r)\) about \(v\equiv 0\). For given \(\rho\), there is a \(v\in B(r)\) such that \(\left\langle v\,,\,\rho\right\rangle\leq-\frac{r}{2}\|\rho\|\). Therefore, we obtain the quoted formula with \(c^{\prime}=\frac{r}{2}\). An axiom to be added later (A5) will render automatic the \(F\)-boundedness on which Proposition 10.2 relies. Meanwhile, we can spell out a consequence which is possibly more digestible, or at least closer to the computational motivation insofar as it is about sequences of ground pairs. Note that \(v\) is fixed in \(\Delta(v,\rho_{n})\) here. **Corollary 10.4**.: _If \(((v_{n},\rho_{n}))\subset\mathscr{Z}\), \(v_{n}\to v\) and \(\{\rho_{n}\}\) is \(F\)-bounded, then \(\Delta(v,\rho_{n})\to 0\)._ ## 11 "Near-a-ground-pair implies nearly-a-ground-pair" improved Corollary 10.4 gives an affirmative answer to a query posed in section 6, but only subject to a hypothesis of \(F\)-boundedness. This is a reasonable hypothesis in the sense that it has not lost touch with the limitations of the reduced KS-machine. Recall that when the machine delivers \((v_{n},\rho_{n})\in\mathscr{Z}\), \(F(\rho_{n})\) is also available. With an axiom added in this section, \(F\)-boundedness will become automatic in that context. ### New postulate A5. For each \(v\in\mathscr{V}\), \[\left|\left\langle v\,,\,\rho\right\rangle\right|=o(F(\rho)),\ \text{as}\ F(\rho)\to\infty. \tag{56}\] Axiom A4 is very physically transparent. It says that there is some lower bound to the energies attainable in the presence of any given potential.
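In the standard interpretation, and granting the variational reading of the ground energy used throughout (\(E(v)=\inf_{\rho}\big[F(\rho)+\langle v\,,\,\rho\rangle\big]\), so that \(\Delta\geq 0\) with equality exactly at ground pairs), A4 amounts to the statement that \[E(v)>-\infty\quad\text{for every }v\in\mathcal{L}^{\infty}(\mathcal{M})+\mathcal{L}^{3/2}(\mathcal{M}),\] i.e., no admissible potential can drive the ground energy down to \(-\infty\).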
The new axiom is less transparent, but granted it, A4 is obsolete, as will be proven at the end of the section. **Proposition 11.1**.: _Given A1 - A3, A5 implies A4._ ### Locally in \(\mathscr{V}\), \(\Delta\)-boundedness and \(F\)-boundedness are the same thing **Proposition 11.2**.: _Each \(v\in\mathscr{V}\) has a neighborhood \(U\) such that subsets of \(U\times\mathscr{D}\) are \(\Delta\)-bounded iff \(F\)-bounded._ Now we can remove the assumption of \(F\)-boundedness from Cor. 10.4. **Corollary 11.3**.: _Given: \(((v_{n},\rho_{n}))\subset\{\Delta\leq\epsilon\}\), with \(v_{n}\to v\). Then, \(\limsup\Delta(v,\rho_{n})\leq\epsilon\)._ _In particular, if the sequence is in \(\mathscr{Z}\), then \(\lim\Delta(v,\rho_{n})=0\)._ Proof.: Immediate, from Prop. 11.2 and Prop. 10.2. ### Demonstration of A5 in standard interpretation Using notation from the proof of Lemma 13.4, appeal to Lemmas 13.4 and 10.3 to write \[|\left\langle v\,,\,\rho\right\rangle|\leq|\left\langle v_{m}\,,\,\rho\right\rangle|+|\left\langle v^{m}\,,\,\rho\right\rangle|\leq\|v_{m}\|_{\infty}\|\rho\|_{1}+\|v^{m}\|_{3/2}\|\rho\|_{3}\leq m+\|v^{m}\|_{3/2}[c+F(\rho)], \tag{57}\] where \(\|v^{m}\|_{3/2}\to 0\) as \(m\to\infty\). Given \(\epsilon\), large enough \(m\) then gives \(|\left\langle v\,,\,\rho\right\rangle|\leq c+\epsilon F(\rho)\). ### Proof of Proposition 11.2 The proof involves a strengthening of A5, in the sense that the condition is shown to hold not only pointwise in \(\mathscr{V}\), but locally uniformly: **Lemma 11.4**.: _Given \(v\in\mathscr{V}\) and \(\epsilon>0\), there is a radius \(r\) and constant \(c\) such that \(\|v^{\prime}-v\|^{\prime}\leq r\) implies_ \[|\left\langle v^{\prime}\,,\,\rho\right\rangle|\leq c+\epsilon F(\rho)\quad\text{for every }\rho. \tag{58}\] Proof.: Since \(\left\langle v^{\prime}\,,\,\rho\right\rangle=\left\langle v\,,\,\rho\right\rangle+\left\langle v^{\prime}-v\,,\,\rho\right\rangle\), \[|\left\langle v^{\prime}\,,\,\rho\right\rangle|\leq c+\epsilon^{\prime}F(\rho)+\|v^{\prime}-v\|^{\prime}[c^{\prime}+c^{\prime\prime}F(\rho)], \tag{59}\] where, according to A5, \(\epsilon^{\prime}\) can be chosen as small as desired (at cost of large \(c\)). Now, choose \(\epsilon^{\prime}\) and \(r\) so that \(\epsilon^{\prime}+c^{\prime\prime}r<\epsilon\). This ensures the desired bound on the ball \(\|v^{\prime}-v\|^{\prime}\leq r\). With this Lemma in hand, we return to the proof of Prop. 11.2. For any \(v^{\prime}\) and \(\rho\), \[|\Delta(v^{\prime},\rho)-F(\rho)|=|\left\langle v^{\prime}\,,\,\rho\right\rangle-E(v^{\prime})|. \tag{60}\] Take \(U\,:=\,B(v,r)\), with \(r\) small enough that (i) \(E\) is bounded on \(U\), and (ii) for \(v^{\prime}\in U\), \(\left\langle v^{\prime}\,,\,\rho\right\rangle\) is bounded as in the Lemma, with \(\epsilon<1\). Then, \(\Delta(v^{\prime},\rho)\) is bounded both above and below by a constant plus some strictly positive multiple of \(F(\rho)\). This is what is needed. ### Proof of Proposition 11.1 All that needs to be shown is that, for each \(v\), \(F(\rho)+\left\langle v\,,\,\rho\right\rangle\) is bounded below. Now, by A5 there is some \(M\) such that for \(F(\rho)>M\), the expression is bounded below. However, by Lemma 10.3, \(F(\rho)\leq M\) implies a bound \(\|\rho\|<M^{\prime}\), so in that case the expression is bounded below by \(c-M^{\prime}\|v\|^{\prime}\), with \(c\) from (55).
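A gloss on the demonstration in section 11.3, under the normalization implicit in (57) (namely that \(\|\rho\|_{1}\), the total particle number, is the same for every \(\rho\in\mathscr{D}\), taken as \(1\) in the display): the bounded part of a potential is entirely harmless for A5, since \(|\langle v_{m}\,,\,\rho\rangle|\leq\|v_{m}\|_{\infty}\|\rho\|_{1}\) is constant in \(\rho\). The whole content of A5 thus resides in the \(\mathcal{L}^{3/2}\) tail, which the cutoff decomposition of Lemma 13.4 makes as small as desired.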
## 12 Interlude: in pursuit of compactness Cor. 11.3 demonstrates that, when \((v_{n},\rho_{n})\) is a sequence of ground pairs with \(v_{n}\to v^{\otimes}\), the situation is good with respect to excess energy. The conclusion \(\Delta(v^{\otimes},\rho_{n})\to 0\) is similar to energetic progress from section 5, but not quite the same. Now we turn our attention back to the question of whether the sequence of densities \((\rho_{n})\) converges, and if so, whether the limit is a ground density of \(v^{\otimes}\). We would be assured that the sequence at least had cluster points, if we could guarantee that it was confined to a _compact_, or _totally bounded_ set. That remark calls for a review of the important topological notion of _compactness_, in a form suitable for our purposes. Although no overt appeal to this concept is made until section 15, it already begins to exert an influence on the direction of the development. ### Compactness and total boundedness A perhaps helpful slogan is, "a compact set is almost finite, in a topological sense". Suppose \(X\) is a metric space. Then, \(X\) is said to be _totally bounded_ exactly if, for any specified \(\epsilon>0\), one can find a finite set of points \(x_{1},\ldots,x_{N}\in X\) such that the \(\epsilon\)-balls centered at those points cover \(X\). A complete, totally bounded metric space is _compact_. Although this is not the usual definition, it is equivalent to it, and immediately captures the significance for our purposes. If \((y_{i}:i\in\mathbb{N})\) is _any_ (not necessarily Cauchy!) sequence in a compact metric space \(X\), then some subsequence converges to a point in \(X\). For example, any closed bounded interval \([a,b]\subset\mathbb{R}\) (\(-\infty<a\leq b<\infty\)) is compact. The entire real line is not, since the sequence \(y_{i}=i\) has no convergent subsequence. So, unboundedness is a way to avoid being compact. Another is having infinitely many dimensions. For instance, the closed unit ball of an infinite-dimensional Hilbert space is not compact. If \(\{\psi_{i}\,:\,i\in\mathbb{N}\}\) is an orthonormal basis, then the sequence \(i\mapsto\psi_{i}\) has no norm-convergent subsequence, since \(\|\psi_{i}-\psi_{j}\|=\sqrt{2}\) for \(i\neq j\). In an infinite dimensional Banach space, a compact set is both bounded and "almost finite-dimensional" in being within any prescribed distance of some finite-dimensional affine subspace. ### Total variation metric is a physically grounded candidate The weaker a metric on \(\mathscr{D}\), the more compact sets it will have. We are thus motivated to consider metrics weaker than \(d\), the metric induced by the norm \(\|\cdot\|\). Focusing on the standard interpretation, there is a particularly attractive possibility, namely the metric \(d_{1}\) induced by the \(L^{1}\) norm. Earlier, we argued that, topologically, one should start from \(\sigma(\mathscr{D},\mathscr{V})\). If \(\|\cdot\|_{1\cap 3}\) is physically motivated, then any metric strictly between these two is also. This is not quite true of \(d_{1}\). As we shall see, \(d_{1}\) is stronger than \(\sigma(\mathscr{D},\mathscr{V})\) on \(F\)-bounded sets, but not globally. However, \(d_{1}\) has strong independent physical credentials. First, the \(L^{1}\) norm hews tightly to the very concept of density, as an instrument for telling us how much "stuff" is in any specified region, whereas the \(L^{3}\) norm is, as observed, really a proxy for something else.
Indeed, if \(\mathcal{N}(A)\), respectively \(\mathcal{N}^{\prime}(A)\), is the number of particles in region \(A\) according to density \(\rho\), respectively \(\rho^{\prime}\), then \[\|\rho-\rho^{\prime}\|_{1}=2\cdot\sup\{\mathcal{N}(A)-\mathcal{N}^{\prime}(A)\,:\,A\subset\mathcal{M}\}. \tag{61}\] This metric has a privileged place in probability theory (a probability measure taking the place of \(\rho\)), where it is known as the _total variation_ metric. Second, the map dens from quantum mechanical states (density matrices) with the natural trace norm to \(\operatorname{Vec}\mathscr{D}\) is continuous with respect to the total variation metric, but not the \(L^{3}\) norm. The former therefore has a direct link to the underlying quantum mechanics as well. The next section examines what it takes to replace \(d\) by a weaker metric. The motivation for this lies in the possibility of convenient compact sets, but that theme will be put aside for now. ## 13 Structure and regularity II This section shows that, with an additional axiom, we can replace \(d\) by a weaker metric \(d_{1}\), and therefore \(d^{\prime}+d\) by \(d^{\prime}+d_{1}\) on the product space \(\mathscr{V}\times\mathscr{D}\), while maintaining (nearly all of) the favorable situation of Proposition 9.1. ### Complete lower semicontinuity In referring to a completion of \(\mathscr{D}\), axiom T2 makes reference to points outside \(\mathscr{D}\). With a weaker metric, we would have even more of these. We would like to avoid that and phrase everything in terms of the physical \(\mathscr{D}\). Here we identify the concept to do this, which turns out to be the same as appears in item 1 of Prop. 9.1. Thus, we achieve some unification at the same time. **Definition 13.1** (completely lower semicontinuous).: A function \(f\colon(\mathscr{X},d)\to\mathbb{R}\) on a metric space is _completely lower semicontinuous_ iff for each \(M<\infty\), the subset \(\{f\leq M\}\subseteq\mathscr{X}\) is complete. Normally, we are interested in a fixed function on the set \(\mathscr{X}\) and want to know whether \(f\) is completely lower semicontinuous with respect to \(d\). If so, we say that \(d\) _makes \(f\) completely lsc_. Here is the fundamental fact about this concept. **Lemma 13.1**.: _For \(f\colon(\mathscr{X},d)\to\mathbb{R}\), these are equivalent:_ 1. \(\bar{f}\) _defined on the completion_ \(\overline{(\mathscr{X},d)}\) _by_ \[\bar{f}(x)\,:=\,\begin{cases}f(x)&x\in\mathscr{X}\\ +\infty&\text{otherwise}\end{cases} \tag{62}\] _is lower semicontinuous._ 2. \(f\) _is completely lsc._ 3. _If_ \((x_{n})\) _is an_ \(f\)_-bounded Cauchy sequence in_ \((\mathscr{X},d)\)_, it has a limit_ \(x\)_, and_ \(f(x)\leq\liminf_{n\to\infty}f(x_{n})\)_._ _In particular, a completely lsc function is lsc._ Proof.: \(b\Leftrightarrow c\) is elementary. We show \(a\Leftrightarrow c\). Assume \(a\), and let \((x_{n})\) be a Cauchy sequence in \((\mathscr{X},d)\) on which \(f\) is bounded by, say, \(M<\infty\). It has a limit \(x\in\overline{(\mathscr{X},d)}\), and by \(a\), \(\bar{f}(x)\leq M\), and therefore \(x\in\mathscr{X}\). Conversely, assume \(c\), and let \((x_{n})\) be a Cauchy sequence in \((\mathscr{X},d)\). If \(\liminf f(x_{n})=\infty\), there is nothing to show, so assume \(\liminf f(x_{n})<\infty\). But this means some subsequence is \(f\)-bounded (and still Cauchy), so the conclusion \(f(x)\leq\liminf f(x_{n})\) follows directly from hypothesis \(c\). ### New postulate T3. \(d_{1}\) is a metric on \(\mathscr{D}\) such that 1. \(\sigma(\mathscr{D},\mathscr{V})\precsim d_{1}\) on \(F\)-bounded sets 2.
\(d_{1}\precsim d\) 3. \(d_{1}\) makes \(F\) completely lsc. Notice that, by T2, the metric \(d\) induced by \(\|\cdot\|\) satisfies this axiom. Of course, a weaker metric is intended. **Proposition 13.2**.: _Given A1 - A3, T3 implies T2._ This is trivial. Clause 2 of T3 is included to make it true and plays no other role. ### Standard interpretation The new ingredient here is \(d_{1}\). The standard interpretation is that \(d_{1}\) is the metric induced by the \(L^{1}\) norm \(\|\cdot\|_{1}\), as discussed in the Interlude. This is the motivation for the subscript 1 on '\(d_{1}\)'. ### Improved structure theorem We continue to use the metric \(d^{\prime}\) on \(\mathscr{V}\). In contrast to section 9, the metric on \(\mathscr{D}\) is \(d_{1}\). That will continue to be the focus of interest even beyond the present section. The statement of the main proposition is similar to Prop. 9.1, but for the use of the new terminology. The generalization from \(d\) to \(d_{1}\) indicates that the proposition is stronger than Prop. 9.1, except for the minor point that we no longer obtain Lipschitz continuity of \(F\) on \(\mathscr{Z}\). **Proposition 13.3**.: _Given: A1 - A4, T1 - T3. Then, on \((\mathscr{V}\times\mathscr{D},d^{\prime}+d_{1})\),_ 1. \(E\) _is locally L-continuous._ 2. \(F\) _is_ \(\Delta\)_-almost continuous._ 3. \(F\) _is continuous on_ \(\mathscr{Z}\)_._ 4. \(\Delta\) _is completely lsc._ ### T3 holds in standard interpretation A preliminary Lemma 13.4, which will be used elsewhere, prepares the way. **Lemma 13.4**.: _Given \(v\in\mathcal{L}^{\infty}(\mathbb{R}^{3})+\mathcal{L}^{3/2}(\mathbb{R}^{3})\), for any \(\epsilon>0\), \(v\) may be split as \(v=v^{\prime}+v^{\prime\prime}\), with \(v^{\prime}\in\mathcal{L}^{\infty}(\mathbb{R}^{3})\) and \(\|v^{\prime\prime}\|_{3/2}<\epsilon\)._ _Therefore \(\rho\mapsto\langle v\,,\,\rho\rangle\) is the sum of an \(L^{1}\) continuous function and a function bounded by \(\epsilon\|\rho\|_{3}\)._ Proof.: For \(m>0\), denote \[v_{m}\,:=\,\begin{cases}-m&v<-m\\ v&-m\leq v\leq m\\ m&v>m\end{cases}\,,\qquad v^{m}\,:=\,v-v_{m}. \tag{63}\] \(v_{m}\in\mathcal{L}^{\infty}(\mathbb{R}^{3})\) and \(v^{m}\in\mathcal{L}^{3/2}(\mathbb{R}^{3})\), while \(v^{m}\to 0\) pointwise almost everywhere as \(m\to\infty\). Hence, by the Dominated Convergence Theorem, \(\|v^{m}\|_{3/2}\to 0\) as \(m\to\infty\). Proof of T3.: \(\sigma(\mathscr{D},\mathscr{V})\precsim\|\cdot\|_{1}\) on \(F\)-bounded sets: Let \(A\subset\{F\leq M\}\), and suppose, for a contradiction, that there are \(\rho\in A\), \(v\in\mathscr{V}\), \(\epsilon>0\), and a sequence \((\rho_{n})\subset A\) with \(\rho_{n}\to\rho\) with respect to \(\|\cdot\|_{1}\), such that \(\langle v\,,\,\rho_{n}-\rho\rangle>\epsilon\) for all \(n\). Now, Lemma 10.3 gives a bound \(\|\rho_{n}-\rho\|_{3}<c\), while Lemma 13.4 allows us to write \(v=v^{\prime}+v^{\prime\prime}\) as a sum of a bounded \(v^{\prime}\) and \(\|v^{\prime\prime}\|_{3/2}<\epsilon/c\). However, that implies that \(\langle v\,,\,\rho_{n}-\rho\rangle\) is less than \(\epsilon\) for large enough \(n\). \(\|\cdot\|_{1}\) makes \(F\) completely lsc: This is a consequence of lower semicontinuity of \(F\) as a function on \(L^{1}(\mathbb{R}^{3})\) when extended as \(+\infty\) off \(\mathscr{D}\). See Ref. 8 for details of the latter. ### Proof of Proposition 13.3 A. \(E\) is locally L-continuous. Proof.: \(E\) depends only on \(v\), and the norm on \(\mathscr{V}\) has not changed, so this is the same as in section 9.5. B. \(d^{\prime}+d_{1}\) makes \(\Delta\) completely lsc on \(\{F\leq M\}\).
* If \((\rho_{n})\subset\{F\leq M\}\) is \(d_{1}\)-Cauchy, then it is \(\|\cdot\|\)-bounded. _Proof_: Since \(\sigma(\mathscr{D},\mathscr{V})\precsim d_{1}\) on \(\{F\leq M\}\), \((\rho_{n})\) is \(\sigma(\mathscr{D},\mathscr{V})\)-bounded. Therefore, by the Uniform Boundedness Principle, \(\{\|\rho_{n}\|\}\) is bounded. * \((v,\rho)\mapsto\langle v\,,\,\rho\rangle\) is continuous on \(\{F\leq M\}\). _Proof_: Suppose the sequence \((v_{n},\rho_{n})\xrightarrow{d^{\prime}+d_{1}}(v,\rho)\). We need to show that \(\langle v_{n}\,,\,\rho_{n}\rangle\to\langle v\,,\,\rho\rangle\). Now, \[\langle v\,,\,\rho\rangle-\langle v_{n}\,,\,\rho_{n}\rangle=\langle v\,,\,\rho-\rho_{n}\rangle+\langle v-v_{n}\,,\,\rho_{n}\rangle\,.\] Show that each term on the RHS tends to zero. 1st term: By hypothesis on \(d_{1}\), \(\rho_{n}\xrightarrow{\sigma(\mathscr{D},\mathscr{V})}\rho\). 2nd term: \(|\langle v-v_{n}\,,\,\rho_{n}\rangle\,|\leq\|v-v_{n}\|^{\prime}\|\rho_{n}\|\). Now, \(\|v-v_{n}\|^{\prime}\to 0\), while \(\|\rho_{n}\|\) is bounded by the preceding bullet point. * By item A and the preceding bullet point, \(\Delta\) is the sum of a completely lsc function \((F)\) and two continuous functions (\(E\) and \((v,\rho)\mapsto\langle v\,,\,\rho\rangle\)). C. \(\Delta\) is completely lsc. Now we lift the restriction to \(\{F\leq M\}\). Suppose \(((v_{n},\rho_{n}))\subset\{\Delta\leq\epsilon\}\) is a Cauchy sequence with \(v_{n}\to v\). Some tail of this sequence is in the neighborhood \(U\) of Prop. 11.2, hence is \(F\)-bounded, by \(M\), say. By T3, the sequence \(\rho_{n}\) therefore converges and \(F(\rho)\leq M\). The argument of item B therefore applies after all. D. On \(\{\Delta\leq\epsilon\}\), \(F\) is \(\epsilon\)-almost continuous. Proof.: The proof is formally just like that for item E in section 9.5. ## 14 Interlude: tightness We are aiming toward getting something as close as possible to automatic convergence of \((\rho_{n})\) whenever \(((v_{n},\rho_{n}))\subset\mathscr{Z}\) and \(v_{n}\to v\). The move to a weaker metric in the previous section helps because it gives more totally bounded sets of densities. Yet, we need to find a concrete property which guarantees total boundedness. If the sequence of densities \((\rho_{n})\) is to converge, it is certainly necessary that the following hold: given arbitrary \(\epsilon>0\), there is some sphere such that, from some point in the sequence on, \(\rho_{n}\) puts particle number less than \(\epsilon\) outside the sphere. Otherwise the sequence is "leaky" or "lossy" in the sense that some nonzero particle number is inexorably moving off to infinity. This necessary condition is _tightness_. It turns out to be almost sufficient, in the sense that, while we cannot guarantee that the sequence converges, it has cluster points, and therefore convergent subsequences. And, indeed, they all converge to ground densities for \(v\). ## 15 Density clustering and tightness The following is an immediate consequence of Propositions 11.2 and 13.3. It is really a theorem schema, and says nothing useful or nontrivial on its own, without identification of the property \(\mathcal{P}\). It does, however, point out what is needed. **Proposition 15.1**.: _Assume A1-A5, T1 - T3, and suppose that \(\mathcal{P}\) is a property of sequences in \(\mathscr{D}\) such that any \(F\)-bounded sequence with \(\mathcal{P}\) is \(d_{1}\)-totally bounded.
Then, if \(((v_{n},\rho_{n}))\subset\mathscr{Z}\), \(v_{n}\to v\), and \((\rho_{n})\) has the property \(\mathcal{P}\), the sequence \((\rho_{n})\) clusters on a set of densities, every one of which is a ground density of \(v\)._ ### Tightness Here is the property \(\mathcal{P}\) we need in the standard interpretation. **Definition 15.1** (tight).: A set \(A\) of integrable functions on \(\mathbb{R}^{d}\) is _tight_ if, for every \(\epsilon>0\), there exists \(R\) such that for every \(f\in A\), \[\int_{|x|>R}|f(x)|\,dx<\epsilon. \tag{64}\] Now, for a sequence \((\rho_{n})\subset\mathscr{D}\), we can formulate the condition of tightness as follows. Define \[\mathcal{N}_{>}(n,r)\,:=\,\int_{|x|\geq r}\rho_{n}(x)\,dx. \tag{65}\] Then, the sequence is tight if and only if \[\varliminf_{r\to\infty}\varlimsup_{n\to\infty}\mathcal{N}_{>}(n,r)=0. \tag{66}\] Tightness _alone_ does not guarantee that a sequence is \(d_{1}\)-totally bounded. However, it does do so in combination with \(F\)-boundedness, and that is enough, as observed in Prop. 15.1. ### Tightness guarantees clustering on ground densities **Lemma 15.2**.: \(F\)_-bounded tight subsets of \(\mathscr{D}\) are \(d_{1}\)-totally bounded._ Refer to Ref. 25 for a proof. Here is a semiclassical interpretation: The idea is that a volume \(h^{\mathcal{N}}\) in phase space corresponds to one dimension in Hilbert space. Now, if \(A\) is tight, then densities in \(A\) come from states almost bounded in position, and the bound on \(F\) implies a bound on momentum. This gives us that \(\mathsf{dens}^{-1}(A\cap\{F\leq M\})\) is compact. Since the map \(\mathsf{dens}\colon\mathcal{L}_{1}(\mathcal{H})\to L^{1}(\mathbb{R}^{3})\) is continuous, the image of that compact set is compact. Finally, combining Lemma 15.2 and Prop. 15.1, we reach the objective of this section, and a major objective of the paper. **Proposition 15.3**.: _In the standard interpretation of A1-A5, T1 - T3, if \(((v_{n},\rho_{n}))\subset\mathscr{Z}\) and \(v_{n}\to v\), and the sequence \((\rho_{n})\) is tight, then it clusters on a set of densities, every one of which is a ground density of \(v\)._ It is important that tightness is a relatively straightforward property, and one of which, in certain circumstances, we may be confident, or even certain. ### Interpretations with automatic compactness There are at least a couple of interesting variations on the standard interpretation in which the property \(\mathcal{P}\) of Prop. 15.1 can be taken to be the trivial property holding of all sequences. One such is the case where \(\mathcal{M}\) is not \(\mathbb{R}^{3}\), but a three-torus, or more generally a closed manifold. In that case, \(L^{1}(\mathcal{M})\) is isomorphic to \(L^{1}([0,1]^{3})\). Thought of that way, all sequences in \(\mathscr{D}\) are tight. Another case leaves everything as in the standard interpretation, except \(F\), which is further specialized (beyond what the axioms say) to an exact functional with a repulsive interaction, and a background trap potential tending to \(+\infty\) as \(|x|\to\infty\) (for example, a harmonic potential). In this case, the requirement of bounded \(F\) itself imposes the condition (64). ## 16 Recapitulation Here is a telegraphic, and necessarily somewhat imprecise, recapitulation of the findings, with emphasis on the standard interpretation and exact functionals, hence cutting out the axiomatic middlemen. Kohn-Sham computation can be viewed as a walk on ground pairs in \(\mathscr{V}\times\mathscr{D}\).
A simple iterative scheme, focusing on potential, is shown to make progress, with caveats. With respect to the metric \(d^{\prime}+d_{1}\) on \(\mathscr{V}\times\mathscr{D}\), the following hold: Ground energy \(E\) is continuous, while intrinsic energy \(F\) and excess energy \(\Delta\) are completely lower semicontinuous. \(F\) is also \(\Delta\)-almost continuous. Thus, although \(F\) is unbounded above on every neighborhood, this phenomenon and possible unpleasant consequences are strongly mitigated as long as we restrict attention to the low intrinsic energy subspace, and \(F\) is even continuous on \(\mathscr{Z}\). Low excess energy pairs are close to the set \(\mathscr{Z}\) of ground pairs, metrically. Conversely, \(\Delta\) increases only slightly when shifting the potential of a point in \(\mathscr{Z}\). (The corresponding statement with respect to density is absolutely _not_ true, not even for the \(\|\cdot\|\) metric.) If a sequence \(((v_{n},\rho_{n}))\) of ground pairs is such that \(v_{n}\to v^{\otimes}\), then the densities automatically accumulate on ground densities of \(v^{\otimes}\), as long as the density sequence does not have particle number drifting to infinity. ## 17 Some conclusions This work is based on a few simple ideas. First, the procedures and operations of KS computation should be physically interpreted. Second, the topologies (norms) on potential and density spaces entering a functional analytic theory also require physical grounding. Third, one should work explicitly in the product of potential and density space as much as possible. These starting points, especially the last, are also conclusions; they are vindicated by the results achieved in taking them seriously. A number of the results in this paper point to the somewhat ironic conclusion that more attention should be paid to potential in density functional theory. These are, primarily, the demonstration in section 5.5 that an iterative scheme focusing on potential can make progress, with provisos, and the result, Prop. 15.3, on automatic convergence of density. ###### Acknowledgements. This project was funded by the National Science Foundation under award DMR-2011839.
2308.11318
How to identify and characterize strongly correlated topological semimetals
How strong correlations and topology interplay is a topic of great current interest. In this perspective paper, we focus on correlation-driven gapless phases. We take the time-reversal symmetric Weyl semimetal as an example because it is expected to have clear (albeit nonquantized) topological signatures in the Hall response and because the first strongly correlated representative, the noncentrosymmetric Weyl-Kondo semimetal Ce$_3$Bi$_4$Pd$_3$, has recently been discovered. We summarize its key characteristics and use them to construct a prototype Weyl-Kondo semimetal temperature-magnetic field phase diagram. This allows for a substantiated assessment of other Weyl-Kondo semimetal candidate materials. We also put forward scaling plots of the intrinsic Berry-curvature-induced Hall response vs the inverse Weyl velocity -- a measure of correlation strength, and vs the inverse charge carrier concentration -- a measure of the proximity of Weyl nodes to the Fermi level. They suggest that the topological Hall response is maximized by strong correlations and small carrier concentrations. We hope that our work will guide the search for new Weyl-Kondo semimetals and correlated topological semimetals in general, and also trigger new theoretical work.
Diana M. Kirschbaum, Monika Lužnik, Gwenvredig Le Roy, Silke Paschen
2023-08-22T09:51:05Z
http://arxiv.org/abs/2308.11318v1
# How to identify and characterize strongly correlated topological semimetals ###### Abstract How strong correlations and topology interplay is a topic of great current interest. In this perspective paper, we focus on correlation-driven gapless phases. We take the time-reversal symmetric Weyl semimetal as an example because it is expected to have clear (albeit nonquantized) topological signatures in the Hall response and because the first strongly correlated representative, the noncentrosymmetric Weyl-Kondo semimetal Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), has recently been discovered. We summarize its key characteristics and use them to construct a prototype Weyl-Kondo semimetal temperature-magnetic field phase diagram. This allows for a substantiated assessment of other Weyl-Kondo semimetal candidate materials. We also put forward scaling plots of the intrinsic Berry-curvature-induced Hall response vs the inverse Weyl velocity--a measure of correlation strength, and vs the inverse charge carrier concentration--a measure of the proximity of Weyl nodes to the Fermi level. They suggest that the topological Hall response is maximized by strong correlations and small carrier concentrations. We hope that our work will guide the search for new Weyl-Kondo semimetals and correlated topological semimetals in general, and also trigger new theoretical work. ## 1 Introduction Heavy fermion compounds are materials where itinerant and localized (typically \(4f\) or \(5f\)) electrons coexist and, at low enough temperatures \(T\), strongly interact via the Kondo effect. They are best known for the heavy effective masses of their charge carriers, the property that gave this class of materials its name [1, 2]. They are also known for their ready tunability. Small variations of an external (nonthermal) control parameter \(\delta\) such as pressure or magnetic field lead to strong changes in the effective mass. Particularly drastic enhancements appear when approaching a quantum critical point where, at a critical value \(\delta_{\rm c}\) of the control parameter, a second-order (typically antiferromagnetic) phase transition is just suppressed to \(T=0\)[3]. The standard method to experimentally determine effective masses of heavy fermion metals is to measure a physical property at sufficiently low temperatures such that it exhibits Fermi liquid behavior. The effective mass can then be determined by comparison with the corresponding theoretical Fermi liquid expression, e.g., \(C(T)=\gamma T\) for the electronic specific heat, \(\Delta\rho(T)=AT^{2}\) for the electrical resistivity, or \(\chi(T)=\chi_{0}\) for the magnetic susceptibility of the conduction electrons, where the Sommerfeld coefficient \(\gamma\), the resistivity \(A\) coefficient, and the Pauli susceptibility \(\chi_{0}\) are all related to the effective mass [1, 2, 4, 5]. Upon approaching a quantum critical point, situated at \(T=0\) and \(\delta=\delta_{\rm c}\), these temperature dependences hold in ever narrower temperature ranges as they give way to non-Fermi liquid behavior emerging at the quantum critical point and extending in a fan-like shape into the \(T(\delta)\) phase diagram [3, 6, 7, 8, 9, 10]. It is important to note that the above relations hold for metals. Whereas most heavy fermion compounds are indeed metallic, there is a smaller subset of materials that display semiconducting properties. They are typically referred to as Kondo insulators [11, 12, 13, 14, 15, 16, 17, 18, 19]. 
In a simple mean-field picture, the insulating state arises due to the hybridization of the conduction electrons with the localized electrons, and the Fermi level lies within this hybridization gap. The periodic Anderson and Kondo lattice models are also known to exhibit such gaps [20, 21]; at half filling, where the lower hybridized band is fully occupied and the upper hybridized band is empty, a Kondo insulator results. The above Fermi liquid relations may still be meaningful if effects such as doping or off-stoichiometry move the Fermi level from within the gap into the conduction or valence band, or even into a conductive impurity band. In that case, the knowledge of the charge carrier concentration is needed to estimate the mass enhancement from experimental values of \(\gamma\), \(A\), or \(\chi_{0}\). An alternative measure of correlation strength is the width of the gap (the narrower it is, the stronger the correlations), but experimentally determined gap magnitudes have typically differed strongly depending on the quantity they were extracted from. The field of Kondo insulators underwent a sudden revival when, with the advent of topological insulators [22], topological Kondo insulators were proposed as well [23]. In this first work, a topologically nontrivial insulating state was found to result from the spin-orbit coupling associated with the hybridization between the conduction and localized (\(f\)) electrons, in particular for certain positions of the renormalized \(f\) level relative to the bottom of the conduction band and for certain crystal symmetries at the \(f\) electron site. This proposal raised great interest and triggered massive efforts [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. Nevertheless, in spite of considerable progress, there is no broad consensus yet on the topological nature of the observed surface states. Part of the challenge derives from the fact that the surface of a Kondo insulator is a delicate object: the formation of the Kondo insulator gap requires the Kondo effect to operate, something that naturally fails on a surface, where the Kondo screening cloud is cut off. Secondly, the tools that have provided rapid progress in the field of noninteracting topological insulators, most notably angle-resolved photoemission spectroscopy (ARPES) in combination with density functional theory (DFT), are of limited use for Kondo insulators, due both to their narrow bandwidths and to the absence of precise ab-initio methods. Finally, predictions for robust and readily testable experimental signatures of the expected topological surface states are scarce. More recently, in a joint effort of experiment and theory, heavy fermion compounds with metallic topology have been advanced, at first the Weyl-Kondo semimetal [38, 39, 40, 41] and later the Weyl-Kondo nodal-line semimetal [42]. They result from the interplay of the Kondo effect, strong spin-orbit coupling, and specific lattice symmetries, and are strongly correlated analogs of the previously discovered noninteracting and weakly interacting Weyl semimetals [43, 44]. The Weyl-Kondo semimetal, which has Weyl point nodes, was theoretically demonstrated in a periodic Anderson model, with conduction electrons on a zincblende lattice, a simple noncentrosymmetric structure [39, 40]. The Weyl-Kondo nodal-line semimetal, by contrast, was found for conduction electrons on a 3D lattice of space group (SG) \(Pmn2_{1}\) (No. 31) [42].
For the considered commensurate filling, the nodes appear at the Fermi energy as the Kondo effect develops. The linear dispersion near the Weyl nodes is extremely flat, with the renormalized bandwidth given by the Kondo temperature. Experimentally, Weyl-Kondo semimetal behavior was first found in the heavy fermion compound Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[38, 41], which crystallizes in a cubic, noncentrosymmetric and nonsymmorphic structure of SG \(I\bar{4}3d\) (No. 220). Initial evidence for Weyl-Kondo nodal-line semimetal behavior was found in Ce\({}_{2}\)Au\({}_{3}\)In\({}_{5}\)[42], which forms in the orthorhombic, noncentrosymmetric, and nonsymmorphic structure of SG \(Pmn2_{1}\) (No. 31). Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) displays "giant" signatures of nontrivial topology, which was attributed to the effect of strong correlations due to the Kondo effect [38, 41]. In this perspective, we will highlight these features to facilitate the identification of other representatives of this new class of materials. We will also examine the relationship between the size of the topological responses and the strength of electronic correlations, which can be used to verify experimental interpretations. The paper is organized as follows. We summarize the key features of Weyl-Kondo semimetal phase in Section 2 and discuss various candidate materials in Section 3. In Section 4 we explain how the correlation strength can be quantified in these materials and in Section 5 we investigate the relationship between correlation strength and the size of the topological responses. In Section 6 we summarize and discuss our findings, and provide an outlook. ## 2 Characteristics of the Weyl-Kondo semimetal The Weyl-Kondo semimetal is a new state of matter put forward in a joint effort of experiment [38, 41] and theory [39, 40]. It may form in systems with preserved time reversal symmetry but broken inversion symmetry. As it is the currently best-established gapless topological state driven by strong electron correlations, it is the focus of this perspective paper. The understanding that results from the above works is that Weyl nodes, which are already present in the noninteracting bandstructure, become part of the Kondo resonance at low temperatures and thus appear in the immediate vicinity of the Fermi energy. As a consequence, they play an important role in low-temperature properties, including thermodynamics and transport. The resulting band is extremely narrow ("flat"), corresponding to a Weyl (or Dirac) dispersion \[\varepsilon=\hbar vk \tag{1}\] with ultralow velocity \(v\). \(\varepsilon\) and \(k\) are the energy and wave vector counted from a Weyl (or Dirac) point. The heat capacity (for a sample of volume \(V\)) resulting from this dispersion is [39] \[C=\frac{7\pi^{2}V}{30}k_{\rm B}\left(\frac{k_{\rm B}T}{\hbar v}\right)^{3}= \Gamma T^{3}\;, \tag{2}\] which is indeed experimentally observed [38], as will be shown later. Furthermore, a magnetic field-tuning experiment [45], also detailed below, together with theoretical work on the field-tuning effect [46] revealed that, with increasing magnetic field, Weyl nodes and their respective anti-nodes move mostly (for details see [46]) at constant energy in momentum space until they meet and annihilate. The theoretical work considers an Anderson lattice model on a diamond crystal structure with an inversion-symmetry-breaking sublattice potential and is solved in the strong-coupling (Kondo) limit using the auxiliary boson method [46]. 
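Eq. 2 can be inverted to obtain the Weyl velocity from a measured slope \(\Gamma=\Delta C/T^{3}\). A minimal sketch follows; the numerical values of \(\Gamma\) and the molar volume are placeholders for illustration, not values from the paper:

```python
import numpy as np

k_B = 1.380649e-23       # J/K
hbar = 1.054571817e-34   # J s

def weyl_velocity(Gamma, V):
    """Invert Eq. 2:  Gamma = (7 pi^2 V / 30) k_B (k_B / (hbar v))^3
    ->  v = (k_B / hbar) * (7 pi^2 V k_B / (30 Gamma))^(1/3).
    Only the ratio V/Gamma enters, so per-mole quantities may be used
    for both (Gamma in J/(mol K^4), V = molar volume in m^3/mol)."""
    return (k_B / hbar) * (7.0 * np.pi ** 2 * V * k_B / (30.0 * Gamma)) ** (1.0 / 3.0)

Gamma_molar = 0.1   # J/(mol K^4), hypothetical slope of dC/T vs T^2
V_M = 1.0e-4        # m^3/mol, hypothetical molar volume
print(f"v ~ {weyl_velocity(Gamma_molar, V_M):.0f} m/s")  # a few hundred m/s
```

With placeholder inputs of this size the result lands in the few-hundred m/s range, i.e., the ultralow-velocity regime discussed in the text.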
Torque magnetization measurements [45] furthermore demonstrated that the Weyl nodes are positioned within a Kondo insulator gap. For Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), this situation is expected in analogy with the well-known Kondo insulator Ce\({}_{3}\)Bi\({}_{4}\)Pt\({}_{3}\)[14, 47, 48, 49], which is an isostructural and isoelectronic sibling of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[38, 45]. The topological nodal states are situated within the gap because, apparently, they are robust against being gapped out in the Kondo hybridization process [45]. The gapped background, identified also in [50], is a fortuitous situation for experiments because abundant topologically trivial states at the Fermi level might otherwise mask the effect of the topological nodal states. The key transport signature of a Weyl-Kondo semimetal is the "spontaneous" Hall effect [41]. The term spontaneous refers to the situation that a transverse voltage appears in response to a (longitudinal) electrical current but in the absence of both internal and external magnetic fields. An approximate formulation of the Hall response in a time-reversal symmetric but inversion asymmetric setting is \[j_{y}=\sigma_{xy}{\cal E}_{x}=\frac{e^{3}\tau}{\hbar^{2}}\underbrace{\int \frac{d^{3}k}{(2\pi)^{3}}f_{0}({\mathbf{k}})\frac{\partial\Omega_{z}^{\rm odd}({ \mathbf{k}})}{\partial k_{x}}}_{D_{xz}}{\cal E}_{x}^{2}\;, \tag{3}\] where \({\cal E}_{x}\) is an electric field applied along \(x\) ("longitudinal"), \(\Omega^{\rm odd}\) the Berry curvature, which is odd in momentum space, \(f_{0}({\mathbf{k}})\) the equilibrium Fermi-Dirac distribution function, \(D_{xz}\) the Berry curvature dipole, and \(j_{y}\) the resulting transverse (Hall) current density [51]. This is the first nonvanishing term in an expansion in the longitudinal electric field. In this limit, the spontaneous Hall conductivity \(\sigma_{xy}\) is proportional to \({\cal E}_{x}\); thus this response has also been called "nonlinear" Hall effect. In Weyl-Kondo semimetals, however, the Weyl nodes can be situated so close to the Fermi energy that, even for small applied electric fields, higher order terms are needed to capture the experimentally observed behavior [41]. Indeed, in the first candidate material, Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), which is time reversal invariant as demonstrated by zero-field muon spin rotation (\(\mu\)SR) experiments [41] but has a noncentrosymmetric crystal structure, not only the square-in-\({\cal E}_{x}\) spontaneous Hall current (or voltage) expected from Eq. 3 but also a contribution that is linear in \({\cal E}_{x}\) was observed and attributed to higher order terms that are neglected in Eq. 3 [41]. In applied magnetic fields (or magnetic induction \(B=\mu_{0}H\)), the spontaneous Hall effect finds continuation as an even-in-\(B\) Hall response. The magnetic field can be ruled out as the origin of this effect, as it would necessarily always result in an odd-in-\(B\) Hall effect. In figure 1 we sketch these key signatures of a Weyl-Kondo semimetal. According to Eq. 2 the Weyl contribution to the heat capacity \(\Delta C\) shows linear behavior on a \(\Delta C/T\) vs \(T^{2}\) plot (figure 1A), with a slope \(\Gamma\) that is inversely proportional to \(v^{3}\).
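Incidentally, the even-in-\(B\) component mentioned above is isolated by a standard symmetrization of the measured Hall trace. A hedged sketch (not from the paper; the synthetic numbers are purely illustrative):

```python
import numpy as np

def even_odd_hall(B, rho_H):
    """Split a Hall resistivity trace, sampled on a field grid symmetric
    about B = 0, into its even- and odd-in-B parts:
        rho_even(B) = [rho_H(B) + rho_H(-B)] / 2
        rho_odd(B)  = [rho_H(B) - rho_H(-B)] / 2
    """
    order = np.argsort(B)
    B, rho_H = B[order], rho_H[order]
    rho_rev = rho_H[::-1]  # rho_H(-B) on the sorted symmetric grid
    return B, 0.5 * (rho_H + rho_rev), 0.5 * (rho_H - rho_rev)

# Synthetic example: an ordinary odd-in-B term plus a small even-in-B bump.
B = np.linspace(-9.0, 9.0, 181)
rho = 0.4 * B + 0.05 * np.exp(-(np.abs(B) - 3.0) ** 2)  # illustrative only
B_s, rho_even, rho_odd = even_odd_hall(B, rho)
print(rho_even[:3], rho_odd[:3])
```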
Because the Kondo interaction can lead to bandwidth renormalizations of several orders of magnitude, \(v\) will be drastically reduced compared to the Fermi velocity of simple noninteracting Schrodinger-like quasiparticles (e.g., \(1.4\times 10^{6}\) m/s for gold) or, perhaps more significantly, noninteracting Dirac-like quasiparticles (e.g., \(1\times 10^{6}\) m/s for graphene [52]). This reduction of \(v\) boosts the heat capacity to the point that it may even overshoot the low-temperature (Debye-like) phonon contribution [38]. The temperature \(T_{\rm W}\) up to which this law holds is a measure of the stability of the Weyl-Kondo semimetal phase. It is plotted as full circles in figure 1B. We note that, unlike broken-symmetry phases characterized by an order parameter, this state is not bound by a phase transition but builds up continuously as Kondo coherence sets in [39, 41]. This is symbolized by the violet shading, which lacks a sharp boundary. With increasing applied magnetic field, \(T_{\rm W}\) is successively suppressed. This is because the Weyl and anti-Weyl dispersions start to intersect as the Weyl nodes move towards each other in momentum space [46] (figure 1C). The Weyl-Kondo semimetal phase collapses when the Weyl and anti-Weyl nodes meet and annihilate. The slope of the dispersions is, a priori, not expected to vary with \(B\), as visualized by the inverse Weyl velocity plotted as squares in figure 1B on the right \(y\) axis. The magnitude of the even-in-\(B\) Hall effect, by contrast, depends on the momentum-space distance between a Weyl and the associated anti-Weyl node [46]. As such it is expected to decrease with increasing field (see diamonds plotted on the right \(y\) axis in figure 1B).

## 3 Weyl-Kondo semimetal candidate materials

The above-described experiments on Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[38, 41] together with the theoretical studies on nonsymmorphic Kondo lattice models [39, 40] have coined the notion of the Weyl-Kondo semimetal. This sets the stage to consider experimental results on other noncentrosymmetric compounds in this context. In what follows we review the pertinent data and compare them with the behavior seen in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). In figure 2 we replot published specific heat data, in the form of isofield \(\Delta C/T\) vs \(T^{2}\) curves, for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[45], the cubic half-Heusler compound YbPtBi (SG \(F\bar{4}3m\), No. 216) [53], the tetragonal compound CeAlGe (SG \(I4_{1}md\), No. 109) [54, 55, 56], and the cubic compound Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (SG \(I2_{1}3\), No. 199) [57, 58], in panels A-D, respectively. Details of the data analyses are explained in the caption. For all four compounds these plots display ranges of linearity, as expected for a Weyl-Kondo semimetal according to Eq. 2. However, a closer inspection reveals distinct differences from the behavior of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). Firstly, the maximum temperature \(T_{\rm W}\) up to which the linear behavior holds _increases_ with \(B\) for YbPtBi, CeAlGe, and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\), whereas it _decreases_ with \(B\) for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). This would indicate that, in these other compounds, magnetic field stabilizes the Weyl-Kondo semimetal phase, as opposed to the suppression predicted from Zeeman coupling tuning [46].
Secondly, the slopes of the linear dependencies are sizably reduced with \(B\), whereas for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) all iso-\(B\) curves have essentially the same slope. The (putative) Weyl dispersions thus do not remain unchanged (as in the cartoon in figure 1) but become steeper under magnetic field tuning. This sizable correlation tuning effect may hint at the presence of a nearby quantum critical point.

As pointed out in [38], a \(T^{3}\) contribution to the specific heat may alternatively result from 3D antiferromagnetic (AFM) magnons, as seen for instance in the heavy fermion antiferromagnets CeIn\({}_{3}\)[59, 60], CePd\({}_{2}\)In (between 3 and 6 T) [61], or CeGe\({}_{1.76}\)[62] below the respective Neel temperatures. A sizable reduction of the slope with increasing magnetic field, as seen in YbPtBi, CeAlGe, and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\), would indeed be expected in this situation [63, 64]. In fact, CeAlGe is known to order below 5 K [54], which is consistent with the observed \(\Delta C/T\) curve, with a complex structure of predominantly antiferromagnetic nature [65], suggesting that AFM magnons may contribute to the observed \(\Delta C\propto T^{3}\) dependence.

For Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\), there are conflicting reports on its magnetic order. Whereas in [66] two antiferromagnetic phase transitions at 2 and 1.2 K were reported from specific heat measurements, no clear evidence for magnetic order was found in neutron diffraction experiments [67]. This calls for further investigations, for instance by zero-field \(\mu\)SR experiments, which ruled out even spurious magnetism in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[41]. YbPtBi is known to order antiferromagnetically in zero field, but this order is suppressed to \(T=0\) at 0.4 T and a Fermi liquid state is recovered at fields above 0.8 T [68]. The \(\Delta C\propto T^{3}\) dependence highlighted in [53] appears only at a much larger field of 7 T, deep in the Fermi liquid region. This seems to rule out a connection with the low-field AFM phase. On the other hand, it remains to be understood why a compound with broken inversion symmetry (such as YbPtBi) would not exhibit Weyl-Kondo semimetal features at smaller fields (including \(B=0\)). It should also be clarified whether the \(B\)-induced increase of the crystal electric field level splitting evidenced in [68] may underlie the strong field dependence of the \(\Delta C/T\) data (figure 2B).

In figure 3 we summarize the characteristics extracted for all four compounds in temperature-magnetic field phase diagrams. As in figure 1B, the full circles represent the onset temperatures \(T_{\rm W}\) of the \(\Delta C\propto T^{3}\) behavior and the open squares the (putative) inverse Weyl velocities extracted from the slopes \(\Gamma\) of linear fits to \(\Delta C/T\) vs \(T^{2}\). For Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) the Weyl velocity is approximately constant within the magnetic field range where the Weyl-Kondo semimetal exists. For the other three compounds, a pronounced field dependence is observed which, as discussed above, may hint at alternative origins of the \(\Delta C\propto T^{3}\) dependencies. A spontaneous (nonlinear) Hall effect has so far only been observed for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) (diamond at \(B=0\) in figure 3A). It is seen as a "smoking gun" signature for Weyl nodes in a time reversal symmetric but inversion-symmetry-broken semimetal, as the Berry curvature divergences at the Weyl nodes are its only plausible origin. If they are placed very close to the Fermi energy, as expected in a Weyl-Kondo semimetal [39], the resulting spontaneous Hall effect may be giant. Also the corresponding finite field signature, the even-in-field Hall effect as seen in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[41] (diamonds at \(B>0\) in figure 3A), remains to be discovered in the other Weyl-Kondo semimetal candidate materials.

Figure 1: Weyl–Kondo semimetal characteristics. **(A)** Weyl contribution to the heat capacity \(\Delta C\), plotted as \(\Delta C/T\) vs \(T^{2}\), for different magnetic fields (inductions) \(B_{i}\). The linear behavior, corresponding to a \(\Delta C\propto T^{3}\) dependence, is a thermodynamic signature of bands with linear dispersion (Eq. 1). For Weyl semimetals, its slope is related to the Weyl velocity \(v\) via Eq. 2. **(B)** Temperature–magnetic field phase diagram displaying the region (violet shading) in which the Weyl–Kondo semimetal signature in specific heat is observed. \(T_{\rm W,max}\) is the temperature up to which the \(\Delta C\propto T^{3}\) dependence holds in zero field. The right axes display the inverse Weyl velocity \(1/v\) (squares) and the even-in-field Hall resistivity \(\rho_{\rm H}^{\rm even}\) (diamonds), both normalized to their maximum values. **(C)** Sketch of the dispersions near a Weyl (W\({}^{+}\)) and its anti-Weyl node (W\({}^{-}\)), in zero magnetic field (left) and in an applied magnetic field (right). The dashed line indicates the Fermi energy \(E_{\rm F}\), chosen here to be positioned slightly below the Weyl nodes.

Figure 2: \(\Delta C/T\) vs \(T^{2}\) at fixed magnetic fields for **(A)** Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), **(B)** YbPtBi, **(C)** CeAlGe, and **(D)** Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\). For Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), \(\Delta C/T=C/T-(\gamma+\beta T^{2})\), where \(C\) is the total measured specific heat, \(\gamma\) a zero-temperature offset, which might originate from residual topologically trivial “background” bands, and \(\beta\) the prefactor of the low-temperature phonon contribution as determined from the non-\(f\) reference compounds La\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[45] and La\({}_{3}\)Bi\({}_{4}\)Pt\({}_{3}\)[38]. For the three other compounds, we used \(\Delta C/T=C/T-\gamma\), i.e., no phonon contribution was subtracted. However, this is not expected to change the conclusions as in all three cases \(\beta\) is much smaller than (less than 2% of) the measured slopes [53, 54, 58], creating a maximum error of 0.6% on the extracted putative Weyl velocities. Note that for CeAlGe, \(\gamma\) is negative above 5 T, which is unphysical and indicates that the temperature dependence has to change at lower temperatures. The arrows mark the onset temperature \(T_{\rm W}\) of the \(\Delta C\propto T^{3}\) behavior, defined here via a deviation of the data by more than 5% from the low-temperature \(\Delta C/T=\Gamma T^{2}\) fit. For YbPtBi, the onset temperatures tabulated in [53] were taken, where the definition criterion is not further specified. The data for the plots were taken from [45] (Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), open symbols from Ref. [38]), [53] (YbPtBi), [54] (CeAlGe), and [58] (Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\)).
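The 5% onset criterion in the caption of figure 2 is straightforward to make operational. A minimal sketch (synthetic data, not the published curves; arrays sorted in increasing temperature are assumed):

```python
import numpy as np

def onset_T_W(T, dC_over_T, fit_below=1.0, tol=0.05):
    """Onset temperature T_W: lowest T above which the data deviate by more
    than `tol` (5%) from the low-T fit  dC/T = Gamma * T^2.
    `fit_below`: temperature window (in K) used for the fit."""
    low = T <= fit_below
    # least-squares fit of dC/T = Gamma * T^2 through the origin
    Gamma = np.sum(dC_over_T[low] * T[low] ** 2) / np.sum(T[low] ** 4)
    deviation = np.abs(dC_over_T / (Gamma * T ** 2) - 1.0)
    above = np.nonzero(deviation > tol)[0]
    T_W = T[above[0]] if above.size else T[-1]
    return Gamma, T_W

# Synthetic curve: pure Gamma*T^2 plus a mild high-T departure (illustrative).
T = np.linspace(0.1, 4.0, 400)
data = 2.0 * T ** 2 * (1.0 + 0.02 * T ** 4 / (1.0 + T ** 2))
Gamma, T_W = onset_T_W(T, data)
print(f"Gamma = {Gamma:.3f}, T_W = {T_W:.2f} K")  # -> T_W near 1.8 K here
```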
What has been analyzed and proposed as evidence for Weyl physics in YbPtBi is an odd-in-field Hall effect (crosses in figure 3B) [53]. It represents a magnetic-field _induced_ effect, in contrast to the spontaneous Hall effect, which exists in \(B=0\), and the even-in-field Hall effect, which exists _in spite of_ the presence of a finite field (i.e., the field is not its origin). As such, it is more ambiguous evidence for Weyl semimetal physics. In general, the identification of intrinsic Berry curvature contributions in odd-in-field Hall resistivity data is a nontrivial task, which has led to conflicting results in particular in magnetic materials [69]. In Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), such a contribution was identified as the deviation from a linear-in-field Hall resistivity, which is observed only at low temperatures and fields, within the Weyl-Kondo semimetal regime [41] (crosses in figure 3A). Note that in this regime the magnetization is linear in field and can thus not be at the origin of this effect. This contribution is necessarily zero for \(B=0\), then increases to its maximum value, and vanishes again as the Weyl-Kondo semimetal is suppressed by magnetic field. For YbPtBi, the odd-in-field Hall signal appears to exist outside the putative Weyl-Kondo semimetal regime identified via the specific heat (red shading in figure 3B), which calls for measurements at lower temperatures to verify whether the putative Weyl-Kondo semimetal regime might persist to lower fields. Figure 3: Temperature–magnetic field phase diagrams for **(A)** Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), **(B)** YbPtBi, **(C)** CeAlGe, and **(D)** Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\), for comparison with the expectation for a Weyl–Kondo semimetal sketched in figure 1B. The field-dependent onset temperatures \(T_{\rm W}\) (circles, left axes) of the \(\Delta C\propto T^{3}\) behavior, defined as explained in figure 2 and normalized by the respective maximum value \(T_{\rm W,max}\), delineate the region of (putative) Weyl–Kondo semimetal behavior (shading). The right axis displays the field dependence of the inverse (putative) Weyl velocities \(1/v\) extracted from the slopes \(\Gamma\) of the linear fits in figure 2 (squares) and, where available, the even-in-field Hall resistivity \(\rho_{\rm H}^{\rm even}\) (diamonds) and an “anomalous” odd-in-field Hall resistivity \(\rho_{\rm H}^{\rm odd}\) (crosses), all normalized by the respective maximum values. The (putative) Weyl velocities for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) and YbPtBi are taken from [38] and [53], respectively. For CeAlGe and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\), they were calculated in this work using Eq. 2. The Hall data are the lowest-temperature isotherms available, which were taken at 0.4 K for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[41, 45] and 0.3 K for YbPtBi [53]. ## 4 Quantifying the correlation strength of Weyl-Kondo semimetals The Weyl-Kondo semimetal Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) was shown to exhibit "giant" topological responses [38, 41]. This was attributed to the strong bandwidth renormalization via the Kondo effect, which results in a flat Weyl dispersion with very low Weyl velocity. It seems plausible that the Kondo effect leads to similar renormalization effects for both Schrodinger and Dirac/Weyl-like quasiparticles. Thus, a comparison between the respective renormalization factors can serve as a consistency check. 
To scrutinize the Weyl-Kondo semimetal interpretation discussed above, we use experimental values of the Sommerfeld coefficient \(\gamma\) (removed in the plots in figure 2 by plotting \(\Delta C/T=C/T-\gamma\)) together with Hall effect data for the charge carrier concentration \(n\) to estimate the renormalization in the effective (Schrodinger) mass via \[\frac{m}{m_{0}}=\frac{3\hbar^{2}}{m_{0}\cdot k_{\rm B}^{2}\cdot(3\pi^{2})^{1/3} }\cdot\frac{\gamma}{n^{1/3}}\, \tag{4}\] where \(m_{0}\) and \(m\) are the free electron mass and the mass renormalized by correlations, respectively, and the other symbols have their usual meaning. As renormalization factor for the Dirac/Weyl quasiparticles, we use \((v/v_{0})^{-1}\), i.e. the inverse of the (putative) Weyl velocities \(v\) from figure 3 scaled by \(v_{0}\) (for parameters and references, see table 1). The inverse is taken because a larger renormalization of Dirac/Weyl-like bands is reflected by smaller (not larger) velocities. For concreteness, we use \(v_{0}=10^{6}\) m/s, the Dirac velocity of graphene [52]. The expectation for (correlated) Dirac or Weyl semimetals is that \(m/m_{0}\) and \((v/v_{0})^{-1}\) have similar values. In the double-logarithmic plot in figure 4 this is indicated by a straight line of slope 1. We see that only the data point for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) fulfills this expectation. For the other three materials, the renormalization effect would be much larger for the Dirac/Weyl-like than for the Schrodinger-like quasiparticles. This suggests that at least part of the large slopes \(\Gamma\) of the \(\Delta C\propto T^{3}\) dependencies of YbPtBi, CeAlGe, and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (figure 2) derive from effects other than a Weyl-Kondo semimetal dispersion. In any case, evidence beyond the specific heat signature should be sought to make a Weyl-Kondo semimetal assignment firm. ## 5 Topological response vs correlation strength As discussed above, the giant spontaneous Hall effect of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) may represent such firm evidence. To the best of our knowledge, it has so far not been reported in any other strongly correlated nonmagnetic (time-reversal-symmetry-preserving) Weyl semimetal candidate material, including the three above-discussed heavy fermion compounds. To nevertheless examine whether its magnitude depends on the correlation strength, we resort to a comparison with noninteracting/weakly interacting reference materials. In studies of these compounds, the term nonlinear Hall effect (NLHE) is used, and reference is made to the Berry curvature dipole \(D_{xz}\) (see Eq. 3). As explained in section 2, this is only part of the Berry-curvature-related response observed in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). Further terms arise when expanding the out-of-equilibrium distribution function around a finite-current setpoint (instead of around \(j_{x}=0\) as done to obtain Eq. 3), which is deemed necessary in Weyl-Kondo semimetals [41]. To discriminate this fully nonequilibrium response from the Berry curvature _dipole_ effect (the lowest-order term), the expression "spontaneous Hall effect" was used instead of "NLHE" [41]. For the purpose of comparison, we adopt the NLHE terminology in what follows. NLHE studies have been carried out in various (non- or weakly interacting) materials [72, 73, 74, 75, 76, 77, 78, 79, 80, 81], but the identification of intrinsic Berry curvature contributions has been challenging. 
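Before turning to these NLHE studies, the consistency check of Eq. 4 can be made concrete. A minimal sketch; \(\gamma\) and \(n\) are the Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) values from table 1, whereas the molar volume is a hypothetical placeholder (table 1 notes that \(\gamma\) must be converted to J/(K\({}^{2}\)m\({}^{3}\)) by dividing by \(V_{M}\)):

```python
import numpy as np

k_B = 1.380649e-23      # J/K
hbar = 1.054571817e-34  # J s
m0 = 9.1093837015e-31   # kg, free electron mass

def mass_renormalization(gamma_molar, V_M, n_cm3):
    """Eq. 4:  m/m0 = 3 hbar^2 / (m0 k_B^2 (3 pi^2)^(1/3)) * gamma / n^(1/3),
    with gamma converted to J/(K^2 m^3) and n to 1/m^3."""
    gamma = gamma_molar / V_M  # J/(mol K^2) -> J/(K^2 m^3)
    n = n_cm3 * 1.0e6          # 1/cm^3 -> 1/m^3
    return (3.0 * hbar ** 2 * gamma
            / (m0 * k_B ** 2 * (3.0 * np.pi ** 2 * n) ** (1.0 / 3.0)))

# gamma = 0.627 J/(mol K^2), n = 8.2e19 cm^-3 (table 1); V_M hypothetical.
print(f"m/m0 ~ {mass_renormalization(0.627, V_M=1.0e-4, n_cm3=8.2e19):.0f}")
```

With this placeholder volume the estimate lands at the \(\sim\!10^{3}\) scale, i.e., the same order as the \((v/v_{0})^{-1}\) value of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), consistent with the slope-1 expectation of figure 4.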
It involves the separation from extrinsic contributions due to effects such as side jump and skew scattering [82]. For Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), the Hall angle is constant in the Weyl-Kondo semimetal regime, i.e., \[\tan\Theta=\frac{\sigma_{xy}}{\sigma_{xx}}=\mbox{const}\;, \tag{5}\] as seen from the approximate linear \(\sigma_{xy}\) vs \(\sigma_{xx}\) dependence below about 3 K (Fig. 2B in [41]). Interestingly, this holds for both the dc (and, by extension, the \(2\omega\)) response and the (fully out-of-equilibrium) \(1\omega\) response. In the context of the NLHE, only the \(2\omega\) response is considered and it is investigated how \(\tan\Theta\), typically scaled by the applied longitudinal electric field \({\cal E}_{x}\), depends on the longitudinal conductivity, i.e. \[\frac{\tan\Theta}{{\cal E}_{x}^{\omega}}=\frac{\sigma_{xy}}{\sigma_{xx}{\cal E }_{x}^{\omega}}=\frac{{\cal E}_{xy}^{2\omega}}{({\cal E}_{x}^{\omega})^{2}}=f( \sigma_{xx})\;. \tag{6}\]

Figure 4: Weyl vs Schrödinger renormalization. (Putative) inverse Weyl velocity \(v^{-1}\), scaled by the inverse of the Dirac quasiparticle velocity of graphene \(v_{0}=1\times 10^{6}\) m/s [52], as extracted from the linear-in-\(T^{3}\) electronic (or, more generally, nonphononic) specific heat of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), YbPtBi, CeAlGe, and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (see figure 2) vs the effective (Schrödinger) mass renormalization as calculated via Eq. 4, using the published Sommerfeld coefficients \(\gamma\) and charge carrier concentrations \(n\) given in table 1. For the open symbol of CeAlGe, \(m/m_{0}=49\) given in [56] was used, where it was determined using the plasma frequency instead of \(n\).

The influence of disorder scattering was studied in a 2D tilted massive (gapped) Dirac model as a minimal symmetry-allowed model for a NLHE [82]. \(f(\sigma_{xx})=\mathrm{const}\), as observed for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), was found only for the intrinsic contribution (due to the Berry curvature dipole) and for side-jump and skew-scattering terms from dynamic (e.g. phonon-induced) disorder. The latter should depend on temperature and disappear in the zero-temperature limit. The fact that, for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), \(f(\sigma_{xx})=\mathrm{const}\) holds over the entire temperature range of Weyl-Kondo semimetal behavior is strong evidence for dynamic disorder effects playing a minor role and thus for the intrinsic nature of the spontaneous (or nonlinear) Hall effect. We note that the linear-response anomalous Hall effect from skew scattering and side-jump scattering was also shown to be negligibly small in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) (see SI, part B of [41]). As extrinsic scattering effects in the linear-response and nonlinear regimes are related [72, 82, 83], this is a further confirmation for their absence in the NLHE in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). In general, the situation is considerably more complex.
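As a practical aside, here is a minimal sketch (illustrative only; the sign convention for \(\rho_{xy}\) depends on contact geometry) of converting measured resistivities to the conductivities and Hall angle entering Eq. 5:

```python
import numpy as np

def hall_angle(rho_xx, rho_xy):
    """Invert the 2x2 resistivity tensor [[rho_xx, rho_xy], [-rho_xy, rho_xx]]
    and return (sigma_xx, sigma_xy, tan_theta). Sign conventions for rho_xy
    vary with contact geometry; they do not affect |tan_theta|."""
    denom = rho_xx ** 2 + rho_xy ** 2
    sigma_xx = rho_xx / denom
    sigma_xy = -rho_xy / denom
    return sigma_xx, sigma_xy, sigma_xy / sigma_xx

# Illustrative numbers only: a constant Hall angle shows up as a linear
# sigma_xy vs sigma_xx relation, as reported for Ce3Bi4Pd3 below ~3 K.
rho_xx = np.array([2.0e-5, 3.0e-5, 5.0e-5])  # Ohm m
rho_xy = -0.01 * rho_xx                      # built-in tan(theta) = 0.01
print(hall_angle(rho_xx, rho_xy)[2])         # -> [0.01 0.01 0.01]
```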
Here we focus on investigations of (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te [80], MoTe\({}_{2}\)[78], WTe\({}_{2}\)[72], and TaIrTe\({}_{4}\)[73], where--via stoichiometry optimization in (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te to reach a ferroelectric state with extremely low carrier concentration and via exfoliation in the other three noninteracting/weakly interacting reference compounds--a Berry curvature dipole contribution to the NLHE became sufficiently large to be identified with some confidence. (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te is an In-doped alloy of two rock salt-type compounds: the normal insulator (NI) PbTe and the topological crystalline insulator SnTe. For certain compositions (\(x\) and \(y\) values), ferroelectric order appears, which breaks the inversion symmetry of the undeformed system (SG \(Fm\bar{3}m\), No. 225), thereby enabling the formation of a Weyl semimetal [84]. An "optimally doped" sample shows an electrical conductivity that decreases with decreasing temperature [80]. Using this and the temperature dependent \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\), \(f(\sigma_{xx})\) can be obtained.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline compound & \(v\) (m/s) & \(\gamma\) (J/(mol K\({}^{2}\))) & \(n\) (1/cm\({}^{3}\)) \\ \hline Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) & 885 [38] & 0.627 [45] & 8.2\(\cdot 10^{19}\)[41, 45] \\ YbPtBi (7 T) & 213 [53] & 0.244 [53] & \(5.2\cdot 10^{20}\)[70] \\ YbPtBi (9 T) & 292 [53] & 0.182 [53] & \(5.2\cdot 10^{20}\)[70] \\ YbPtBi (13 T) & 394 [53] & 0.089 [53] & \(5.2\cdot 10^{20}\)[70] \\ CeAlGe (0 T) & 288 [54]\({}^{*}\) & 0.05 [54] & \(1.4\cdot 10^{20}\)[54] \\ CeAlGe (14 T) & 496 [54]\({}^{*}\) & 0.041 [54] & \(1.4\cdot 10^{20}\)[54] \\ Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (0 T) & 121 [58]\({}^{*}\) & 3.44 [58] & \(5\cdot 10^{22}\)[71] \\ Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (3 T) & 138 [58]\({}^{*}\) & 1.06 [58] & \(5\cdot 10^{22}\)[71] \\ Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (6 T) & 176 [58]\({}^{*}\) & 0.57 [58] & \(5\cdot 10^{22}\)[71] \\ Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) (9 T) & 200 [58]\({}^{*}\) & 0.43 [58] & \(5\cdot 10^{22}\)[71] \\ \hline \end{tabular} \end{table} Table 1: Parameters used for the data in figure 4, as extracted from the cited publications. The (putative) Weyl velocities \(v\) of CeAlGe and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) were determined within this work from the slopes \(\Gamma\) of the linear fits in figure 2 using Eq. 2; this is indicated by the \(*\) after the reference. Because the specific heat of CeAlGe exhibits a phase transition anomaly due to the magnetic ordering, a reliable extraction of \(\gamma\) is nontrivial. The values we used at 0 T and 14 T correspond to the lowest value of \(C/T(B=0)\) above the transition and the lowest measured \(C/T(B=14\) T) value in the entire \(T\) range, respectively [54]. To obtain the renormalization of the effective mass from Eq. 4 the \(\gamma\) values from this table must be converted to SI units (J/(K\({}^{2}\)m\({}^{3}\))) by dividing them by the respective molar volume \(V_{M}\). The carrier density \(n\) of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) was determined in the region where the Weyl nodes are gapped out (between about 9 and 14 T) [45]. This is needed for consistency with the Sommerfeld coefficient \(\gamma\), which also counts only the Schrödinger-like carriers.
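The material-specific estimates below all follow the same recipe: fit \(f(\sigma_{xx})=a+b\,\sigma_{xx}^{2}\) and read off the intrinsic, Berry-curvature-dipole part as the intercept \(a\) at \(\sigma_{xx}=0\). A minimal sketch with synthetic numbers (not the published data):

```python
import numpy as np

def intrinsic_nlhe(sigma_xx, f_sigma):
    """Least-squares fit of f = a + b * sigma_xx^2; the intercept a estimates
    the intrinsic (Berry curvature dipole) NLHE, the sigma^2 term the
    skew-scattering part."""
    A = np.vstack([np.ones_like(sigma_xx), sigma_xx ** 2]).T
    (a, b), *_ = np.linalg.lstsq(A, f_sigma, rcond=None)
    return a, b

# Synthetic data: intrinsic 2e-8 m/V plus a skew-scattering-like sigma^2 term.
sigma = np.linspace(1e5, 1e6, 20)      # S/m, illustrative
f = 2e-8 + 3e-21 * sigma ** 2          # m/V
a, b = intrinsic_nlhe(sigma, f)
print(f"intrinsic NLHE ~ {a:.2e} m/V")  # -> ~2.00e-08
```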
An extrapolation of the low-\(\sigma_{xx}\) (low-\(T\)) values to \(\sigma_{xx}=0\) leads to an intrinsic NLHE of \(4.35\times 10^{-4}\) m/V. Bulk MoTe\({}_{2}\) crystallizes in the noncentrosymmetric \(T_{\rm d}\)-MoTe\({}_{2}\) structure of SG \(Pmn2_{1}\) (No. 31) [85, 86], but the exfoliated films of interest here have a lower \(Pm\) symmetry [87]. \(f(\sigma_{xx})\) shows a pronounced dependence on \(\sigma_{xx}\), with a functional form that changes with temperature. There is also a pronounced thickness dependence. Thinner films have larger residual resistivity (due to surface scattering), which tips the balance between different (extrinsic) scattering processes. The best estimate of the intrinsic Berry curvature contribution comes from the thinnest samples because they have the smallest conductivity and thus the lowest skew-scattering contribution (which is the dominant extrinsic scattering effect at high temperatures). The extrapolation of \(f(\sigma_{xx})\propto\sigma_{xx}^{2}\) to \(\sigma_{xx}=0\) (at \(T\to\infty\)) gives \(1.2\times 10^{-6}\) m/V. This is one order of magnitude larger than the upper bound estimated from DFT calculations of the Berry curvature dipole, so presumably it is still dominated by extrinsic scattering [78]. The situation is similar in \(T_{\rm d}\)-WTe\({}_{2}\)[72]. Its SG \(Pmn2_{1}\)[88] is again reduced to \(Pm\) in exfoliated multilayer films [87]. For three films of 5-6 layer thickness, \(f(\sigma_{xx})\) was found to be proportional to \(\sigma_{xx}^{2}\) in temperature ranges between 2 and 100 K. Again, the Berry curvature dipole contribution is estimated by extrapolating this dependence to \(\sigma_{xx}=0\). That the values obtained for the three films vary by almost an order of magnitude (\(0.15-1\times 10^{-9}\) m/V) is attributed to the different carrier concentrations and mobilities, though no systematic dependence is seen. Finally, also exfoliated samples of \(T_{\rm d}\)-TaIrTe\({}_{4}\) (again SG \(Pmn2_{1}\) for bulk) reveal such behavior [73]. Using the same procedure for the thinnest and thus most resistive film yields \(1.8\times 10^{-8}\) m/V as an estimate for the intrinsic Berry curvature dipole contribution to the NLHE. In figure 5 we compare the magnitudes of these intrinsic Berry curvature dipole contributions to the NLHE by plotting \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) as a function of the respective reciprocal Weyl velocities \(v^{-1}\) (scaled by \(v_{0}^{-1}\), panel A) and charge carrier concentrations \(n^{-1}\) (panel B). All values are also given in table 2, and the caption contains details on how they were obtained. The data points in figure 5A fall into three groups of similar \(n\). In particular, Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) has roughly the same \(n\) as \(T_{\rm d}\)-TaIrTe\({}_{4}\) and \(T_{\rm d}\)-MoTe\({}_{2}\) (\(\sim\!10^{20}\) cm\({}^{-3}\)), which is highlighted by the dashed guide-to-the-eyes line, which represents a \(v^{-2}\) dependence (for the other data points, shaded lines with the same slope are plotted). At constant \(n\), \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) thus appears to be boosted by strong correlations, which flatten the Weyl bands (smaller slope \(v\) of the Weyl dispersion, Eq. 1) and enhance the electronic density of states at the Fermi level [which scales as \(D(E_{\rm F})\sim v^{-3}\) in a 3D material with Weyl dispersion].
A second trend that becomes clear from this plot is that, at constant \(v\), \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) is enhanced by reducing \(n\). This dependence is explicitly revealed in figure 5B. All data of the noninteracting/weakly interacting Weyl semimetals fall on a universal curve, \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\sim n^{-1.3}\) (dashed line), evidencing a strong dependence on the proximity of the Weyl nodes to the Fermi energy (\(E_{\rm F}\sim n^{1/3}\) in a 3D material with Weyl dispersion). Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) lies orders of magnitude above this line. Again, we also include a line of the same slope for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) (shaded line), which makes a strategy for further enhancing \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) explicit: to reduce the charge carrier concentration in a strongly correlated Weyl semimetal such as Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\). Whether, at constant \(n\), the correlation-induced \(v\) reduction is the only cause of the drastic enhancement of \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) or whether also other ingredients--such as the multiplicity of Weyl nodes near the Fermi energy, the \(k\) space separation of node and anti-node, or the tilting of the nodes [40]--contribute, should be clarified by future work.

Figure 5: Intrinsic Berry-curvature-dipole-induced nonlinear Hall effect (NLHE), quantified by \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\), of (candidate) Weyl semimetals as a function of **(A)** the inverse scaled Weyl velocity \((v/v_{0})^{-1}\) as a measure of correlation strength. The full symbols show the experimentally extracted values, the open symbols DFT results. \(T_{\rm d}\)-TaIrTe\({}_{4}\), \(T_{\rm d}\)-MoTe\({}_{2}\), and Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), which have similar charge carrier concentrations (of order \(10^{20}\,{\rm cm}^{-3}\)), lie on a universal curve \(\sim(v/v_{0})^{-2}\), as seen from the dashed guide-to-the-eyes curve. For the two shaded lines at higher and lower \(n\), we used the same slope; **(B)** the inverse charge carrier concentration \(n^{-1}\) (of hole-like charge carriers for consistency with [80]). The charge carrier concentrations of the quasi-2D material \(T_{\rm d}\)-WTe\({}_{2}\) were calculated using \(n=n_{\rm 2D}/d\), where \(d\) is the interlayer distance of (2.7-2.8) Å [87]. The noninteracting/weakly interacting materials lie on a universal curve \(\sim n^{-1.3}\), which was determined by fitting (dashed line). A curve with the same slope is plotted through the data point of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) (shaded line).
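The \(n^{-1.3}\) exponent quoted in the caption comes from a power-law fit; a minimal sketch (with made-up sample values, not the data of figure 5) of such a fit on log-log axes:

```python
import numpy as np

def power_law_fit(n, response):
    """Fit response = C * n^p by linear regression in log-log space;
    returns (exponent p, prefactor C)."""
    p, logC = np.polyfit(np.log(n), np.log(response), 1)
    return p, np.exp(logC)

# Made-up points roughly following response ~ n^(-1.3); illustrative only.
n = np.array([2e16, 6.3e19, 7.7e19, 2e20])  # cm^-3
resp = 5e-2 * n ** -1.3 * np.array([1.2, 0.8, 1.1, 0.95])
p, C = power_law_fit(n, resp)
print(f"fitted exponent p = {p:.2f}")        # -> close to -1.3
```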
## 6 Discussion and outlook

We have investigated the role of strong correlations in topological semimetals. As a starting point, we used the recently discovered time-reversal-invariant but inversion-symmetry-broken Weyl-Kondo semimetal Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\)[38, 39, 40, 41]. We reviewed its topological signatures in both thermodynamic and transport measurements, namely (i) a "giant" value of the electronic specific heat coefficient \(\Gamma=\Delta C/T^{3}\) of Dirac-like quasiparticles, which is associated with ultraslow quasiparticle velocities \(v\propto\Gamma^{-1/3}\) and thus ultraflat linearly-dispersing bands [38]; (ii) an equally giant value of the intrinsic nonlinear (spontaneous, i.e. \(B=0\)) Hall effect arising from the Berry curvature monopoles at the Weyl nodes [41]; (iii) a continuation of this zero-field Hall effect as an even-in-\(B\) component, confirming that magnetic field is not the cause of the effect [41]; (iv) a clearly identified odd-in-\(B\) anomalous Hall effect due to the Berry curvature induced by a magnetic field [41]. We have explained the understanding of these effects in terms of a Weyl-Kondo semimetal model [39, 40] where, at the appropriate filling, Weyl nodes appear in the immediate vicinity of the Fermi level and are associated with Weyl bands with ultraflat dispersion [38, 39, 40, 41]. We have produced a temperature-magnetic field phase diagram that delineates the region of Weyl-Kondo semimetal signatures, using the (high-temperature) onset temperature \(T_{\rm W}\) of the \(\Delta C\propto T^{3}\) dependence as the "phase" boundary (note that a Weyl semimetal is not a phase in the thermodynamic sense). With increasing field, this boundary is suppressed to zero at a critical field \(B_{\rm c}\), which is understood in terms of a Zeeman-coupling induced motion of Weyl nodes in momentum space until a Weyl and its anti-Weyl node meet and annihilate [45, 46]. We have also included the magnitudes of the topological signatures (i)-(iv), scaled to their maximum values, in this phase diagram. Whereas \(\Gamma\) remains essentially constant within the boundary, the Hall signatures get successively suppressed towards \(B_{\rm c}\). This behavior indicates that, with increasing field, the Weyl nodes move at constant energy in momentum space, without an appreciable change of the slope of the Weyl bands, until they meet and annihilate at \(B_{\rm c}\)[45], in good agreement with theoretical expectations [46].

The key aspects that make the Weyl-Kondo semimetal Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) a prime example for correlation-driven metallic topology are summarized as follows:

* Its Weyl-Kondo semimetal phase is well delineated: It emerges only at low temperatures as the material becomes fully Kondo coherent, and is suppressed at a readily accessible magnetic field as the Weyl nodes annihilate.
* Its Weyl-Kondo bands reside within a Kondo insulating gap: This eliminates contributions from topologically trivial "background" bands to a large extent, aiding the identification of topological signatures; in addition, it pins the Fermi level to the immediate vicinity of the Weyl nodes.
* Its Weyl-Kondo semimetal signatures are "giant": The orders of magnitude mass renormalization of Schrodinger-like quasiparticles known from heavy fermion compounds is inherited by the Weyl quasiparticles in terms of a corresponding band flattening, Weyl velocity suppression, and Weyl density of states enhancement.

We have searched the literature for other candidate Weyl-Kondo semimetals and considered the noncentrosymmetric compounds YbPtBi, CeAlGe, and Ce\({}_{3}\)Rh\({}_{4}\)Sn\({}_{13}\) as promising candidates because they all exhibit temperature and field ranges with \(\Delta C/T\propto T^{2}\) behavior with large slopes [53, 54, 58]. The phase diagrams that we constructed, however, show several differences from the one of Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), namely: (i) the (putative) phase boundaries are stabilized as opposed to suppressed with increasing magnetic field; (ii) the (putative) Weyl velocities are significantly increased with field as opposed to essentially unchanged in Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\); (iii) no spontaneous or even-in-field Hall effect is detected; (iv) the odd-in-field Hall effect detected in one of the materials [53] seems to appear outside the (putative) phase boundary. As a further consistency check, we estimated effective (Schrodinger) masses and Dirac (or Weyl) velocities of the candidate materials. Whereas the expected renormalization ratio of order unity was found for Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\), much stronger Weyl than Schrodinger renormalizations would have to be at play in the three other compounds. This calls for further studies, to pin down whether and to which extent other effects (e.g., antiferromagnetic magnons or CEF splitting in large fields) intervene. Finally, as Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) is so far the only Weyl-Kondo semimetal in which a spontaneous or even-in-field Hall response has been identified, we resorted to a comparison with noninteracting systems.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline Compound & \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) (m/V) & \(n_{(h)}\) & \((v/v_{0})^{-1}\) \\ \hline Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) & 3E-3 [41] & 8.2E19 (cm\({}^{-3}\)) [41, 45] & 1130 [38] \\ (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te & 4.35E-4 [80] & 2E16 (cm\({}^{-3}\)) [80] & 5.9 [84], 8.3 [80] \\ \(T_{\rm d}\)-MoTe\({}_{2}\) & 1.2E-6 [78] & 7.7E19 (cm\({}^{-3}\)) [78] & 6.2-8.1 [89] \\ \(T_{\rm d}\)-WTe\({}_{2}\) & (0.15-1)E-9 [72] & (1.49-2.18)E13 (cm\({}^{-2}\)) [72] & 3.2 [91], 4.8-6.7 [90] \\ \(T_{\rm d}\)-TaIrTe\({}_{4}\) & 1.8E-8 [73] & 6.3E19 (cm\({}^{-3}\)) [73] & 3.3-6.6 [92] \\ \hline \end{tabular} \end{table} Table 2: Estimates of the intrinsic NLHE contribution due to the Berry curvature dipole, \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) (obtained as explained in the text), the 3D/quasi-2D charge carrier concentration of hole-like charge carriers, \(n_{(\rm h)}\), and the inverse ratio of the Weyl velocity \(v\) and the velocity of graphene (taken as \(v_{0}=1\times 10^{6}\) m/s), for the selected Weyl semimetal (candidate) materials. These data are used in figure 5. The values of \(v\) were obtained as follows. Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\): from the slope \(\Gamma\) of \(\Delta C/T\) vs \(T^{2}\), with the phonon contribution subtracted [38]. (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te: from a linear fit of the optical conductivity vs photon energy, yielding \(v=1.7\times 10^{5}\) m/s [84], and from a linear-in-\(T\) fit of the electrical conductivity, leading to \(v=1.2\times 10^{5}\) m/s [80]. MoTe\({}_{2}\): from \(1/v=m/[\hbar(3\pi^{2}n)^{1/3}]\), using the charge carrier concentrations \(n=0.70\times 10^{20}\) and \(0.93\times 10^{20}\) 1/cm\({}^{3}\) of two orbits in Shubnikov–de Haas (SdH) oscillations, and effective masses of \(m=(1.0-1.2)m_{0}\) extracted in a Lifshitz–Kosevich analysis [89]. WTe\({}_{2}\): from slopes of linearly dispersing (surface) bands in ARPES [90], giving \((1.5-2.1)\times 10^{5}\) m/s, and from a Weyl orbit in SdH oscillations [91] on 14-layer thick exfoliated WTe\({}_{2}\), giving \(v=3.09\times 10^{5}\) m/s. TaIrTe\({}_{4}\): from ARPES revealing linearly dispersing surface states with a slope of 2 eVÅ (or 1 eVÅ as given in the text) [92], yielding a Dirac/Weyl velocity of \(v=3.04\times 10^{5}\) m/s (or \(1.52\times 10^{5}\) m/s).
To the best of our knowledge, the only candidate time-reversal symmetric noninteracting Weyl semimetals that have shown evidence for an intrinsic (Berry-curvature-related) NLHE are exfoliated thin films of \(T_{\rm d}\)-MoTe\({}_{2}\)[78], \(T_{\rm d}\)-WTe\({}_{2}\)[72], and \(T_{\rm d}\)-TaIrTe\({}_{4}\)[73], as well as carrier-concentration-optimized ferroelectric (Pb\({}_{1-x}\)Sn\({}_{x}\))\({}_{1-y}\)In\({}_{y}\)Te [80]. The quantity that conveniently benchmarks the size of this effect is \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\), where \({\cal E}_{x}^{\omega}\) is the applied electric field and \({\cal E}_{xy}^{2\omega,{\rm intr}}\) the intrinsic part of the resulting transverse electric field at double frequency. A comparison of all available data reveals that \({\cal E}_{xy}^{2\omega,{\rm intr}}/({\cal E}_{x}^{\omega})^{2}\) is drastically enhanced by strong correlations. Furthermore, as it also increases with decreasing charge carrier concentration, a strategy for further boosting the intrinsic topological Hall response is to reduce the carrier concentration of strongly correlated Weyl semimetals. We propose gating experiments on thin films as a promising strategy to explore this route. An interesting topic for further studies across the correlation spectrum is nonlinear optical responses, as seen in several noninteracting/weakly interacting Weyl semimetals and discussed also in terms of their potential for applications [73, 93, 94]. Strongly correlated Weyl semimetals might amplify such responses and reduce the pertinent energies, thereby enabling e.g. non-reciprocal devices and rectification in the microwave regime. We hope that our comparison of the key characteristics of the Weyl-Kondo semimetal Ce\({}_{3}\)Bi\({}_{4}\)Pd\({}_{3}\) with features of other candidate materials provides valuable guidance to discover new strongly correlated Weyl semimetals. This would allow the determination of universal aspects in Weyl-Kondo semimetals, such as the dependence of the magnitude of the nonlinear Hall response on the Weyl velocity (correlation strength), the charge carrier concentration (distance of the nodes from the Fermi energy), and potentially other factors such as the node vs anti-node separation in momentum space, tilting, and multiplicity of the Weyl nodes. This, in turn, may motivate further theoretical development and, more generally, boost progress toward a broader understanding of correlation-driven topological semimetals across different materials classes, as well as the development of technological applications. ## Acknowledgements We thank J. Cano, J. Checkelsky, G. Eguchi, S. Grefe, A. Prokofiev, Q. Si, X. Yan, and D. Zocco for fruitful discussions, which were in part conducted at the Kavli Institute for Theoretical Physics at UC Santa Barbara. This work was supported by the Austrian Science Fund (I5868 - FOR 5249 QUAST, F86 - SFB Q-M&S), the European Union's Horizon 2020 Research and Innovation Programme (824109, EMP), and the European Research Council (ERC Advanced Grant 101055088, CorMeTop), and in part by the US National Science Foundation (Grant No. NSF PHY-1748958). For the purpose of open access, the authors have applied a CC BY public copyright licence to the Author Accepted Manuscript version arising from this submission.
2308.06664
Universal quantum Otto heat machine based on the Dicke model
In this paper we study a quantum Otto thermal machine where the working substance is composed of N identical qubits coupled to a single mode of a bosonic field, where the atoms and the field interact with a reservoir, as described by the so-called open Dicke model. By controlling the relevant and experimentally accessible parameters of the model we show that it is possible to build a universal quantum heat machine (UQHM) that can function as an engine, refrigerator, heater or accelerator. The heat and work exchanges are computed taking into account the growth of the number N of atoms as well as the coupling regimes characteristic of the Dicke model for several ratios of temperatures of the two thermal reservoirs. The analysis of quantum features such as entanglement and second-order correlation shows that these quantum resources do not affect either the efficiency or the performance of the UQHM based on the open Dicke Model. In addition, we show that the improvement in both efficiency and coefficient of performance of our UQHM occurs for regions around the critical value of the phase transition parameter of the model.
He-Guang Xu, Jiasen Jin, G. D. M. Neto, Norton G. de Almeida
2023-08-13T02:27:17Z
http://arxiv.org/abs/2308.06664v1
# Universal quantum Otto heat machine based on the Dicke model ###### Abstract In this paper we study a quantum Otto thermal machine where the working substance is composed of N identical qubits coupled to a single mode of a bosonic field, where the atoms and the field interact with a reservoir, as described by the so-called open Dicke model. By controlling the relevant and experimentally accessible parameters of the model we show that it is possible to build a universal quantum heat machine (UQHM) that can function as an engine, refrigerator, heater or accelerator. The heat and work exchanges are computed taking into account the growth of the number N of atoms as well as the coupling regimes characteristic of the Dicke model for several ratios of temperatures of the two thermal reservoirs. The analysis of quantum features such as entanglement and second-order correlation shows that these quantum resources do not affect either the efficiency or the performance of the UQHM based on the open Dicke Model. In addition, we show that the improvement in both efficiency and coefficient of performance of our UQHM occurs for regions around the critical value of the phase transition parameter of the model. ## I Introduction Quantum thermodynamics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], which is described by the laws of quantum mechanics and thermodynamics, plays a fundamental role in understanding the transitions between various forms of energy and has become a vibrant branch of modern research. A quantum thermal machine [14; 15; 16; 17; 18; 19; 20; 21; 22] is a quantum device used to study the thermodynamic properties of quantum systems. In recent years, the study of thermal nanomachines has been driven by the great theoretical and experimental effort dedicated to the investigation of their properties in the quantum regime. Nowadays, there are many experimental platforms to explore quantum heat engines (QHEs), such as trapped ion systems [23; 24; 25; 26], optomechanics [27; 28], ultracold atoms [29; 30], nuclear magnetic resonance (NMR) [31; 32; 33], and superconducting circuits [34; 35; 36; 37; 38]. Among thermal machines, great interest has been devoted to cyclic thermal machines, both refrigerators and engines, operating in the quantum regime where energy exchanges can occur, for example, between the reservoir and just two levels of a single atom or between levels of a quantum harmonic oscillator. Several typical quantum cycles have been extensively studied, such as the Carnot, Otto, and Stirling cycles [13; 4; 39; 40; 41; 42; 43; 44; 45; 46; 47]. In this paper, we are only concerned with the quantum Otto cycle. The performance of the quantum Otto cycle depends strongly on the choice of the working substance. For example, recent studies show that, with the two temperatures fixed, the Otto cycle performed with fermionic substances can surpass the performance of the same cycle when performed with bosonic substances [48]. Regarding the Otto cycle, several working substances have been considered, such as single spin systems [49; 50], two-level atoms [22], coupled spin systems [51; 40; 52], coupled spin-3/2 systems [53], harmonic oscillators [54; 23], relativistic oscillators [55], Bose-Einstein condensates [56; 57], and light-matter systems described by the Jaynes-Cummings [58; 59; 60; 61] and quantum Rabi [62; 63; 64] models. 
Despite the several studies on light-matter systems, there are very few works devoted to investigating a quantum Otto heat engine operating with multiple qubits interacting with a single cavity mode in the dressed picture and taking into account dissipation as well as the number of two-level atoms, as described by the open Dicke model (ODM). Over the past decades, the Dicke model has been theoretically studied in several contexts, such as quantum phase transitions [65; 66; 67; 68; 69; 70; 71; 72], quantum entanglement [73; 74; 75], chaos [76], lasing [77] and quantum thermodynamics [78]. According to the qubit-photon coupling ratio \(\lambda/\omega\), where \(\lambda\) is the coupling strength and \(\omega\) is the frequency of the cavity mode field, the ODM can be divided into different coupling regimes: the weak and strong coupling regime (\(\lambda/\omega<0.1\)), the ultrastrong coupling regime (USC) (\(0.1\leq\lambda/\omega<1\)), which was experimentally realized in a variety of quantum systems [79; 80; 81; 82; 83; 84; 85; 86; 87], and the deep strong coupling regime (DSC) (\(1\leq\lambda/\omega\)). In this work, we study a quantum Otto heat machine (QOHM) operating between two thermal reservoirs and having as its working substance N atoms and one mode of an electromagnetic field, as modelled by the ODM. We calculate the total work extracted and the amount of heat exchanged between the system and the reservoirs, as well as both the efficiency of the engine and the coefficient of performance (COP) of the refrigerator, by numerically solving the ODM using the extended bosonic coherent state approach and the dressed master equation, which is suitable for any coupling strength regime to describe the ODM dynamics [88; 89]. As we will show, it is possible, by controlling the ODM parameters, to build a universal quantum thermal machine [90] that, depending on the choice of parameters, can work as an engine, a refrigerator, a heater, or an accelerator. Furthermore, our results indicate that it is not possible, for the model analyzed here, to use quantum resources to improve the engine efficiency or the refrigerator performance. This paper is organized as follows. In Sec. II we introduce the open Dicke model and numerically solve it by using the extended bosonic coherent state approach. In Sec. III we present our model for a universal QOHM, having as the working substance N two-level atoms and one mode of the electromagnetic field, both atoms and field interacting with their respective reservoirs through the so-called open Dicke model. In Sec. IV, we study the roles of the qubit-mode coupling strength, the number N of qubits and the temperature ratio between the cold and hot thermal reservoirs in the amounts of work and heat that can be extracted, as well as the impact on the engine efficiency and the refrigerator performance when varying the system parameters. Finally, in Sec. V we present our conclusions. 
## II The model The Hamiltonian describing the Dicke model, consisting of a single bosonic field interacting with \(N\) identical two-level qubits, is expressed as (\(\hbar=1\)) [91; 92] \[\hat{H}_{0}=\omega_{0}\hat{a}^{\dagger}\hat{a}+\Delta\hat{J}_{z}+\frac{2\lambda}{\sqrt{N}}(\hat{a}^{\dagger}+\hat{a})\hat{J}_{x}, \tag{1}\] where \(\omega_{0}\) and \(\Delta\) are the frequencies of the single bosonic mode and the qubits, respectively, \(\lambda\) is the qubit-boson coupling strength, \(\hat{a}^{\dagger}\) (\(\hat{a}\)) denotes the creation (annihilation) operator of the bosonic field, and \(\hat{J}_{x}=\frac{1}{2}(\hat{J}_{+}+\hat{J}_{-})\) and \(\hat{J}_{z}\) are the pseudospin operators given by \(\hat{J}_{\pm}=\sum_{i}^{N}\hat{\sigma}_{\pm}^{i}\), \(\hat{J}_{z}=\sum_{i}^{N}\hat{\sigma}_{z}^{i}\), with \(\hat{\sigma}_{\alpha}\) (\(\alpha=x,y,z\)) being the Pauli operators. The pseudospin operators satisfy the commutation relations \([\hat{J}_{+},\hat{J}_{-}]=2\hat{J}_{z}\), \([\hat{J}_{z},\hat{J}_{\pm}]=\pm\hat{J}_{\pm}\). In this work, we will consider the resonance condition \(\omega_{0}=\Delta=\omega\). The Dicke model has a numerically exact solution obtained by using the extended bosonic coherent state approach [72]. For convenience of the numerical solution, we first rotate the angular momentum operators by \(\pi/2\) about the \(y\)-axis, \(\hat{H}_{s}=\exp(i\pi\hat{J}_{y}/2)\hat{H}_{0}\exp(-i\pi\hat{J}_{y}/2)\), resulting in \[\hat{H}_{s}=\omega_{0}\hat{a}^{\dagger}\hat{a}-\frac{\Delta}{2}(\hat{J}_{+}+\hat{J}_{-})+\frac{2\lambda}{\sqrt{N}}(\hat{a}^{\dagger}+\hat{a})\hat{J}_{z}. \tag{2}\] For the two-level qubits, the basis is spanned by the Dicke states \(\{|j,m\rangle,m=-j,-j+1,...,j-1,j\}\) with \(j=N/2\), and the Hilbert space of the total system can be expressed in terms of the basis \(\{|\varphi_{m}\rangle_{b}\otimes|j,m\rangle\}\). In the Dicke model, the excitation number \(\tilde{N}=\hat{a}^{\dagger}\hat{a}+\hat{J}_{z}+N/2\) is not conserved. Therefore, a truncation of the bosonic excitation number has to be applied in this system, especially in the strong qubit-boson coupling regime. By considering the displacement transformation \(\hat{A}_{m}=\hat{a}+g_{m}\) with \(g_{m}=2\lambda m/\omega\sqrt{N}\) and inserting the total system basis into the Schrödinger equation, we obtain \[-\Delta j_{m}^{+}|\varphi_{m}\rangle_{b}|j,m+1\rangle-\Delta j_{m}^{-}|\varphi_{m}\rangle_{b}|j,m-1\rangle\] \[+\omega_{0}(\hat{A}_{m}^{\dagger}\hat{A}_{m}-g_{m}^{2})|\varphi_{m}\rangle_{b}|j,m\rangle=E|\varphi_{m}\rangle_{b}|j,m\rangle, \tag{3}\] where \(\hat{J}_{\pm}|j,m\rangle=j_{m}^{\pm}|j,m\pm 1\rangle\), with \(j_{m}^{\pm}=\sqrt{j(j+1)-m(m\pm 1)}\). Next, we multiply Eq. (3) on the left by \(\langle n,j|\), which results in \[-\Delta j_{n}^{+}|\varphi_{n+1}\rangle_{b}-\Delta j_{n}^{-}|\varphi_{n-1}\rangle_{b}+\omega_{0}(\hat{A}_{n}^{\dagger}\hat{A}_{n}-g_{n}^{2})|\varphi_{n}\rangle_{b}=E|\varphi_{n}\rangle_{b}, \tag{4}\] where \(n=-j,-j+1,...,j\). Furthermore, the bosonic state can be expanded as \[|\varphi_{m}\rangle_{b} = \sum_{k=0}^{\rm N_{tr}}\frac{1}{\sqrt{k!}}c_{m,k}(\hat{A}_{m}^{\dagger})^{k}|0\rangle_{A_{m}} \tag{5}\] \[= \sum_{k=0}^{\rm N_{tr}}\frac{1}{\sqrt{k!}}c_{m,k}(\hat{a}^{\dagger}+g_{m})^{k}e^{-g_{m}\hat{a}^{\dagger}-g_{m}^{2}/2}|0\rangle_{a},\] where \(\rm N_{tr}\) is the truncation number of bosonic excitations. 
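As a side note, the dressed spectrum can also be cross-checked by brute force. The following is a minimal sketch (an added illustration, not the method of the paper) that builds Eq. (1) with a plain Fock-space truncation in QuTiP; this truncation converges more slowly at strong coupling than the extended-coherent-state basis, and all parameter values are illustrative.

```python
# Brute-force check of the Dicke spectrum, Eq. (1), via a plain Fock
# truncation (not the extended-coherent-state basis used in the text).
import numpy as np
from qutip import destroy, qeye, jmat, tensor

def dicke_hamiltonian(N, omega0, Delta, lam, n_tr=60):
    j = N / 2
    a = tensor(destroy(n_tr), qeye(int(2 * j + 1)))   # bosonic mode
    Jx = tensor(qeye(n_tr), jmat(j, 'x'))             # collective spin
    Jz = tensor(qeye(n_tr), jmat(j, 'z'))
    return omega0 * a.dag() * a + Delta * Jz \
        + (2 * lam / np.sqrt(N)) * (a + a.dag()) * Jx

H = dicke_hamiltonian(N=8, omega0=1.0, Delta=1.0, lam=0.5)
print(H.eigenenergies()[:5])  # lowest dressed energies E_k
```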
Finally, we obtain the eigenvalue equation \[\omega_{0}(l-g_{n}^{2})c_{n,l}-\Delta j_{n}^{+}\sum_{k=0}^{\rm N_{tr}}c_{n+1,k}\,{}_{A_{n}}\langle l|k\rangle_{A_{n+1}}\] \[-\Delta j_{n}^{-}\sum_{k=0}^{\rm N_{tr}}c_{n-1,k}\,{}_{A_{n}}\langle l|k\rangle_{A_{n-1}}=Ec_{n,l}, \tag{6}\] where the coefficients are \({}_{A_{n}}\langle l|k\rangle_{A_{n-1}}=(-1)^{l}D_{l,k}\) and \({}_{A_{n}}\langle l|k\rangle_{A_{n+1}}=(-1)^{k}D_{l,k}\), with \[D_{l,k}=e^{-G^{2}/2}\sum_{r=0}^{\min[l,k]}\frac{(-1)^{-r}\sqrt{l!k!}G^{l+k-2r}}{(l-r)!(k-r)!r!},\quad G=\frac{2\lambda}{\omega_{0}\sqrt{N}}. \tag{7}\] In the following, we select the maximum truncation number \(\rm N_{tr}=50\), which is sufficient to give convergent excited-state energies with a relative error of less than \(10^{-5}\). As is well known, in the thermodynamic limit \(N\rightarrow\infty\) the Dicke model undergoes a transition from the normal phase (ground state with zero photonic and atomic excitations) to the superradiant phase (ground state with a macroscopic population) when the qubit-boson coupling strength crosses the critical value \(\lambda_{c}=\frac{1}{2}\sqrt{\omega_{0}\Delta\coth(\beta\omega_{0}/2)}\). The zero- and finite-temperature transitions belong to different universality classes, with this difference manifested, for example, in the photon-atom entanglement, which diverges for \(T=0\) and remains finite for \(T\neq 0\). Moreover, when \(N=1\) the Dicke model reduces to the seminal quantum Rabi model [63; 64]. To help clarify the numerical results, we explore two limiting regimes of our model: (\(i\)) the thermodynamic limit with \(N\rightarrow\infty\) and \(\sqrt{N}\lambda=\)_constant_, and (\(ii\)) the deep-strong coupling regime with fixed \(N\) and \(\lambda\rightarrow\infty\). Both regimes allow one to derive a diagonalizable effective Hamiltonian through the Holstein-Primakoff (HP) representation of the angular momentum operators, which maps the total spin operators \(\hat{J}_{\alpha}\) to a bosonic mode \(\hat{b}\). In case (\(i\)), the quantization axis is \(\hat{J}_{z}\) and the HP transformation, \(\hat{J}_{z}=(\hat{b}^{\dagger}\hat{b}-\frac{N}{2})\), \(\hat{J}_{+}=\hat{b}^{\dagger}\sqrt{N-\hat{b}^{\dagger}\hat{b}}\), \(\hat{J}_{-}=\sqrt{N-\hat{b}^{\dagger}\hat{b}}\,\hat{b}\), leads in the large-\(N\) limit (\(N\gg\langle\hat{b}^{\dagger}\hat{b}\rangle\)) to \[\hat{H}_{HP(N)}=\omega_{0}\hat{a}^{\dagger}\hat{a}+\Delta\hat{b}^{\dagger}\hat{b}+\lambda(\hat{a}^{\dagger}+\hat{a})(\hat{b}^{\dagger}+\hat{b}). \tag{8}\] The above Hamiltonian can be diagonalized in the normal phase \(\lambda\leq\sqrt{\omega_{0}\Delta}/2=\lambda_{c}\) as \(\hat{H}_{NP}=\varepsilon_{-}c_{-}^{\dagger}c_{-}+\varepsilon_{+}c_{+}^{\dagger}c_{+}\), with the energies given by \[(\varepsilon_{\pm})^{2}=\frac{\omega_{0}^{2}+\Delta^{2}}{2}\pm\frac{1}{2}\sqrt{(\omega_{0}^{2}-\Delta^{2})^{2}+16\lambda^{2}\omega_{0}\Delta}. \tag{9}\] After a suitable displacement of the HP bosons, the superradiant phase \(\lambda>\lambda_{c}\) can be cast in a bilinear form and diagonalized, with the normal-mode frequencies \[(\varepsilon_{\pm})^{2}=\frac{\omega_{0}^{2}\lambda^{4}+\Delta^{2}\lambda_{c}^{4}}{2\lambda_{c}^{4}}\pm\frac{1}{2\lambda_{c}^{4}}\sqrt{(\omega_{0}^{2}\lambda^{4}-\Delta^{2}\lambda_{c}^{4})^{2}+4\omega_{0}^{2}\Delta^{2}\lambda_{c}^{8}}. \tag{10}\]
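As a quick numerical illustration of Eqs. (9) and (10) (an added sketch; the resonant choice \(\omega_{0}=\Delta=1\) is purely illustrative), the following code evaluates the normal-mode energies on both sides of the transition; the soft mode \(\varepsilon_{-}\) closes at \(\lambda_{c}=\sqrt{\omega_{0}\Delta}/2\).

```python
# Polariton energies of Eqs. (9)-(10); eps_- vanishes at lambda_c.
import numpy as np

def polariton_energies(lam, omega0=1.0, Delta=1.0):
    lc = np.sqrt(omega0 * Delta) / 2
    if lam <= lc:   # normal phase, Eq. (9)
        s = (omega0**2 + Delta**2) / 2
        r = np.sqrt((omega0**2 - Delta**2)**2 + 16 * lam**2 * omega0 * Delta)
    else:           # superradiant phase, Eq. (10)
        s = (omega0**2 * lam**4 + Delta**2 * lc**4) / (2 * lc**4)
        r = np.sqrt((omega0**2 * lam**4 - Delta**2 * lc**4)**2
                    + 4 * omega0**2 * Delta**2 * lc**8) / lc**4
    return np.sqrt(s - r / 2), np.sqrt(s + r / 2)

for lam in (0.1, 0.4, 0.5, 0.6, 1.0):
    print(lam, polariton_energies(lam))
```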
The proper quantization axis in case (\(ii\)) is \(\hat{J}_{x}\), and the leading HP Hamiltonian for large \(\lambda\) (\(N\gg\langle\hat{b}^{\dagger}\hat{b}\rangle\)) and its eigenvalues are \[\hat{H}_{HP(\lambda)} = \omega_{0}\hat{a}^{\dagger}\hat{a}+\frac{4N\lambda^{2}}{\omega_{0}}\hat{b}^{\dagger}\hat{b}-N\lambda(\hat{a}^{\dagger}+\hat{a})\] \[E_{mn} = m\frac{4N\lambda^{2}}{\omega_{0}}+n\omega_{0}-\frac{N^{2}\lambda^{2}}{\omega_{0}}. \tag{11}\] We note that the eigenstates of \(\hat{H}_{HP(\lambda)}\) are product states of displaced photonic Fock states and \(x\)-polarized atomic states. Furthermore, both limiting cases lead to decoupled quantum harmonic oscillators that can be used to calculate the average energy analytically for each stage of the thermodynamic cycle and hence the work and efficiency. ## III Quantum Otto cycle To perform a quantum Otto cycle, which is composed of two adiabatic and two isochoric processes [4; 5], we consider N two-level atoms and one electromagnetic field mode, as described by the Dicke model, as the working substance. During the isochoric processes we let the N atoms and the electromagnetic field interact with a hot (cold) reservoir at temperature \(T_{h}\) (\(T_{c}\)). The four strokes of the quantum Otto cycle are described as follows (Fig. 1). 1. Quantum isochoric process. The working substance, as modelled by the Dicke Hamiltonian \(H_{s}^{h}\) with frequency \(\omega=\omega_{h}\), is brought into contact with a hot reservoir at temperature \(T_{h}\). In this process, the system undergoes a Markovian evolution, which is described by the quantum dressed master equation [88; 89] \[\frac{d}{dt}\hat{\rho}_{s} = -i[\hat{H}_{0},\hat{\rho}_{s}]+\sum_{u;k<j}\{\Gamma_{u}^{jk}n_{u}(\Delta_{jk})\mathcal{D}[|\phi_{j}\rangle\langle\phi_{k}|,\hat{\rho}_{s}] \tag{12}\] \[+\Gamma_{u}^{jk}[1+n_{u}(\Delta_{jk})]\mathcal{D}[|\phi_{k}\rangle\langle\phi_{j}|,\hat{\rho}_{s}]\},\] where \(|\phi_{k}\rangle\) is the dressed eigenbasis of the Dicke Hamiltonian, \(\hat{H}_{0}|\phi_{k}\rangle=E_{k}|\phi_{k}\rangle\), \(\mathcal{D}[\hat{O},\hat{\rho}_{s}]=\frac{1}{2}[2\hat{O}\hat{\rho}_{s}\hat{O}^{\dagger}-\hat{\rho}_{s}\hat{O}^{\dagger}\hat{O}-\hat{O}^{\dagger}\hat{O}\hat{\rho}_{s}]\) is the dissipator, and \(\Gamma_{u}^{jk}=\gamma_{u}(\Delta_{jk})|S_{u}^{jk}|^{2}\) is the rate, with \(S_{q}^{jk}=\frac{1}{\sqrt{N}}\langle\phi_{j}|(\hat{J}_{+}+\hat{J}_{-})|\phi_{k}\rangle\) and \(S_{c}^{jk}=\langle\phi_{j}|(\hat{a}^{\dagger}+\hat{a})|\phi_{k}\rangle\), where we consider the Ohmic case \(\gamma_{u}(\Delta_{jk})=\pi\alpha\Delta_{jk}\exp(-|\Delta_{jk}|/\omega_{co})\), with \(\alpha\) being the coupling strength and \(\omega_{co}\) the cutoff frequency of the thermal baths. In the eigenbasis, the dynamics of the populations \(P_{n}=\langle\phi_{n}|\hat{\rho}_{s}|\phi_{n}\rangle\) is given by \[\frac{d}{dt}P_{n} = \sum_{u,k\neq n}\Gamma_{u}^{nk}n_{u}(\Delta_{nk})P_{k} \tag{13}\] \[-\sum_{u,k\neq n}\Gamma_{u}^{nk}[1+n_{u}(\Delta_{nk})]P_{n},\] where \(\Gamma_{u}^{nk}=-\Gamma_{u}^{kn}\). After a long enough evolution, the system will reach the unique steady state \(\rho_{1}=\sum_{n}P_{n}^{ss}(T_{h})|E_{n}^{h}\rangle\langle E_{n}^{h}|\) of Eq. (13) with \(\frac{dP_{n}}{dt}=0\), \(P_{n}^{ss}(T_{h})\) being the corresponding populations. The system eigenstates \(|\phi_{k}^{h}\rangle\) and eigenvalues \(E_{k}^{h}\) of \(H_{s}^{h}\) were obtained by using the extended bosonic coherent state approach [72]. During this process, a heat amount \(Q_{h}\) is absorbed from the hot reservoir, without any work being done.

Figure 1: Schematic representation of the four strokes of an Otto cycle for the realization of a universal heat machine based on the open Dicke model, as detailed in Section III. During the isochoric strokes the frequency of the working substance, as modelled by the Dicke Hamiltonian, is held fixed while it interacts with a hot (cold) reservoir at temperature \(T_{h}\) (\(T_{c}\)); only heat is exchanged during these strokes. In the two quantum adiabatic strokes the working substance is isolated from the reservoirs and has its frequency shifted, thus producing work; no heat is exchanged during these strokes. By controlling the parameters \(\omega_{0}\), \(\Delta\) and \(\lambda\) of the model the machine can work as an engine, refrigerator, heater, or accelerator.

2. Quantum adiabatic expansion process. The system is isolated from the hot reservoir and the energy levels are changed from \(E_{n}^{h}\) to \(E_{n}^{c}\) by varying the frequency from \(\omega_{h}\) to \(\omega_{c}\) (with \(\omega_{h}>\omega_{c}\)). This process must be done slowly enough to ensure that the populations \(P_{n}^{ss}(T_{h})\) remain unchanged, according to the quantum adiabatic theorem. At the end of this adiabatic expansion the state becomes \(\rho_{2}=\sum_{n}P_{n}^{ss}(T_{h})|E_{n}^{c}\rangle\langle E_{n}^{c}|\). During this process only work is performed, with no heat being exchanged. 3. Quantum isochoric process. The working substance, with frequency \(\omega=\omega_{c}\) and modelled by the Hamiltonian \(H_{s}^{c}\), is now put into contact with a cold reservoir at temperature \(T_{c}<T_{h}\) until they reach thermal equilibrium. In this case, we have a change in the steady-state populations from \(P_{n}^{ss}(T_{h})\) to \(P_{n}^{ss}(T_{c})\), while the eigenvalues \(E_{n}^{c}\) of the system remain unchanged, and the state becomes \(\rho_{3}=\sum_{n}P_{n}^{ss}(T_{c})|E_{n}^{c}\rangle\langle E_{n}^{c}|\). During this process only heat is exchanged: an amount of heat \(Q_{c}\) is released to the reservoir, but no work is done. 4. Quantum adiabatic compression process. The system is isolated from the cold reservoir and its energy levels are changed back from \(E_{n}^{c}\) to \(E_{n}^{h}\) by varying the frequency from \(\omega_{c}\) to \(\omega_{h}\). At the end of the process the populations \(P_{n}^{ss}(T_{c})\) remain unchanged, the state becomes \(\rho_{4}=\sum_{n}P_{n}^{ss}(T_{c})|E_{n}^{h}\rangle\langle E_{n}^{h}|\), and only work is performed on the working substance, with no heat exchanged. Next, let us calculate the work and heat exchanged in each stroke. According to the first law of thermodynamics, the change in the internal energy of a quantum system with discrete energy levels can be written as \[dU=\delta Q+\delta W=\sum_{n}(E_{n}dP_{n}^{ss}+P_{n}^{ss}dE_{n}), \tag{14}\] where \(E_{n}\) are the energy levels and \(P_{n}^{ss}\) are the occupation probabilities at steady state. Accordingly, the heat \(Q_{h}\) (\(Q_{c}\)) exchanged with the hot (cold) reservoir and the net work \(W\) satisfy the following relations [20] \[Q_{h}=\sum_{n}E_{n}^{h}[P_{n}^{ss}(T_{h})-P_{n}^{ss}(T_{c})], \tag{15}\] \[Q_{c}=\sum_{n}E_{n}^{c}[P_{n}^{ss}(T_{c})-P_{n}^{ss}(T_{h})], \tag{16}\] \[W=Q_{h}+Q_{c}=\sum_{n}(E_{n}^{h}-E_{n}^{c})[P_{n}^{ss}(T_{h})-P_{n}^{ss}(T_{c})]. 
\tag{17}\] In this work we will adopt the following convention: \(Q>0\) (\(Q<0\)) corresponds to absorption (release) of heat from (to) the reservoir, while \(W>0\) (\(W<0\)) corresponds to work performed by (on) the quantum heat engine. Only four working regimes are allowed without violating the Clausius inequality together with the first law of thermodynamics [93]: (1) Heat engine (E): \(Q_{h}>0\), \(Q_{c}<0\), and \(W>0\); (2) Refrigerator (R): \(Q_{c}>0\), \(Q_{h}<0\), and \(W<0\); (3) Heater (H): \(Q_{c}<0\), \(Q_{h}<0\), and \(W<0\); (4) Accelerator (A): \(Q_{c}<0\), \(Q_{h}>0\), and \(W<0\). In this article we are mostly concerned with the heat engine and the refrigerator, which are of most interest for useful applications and whose figures of merit are the efficiency \(\eta=\frac{W}{Q_{h}}\) and the coefficient of performance (COP) \(\xi=\frac{Q_{c}}{|W|}\), respectively. ## IV Results and discussions ### Working regimes for the universal quantum Otto machine based on the ODM We can gain some insight into the Otto cycle by making a qualitative description of the different working regimes of the universal QOHM, as shown in Fig. 2 for \(N=8\) two-level atoms and \(\omega_{h}/\omega_{c}=2\). Note that, given two operating temperatures of the Otto cycle, by controlling the parameter \(\lambda\) we obtain the four different types of machine.

Figure 2: The various operating regimes of the quantum Otto machine achieved by varying the temperature of the hot thermal reservoir \(T_{h}\) and the qubit-boson coupling strength \(\lambda\), both in units of \(\omega\), keeping fixed the temperature of the cold thermal reservoir, also in units of \(\omega\), as (a) \(T_{c}=0.1\), (b) \(T_{c}=0.2\), (c) \(T_{c}=0.4\), and (d) \(T_{c}=2\). The color code stands for heat engine (magenta), refrigerator (cyan), heater (green), and accelerator (yellow). The other system parameters are given by \(N=8\), \(\omega_{h}=2\omega\), \(\omega_{c}=\omega\).

Also, note that the four regions are all present at low temperatures, Figs. 2(a)-(c), and are mainly occupied by the refrigerator (cyan area) and the heat engine (magenta area). As the temperature rises, Fig. 2(d), the engine and refrigerator operating regions stand out even more. Figs. 2(a)-(c) show that the heat engine and refrigerator regions are mainly distributed in the weak-coupling and ultrastrong regimes. Strikingly, up to \(\lambda\approx 0.3\) and for \(\lambda\gg 2\), due to the relative harmonicity of the spectrum, the positive-work condition (PWC) follows the one for the quantum harmonic oscillator or qubit, i.e., \(T_{h}>\frac{\omega_{h}}{\omega_{c}}T_{c}\). Next, we analyze the working regimes of the universal QOHM for different numbers N of qubits when the temperatures of the hot and cold thermal reservoirs are fixed. Fig. 3 shows the (a) work \(W\), (b) heat \(Q_{h}\), and (c) heat \(Q_{c}\) as a function of the qubit-boson coupling strength \(\lambda\) for different values of the qubit number N, thus evidencing the four working regimes for different qubit numbers. As mentioned above, the heat engine is distributed in the weak and strong coupling regimes and in the deep strong coupling regime, where the PWC \(T_{h}>\frac{\omega_{h}}{\omega_{c}}T_{c}\) is satisfied, while the refrigerator, heater and accelerator are located around the critical coupling \(\lambda_{c}=\frac{1}{2}\sqrt{\omega_{0}\Delta\coth(\beta\omega_{0}/2)}\), where the spectrum is highly anharmonic. 
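To make the sign convention and the bookkeeping of Eqs. (15)-(17) concrete, here is a small sketch (an added illustration): it assumes, for simplicity, Gibbs-form steady-state populations of the dressed levels, which is what the rate equation, Eq. (13), yields under detailed balance; the two-level spectra and temperatures are toy values in units of \(\omega\).

```python
# Heats, net work, and working regime from Eqs. (15)-(17); Gibbs-form
# steady-state populations are a simplifying assumption for illustration.
import numpy as np

def gibbs(E, T):
    w = np.exp(-(E - E.min()) / T)
    return w / w.sum()

def otto_cycle(Eh, Ec, Th, Tc):
    Ph, Pc = gibbs(Eh, Th), gibbs(Ec, Tc)
    Qh = np.sum(Eh * (Ph - Pc))   # Eq. (15)
    Qc = np.sum(Ec * (Pc - Ph))   # Eq. (16)
    W = Qh + Qc                   # Eq. (17)
    if Qh > 0 and Qc < 0 and W > 0:
        regime = "engine"
    elif Qh < 0 and Qc > 0 and W < 0:
        regime = "refrigerator"
    elif Qh < 0 and Qc < 0 and W < 0:
        regime = "heater"
    elif Qh > 0 and Qc < 0 and W < 0:
        regime = "accelerator"
    else:
        regime = "forbidden"
    return Qh, Qc, W, regime

# toy two-level spectra at omega_h = 2*omega and omega_c = omega
print(otto_cycle(np.array([0.0, 2.0]), np.array([0.0, 1.0]), Th=0.5, Tc=0.1))
```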
Note that increasing the number \(N\) of qubits shifts the engine region to the left, allowing engines to be built for smaller values of \(\lambda\) as \(N\) grows. On the right side, in the deep strong regime, this behavior is maintained: as \(N\) grows, the region corresponding to the engine also shifts to the left, increasing the area corresponding to the engine. It is to be noted that for \(\lambda\lesssim 0.4\) and for \(\lambda\gtrsim 1.5\) the work extracted by the engine remains practically constant with the increase of \(N\). This behavior is mimicked by the amounts of heat \(Q_{h}\) and \(Q_{c}\) exchanged with the reservoirs. Given that increasing the number of atoms means that more work and heat can be exchanged, it is somewhat surprising that for certain values of \(\lambda\) these quantities remain unchanged as \(N\) increases. On the other hand, for the other types of machines, whose regions correspond to the middle region of Fig. 3, the exchanged work and heat increase with \(N\). ### Efficiency and coefficient of performance Next, we study the efficiency and COP of the quantum Otto engine and the refrigerator, which are the heat machines of greatest practical interest. To help our analysis, it is useful to recall the analytical results for the efficiency, \(\eta_{\lambda=0}=1-\omega_{c}/\omega_{h}\), and the COP, \(\xi_{\lambda=0}=\omega_{c}/(\omega_{h}-\omega_{c})\), of an Otto cycle when \(\lambda=0\), which corresponds to \(N\) qubits and a non-interacting bosonic mode. First, we focus on the engine. In Figs. 4(a), (b) and (c) we plot the efficiency as a function of the qubit-boson coupling strength \(\lambda\) for different numbers of qubits, \(N=2\) (solid blue), \(N=4\) (dashed red), \(N=8\) (dash dot cyan), \(N=12\) (dotted magenta), \(N=32\) (dash dot green), \(N=56\) (dashed dark blue) and the thermodynamic limit \(N\rightarrow\infty\) (solid black), with fixed \(T_{h}=0.5\), \(T_{c}=0.1\) for Figs. 4(a)-(b), and \(T_{h}=6\), \(T_{c}=2\) for Fig. 4(c). Figs. 4(d) and (e), with \(T_{c}=0.1\), and Fig. 4(f), with \(T_{c}=2\), show the efficiency for various temperature ratios when varying the qubit-boson coupling strength \(\lambda\) with fixed \(N=8\). The drop to zero in efficiency, Figs. 4(a) and (d), and its growth from zero to a maximum, Figs. 4(b) and (e), occur due to the transition from the engine to the refrigerator regime, corresponding to the regions shown in Fig. 3. In any case, it is notable that the falls to zero and the rises to the maximum occur suddenly rather than smoothly. Notably, for \(\lambda\lesssim 0.3\) the efficiency is independent of the number of atoms used as working substance. Also, note that the efficiency drops to zero for smaller values of \(\lambda\) as \(N\) grows, as shown in Figs. 4(a) and (d), because of the shift of the engine region to the left as \(N\) grows, as already mentioned when analyzing Fig. 3.

Figure 3: (a) Work output \(W\), (b) heat \(Q_{h}\) and (c) heat \(Q_{c}\) as a function of the qubit-boson coupling strength \(\lambda\) for different numbers of qubits: \(N=2\) (solid blue line), \(N=4\) (solid red line), \(N=8\) (solid cyan line), \(N=12\) (solid magenta line), \(N=32\) (solid green line), \(N=56\) (solid black line). Vertical dotted lines divide the different operating regimes of our universal quantum Otto heat machine, in the same color used to designate the number N of atoms. The solid horizontal gray line indicates the zero of each quantity. 
The other system parameters are given by \(T_{h}=0.5\), \(T_{c}=0.1\), \(\omega_{h}=2\omega\), \(\omega_{c}=\omega\). All quantities above are in units of \(\omega\).

The main advantage of the ODM over the decoupled system as a working substance happens around the critical region \(\lambda_{c}=\frac{1}{2}\sqrt{\omega_{0}\Delta\coth(\beta\omega_{0}/2)}\), where \(\eta>\eta_{\lambda=0}\) only for \(\lambda<\lambda_{c}\), an unexpected result which means that the normal phase of the Dicke model is more suitable for the engine operation than the superradiant phase, as depicted in Figs. 4(a)-(f). Remarkably, for small temperatures, as in Fig. 4(a), the number of qubits saturates quickly to the thermodynamic limit \(N\rightarrow\infty\), with \(N\approx 30\) being enough to extract the maximum efficiency. Interestingly, for temperature ratios for which the engine condition always holds, see Figs. 4(c) and (f) and also Fig. 2, there is still a drop below \(\eta_{\lambda=0}\) around the critical coupling. In the deep-strong coupling regime the efficiency tends to \(\eta=0.5\), as predicted by the effective Hamiltonian, Eq. (11), and the harmonicity of the spectrum for the considered temperatures. We point out that, for a small number of atoms (\(N=8\)), there is a decrease in the engine operating region as the temperature gap increases, as shown in Fig. 4(d), with the smallest region corresponding to \(T_{c}/T_{h}=1/3\) (solid black line). Second, note that the efficiency is smaller the greater the temperature gap is, a somewhat expected behavior when compared with the Carnot efficiency \(\eta_{Carnot}=1-T_{c}/T_{h}\). As expected, the efficiency of our universal QOHM based on the open Dicke model never exceeds the Carnot efficiency. From Figs. 4(a), (b), (d) and (e), it would appear that the abrupt drop and sudden resurgence of efficiency values is a characteristic of the engine efficiency. However, as can be seen from Figs. 2(a)-(d), which show the regions of the various machines, there are temperature ratios for which there will always be a condition for the engine to exist. For these temperature ratios, there will be neither a sudden decrease to zero nor, consequently, an abrupt resurgence in efficiency. In fact, for \(T_{h}>1.0\) in Fig. 2(a), \(T_{h}>1.2\) in Fig. 2(b), \(T_{h}>1.5\) in Fig. 2(c), and \(T_{h}>4.5\) in Fig. 2(d) the engine condition will always be fulfilled. This behavior is exemplified in Figs. 4(c) and (f), where we explored other temperature ratios for fixed \(N=8\) and \(T_{c}=2\).

Figure 4: The efficiency \(\eta\) of the quantum heat engine as a function of the qubit-boson coupling strength \(\lambda\) (in units of \(\omega\)) for different numbers of qubits, \(N=2\) (solid blue line), \(N=4\) (dashed red line), \(N=8\) (dash dot cyan line), \(N=12\) (dotted magenta line), \(N=32\) (dash dot green line), \(N=56\) (dotted deep blue line), and infinite N (solid black line), with fixed \(T_{h}=0.5\), \(T_{c}=0.1\) for (a) and (b), and \(T_{h}=6\), \(T_{c}=2\) for (c). 
Likewise, (d), (e) and (f) show the efficiency \(\eta\) of the quantum heat engine as a function of the qubit-boson coupling strength \(\lambda\) under different temperature ratios, \(T_{c}/T_{h}=1/7\) (solid blue line), \(T_{c}/T_{h}=1/6\) (dashed red line), \(T_{c}/T_{h}=1/5\) (dash dot cyan line), \(T_{c}/T_{h}=1/4\) (dotted magenta line), \(T_{c}/T_{h}=1/3\) (solid black line), and fixed \(N=8\), with cold reservoir temperatures \(T_{c}=0.1\) for (d) and (e), and \(T_{c}=2\) for (f). Here the temperatures are in units of \(\omega\). The other system parameters are given by \(\omega_{h}=2\omega\), \(\omega_{c}=\omega\).

Note from Fig. 4(c) that, in addition to the abrupt drop in efficiency, which is a signature of the passage from the region that determines the engine condition to that of the refrigerator, for \(N=2\), 4, and 8 there is also a smooth drop and rise, indicating that despite the increase of \(\lambda\) the engine condition continues to be satisfied. Next, we focus on the refrigerator regime. In Fig. 5 the COP \(\xi\) as a function of the qubit-boson coupling strength is investigated for several ratios of temperatures with fixed \(N=8\) and \(T_{c}=0.4\), Fig. 5(a), and \(T_{c}=2\), Fig. 5(b). In Fig. 5(c) we see the effect of the number of qubits on the COP \(\xi\). Noteworthy, in the normal phase, for all temperatures and numbers of qubits, we found \(\xi<\xi_{\lambda=0}\). Besides, similarly to the heat engine in the deep-strong coupling regime, the effective Hamiltonian, Eq. (11), leads to the accurate result \(\xi=\xi_{\lambda=0}\). As evidenced in Figs. 5(a)-(c), the coupling region \(\lambda_{c}<\lambda\leq 3\) is where the COP of the universal QOHM having the Dicke model as working substance surpasses that of the decoupled system used to fuel the quantum refrigerator. In addition, as observed in Figs. 5(a) and (b), the COP strongly depends on the temperature ratio, thus differing from \(\xi_{\lambda=0}=\omega_{c}/(\omega_{h}-\omega_{c})\), being higher for small temperature ratios, like the Carnot COP, while keeping \(\xi\ll\xi_{Carnot}=T_{c}/(T_{h}-T_{c})\). We note from Fig. 5 that for temperature ratios where the universal QOHM would work as a heat engine with \(\lambda=0\) (uncoupled case), corresponding to \(\frac{T_{h}}{T_{c}}<\frac{\omega_{h}}{\omega_{c}}\), for some coupling ranges the universal QOHM works as a refrigerator with a COP lower than that of the uncoupled case.

Figure 6: (a) Work \(W\), (b) efficiency \(\eta\), and (c) COP as a function of the frequency ratio \(\omega_{h}/\omega_{c}\) for coupling strengths \(\lambda=0.05\) (solid blue line), \(\lambda=0.1\) (dashed red line), \(\lambda=0.2\) (dash dot cyan line), \(\lambda=0.4\) (dotted magenta line), \(\lambda=0.5\) (dash dot green line), \(\lambda=0.7\) (dash dot deep green line) and \(\lambda=1.2\) (solid black line). The temperatures of the cold and hot reservoirs, given in units of \(\omega\), were fixed at \(T_{c}=0.1\) and \(T_{h}=0.4\), respectively. The number of qubits is fixed to \(N=8\).

Lastly, we explore in Figs. 6(a)-(c) the effect of the frequency ratio \(\omega_{h}/\omega_{c}\) on the work exchanged, Fig. 6(a), as well as on the efficiency, Fig. 6(b), and the performance, Fig. 6(c), of the universal QOHM. The coupling strengths are \(\lambda=0.05\) (solid blue line), \(\lambda=0.1\) (dashed red line), \(\lambda=0.2\) (dash dot cyan line), \(\lambda=0.4\) (dotted magenta line), \(\lambda=0.5\) (dash dot green line), \(\lambda=0.7\) (dash dot deep green line) and \(\lambda=1.2\) (solid black line). 
The temperatures were fixed at \(T_{c}=0.1\) for the cold thermal reservoir and \(T_{h}=0.4\) for the hot thermal reservoir. The frequency ratios, in addition to indicating the PWC condition more clearly, also allow one to extract the point that maximizes both the efficiency and the COP of the universal QOHM. Note the same behavior already observed in the other figures, both for the efficiency and for the COP. In Fig. 6(b) we see that, after growing until reaching a maximum, the efficiency undergoes an abrupt drop, precisely at the point where the engine operating condition changes to the refrigerator condition. In Fig. 6(c), where there is a sudden appearance of the COP in the refrigerator region, the COP also reaches a maximum and then decreases smoothly. The frequency ratio at which the maximum occurs can thus be used to maximize both efficiency and COP; for the engine this condition is the so-called efficiency at maximum power [23]. To finish this section, we point out that we also studied two other protocols to carry out the adiabatic processes, namely (i) keeping the frequencies constant and changing the coupling strength and (ii) changing the number of qubits that interact with the quantum mode while fixing both the frequencies and the coupling strength. As verified by our numerical calculations (not shown here), in both protocols the efficiency and the coefficient of performance do not surpass those of the case in which the working substance is composed of a field mode decoupled from the qubits. ### Quantum correlations at thermal equilibrium In this section, we investigate whether quantum correlations are present at thermal equilibrium and, if so, whether they affect the efficiency or COP of the universal quantum heat machine (UQHM) based on the open Dicke model (ODM). In accordance with our analytical and numerical studies, shown in Figs. 7(a)-(f), we claim that quantum properties surviving thermalization are not the reason for the superior efficiency, extractable work and COP of the UQHM based on the ODM. The improvements we observed in both the efficiency (Fig. 4) and the COP (Fig. 5) are due to the structure of the energy levels, as evidenced by the validity condition \(N\gg\langle\hat{b}^{\dagger}\hat{b}\rangle\) used to derive the effective Hamiltonian, Eq. (8); that is, small temperatures require a smaller number of qubits to produce the anharmonicity around the critical point that is present at all temperatures in the thermodynamic limit. To calculate quantum correlations, we resort to the following quantities: (i) the second-order correlation function, which captures the occurrence of sub-Poissonian statistics, and (ii) the negativity, which quantifies entanglement. We emphasize that other quantum measures, such as mutual information, squeezing and quantum discord, were investigated and omitted because they showed the same general behavior as the considered quantities. First, note that the conventional definition of the normalized zero-delay second-order correlation function is [94] \[g^{(2)}(0)=\frac{\langle(\hat{a}^{\dagger})^{2}(\hat{a})^{2}\rangle}{\langle\hat{a}^{\dagger}\hat{a}\rangle^{2}}. \tag{18}\] This quantity describes the probability of detecting two photons simultaneously. This definition holds for weak light-matter couplings, where the intracavity photons, whose annihilation operator is \(\hat{a}\), suffice to explain the observed photon correlations. 
On the other hand, in the USC regime, where the qubit system strongly dresses the bosonic field, the second-order correlation function is derived from the input-output formalism as [95; 96] \[G^{(2)}(0)=\frac{\langle(\hat{X}^{-})^{2}(\hat{X}^{+})^{2}\rangle}{\langle\hat{X}^{-}\hat{X}^{+}\rangle^{2}}, \tag{19}\] where \[\hat{X}^{+}=-i\sum_{k>j}\Delta_{kj}X_{jk}|\phi_{j}\rangle\langle\phi_{k}|, \tag{20}\] with \(\hat{X}^{-}=(\hat{X}^{+})^{\dagger}\), \(\Delta_{kj}=E_{k}-E_{j}\) is the energy gap, and \(X_{jk}=\langle\phi_{j}|(\hat{a}^{\dagger}+\hat{a})|\phi_{k}\rangle\). Here, \(X_{jk}\) describes the transition from the higher eigenstate \(|\phi_{k}\rangle\) to the lower one \(|\phi_{j}\rangle\). Notice that, in the weak qubit-photon interaction limit (i.e., \(\lambda\ll 1\)), the operator \(\hat{X}^{+}\) reduces to \(\hat{X}^{+}=-i\omega\hat{a}\). Thus, the correlation function in Eq. (19) simplifies to the conventional case. When \(G^{(2)}(0)<1\) the light presents the non-classical effect of anti-bunching, which can be taken as an unequivocal indication of quantumness. As for the negativity \(\mathcal{N}(\rho)\) of a subsystem A, it can be defined in terms of a density matrix \(\rho\) as [97; 98]: \[\mathcal{N}(\rho)=\frac{\|\rho^{T_{A}}\|_{1}-1}{2}, \tag{21}\] where \(\rho^{T_{A}}\) is the partial transpose of \(\rho\) with respect to subsystem A, and \(\|X\|_{1}=Tr|X|=Tr\sqrt{X^{\dagger}X}\) is the trace norm, i.e., the sum of the singular values of the operator \(X\). Non-zero negativity values indicate the presence of quantum correlations in the form of entanglement, with larger values indicating a greater amount of entanglement. We computed \(G^{(2)}(0)\) in Figs. 7(a), (c), (e) and \(\mathcal{N}(\rho)\) in Figs. 7(b), (d), (f) as a function of the coupling strength for different numbers of qubits. In Figs. 7(e), (f) we set \(N=8\) and investigated the effect of different temperatures on the quantumness of the working substance. Note that the quantum correlations are degraded by increasing the number \(N\) of qubits and by increasing the temperature \(T_{h}\), whereas \(\eta\) and \(\xi\) increase with the number of qubits for all temperatures. If we compare Figs. 7(a)-(f), showing maximum antibunching and maximum entanglement, with Figs. 4(a)-(f) and 5(a)-(b), which respectively show the efficiency and the COP, we see that there is no correspondence between the maximum of quantum correlations and the maximum of efficiency and COP. As far as the second-order correlation is concerned, the efficiency is higher in the deep strong coupling regime (\(\lambda>2\)), Figs. 4(b), (e), and therefore far from the region where the second-order correlation shows the sub-Poissonian effect. The same conclusion can be drawn from Figs. 5(a)-(c) for the COP, whose maxima lie in regions far from the value of the critical parameter \(\lambda\). With regard to negativity, Figs. 7(b), (d), and (f) show that the maximum of negativity, and therefore of entanglement, does not coincide with the maximum of efficiency and COP. For example, in Fig. 4(b), for \(T_{c}=0.1\) and various values of \(N\), the efficiency is practically constant with the coupling parameter, and therefore independent of the amount of entanglement, whereas in Fig. 7(b) the negativity, also at \(T_{c}=0.1\) and for the various values of \(N\), presents maxima and minima when varying the coupling parameter. The same can be said about the COP: there is nothing in the analysis of the maxima that indicates the relevance of negativity for its improvement. 
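For completeness, the sketch below shows how both diagnostics can be evaluated numerically (an added illustration; it reuses the brute-force Fock truncation from Sec. II rather than the paper's extended-coherent-state solution, and the small dimensions and parameter values are illustrative).

```python
# Dressed-picture G2(0), Eqs. (19)-(20), and negativity, Eq. (21), for a
# thermal state over the dressed levels of a small Dicke system.
import numpy as np
from qutip import destroy, qeye, jmat, tensor, partial_transpose

n_tr, N, lam, T = 12, 2, 0.5, 0.5
j = N / 2
a = tensor(destroy(n_tr), qeye(int(2 * j + 1)))
Jx = tensor(qeye(n_tr), jmat(j, 'x'))
Jz = tensor(qeye(n_tr), jmat(j, 'z'))
H = a.dag() * a + Jz + (2 * lam / np.sqrt(N)) * (a + a.dag()) * Jx

E, phi = H.eigenstates()
p = np.exp(-(E - E[0]) / T)
p /= p.sum()                                 # thermal dressed populations
rho = sum(pk * ket * ket.dag() for pk, ket in zip(p, phi))

x = a + a.dag()
Xp = 0 * x                                   # X^+ of Eq. (20)
for k in range(len(E)):
    for i in range(k):
        Xp += -1j * (E[k] - E[i]) * x.matrix_element(phi[i], phi[k]) \
              * phi[i] * phi[k].dag()
Xm = Xp.dag()

G2 = ((Xm * Xm * Xp * Xp * rho).tr() / ((Xm * Xp * rho).tr()) ** 2).real
neg = (partial_transpose(rho, [0, 1]).norm('tr') - 1) / 2   # field-atom split
print(G2, neg)
```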
Take, for example, Fig. 7(d) for \(T_{c}=2\), where the negativity remains zero for a large range of values of \(\lambda\) and then increases monotonically up to \(\lambda=2\), with similar behavior even for different values of \(N\). Compare with Fig. 5(c), where the COP has a very different behavior depending on \(N\), with no correspondence with the negativity. These same conclusions are supported by additional numerical calculations that we performed (not shown here). To summarize this section, and as previously mentioned, we point out that the improvement in the efficiency and performance of the UQOHM using the Dicke model as the working substance cannot be attributed to quantum resources [99; 100], but is due to the high anharmonicity of the spectrum around the critical point of the Dicke model. ## V Conclusion In summary, in this work we propose a universal quantum Otto heat machine (UQOHM) based on the open Dicke model (ODM). The ODM is composed of N two-level atoms (qubits) that interact with a mode of the electromagnetic field; both the mode and the N qubits, which constitute the working substance of the universal machine, interact with thermal reservoirs. This model presents a critical point and can be solved analytically in the thermodynamic limit \(N\rightarrow\infty\). By a universal thermal machine we mean that it is possible, by adjusting the atom-field coupling parameter \(\lambda\) of the ODM, to build all types of thermal machines, namely engines, refrigerators, heaters, and accelerators. Focusing on engines and refrigerators, which are the machines with the greatest applicability, we show, for a wide temperature range and a large number of qubits, including in the thermodynamic limit, how the engine efficiency and the refrigerator coefficient of performance change with the parameter \(\lambda\) of the ODM. We also conducted a study of the quantum correlations present in the ODM using the second-order correlation function and the negativity, showing that, for certain values of the coupling parameter \(\lambda\) of the Dicke model, both the antibunching effect and the entanglement survive the thermalization.

Figure 7: Two-photon correlation function \(G_{N}^{(2)}(0)\) (a), (c), (e) and negativity \(\mathcal{N}(\rho)\) (b), (d), (f) as a function of the coupling strength \(\lambda\). In (a)-(d) both the two-photon correlation function and the negativity are shown for several qubit numbers \(N\), including the thermodynamic limit, with the cold thermal reservoir temperature fixed at \(T_{c}=0.1\) (a)-(b) and \(T_{c}=2\) (c)-(d). In (e)-(f), the qubit number was fixed at \(N=8\) and the cold thermal reservoir temperature was chosen as \(T_{c}=0.1\) (blue solid line), \(T_{c}=0.4\) (red dash line), \(T_{c}=0.2\) (green dot line), and \(T_{c}=5\) (black dash line). The other system parameters are \(\omega_{c}=\omega\). The coupling strength \(\lambda\) and the temperatures are in units of \(\omega\).

Next, we show that it is possible, close to the critical point, to obtain both an efficiency and a performance for the UQOHM that are greater than in the case in which the system is uncoupled, thus showing the advantage of using the Dicke model as the working substance. Furthermore, the detailed study of the second-order correlation function and negativity indicates no correspondence between the improvement in the efficiency and the coefficient of performance of the UQOHM and the quantum resources arising from anti-bunching and entanglement. 
## VI Acknowledgement We acknowledge financial support from the Brazilian agencies CNPq and FAPEG. This work was performed as part of the Brazilian National Institute of Science and Technology (INCT) for Quantum Information Grant No. 465469/2014-0. H.-G. X. and J. J. are supported by National Natural Science Foundation of China under Grant No. 11975064.
2302.05336
Intelligent Proactive Fault Tolerance at the Edge through Resource Usage Prediction
The proliferation of demanding applications and edge computing establishes the need for an efficient management of the underlying computing infrastructures, urging the providers to rethink their operational methods. In this paper, we propose an Intelligent Proactive Fault Tolerance (IPFT) method that leverages the edge resource usage predictions through Recurrent Neural Networks (RNN). More specifically, we focus on the process-faults, which are related to the inability of the infrastructure to provide Quality of Service (QoS) in acceptable ranges due to the lack of processing power. In order to tackle this challenge we propose a composite deep learning architecture that predicts the resource usage metrics of the edge nodes and triggers proactive node replications and task migration. Taking also into consideration that the edge computing infrastructure is highly dynamic and heterogeneous, we propose an innovative Hybrid Bayesian Evolution Strategy (HBES) algorithm for automated adaptation of the resource usage models. The proposed resource usage prediction mechanism has been experimentally evaluated and compared with other state of the art methods with significant improvements in terms of Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Additionally, the IPFT mechanism that leverages the resource usage predictions has been evaluated in an extensive simulation in CloudSim Plus and the results show significant improvement compared to the reactive fault tolerance method in terms of reliability and maintainability.
Theodoros Theodoropoulos, John Violos, Stylianos Tsanakas, Aris Leivadeas, Konstantinos Tserpes, Theodora Varvarigou
2023-02-09T00:42:34Z
http://arxiv.org/abs/2302.05336v1
# Intelligent Proactive Fault Tolerance at the Edge through Resource Usage Prediction ###### Abstract The proliferation of demanding applications and edge computing establishes the need for an efficient management of the underlying computing infrastructures, urging the providers to rethink their operational methods. In this paper, we propose an Intelligent Proactive Fault Tolerance (IPFT) method that leverages the edge resource usage predictions through Recurrent Neural Networks (RNN). More specifically, we focus on the process-faults, which are related to the inability of the infrastructure to provide Quality of Service (QoS) in acceptable ranges due to the lack of processing power. In order to tackle this challenge we propose a composite deep learning architecture that predicts the resource usage metrics of the edge nodes and triggers proactive node replications and task migration. Taking also into consideration that the edge computing infrastructure is highly dynamic and heterogeneous, we propose an innovative Hybrid Bayesian Evolution Strategy (HBES) algorithm for automated adaptation of the resource usage models. The proposed resource usage prediction mechanism has been experimentally evaluated and compared with other state of the art methods with significant improvements in terms of Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Additionally, the IPFT mechanism that leverages the resource usage predictions has been evaluated in an extensive simulation in CloudSim Plus and the results show significant improvement compared to the reactive fault tolerance method in terms of reliability and maintainability. Edge Computing Fault Tolerance Recurrent Neural Networks Deep Learning Evolution Strategy Bayesian Optimisation Hypertuning ## 1 Introduction During the last decade, the scientific community witnessed the emergence of applications that are intertwined with a set of demanding QoS requirements. Extended Reality (XR) [1] applications are one instance of this type of application. XR applications are associated with various QoS requirements [2] that are based on the ability to provide an immersive end-user experience. These requirements may include aspects such as latency and bandwidth. Studies have shown that for an end-user experience to be acceptable in terms of immersion, the end-to-end latency should not surpass the 15 ms mark, and the bandwidth should be able to reach up to 30 Gbps [3]. On top of that, the desired integrity of the aforementioned immersive experiences may be jeopardized by faults in task processing, due to potential disruptions in service delivery. Therefore, it is of paramount importance for this class of applications to be able to exhibit fault tolerance capabilities. Furthermore, this class of applications presents various demanding requirements in terms of computational resources, since they incorporate the rendering of 3D models and detailed graphics. Because of these computational requirements, monolithic development architectures would result in prohibitively bulky and expensive end-user equipment in order to provide the required computational resources [4]. Cloud computing is able to partially alleviate the burden that is imposed on the end-user devices by providing computational resources that these applications may run on via the Internet. Cloud computing is based on the use of shared computational resources that may span multiple locations. Therefore, part of the computational burden is transferred to these shared resources. 
Unfortunately, the distance between the end-user devices and the cloud servers may result in high latency and low available bandwidth. Thus, the need arose to bring processing and data closer to the devices where they are generated [5]. In the case of XR applications, these devices may include smart objects, mobile phones, network gateways, sensors and a plethora of immersion devices. This distributed computing paradigm, defined as edge computing, aims to establish decentralized topologies and allow the relocation of various computational and storage resources closer to the edge of the network. By doing so, it is expected to provide service delivery and content caching with better response times and transfer rates. The aforementioned devices may vary widely in terms of computational power. As a result, it is necessary to make sure that the computations that take place at the edge are not demanding and do not exceed the computational capabilities of the involved devices. When contemplating the nature and requirements of modern-day applications, it is of major importance for the workload execution to be resilient and meet the QoS standards set by the industry. The devices at the edge of the network are subjected to significant fluctuations in the amount of offloaded tasks over time [6]. Hence, it is of paramount importance that these fluctuations do not affect the performance of the system and cause process faults [7]. In addition to that, edge computing environments are characterized by extreme heterogeneity and dynamicity in regards to the tasks and the processing nodes involved. This unprecedented situation gave birth to the need for an IPFT method which should be robust to infrastructure and workload changes. Monitoring and predicting the capacity under which the edge nodes are operating, in terms of resource metrics such as CPU, RAM, bandwidth and disk, can be a valuable piece of information with regards to implementing fault tolerance policies. Resource metrics have high serial and cross-correlation values, making the use of time series methods rational [8]. Regression-based RNNs [9], which leverage time series characteristics through Gated Recurrent Units (GRU) [10] or Long Short-Term Memory (LSTM) [11], can be used in order to accurately predict the resource metrics. In order to handle the extreme heterogeneity and dynamicity of the edge environments, we provide a systematic methodology for building deep learning models in an automatic way using historical data. Common approaches, based on manual trial-and-error methodologies for creating acceptable deep learning architectures, require many working hours to be spent by deep learning experts every time the deployed applications, the user behaviour or the edge infrastructure change. On the other hand, the available deep learning automation methods still have significant shortcomings, such as low efficiency and high computational requirements [12]. A potential solution to these drawbacks could be the extension of evolutionary algorithms, as well as their combination with other models for hypertuning. The facts mentioned above motivated us to propose an IPFT method that focuses on process faults: faults related to resource shortages and the resulting lack of processing capability that impedes the underlying infrastructure from executing tasks within acceptable QoS ranges. 
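To ground this idea before the formal treatment in Section 4, the following is a minimal sketch of a regression-based recurrent predictor extended with a second, feed-forward channel (previewing the composite architecture proposed later); the layer sizes, window length and metric count are illustrative assumptions, not the tuned architecture of this paper.

```python
# Two-channel (recurrent + feed-forward) multi-output regressor sketch.
from tensorflow.keras import layers, Model

WINDOW, N_METRICS = 12, 4                          # e.g. CPU, RAM, bandwidth, disk

seq_in = layers.Input(shape=(WINDOW, N_METRICS))   # time-series channel
x1 = layers.GRU(32)(seq_in)

ctx_in = layers.Input(shape=(N_METRICS,))          # feed-forward channel
x2 = layers.Dense(16, activation="relu")(ctx_in)

merged = layers.concatenate([x1, x2])
out = layers.Dense(N_METRICS)(merged)              # next-step usage metrics

model = Model(inputs=[seq_in, ctx_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```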
Our research goals are to propose a composite deep learning architecture suitable for predicting, in a unified way, the ability of the edge nodes to execute the incoming workload, and an appropriate operational pipeline that guarantees advanced fault tolerance. The cornerstone of this pipeline is the ability to operate in a proactive manner. Whenever a bottleneck in task execution is expected to occur, proactive measures like task migration and node replication should be triggered. A composite deep learning architecture should leverage the time series characteristics of the edge resources and the involved tasks, which can be provided by monitoring systems (e.g., Prometheus) [13]. Finally, we propose the HBES optimization algorithm in order to provide a composite deep learning model which is nearly optimal. The four major contributions of our research are:

* The proposal of the IPFT method that achieves high reliability and maintainability with very good performance in terms of timely fault detection and repair.
* A discussion of how a specific category of faults, the process faults, is related to and can be predicted from the resource utilization metrics of processing edge nodes.
* The proposal and analysis of the theoretical principles of a composite deep learning model for edge resource usage prediction that includes two channels: one with feed-forward and one based on RNN layers.
* The proposal of an innovative hybrid hyperparameter optimization model that combines the evolution strategy with the Bayesian optimization algorithms in order to gauge a close to optimal composite deep learning architecture.

The rest of the paper is structured as follows: Section 2 highlights the related work in fault tolerance, resource usage prediction, time series, deep learning and hyperparameter optimisation techniques. Section 3 explains how a proactive fault tolerance mechanism can leverage resource usage predictions. Section 4 provides an analysis of the RNN multi-output regression approaches, the composite deep learning architectures and the HBES method. Section 5 describes the experimental setup in a real edge computing dataset, the simulation of IPFT in CloudSim Plus and the evaluation results. Section 6 concludes the paper, reports the current limitations and suggests future directions. ## 2 Related Work Fault tolerance mechanisms are mainly divided into two categories: reactive and proactive. The reactive approach decreases the influence of failures on the edge infrastructure after a failure has actually occurred. The main reactive fault tolerance methodologies are replication, resubmission, retry and checkpointing. For instance, a state-of-the-art replication-based fault tolerance mechanism in large-scale graph-parallel systems was proposed in [14], which supports cheap maintenance of vertex states. This mechanism replicates the vertices with normal message exchanges, and provides fast in-memory reconstruction of the failed vertices from replicas in other machines. The replication increases the reliability of the system and the chance that the task will finish correctly, at the expense of additional resources for redundancy. A retry approach that uses idempotent HTTP methods has been proposed in [15] for offloading and execution failures. 
This retry strategy has the advantage that it utilizes the least resources of the computing environment and minimizes the user time, but at the expense of increasing the response time, since HTTP methods may be retried multiple times until they complete successfully. In terms of checkpointing, a reactive fault tolerance approach for the serverless paradigm was investigated in [16]. Specifically, through checkpointing and live container migration, the authors succeeded in saving resources on constrained devices. Unfortunately, also in this case, the execution time is increased since it includes the recovery time of the failed servers. In the proactive fault tolerance approach a potential fault is predicted in order to avoid its influence on the task execution. Tian et al. [17] use a tree-based model, which is a statistical analysis technique, to diagnose high-risk cloud tasks and apply virtual machine migration techniques. Even though this approach significantly improves reliability and efficiency, it has generalization limitations since it cannot automatically adapt to new computing environments. Machine learning and online learning methods have also been used in combination with microservices architectures and IoT systems to detect fault patterns and pre-emptively mitigate the faults [18]. Another advanced proactive model has recently been proposed in [19]. This model performs multi-step predictions in order to estimate the process faults and the QoS degradation at different time granularities. This approach utilizes an encoder-decoder model and gauges the ability of the infrastructure to process the incoming tasks at different production rates. From the above it is evident that the limited computational capacity of processing nodes sets barriers to edge computing and IoT applications [20]. We can overcome these barriers through efficient resource management [21]. This process includes the guarantee of well-defined QoS metrics and an accurate workload prediction [22]. Some of the workload prediction models leverage RNNs, and specifically LSTMs, formulating the resource usage metrics as data sequences [23]. None of these approaches explores how a proactive fault tolerance method can leverage the resource usage predictions; instead, they focus on the actual resource usage prediction process [24]. For instance, the authors in [8] have used an Autoregressive Integrated Moving Average (ARIMA) model to avoid resource under-provisioning or over-provisioning in data centers. ARIMA has the limitation that it models linear dependencies and is based on the stationarity assumption. However, as noticed in [25], the workload to be processed at the edge has trend, seasonality and nonlinearities in the execution behaviours, which limit the application of statistical linear models. These limitations are overcome by machine learning models such as K-Means, Decision Trees and K-Nearest Neighbors [26]. While there are a lot of classical machine learning models publicly available, for our experiments we selected XGBoost [27] because it is popular for winning Kaggle and other prestigious machine learning competitions [28]. XGBoost mostly uses gradient-boosted decision trees and is available as an open-source software library. The limitation of these machine learning approaches is that every time the edge-cloud infrastructure, the user behaviour, or the application change, new models must be trained from scratch with human assistance. 
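A minimal sketch of the hybrid direction pursued in this work — an evolution strategy proposing mutated candidates while a Gaussian-process surrogate (Bayesian optimization) proposes and scores others — is given below. The search space, mutation rule and stub objective are illustrative assumptions only, not the HBES algorithm detailed in Section 4.

```python
# Generic evolution-strategy / Bayesian-optimisation hybrid hypertuning loop.
import random
import numpy as np
from skopt import Optimizer  # Gaussian-process surrogate

space = [(16, 128), (1, 4), (1e-4, 1e-2)]   # units, layers, learning rate
opt = Optimizer(space, base_estimator="GP")

def evaluate(cfg):
    # stand-in for training a candidate model and returning validation RMSE
    units, n_layers, lr = cfg
    return (units - 64) ** 2 / 1e4 + 100 * abs(lr - 1e-3) + 0.1 * n_layers

def mutate(cfg):
    units, n_layers, lr = cfg
    return [int(np.clip(units + random.randint(-16, 16), 16, 128)),
            int(np.clip(n_layers + random.choice((-1, 0, 1)), 1, 4)),
            float(np.clip(lr * random.uniform(0.5, 2.0), 1e-4, 1e-2))]

best = opt.ask()
for _ in range(20):
    for cand in (mutate(best), opt.ask()):   # ES offspring + BO proposal
        opt.tell(cand, evaluate(cand))
    best = opt.Xi[int(np.argmin(opt.yi))]
print(best, min(opt.yi))
```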
Automated machine learning guides the learning process of models in an automatic way, maximizing the performance and minimizing the computational budget without human involvement. In the domain of cloud computing, the Application and User Context Resource Predictor (AUCROP) [29] has been proposed for the automated usage of classical machine learning algorithms. In addition, Auto-sklearn [30] is a general-purpose automated machine learning meta-model for data pre-processing, regression and hyperparameter tuning through Bayesian optimization. Keras-Tuner [31] is the approach from Keras to automate hyperparameter tuning, also named hypertuning. Keras is one of the most popular frameworks in the deep learning community. Keras-Tuner has the advantage that the hypermodels, the range of hyperparameters and the tuning process are smoothly integrated in Keras, but it supports only three optimizers: (a) random search, (b) Hyperband, which is random search with early stopping, and (c) Bayesian optimization. In our experimental evaluation we used Keras-Tuner, AUCROP and Auto-sklearn. In this work we also extend the hypertuning research by combining the evolution strategy with Bayesian optimization. Thus, we propose the innovative HBES method as a prominent automated deep learning solution that tackles the heterogeneity and the dynamicity of edge computing environments.

More specifically, our work aims to extend the resource usage prediction method by proposing a proactive fault tolerance method that leverages the resource usage predictions and tackles the above-mentioned limitations as follows. Firstly, the IPFT mechanism requires a minimum number of replicas of the execution nodes, since it requests a replication only after a fault prediction. Secondly, the IPFT does not incur the time overhead of task rescheduling after a fault, since the replication and rescheduling of the task take place timely and proactively. Thirdly, the IPFT leverages deep learning RNN models in order to overcome the limitations of statistical models and adapts to non-linear and non-stationary resource metrics. Lastly, the introduced HBES provides the generality of the whole process, the lack of which is a common limitation of many methods in the pertinent literature.

## 3 Leveraging Resource Usage Predictions for Proactive Fault Tolerance

### Resource Usage Prediction in Edge Computing

The management and orchestration of edge computing infrastructures can be improved by leveraging various resource utilization metrics. The most notable of these metrics are CPU, RAM, bandwidth, and disk I/O. At the same time, the edge computing paradigm is characterized by the dynamic behaviour and the heterogeneity of the processing edge nodes, which are obliged to operate within specific constraints dictated by the QoS requirements. Hence, decision making in a dynamic and heterogeneous environment is a rather complex process, which requires every available source of information to be used. Prediction of the resource consumption metrics, by leveraging the time series characteristics of historical data, constitutes one of the most valuable pieces of information. It serves as a strong indicator of the availability of the processing nodes to receive additional workload, and helps predict potential QoS degradation in future time steps.
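For concreteness, a minimal sketch of how such historical metrics can be framed as supervised sequences for a recurrent model; the per-minute sampling, the lookback and the max-over-horizon target are our illustrative assumptions:

```python
import numpy as np

def make_sequences(metrics, lookback=22, horizon=10):
    """metrics: array of shape (T, n_features), one row per minute.
    Returns X of shape (samples, lookback, n_features) and y: the
    per-feature maximum over the next `horizon` minutes."""
    X, y = [], []
    for t in range(len(metrics) - lookback - horizon):
        X.append(metrics[t:t + lookback])
        y.append(metrics[t + lookback:t + lookback + horizon].max(axis=0))
    return np.array(X), np.array(y)

# e.g., 24 hours of per-minute CPU/RAM/disk/bandwidth readings for one node
history = np.random.rand(1440, 4)
X, y = make_sequences(history)
print(X.shape, y.shape)   # (1408, 22, 4) (1408, 4)
```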
Accordingly, publicly available monitoring tools like Prometheus, OpenTSDB, Nagios and InfluxDB can provide the resource metrics in a stream format or store them in time series databases that can be queried with languages such as PromQL. These time series databases can be used to produce datasets which are suitable for RNN model training.

The dynamic behaviour of edge nodes is attributed to the fluctuation of application requests and their workload. The number of requests per time interval changes within various time-frames and is affected by many periodic phenomena. Furthermore, the edge is characterized by high heterogeneity, since the edge nodes can have different hardware and software characteristics, such as memory, computing power, etc. This heterogeneity becomes more apparent when taking into consideration the various flavours of Raspberry Pis, Arduinos, sensor nodes, and other micro-controllers that coexist and collaborate within the same edge infrastructure. At the same time, application owners can set strict performance requirements for the edge nodes in terms of availability, throughput and different types of potential delays. Thus, edge providers struggle to keep the QoS metrics within the specified acceptable ranges. Consequently, in order to guarantee that the infrastructure will have sufficient computational capacity to handle fluctuating demands, a fault tolerance mechanism should proactively take decisions considering the amount of resources and the availability of the processing nodes.

In the scientific literature there are different categories of faults that correspond to specific fault tolerance mechanisms. The major categories of faults are: (a) Network faults, (b) Physical faults, (c) Process faults and (d) Service expiry faults [32]. Among these, process faults can be very severe. In more detail, process faults occur in processes because of resource shortages, which lead to longer task execution delays or even execution stoppages. This type of fault can eventually lead to performance degradation that is not acceptable for real-time and/or time-sensitive applications. Thus, it becomes evident that the execution of the tasks has a strong impact on resource utilization and vice versa. This fact has led us to examine the resource utilization metrics in order to take proactive fault tolerance decisions that reduce the adverse effects of process faults.

### Proactive Fault Tolerance

Given that the modus operandi of the edge computing paradigm relies on a vast number of compute nodes operating simultaneously, it is extremely important to consider component failures as an inevitability. By doing so, it is important to ensure that the infrastructure will continue operating without interruptions and QoS deterioration. The main way of ensuring this service continuity is by triggering migration policies and by utilizing backup components, which automatically replace the failed ones in a manner which guarantees the QoS. The replication process of a node, such as a virtual machine, requires a certain waiting period, which would provoke QoS degradation. Thus, fault tolerance should be achieved by following a proactive approach. At any given time, the network should contain a specific number of computational nodes, which can remain idle until one of the already working components ceases to function properly. Given that redundant computational nodes may be requested, it is important to keep this redundancy to a minimum.
However, by utilizing machine learning algorithms, it is possible to extract information regarding the behaviour patterns of the services and the process faults that occur. Hence, the fault tolerance functionality can operate in a manner which ensures that operations continue uninterrupted and that the overall redundancy cost is kept to a minimum. The proposed IPFT model provides fault predictions by using data features which are associated with the resource usage in distributed edge environments. The IPFT monitors the resource consumption that takes place on each processing node in order to reveal, at run-time, insufficient processing capabilities that may result in potential QoS degradation. If the deployed resources cannot satisfy the increasing amount of demands within a specified time-frame, the IPFT triggers mitigation policies such as proactive node replication and task execution migration.

The multi-channel neural network part of the IPFT mechanism of a node takes into consideration the state of the other available nodes when predicting its future state. This way, in case of a predicted fault, it can perform task migration to the nodes that are already up and running, provided that there is enough computational capacity available. By doing so, we avoid immediately resorting to node replication, which would incur the cost and deployment time of setting up a new node. As mentioned before, the replication of a node requires a certain waiting period, which can have grave ramifications on the performance of the edge environment. The IPFT can predict the future needs for node replication over a time horizon longer than the replication time. Thus, the timely triggering of node replication processes and the corresponding task migration prevents the occurrence of process faults. As illustrated in Fig. 1.1, when a specific processing node is predicted to present high resource utilization, a node replication process is triggered (Fig. 1.2). This way, the future tasks to come will be accommodated by the new node, avoiding task rejections and long execution times.

The IPFT operates in four main stages, as one can see in Fig. 2. In stage 1, it monitors the resource metrics of the processing nodes. In stage 2, it predicts the maximum resource usage that is expected to take place within a specified time-frame. With regard to stage 3, there are three distinct scenarios which dictate how the rest of the operation shall be carried out. The stage 3 processes are carried out independently for each of the processing nodes. The three distinct scenarios and their interactions with the corresponding actions taken at stage 4 can be summarized as follows:

* Stage 3A. If the resource usage prediction is lower than a specific lower-bound threshold, then the processing node is considered under-utilized and thus a decommission request for this specific node is issued. During stage 4A, the tasks that were assigned to this node are redirected to alternative locations and the decommission process is completed.
* Stage 3B. If the resource usage prediction is between the lower-bound threshold and the upper-bound threshold, then the resource consumption rate is considered ideal and thus there is no need to perform any additional actions.
* Stage 3C.
If the resource usage prediction is higher than a specific upper-bound threshold, then the processing node is considered over-utilized and thus a replication request is issued. Stage 4C takes place after the creation of the new processing node: a fraction of the tasks that were assigned to the over-utilized processing node are redirected to the newly created one.

Figure 1: The IPFT triggers virtual machine replication & task migration based on resource utilization predictions.

Figure 2: The four stages of the IPFT pipeline, which span three distinct scenarios.

The IPFT works jointly with a workload balancing mechanism that gives higher priority to the nodes with low resource utilization predictions and lower priority to the nodes with higher resource utilization predictions. This is a common practice in many task offloading mechanisms that use different criteria to balance the workload [33]. For example, the MinMin task offloading method prioritizes the smaller tasks for execution on the nodes that will be available sooner, while the MaxMin method prioritizes the larger tasks for those nodes. The size of a task is estimated from its number of million instructions or its estimated completion time. In this way, IPFT modifies the behaviour of the task offloading mechanisms by taking into consideration the resource usage predictions for the migration of the tasks.

### Threshold-based Decision Making

Task migration and node replication occur when a resource usage prediction metric is higher than a specified threshold value. This type of threshold-based approach is used in many decision-making mechanisms in cloud/edge computing [34]. The IPFT involves two thresholds. If the value of the resource utilization prediction is higher than the upper-bound threshold, the IPFT invokes a node replication process. If the prediction value which corresponds to a specific processing node is lower than the lower-bound threshold, then this particular processing node is turned off (e.g., for reducing the total energy consumption [35]). The appropriate selection of these two thresholds is integral to the performance of the IPFT. A high upper-bound threshold results in a system which is not sensitive and reactive enough to the workload fluctuations, while a low upper-bound threshold makes the system over-react and trigger unnecessary node replications. Similarly, the lower-bound threshold should be appropriately fine-tuned. A low value makes the infrastructure continue using under-utilized processing nodes, whereas a high lower-bound threshold turns off processing nodes that are necessary for the smooth operation of the edge infrastructure [36]. In order to identify the optimal threshold values, we propose the use of a grid-search approach that iteratively tries sequential thresholds in order to converge close to the optimal values. These values maximize the fault tolerance evaluation metrics of reliability and maintainability, which will be extensively discussed in the upcoming experimental section. The selection of the threshold values is heavily dependent on the characteristics of each application and the underlying physical infrastructure. The literature provides recommendations in similar problems and mechanisms which, despite providing sub-optimal results, serve as valuable guidelines towards establishing some standards regarding how these bounds are chosen [37].
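A minimal sketch of this two-threshold decision rule and its grid search; the concrete threshold values, step size and scoring function are illustrative assumptions:

```python
def decide(prediction, lower=0.30, upper=0.85):
    """Map a resource usage prediction in [0, 1] to an IPFT action."""
    if prediction > upper:
        return "replicate_node"      # Stage 3C: over-utilized
    if prediction < lower:
        return "decommission_node"   # Stage 3A: under-utilized
    return "no_action"               # Stage 3B: ideal range

def grid_search(evaluate, step=0.05):
    """evaluate(lower, upper) -> combined reliability/maintainability score."""
    best, best_score = None, float("-inf")
    for lo in [i * step for i in range(1, 10)]:       # 0.05 .. 0.45
        for hi in [i * step for i in range(11, 20)]:  # 0.55 .. 0.95
            score = evaluate(lo, hi)
            if score > best_score:
                best, best_score = (lo, hi), score
    return best
```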
Provided that the predictions are accurate and the threshold values are chosen well, the resulting system is expected to be highly robust and able to provide satisfactory availability. For the best results, the resource utilization predictions should derive from the resource consumption behaviour of each individual processing node, while taking into consideration the overall resource consumption behaviour of the entire edge computing infrastructure. In the next section we describe the deep learning model that predicts the resource utilization, as well as the way it simultaneously leverages the resource consumption metrics of each individual node and of the entire edge infrastructure.

## 4 A Composite Deep Learning Architecture for Resource Usage Prediction

A composite deep learning network with the HBES model is proposed to provide accurate resource utilization predictions for the IPFT method. The composite deep learning network is designed to satisfy the particularities of the edge infrastructure and the resource usage metrics. Since resource metrics like CPU, RAM, disk, and bandwidth have sequential dependence, RNNs provide an appropriate type of neural layer. RNNs combine the advantages of deep learning with the characteristics of time series forecasting. There are different types of RNN architectures, the two most prominent being GRU and LSTM.

Each individual processing node is examined separately for possible future process faults. However, in order to trigger node replication, the deep learning model of each node should be aware of both its own status and the status of the whole edge infrastructure. In this paper, the examined edge node is also called local and the whole edge infrastructure is called global. Because the local and the global status affect each other, we propose the use of a composite deep learning model that combines the two in order to provide the local resource utilization predictions. The way that these two different sources of information are combined is explained in the next subsection.

The workflow of the resource usage prediction in an edge computing environment is depicted in Fig. 3. In the beginning, the edge devices generate tasks which are partially or fully executed on the edge computing nodes. During task execution the nodes are monitored in order to record the resource utilization metrics. The resource utilization metrics are provided to the composite deep learning model in order to predict the resource utilization over the next time horizon. Next, the resource usage predictions can be used by a thresholding method in order to trigger node replication and task migration. In the following subsections, we describe the theory behind the key parts of composite neural networks, the RNN and the HBES for resource usage prediction.

### Two Channel Architecture for Resource Usage Prediction

The most commonly used neural network architecture comprises layers that are stacked one after the other in a serial manner. Input data is fed to the first layer, and one by one each of the layers transforms the data and feeds it to the next one. When building a neural network, however, we have the option of putting layers, or even series of layers, in parallel to take advantage of complex input structural properties. These parallel sections of the network process different parts of the data independently and subsequently concatenate their outputs before sending them to the final section of the network.
We decided to use such an architecture, utilizing two separate parallel series of layers (two channels) [38]. This allows us to better handle the local and global data metrics that are available. The main advantage of using a multi-channel neural network architecture is the ability to use different kinds of layers in the input stage of the model. As an example, many use cases from the literature use a convolutional network and a feedforward network at the input stage to accommodate data having both numerical and image properties. Multi-channel neural network architectures have the advantage that they can group together data whose properties present high correlation [39]. This happens by having different sequences of layers handle different kinds of data. A multi-channel neural network can be more time- and resource-consuming to train and infer with, compared to a vanilla serial neural network, because of its more complicated structure. We also cannot easily evaluate the channels individually, or understand their individual effect on the final output of the model, since training is performed start-to-finish without giving us more details about the process.

RNNs are a type of neural network that does particularly well on problems with sequential data. They are used to solve problems such as text prediction or voice recognition, and they are deemed effective when dealing with time-series-based problems as well. Consequently, we chose to include an RNN in our model as the first input channel, since it handles the sequential data of a single node (i.e., the one whose state the model is actively trying to predict). The input consists of monitoring measurements recorded at one-minute intervals. The channel consists of one or two RNN layers followed by a feedforward layer, and sends the results to the output part of the network.

For the second channel of the model, we chose to implement a feedforward network. This channel is fed with the global monitoring state, as well as a transformed timestamp feature that refers to the day of the week and the part of the day. The chance that a particular node will be overloaded in the immediate future is obviously tied closely to the state of the other nodes that collaborate with it to handle the requests. The feedforward channel includes dense layers with dropout layers in between. This architectural decision works as a regularization measure that prevents our model from overfitting during training.

Once the channels handle the input part of the model, their concatenated results are given to the output section of the network. The goal of this combination is to take into account the processed data and output a prediction for the resource utilization of a particular node, over a sufficient time horizon after the state/last input we fed to the model. This output section is a feedforward neural network as well, with dense and dropout layers up to the output layer. The choices regarding the merging of the two channels and the concatenation size of the input section are handled by the hyperparameter selection algorithm.

Figure 3: A pipeline from task production to advanced fault tolerance.

In order to train itself on a dataset, the model tries to predict the future state of a particular node based on the composite input referring to both the local node in question and the global overall state, as sketched below.
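A minimal Keras sketch of such a two-channel topology; the layer sizes, feature dimensions and dropout rates are illustrative assumptions, since in our pipeline these choices are made by the hyperparameter selection algorithm:

```python
from tensorflow import keras
from tensorflow.keras import layers

LOOKBACK, LOCAL_FEATS, GLOBAL_FEATS = 22, 4, 18

# Channel 1: recurrent layers over the local node's metric history.
local_in = keras.Input(shape=(LOOKBACK, LOCAL_FEATS), name="local_history")
x1 = layers.GRU(64)(local_in)
x1 = layers.Dense(32, activation="relu")(x1)

# Channel 2: feedforward layers over the global state plus time features.
global_in = keras.Input(shape=(GLOBAL_FEATS,), name="global_state")
x2 = layers.Dense(64, activation="relu")(global_in)
x2 = layers.Dropout(0.2)(x2)
x2 = layers.Dense(32, activation="relu")(x2)

# Output section: concatenation followed by dense/dropout layers.
merged = layers.concatenate([x1, x2])
h = layers.Dense(32, activation="relu")(merged)
h = layers.Dropout(0.2)(h)
out = layers.Dense(LOCAL_FEATS, name="predicted_usage")(h)

model = keras.Model([local_in, global_in], out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```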
Once the model outputs a prediction, it compares the prediction to the actual output value using a loss function. Loss functions such as MAE or RMSE are a way to quantify how good or bad a regression model is at predicting the correct values. Based on the calculated error, the model shifts its weights and parameters using a variant of gradient descent, back-propagating through the network and affecting all layers. A function called the optimizer describes how this weight-shifting/network-optimizing procedure takes place. Both the architecture and the training of a neural network provide us with many options and hyperparameters to tweak. In order to ensure a quick and near-optimal choice of hyperparameters for our model, we implemented the HBES algorithm that efficiently makes these decisions.

### Recurrent Neural Networks for Time Series Data

Artificial neural networks can be defined as function approximators, mapping lower-level data representations to higher and disentangled data representations. RNNs [9] are a type of artificial neural network which facilitates dynamic temporal behavior, captures data sequences, and maintains the previous input states. The RNN architectural paradigm is based on various neuron-like nodes organized into successive layers, where each node is connected with nodes of the next successive layer and also has recurrent connections. Utilizing this particular concept, information regarding previous data inputs is allowed to affect future outputs, thus making RNNs a solid option for time series modeling while taking into account contextual information. By monitoring edge computing infrastructures, we gather sequential data and predict the future resource usage metrics with RNNs, based on their current and previous values.

The main problem that RNNs encounter is the vanishing gradient problem. This problem emerges during the training stage of the RNN, when the gradients become vanishingly small, preventing the weight updates of the RNN. Various gate-related architectures have been introduced in order to tackle the vanishing gradient problem. Through the use of gates, the network is able to properly maintain relevant information and to successfully pass it down to the next time-steps. The two most notable ones are the LSTM networks and the GRU [10] networks.

### Long Short-Term Memory

The LSTM network architecture was created to tackle the problem of vanishing gradients in RNNs. The importance of using this complicated architecture can be highlighted by pointing out how much the added context can offer in finding a solution. When dealing with a time series data problem, such as predicting resource usage, we can get much more useful information if we look at the historical usage data of our machines, rather than just glancing at their current state. This way, we can better understand concepts like trend, which can only be explained over time. LSTMs, just like regular RNNs, utilize the hidden state to connect consecutive nodes so as to enable better understanding of temporal data. However, they also use a cell state, which is another connection between the nodes. Each LSTM cell can read from the cell state, write to it or reset it via the use of gates. There are three gates in total, each activated by a sigmoid function. This ensures that the model remains differentiable, since the sigmoid offers smooth curves in the range of 0 to 1. Each of the gates takes as inputs the actual input as well as the hidden state of the previous time-step.
In addition to the gates, a vector called \(\overline{\text{C}}\) is responsible for carrying the candidate information that can be added to the cell state. \(\overline{\text{C}}\) utilizes a tanh layer, which is in charge of limiting the vanishing gradient phenomenon. To this end, the cell information can be kept longer without vanishing. The way this is achieved is by keeping the gradients zero-centered, between the values of -1 and 1. The input gate handles incoming data and controls whether the memory cell should be updated. It is applied to \(\overline{\text{C}}\), and the result is then added to the cell state. The sigmoid activation of the gate is used to either mitigate or enhance the effect that the new information should have on the cell state. The forget gate is responsible for selecting the information that is deemed less important, and it removes it from the cell state by soft-resetting its values. Additionally, by utilizing the sigmoid function, it produces a scaled output for every value that is saved in the cell state. Finally, the output gate is the final layer before the new hidden state is produced. It uses the sigmoid function as a filter applied to the cell state after the latter first goes through a tanh layer. After this process is completed, both the hidden state and the cell state compose the output of the LSTM cell, which is passed to the next time-step.

### Gated Recurrent Units

GRU and LSTM are similar in that both of them prevent the vanishing gradient problem by utilizing gate structures. What sets them apart is the fact that GRU combines the forget gate and input gate into a single update gate. By reducing the number of gates involved, GRU provides a less complex structure and is thus more computationally efficient than LSTM. At the same time, GRU manages to perform equally well. GRU networks also facilitate the hidden state mechanism which connects one unit of the network to the next, thus allowing the manifestation of dynamic temporal behavior in a similar manner. Each GRU unit corresponds to a specific time-step and facilitates the transfer of important information across time. Furthermore, it contains two distinct gate structures: the first one is referred to as the reset gate, while the second one is referred to as the update gate. They both bear sigmoid layers which provide smooth curves in the 0-to-1 range, thus ensuring that the model remains differentiable. By squashing the values between 0 and 1, the sigmoid activation also helps the network learn which data is important or not, and accordingly keep it or forget it. In order to contextualize the GRU paradigm in accordance with edge computing, we input vectorized representations with information such as the respective timestamps and the resource utilization of CPU, RAM, bandwidth, and disk through the data preprocessing, as illustrated in Fig. 4.

The functionality of GRU networks is carried out in the following steps. As explained before, each GRU uses a reset gate and an update gate. Each of these gates has two weight matrices: the first one corresponds to the input, while the second one corresponds to the hidden state. The reset gate of the GRU is responsible for deciding how much of the past information shall be forgotten. Much like in the case of LSTMs, the first step is to multiply the input and the hidden state by their corresponding weights.
The sum of the multiplication results is then passed through a sigmoid layer. The update gate is in charge of determining how much of the information gathered over the previous time-steps needs to be passed along for future use. In this regard, its behavior is quite similar to that of the reset gate. The first step requires the multiplication of the input and the hidden state by their respective weights. The hidden state entails information derived from the previous \(t-1\) units. Then, the multiplication results are added together and passed through a sigmoid layer. The output of the update gate will be referred to as \(u\).

The next step is to create a candidate new hidden state. Similarly to the reset and update gates, there are two weight matrices involved: the first one corresponds to the input and the second one corresponds to the hidden state. The first step towards creating a candidate new hidden state is to multiply the input by its corresponding weights. The second step is to calculate the element-wise Hadamard product between the hidden state and the output of the reset gate. This process is essential for deciding how much of the information gathered during the previous time-steps will be removed. The Hadamard product is then multiplied by the weights of the hidden state. The results of the two multiplications are then added together, and the sum is passed through a tanh layer, which minimizes the effects of the vanishing gradient phenomenon by distributing the gradients within a zero-centered range, thus enabling the information to flow longer without vanishing. The product of the operations so far is the candidate new hidden state, which will be referred to as \(h^{\prime}\).

In order to get the updated hidden state, the first required step is to perform element-wise multiplication of the output of the update gate and the hidden state. The second is to perform element-wise multiplication of \(h^{\prime}\) and \(1-u\). The updated hidden state is the sum of the two multiplication products. It is then carried over to the next GRU unit, which corresponds to the next time-step; for reference, the complete update equations are collected below.

Figure 4: Integrated RNN in the resource usage prediction.

### Evolutionary Strategy

The evolution strategy [40] belongs to the category of evolutionary algorithms, which are population-based metaheuristic optimization approaches inspired by the principles of biological evolution. The formulation of the evolution strategy is based on successive iterations of mutation and selection over a population of candidate solutions. The candidate solutions, also named individuals, are initialized at random positions in an n-dimensional space and move toward positions that minimize an objective function. These dimensions are the numerical GRU-RNN hyperparameters that should be optimized. For the needs of hypertuning the GRU-RNN, we used its numerical hyperparameters as the search space of the evolution strategy and the mean squared error of the candidate RNN as the fitness function. In each iteration, a number of RNNs are trained and evaluated with the mean squared error, and the most accurate of them are mutated into the next iteration. The mutation is a stochastic process based on a normal distribution that introduces variations into the best-fit individuals of each iteration. In the beginning, the exploration of different candidate solutions is intensive, making stronger mutations towards new areas of the search space.
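(As a compact reference for the two recurrent cells described above — in our own notation, with \(\sigma\) the sigmoid, \(\odot\) the Hadamard product, \(x_t\) the input, \(h_{t-1}\) the previous hidden state, and biases omitted — the updates read:

\[
\begin{aligned}
\text{LSTM:}\quad & i_t = \sigma(W_i x_t + U_i h_{t-1}),\quad f_t = \sigma(W_f x_t + U_f h_{t-1}),\quad o_t = \sigma(W_o x_t + U_o h_{t-1}),\\
& \overline{C}_t = \tanh(W_c x_t + U_c h_{t-1}),\quad c_t = f_t \odot c_{t-1} + i_t \odot \overline{C}_t,\quad h_t = o_t \odot \tanh(c_t);\\
\text{GRU:}\quad & r_t = \sigma(W_r x_t + U_r h_{t-1}),\quad u_t = \sigma(W_u x_t + U_u h_{t-1}),\\
& h^{\prime}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})),\quad h_t = u_t \odot h_{t-1} + (1-u_t) \odot h^{\prime}_t.
\end{aligned}
\]

The GRU mixing convention follows the textual description above, with \(u\) weighting the previous hidden state and \(1-u\) the candidate.)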
Returning to the evolution strategy: in each iteration the exploration decreases and the exploitation of the best-fit individuals increases, using a self-adaptation control variable. This means that the mutation introduces strong variations in the first iterations, and the variations decay as the evolution progresses in order to converge to a close-to-optimal RNN architecture. Hypertuning deep learning models with the evolution strategy, in contrast with other evolutionary algorithms like genetic algorithms, has the advantage of not recombining different neural network topologies that may have significant discrepancies in their phenotypes. This matters because the crossover of a genetic algorithm faces the difficulty that the parents may have different architectures that cannot be unified in their offspring. A typical example is when one parent is a two-layer LSTM-RNN followed by six dense layers and the other parent is a two-layer GRU-RNN followed by four dense layers: the phenotypes of LSTM and GRU cannot be smoothly recombined. The evolution strategy, on the other hand, is based only on selection and mutation, which smoothly drive the evolution process. Specifically, the mutation operations introduce variations into the surviving candidates, providing the opportunity to test neighboring solutions that may lead to an improved fitness value.

### Bayesian Optimization

Bayesian optimization [41] is widely used to estimate hyperparameters in machine learning and deep learning models. It was an obvious option for the search process in the categorical dimensional space, in order to find the close-to-optimal nominal hyperparameters of the RNN. Bayesian optimization iteratively requests new observations of the search space with an acquisition function and estimates the objective function with a surrogate function. Increasing the number of observations gives a higher probability of locating the global optimum. Nonetheless, we should take into consideration that the observations are finite and computationally expensive, so the smart search process should select points that maximize the probability of finding a new optimum, following an exploration-versus-exploitation trade-off. The surrogate function approximates the objective one and is updated every time the objective function is evaluated at the new candidate points. The acquisition function decides where to sample next in the iterative process of Bayesian optimization, finding the points that maximize the expected improvement. The expected improvement is a function of two components: the first estimates the regions where the surrogate function has optimal points, and the second estimates the regions with high prediction uncertainty that have not yet been explored efficiently.

### Hybrid Evolution Strategy with Bayesian Optimization

Hyperparameter optimization for a composite neural network is a prominent challenge, as it includes the important architectural decisions for a close-to-optimal topology. The HBES constitutes an innovative, holistic and unified approach to hypertuning by merging the evolution strategy and Bayesian optimization methodologies. The evolution strategy is responsible for evolving a population of candidate deep learning models based on their numerical hyperparameters, and each individual candidate solution estimates its nominal hyperparameters with Bayesian optimization, as described in Algorithm 1.
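A minimal sketch of the Bayesian step that each individual performs over the nominal hyperparameters, using scikit-optimize (one of the frameworks listed in our implementation); the search space and the toy objective are illustrative assumptions, and the full hybrid loop is formalized in Algorithm 1 below:

```python
from skopt import gp_minimize
from skopt.space import Categorical

space = [
    Categorical(["gru", "lstm"], name="cell_type"),
    Categorical(["relu", "tanh"], name="activation"),
    Categorical(["adam", "rmsprop", "sgd"], name="optimizer"),
]

def objective(params):
    cell, act, opt = params
    # Toy stand-in for "build and train the candidate RNN with the
    # individual's numerical hyperparameters, return its validation MSE".
    return ({"gru": 0.10, "lstm": 0.15}[cell]
            + {"relu": 0.00, "tanh": 0.02}[act]
            + {"adam": 0.00, "rmsprop": 0.01, "sgd": 0.05}[opt])

result = gp_minimize(objective, space, n_calls=15, random_state=0)
print("best nominal hyperparameters:", result.x, "val MSE:", result.fun)
```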
The numerical hyperparameters are the number of recurrent and feedforward layers, the number of neurons in each layer, the lookback window, the number of epochs, the batch size, the dropout percentage, and the learning rate. The nominal hyperparameters are the type of the neural layers, the activation functions and the optimizers. The knowledge gained about the nominal hyperparameters is shared across the population and updated by all the individuals over the generations. The ultimate goal of HBES is, through the Bayesian evolution process, to converge to a close-to-optimal solution and to train deep learning models that can predict the resource utilization of the next time-steps in a timely and accurate manner.

```
Step 1: Initialization of Evolution Strategy
    Set the starting search point of the algorithm. Usually a_1 = [0.5, 0.5, ..., 0.5],
    since we have already scaled our hyperparameter options down to [0, 1].
Step 2: for i = 1, 2, ..., n_pop:
    i)   Add some random noise to the search point.
    ii)  Scale back from [0, 1] to the hyperparameter search space to create the
         ordinal hyperparameter values for the network to be trained.
    iii) Bayesian Optimization with GP:
         1) Apply a Gaussian Process prior on f.
         2) Observe f at n_0 points according to an initial experimental design.
         3) Initialize n = n_0.
         4) Repeat while n <= N:
            a) Update the posterior probability distribution on f using all available data.
            b) Let x_n be a maximizer of the acquisition function over x.
            c) Observe y_n = f(x_n).
            d) n <- n + 1.
Step 3: Sort the results and the corresponding hyperparameters.
Step 4: Calculate the new search point by averaging the points of the top_n networks.
Step 5: Go to Step 2 until the desired number of iterations is completed.
```

**Algorithm 1** Hybrid Bayesian & Evolution Strategy

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{RMSE} & \multirow{2}{*}{MAE} & \multicolumn{2}{c|}{CPU-1 (\%)} & \multicolumn{2}{c|}{RAM-1 (\%)} & \multicolumn{2}{c|}{Infer. Time (s)} \\ \cline{4-9}
 & & & RMSE & MAE & RMSE & MAE & Single & Batch \\ \hline
HBES-GRU & 0.0641 & 0.0276 & 15.918 & 12.815 & 1.694 & 0.580 & 0.033 & 0.038 \\
GA-LSTM & 0.0674 & 0.0338 & 16.099 & 12.838 & 1.746 & 0.917 & 0.020 & 0.024 \\
Keras-Tuner & 0.0785 & 0.0377 & 16.291 & 13.290 & 2.631 & 0.818 & 0.042 & 0.042 \\
AUCROP & 0.0814 & 0.0414 & 17.235 & 14.009 & 2.480 & 1.482 & 0.004 & 0.011 \\
XGBoost & 0.1139 & 0.0599 & 16.457 & 13.569 & 1.515 & 0.472 & 0.060 & 0.010 \\
Auto-sklearn & 0.1055 & 0.0243 & 52.659 & 17.856 & 1.546 & 0.526 & 0.263 & 0.572 \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of single-output & multi-output prediction methods of resource usage metrics.

## 5 Experimental Evaluation

To evaluate the IPFT methodology, we conduct two types of experiments. First, we experimentally evaluate and compare the applicability of the RNN with HBES against state-of-the-art methods using a real dataset. This dataset is constructed by monitoring Raspberry Pis in an edge computing infrastructure. Next, we leverage the resource utilization prediction model in order to develop an IPFT mechanism that takes intelligent replication and migration decisions sufficiently before the process faults occur. The sufficient time, in the context of intelligent replication and migration, covers the service deployment time of the processing nodes and the scheduling of the new tasks on those nodes.
The experimental evaluation of IPFT took place in a seven-day edge computing simulation using the CloudSim Plus framework [42].

### Experimental Evaluation in Resource Usage Prediction

The edge infrastructure we used includes Raspberry Pi 3 boards as processing nodes, each with a 64-bit quad-core ARM Cortex-A53 at 1.4 GHz, running the Raspbian operating system, a version of Debian Linux. The dataset was constructed by a monitoring tool implemented in Python 3 using the libraries psutil [43] and GPUtil [44]. We monitored the real-time usage of CPU, RAM, disk and bandwidth at one-second intervals. The deployed application was a natural language processing text classification. The use case was to perform the text classification in an edge computing environment, locally, close to the text owners, and not in cloud computing infrastructures, for privacy reasons. The reason for this choice is that the text owners did not agree for their texts to be transferred and processed on remote servers. In order to control the application remotely and collect the resource usage datasets we used the SSH protocol, but we did not have the privileges to access the processed texts.

#### 5.1.1 Model Implementation and Frameworks for Comparison

The HBES model and the RNN multi-output regression model are implemented in Python 3 using the frameworks NumPy, pandas, statistics, Scikit-learn, SciPy, Scikit-Optimize, TensorFlow 2 and Keras. The environment we used for the training and the evaluation of the model was a Jupyter notebook on Google Colaboratory. The experiments' source code is available for reproduction and re-examination in our GitHub repository [45]. In this experimental setup we used the HBES with GRU as the RNN and compared the results with a time series baseline approach, the machine learning meta-model for resource usage prediction AUCROP, Auto-sklearn, XGBoost, our previous LSTM with genetic algorithm model (GA-LSTM) [11], and Keras-Tuner.

#### 5.1.2 Evaluation Results and Discussion

Our initial time-series analysis provided results which indicate positive correlations for lags in a range from 1 to 22. This finding confirms the strong self-similarity property of the sequences of resource usage metrics. Afterwards, using the ARIMA forecasting model, we evaluated the resource metric predictions; for instance, the CPU RMSE was 18.474. After comparing the results of the statistical models with those of the machine learning and deep learning approaches, we found that the latter offered an RMSE improvement that surpasses 20% in most cases. Because of that, we decided to focus our research on the machine learning and deep learning models.

Table 1 summarizes the experimental results. The first two columns provide the aggregated RMSE and MAE over all testing values of the devices and the resource metrics. For RMSE, which gives an extra penalty to predictions with significant errors, we can see that HBES-GRU had the best performance. In the column entitled MAE, we see that the two best models are Auto-sklearn and HBES-GRU. Their prediction errors are very close and they have a significantly better performance compared to the other models. The CPU-1 and RAM-1 columns represent the RMSE and MAE for the processing edge node which had the least accurate predictions in the infrastructure. In addition, Fig. 5 illustrates the 25th and 75th percentiles, the median, and the min and max of the error value metrics.

Figure 5: GRU-RNN with HBES prediction errors of resource usage metrics.
These metrics include CPU, RAM, disk usage, and bandwidth in terms of the bytes sent and received. Regarding the disk usage and the bandwidth, the prediction errors were insignificant. This is not only due to the ability of HBES-GRU to provide accurate predictions, but also due to the small fluctuation in these two resource metrics. The fluctuation in CPU is much greater than in RAM, and HBES-GRU captures the various changes better than the other models. XGBoost has better performance than HBES-GRU on RAM. This may be justified by the ensemble structure of XGBoost: it can build specific decision trees for the residuals of RAM and target its slowly changing behaviour. Lastly, we report the inference times of the models, which are required in order to provide a single prediction or a batch of one hundred predictions. All time measurements are in seconds. All the inference times, except for Auto-sklearn, are within a range from 11 to 60 msec. These inference times indicate that resource usage prediction is a rather fast process which can be incorporated into time-sensitive applications. In this research we did not compare the training times, because we wanted to make an exhaustive smart search of the hypothesis space and see the limits of accuracy that the different models can achieve. It is worth noting that we conducted experiments using a wide range of time-frames; however, the measurements in Table 1 are produced using a 10-minute time-frame. We chose to illustrate this time-frame because it is close to the actual time required for the deployment of a node. From the results we see that even though GRUs are simpler in structure than LSTMs, due to their lack of a dedicated output gate, they had slightly better performance. Furthermore, the conducted experiments enabled us to reach the following conclusions:

* In most metrics the deep learning models (HBES-GRU, GA-LSTM, Keras-Tuner) have better performance than the machine learning models (AUCROP, XGBoost, Auto-sklearn).
* The evolutionary algorithms for hypertuning (HBES-GRU and GA-LSTM) have better performance compared with simple Bayesian optimization (Keras-Tuner).
* One can witness a significant improvement when using the hybrid Bayesian and evolution strategy approach instead of simple genetic algorithms.

#### 5.1.3 Convergence of Hybrid Bayesian Evolution Strategy

Convergence and the location of the global optimum are two of the most important topics in the domain of evolutionary algorithms. Convergence means that, as the population evolves, the individuals move closer to the optimal solution, shrinking their divergence. However, we cannot be sure whether the convergence points in the genotype space constitute a global or a local minimum. For this reason, the HBES algorithm expresses a strong variance in the mutation at the beginning of the evolution process, which decays over the iterations. Concurrently, we keep the best genotype found over all the iterations. The convergence of HBES is illustrated in Fig. 6. We observe that in the beginning the average population error per iteration fluctuated strongly. In some iterations it is trapped in local minima, for example between iterations six and eleven. In some other iterations it lies in plateau regions, such as between iterations twenty and twenty-five. Yet, through the mutation, the individuals eventually escape from the plateau regions and the local minima and move towards close-to-optimal regions.
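A minimal sketch of the decaying, self-adaptive mutation that drives this escape behaviour; the decay schedule, population handling and toy objective are illustrative assumptions:

```python
import numpy as np

def evolve(fitness, n_pop=10, n_dims=8, n_iter=40, top_k=3, sigma0=0.3, decay=0.95):
    """Evolution strategy over hyperparameters scaled to [0, 1].
    fitness(genotype) -> validation error (lower is better)."""
    rng = np.random.default_rng(0)
    search_point = np.full(n_dims, 0.5)           # starting point, as in Algorithm 1
    best, best_err, sigma = None, np.inf, sigma0
    for _ in range(n_iter):
        pop = np.clip(search_point + rng.normal(0, sigma, (n_pop, n_dims)), 0, 1)
        errs = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(errs)[:top_k]]      # selection: keep the best fits
        if errs.min() < best_err:                  # keep the best genotype overall
            best, best_err = pop[np.argmin(errs)], errs.min()
        search_point = elite.mean(axis=0)          # recentre on the elites
        sigma *= decay                             # exploration -> exploitation
    return best, best_err

# toy objective standing in for "train an RNN and return its MSE"
best, err = evolve(lambda g: float(np.sum((g - 0.7) ** 2)))
```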
Figure 6: The convergence of HBES towards a close-to-optimal RNN.

These close-to-optimal regions in the genotype space are decoded into the close-to-optimal GRU-RNN architectures in the phenotype space. These GRU-RNN architectures provide the most accurate resource usage predictions for CPU, RAM, disk, and bandwidth usage in an edge computing infrastructure.

### Experimental Evaluation in Proactive Fault Tolerance

The promising experimental results of the HBES with RNN in regard to resource usage prediction motivated us to continue the experiments in order to evaluate its applicability as a proactive fault tolerance mechanism. Specifically, we used the HBES to make a smart search over candidate two-channel deep learning models and applied the thresholding method described in Section 4.

#### 5.2.1 Experimental Simulation

The composite deep learning model was integrated in an edge simulation of CloudSim Plus. We simulated an edge infrastructure that consists of a set of nodes: 5 available by default, and another 15 that can be activated for intelligent replication when needed. We simulated the local and global resource monitoring process, measuring CPU, RAM and bandwidth values and saving them every 60 seconds (time-step). The task offloader of the infrastructure was receiving incoming traffic and assigning each task to a node, based on the following scheduling algorithms: RoundRobin, MinMin, and MaxMin. The local and global resource usage metrics are monitored and then fed to the IPFT mechanisms of each processing node. During every time-step we use the monitoring data to formulate the appropriate data representations, featuring the past time-series measurements of a single node, as well as the state of the infrastructure as a whole. The input is then fed to the composite deep learning model, enabling it to make predictions of resource usage for every node over a time horizon of 10 minutes. In this experimental setup we made the assumption that the preparation time for the infrastructure to assure its availability and robustness to faults is 10 minutes.

The simulation lasted for seven days and the tasks were generated by a mixture of Gaussian probability distributions that simulates a realistic application workload behaviour [46]. The processing nodes simulated the processing capabilities of Raspberry Pis. We defined a process fault as a task execution lasting more than one second. The selection of one second is a reasonably acceptable latency for several data analytics applications [47]. Trying different latency thresholds for the process faults, we noticed that the IPFT performance remained better than that of the reactive approach. In the reactive fault tolerance approach, a node replication is triggered after a fault has taken place. In Table 2, as we thoroughly discuss in the next section, we compare the IPFT mechanism to the reactive fault tolerance approach.

#### 5.2.2 Fault Tolerance Evaluation Metrics

In order to evaluate the performance of the IPFT mechanisms, we used a set of fault tolerance evaluation metrics [48]. Mean Time To Failure (MTTF) is defined as the expected time for a failure to occur given that the system functions properly. MTTF is an evaluation metric which corresponds to the overall inability of the edge infrastructure to operate properly, and thus it is calculated by taking into consideration the number of faults regardless of the actual processing node that failed.
Mean Time To Repair (MTTR) is defined as the expected time required to repair the system after a failure occurs. For MTTF, higher values are better; for MTTR, lower values are better. Both metrics are measured in seconds. Two additional fault tolerance evaluation metrics are reliability and maintainability. Reliability refers to the ability of an edge infrastructure to run continuously without any failure. Maintainability refers to how easily a failed system can be repaired. Both reliability and maintainability are dimensionless numbers, and higher values mean better performance.

#### 5.2.3 Evaluation Results and Discussion

The experimental results are summarized in Table 2. We compared the IPFT mechanism to the Reactive Fault Tolerance (RFT) approach. The RFT approach performs node replications after a fault occurs. Regarding the task offloading algorithm, we used Round Robin (RR) [49], MinMin and MaxMin [50]. The experimental results show the superiority of IPFT over RFT in all evaluation metrics. In addition, we see that the outcomes are significantly affected by the task offloading mechanism. This happens because the task offloading algorithms also integrate a workload balancing methodology with different criteria, as we discuss in the following paragraphs.

The number of generated tasks in all experimental setups was close to 1,500,000, with some parts of the day having intensive task generation (e.g., 11:00-13:00) and other parts a small number of tasks (e.g., 02:00-04:00). We simulated this task generation behaviour because it is close to the activity of many user applications during a day. In this way, we observed that the infrastructure made intelligent replications during the parts of the day with increased task generation. Respectively, the infrastructure turned off edge nodes when the IPFT mechanism predicted an under-utilization of the processing nodes. Sometimes there were sudden spikes or drops in task generation and resource utilization, but the infrastructure using the IPFT mechanism could adapt dynamically and in a timely manner.

In Table 2, we can see that for RR, MaxMin and MinMin the MTTF of IPFT is increased compared to RFT. This means that, by leveraging the resource usage predictions, faults occur more sparsely and rarely. We can see from the MTTR metric that, in the event of a fault, the infrastructure will recover very quickly, scheduling the new tasks on processing nodes with low resource utilization. The reliability metric shows that by using the IPFT, the edge infrastructure can provide the expected results up to 93% of the simulation length, even during the stressful time periods of the simulated days. The significant improvement noticed in the maintainability metric indicates that even if a fault occurs, the IPFT will increase the robustness of the edge infrastructure. In other words, the IPFT takes the right measures in time, by triggering node replication and task migration, in order to reduce the likelihood of subsequent fault occurrences. A fault is recorded taking into consideration all the edge nodes that are currently active; this means that the MTTF value of 13.309 seconds for IPFT MaxMin includes the faults of different edge nodes. In addition, some generated tasks had a large number of million instructions that would have provoked a fault because of CPU unavailability on the processing nodes.
In this case, we wanted to know how these tasks affect the MTTF and MTTR. From the analysis of the results we saw that the variance in task size is the reason the three task offloading mechanisms have different performance. In particular, the MaxMin algorithm gives higher priority to big tasks, hence the significantly better MTTF metric. During the simulation we examined the IPFT decisions and how the edge environment operates. The simulation confirmed that the infrastructure takes advantage of the timely decision to trigger proactive actions, such as intelligent node replication and task migration, before the amount of tasks overwhelms the processing nodes. This can be particularly important for the infrastructure provider, as it can save cost and energy by shutting down nodes when they are no longer needed. Additionally, the provider can achieve a smoother flow of on-time completed tasks, avoiding crashes and minimizing QoS deterioration. With regard to the actual cost of implementing the proposed IPFT paradigm, the consumption of computational resources was \(3.2\%\) greater when compared to the reactive approach. Furthermore, the burden imposed on the network infrastructure was about \(10\) bytes per second, due to the information flow deriving from the prediction model's need to access the ongoing resource usage. Finally, the incorporation of the prediction mechanism used around 125 MB of RAM and increased CPU consumption by around 47% for an average of 2250 ms on an Intel Xeon E312xx CPU. When contemplating the substantial benefits provided by the IPFT approach, we believe that the overall implementation cost is justified and quite reasonable.

\begin{table}
\begin{tabular}{|l|c|c|c|c|}
\hline
 & MTTF & MTTR & Reliability & Maintainability \\ \hline
RFT RR & 2.864 & 19.657 & 0.741 & 0.048 \\ \hline
IPFT RR & 9.506 & 3.343 & 0.904 & 0.230 \\ \hline
RFT MinMin & 8.733 & 36.169 & 0.897 & 0.026 \\ \hline
IPFT MinMin & 8.919 & 5.656 & 0.899 & 0.150 \\ \hline
RFT MaxMin & 3.721 & 24.239 & 0.788 & 0.039 \\ \hline
IPFT MaxMin & 13.309 & 7.425 & 0.930 & 0.118 \\ \hline
\end{tabular}
\end{table}
Table 2: Evaluation of Reactive & Intelligent Proactive Fault Tolerance Methods (MTTF and MTTR in seconds).

## 6 Conclusion

In this paper, we proposed a proactive fault tolerance mechanism for an edge computing infrastructure based on resource usage predictions. In the beginning, we discussed and experimentally evaluated the use of RNNs for resource usage modelling. We developed a composite deep learning model that leverages, in two channels, the resource usage metrics of the local processing nodes and of the infrastructure as a whole. We also designed a hypertuning algorithm that combines the evolution strategy with Bayesian optimization and surpasses commonly used hypertuners like Keras-Tuner as well as other state-of-the-art machine learning models. Last but not least, we presented how a proactive fault tolerance mechanism can leverage the resource usage predictions by triggering node replication and task migration. Our experiments and results corroborated the efficiency of our proactive fault tolerance methodology, and the applicability of RNNs and the two-channel architecture for resource usage prediction. The limitation of our work is that we have not yet holistically worked with other types of faults, like network faults, physical faults and service expiry faults.
Our future work is to perform data analysis and find the execution patterns in the edge resources that are related to these types of faults. In addition, we want to examine how different threshold values affect the performance of the IPFT. Specifically, we plan to further investigate the resource usage metric thresholds for triggering node replication and task migration, and the time intervals for monitoring the time series metrics.

## Acknowledgment

This work is part of the ACCORDION and CHARITY projects that have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 871793 and No 101016509.
2308.08649
Towards Zero Memory Footprint Spiking Neural Network Training
Biologically-inspired Spiking Neural Networks (SNNs), processing information using discrete-time events known as spikes rather than continuous values, have garnered significant attention due to their hardware-friendly and energy-efficient characteristics. However, the training of SNNs necessitates a considerably large memory footprint, given the additional storage requirements for spikes or events, leading to a complex structure and dynamic setup. In this paper, to address memory constraint in SNN training, we introduce an innovative framework, characterized by a remarkably low memory footprint. We \textbf{(i)} design a reversible SNN node that retains a high level of accuracy. Our design is able to achieve a $\mathbf{58.65\times}$ reduction in memory usage compared to the current SNN node. We \textbf{(ii)} propose a unique algorithm to streamline the backpropagation process of our reversible SNN node. This significantly trims the backward Floating Point Operations Per Second (FLOPs), thereby accelerating the training process in comparison to current reversible layer backpropagation method. By using our algorithm, the training time is able to be curtailed by $\mathbf{23.8\%}$ relative to existing reversible layer architectures.
Bin Lei, Sheng Lin, Pei-Hung Lin, Chunhua Liao, Caiwen Ding
2023-08-16T19:49:24Z
http://arxiv.org/abs/2308.08649v1
# Towards Zero Memory Footprint Spiking Neural Network Training ###### Abstract Biologically-inspired Spiking Neural Networks (SNNs), processing information using discrete-time events known as spikes rather than continuous values, have garnered significant attention due to their hardware-friendly and energy-efficient characteristics. However, the training of SNNs necessitates a considerably large memory footprint, given the additional storage requirements for spikes or events, leading to a complex structure and dynamic setup. In this paper, to address memory constraint in SNN training, we introduce an innovative framework, characterized by a remarkably low memory footprint. We **(i)** design a reversible SNN node that retains a high level of accuracy. Our design is able to achieve a \(\mathbf{58.65\times}\) reduction in memory usage compared to the current SNN node. We **(ii)** propose a unique algorithm to streamline the backpropagation process of our reversible SNN node. This significantly trims the backward Floating Point Operations Per Second (FLOPs), thereby accelerating the training process in comparison to current reversible layer backpropagation method. By using our algorithm, the training time is able to be curtailed by \(\mathbf{23.8\%}\) relative to existing reversible layer architectures. ## 1 Introduction As a bio-inspired neuromorphic computing representative, the Spiking Neural Network (SNN) has attracted considerable attention, in contrast to the high computational complexity and energy consumption of traditional Deep Neural Networks (DNNs) [28; 2; 25; 10]. An SNN processes information using discrete-time events known as spikes rather than continuous values, offering extremely hardware-friendly and energy-efficient characteristics. For instance, in a robot navigation task using Intel's Loihi [1], an SNN could achieve a 276\(\times\) reduction in energy compared to a conventional DNN approach. Work [24] shows that a DNN consumes 111 mJ and 1035 mJ per sample on MNIST and CIFAR-10, respectively, while an SNN consumes only 0.66 mJ and 102 mJ, i.e., a 168\(\times\) and 10\(\times\) energy reduction. Despite their numerous advantages, one major bottleneck in the deployment of SNNs has been memory consumption. The memory complexity of a traditional DNN with a depth of \(L\) is \(\mathcal{O}(L)\). But for an SNN of the same depth \(L\), several timesteps \(T\) are involved in the computation. Consequently, the memory complexity of SNNs escalates to \(\mathcal{O}(L*T)\). For instance, the memory requirement during DNN training of ResNet19 is 0.6 GB, but an SNN with the same architecture could reach about 12.34 GB (\(\sim\)20\(\times\)) when the time-step equals 10. This presents a significant challenge for their applicability to resource-constrained systems, such as IoT-Edge devices [21]. To address the issue of high memory usage of SNNs, researchers have proposed several methods, including Quantization [21], Weight Sparsification [22; 12], the Lottery Ticket Hypothesis [13], Knowledge Distillation [9], Efficient Checkpointing [27], and so on. In this paper, we introduce a novel reversible SNN node that drastically compresses the memory footprint of the SNN nodes inside the entire network. Our method achieves state-of-the-art results in terms of SNN memory savings. It achieves this by recalculating all the intermediate states on-the-fly, rather than storing them during the backward propagation process.
To further enhance the efficiency of our approach, we also present a new algorithm for the backpropagation process of our reversible SNN node, which significantly reduces the training time compared with the original reversible layer backpropagation method. Remarkably, our method maintains the same level of accuracy throughout the process. As a result of our innovations, we reduce the memory complexity of the SNN node from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(1)\), while maintaining comparable accuracy to the traditional SNN node. Moreover, our method reduces the FLOPs needed for backpropagation by 23% compared to the existing reversible layer backpropagation method, thus accelerating the training process. Collectively, these advances pave the way for more efficient and scalable SNN implementations, enabling the deployment of these biologically inspired networks across a wider range of applications and hardware platforms. ## 2 Background And Related Works ### Spiking Neural Network A spiking neural network (SNN) uses sparse binary spikes over multiple time steps to deal with visual input in an event-driven manner [1; 3]. We use an SNN with the popular Leaky Integrate and Fire (LIF) spiking neuron. The forward pass is formulated as follows. \[v[t]=\alpha v[t{-}1]+\sum_{i}w_{i}s_{i}[t]-\vartheta o[t-1] \tag{1a}\] \[o[t]=h(v[t]-\vartheta) \tag{1b}\] \[h(x)=\begin{cases}0,&\text{if }x<0\\ 1,&\text{otherwise}\end{cases} \tag{1c}\] where \(t\) denotes the time step. In Eq. (1a), \(v[t]\) is the dynamics of the neuron's membrane potential after the trigger of a spike at time step \(t\). The sequence \(s_{i}[t]\in\{0,1\}\) represents the \(i\)-th input spike train, consisting solely of 0s and 1s, while \(w_{i}\) is the corresponding weight. In Eq. (1b), \(o[t]\in\{0,1\}\) is the neuron's output spike train. In Eq. (1c), \(h(x)\) is the Heaviside step function used to generate the outputs. In the backward pass, we adopt Backpropagation Through Time (BPTT) to train SNNs. The BPTT for SNNs uses a surrogate gradient [19], which is formulated as follows. \[\delta_{l}[t]=\epsilon_{l+1}[t]w_{l+1} \tag{2a}\] \[\epsilon_{l}[t]=\delta_{l}[t]\phi_{l}[t]+\alpha\epsilon_{l}[t+1] \tag{2b}\] \[\frac{\partial L}{\partial w_{l}}=\sum_{t=0}^{T-1}\epsilon_{l}[t]\cdot[s_{l}[t]]^{\intercal} \tag{2c}\] Denote \(L\) as the loss. In Eq. (2a), \(\delta_{l}[t]=\frac{\partial L}{\partial o_{l}[t]}\) is the error signal at layer \(l\) and time step \(t\), which is propagated using the above formulations. In Eq. (2b), \(\epsilon_{l}[t]=\frac{\partial L}{\partial v_{l}[t]}\) and \(\phi_{l}[t]=\frac{\partial o_{l}[t]}{\partial v_{l}[t]}=\frac{\partial h(v_{l}[t]-\vartheta)}{\partial v_{l}[t]}\), where we follow the surrogate gradient function in [7] to approximate the derivative of \(h(x)\), such that \(\frac{\partial h(x)}{\partial x}\approx\frac{1}{1+\pi^{2}x^{2}}\). Eq. (2c) calculates the gradient of the \(l\)-th layer weight \(w_{l}\).
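To make the LIF dynamics and the surrogate-gradient trick concrete, here is a minimal PyTorch sketch of Eqs. (1a)–(1c) with BPTT; the class and parameter names are ours, and the surrogate mirrors the \(1/(1+\pi^{2}x^{2})\) approximation above. It is a sketch of the standard recipe, not the paper's implementation.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass; surrogate gradient
    d h(x)/dx ~ 1 / (1 + pi^2 x^2) in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out / (1.0 + (torch.pi * x) ** 2)

def lif_forward(s, w, alpha=0.9, theta=1.0):
    """Run Eqs. (1a)-(1c) over T time steps.
    s: input spike trains, shape [T, n_in]; w: weights, shape [n_in]."""
    T = s.shape[0]
    v = torch.zeros(())          # membrane potential v[t]
    o = torch.zeros(())          # previous output spike o[t-1]
    spikes = []
    for t in range(T):
        v = alpha * v + (s[t] * w).sum() - theta * o   # Eq. (1a)
        o = SpikeFn.apply(v - theta)                   # Eqs. (1b)-(1c)
        spikes.append(o)
    return torch.stack(spikes)

w = torch.randn(8, requires_grad=True)
s = (torch.rand(10, 8) < 0.3).float()   # random spike trains, T=10
lif_forward(s, w).sum().backward()      # BPTT with the surrogate gradient
print(w.grad.shape)
```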
### Reversible Layer A reversible layer refers to a family of neural network architectures that are based on Non-linear Independent Components Estimation [4; 5]. In traditional training methods, the activation values of each layer are stored in memory, which leads to a dramatic increase in memory requirements as the network depth increases. However, reversible transformation technology allows us to store only the final output of the network and to recompute the discarded values when needed. This approach significantly reduces memory requirements, making it possible to train deeper neural networks and more complex models under limited memory conditions, thus potentially unlocking new insights and improving performance across a wide range of tasks. The reversible transformation has been used in different kinds of neural networks, such as CNNs [8], Graph Neural Networks (GNNs) [15], Recurrent Neural Networks (RNNs) [17] and some transformers [18]. Furthermore, studies have demonstrated the effectiveness of reversible layers in different types of tasks, including image segmentation [20], natural language processing [14], compression [16], and denoising [11]. ## 3 Reversible SNN Algorithm ### Reversible SNN Memory Analysis During the training process of SNNs, the activation values occupy most of the memory. A schematic memory analysis of SNN activation values is shown in Fig. 1. In this figure, we use the VGG-13 architecture [26] with ten timesteps as an example. The percentage values represent the memory footprint ratio of each part in the entire network. The left diagram is the original SNN, where the activation values \(X\) account for 90.9% of the memory usage and the output potentials of each neuron occupy 9.1% of the memory. The right diagram is our designed reversible SNN, which only requires saving the final \(X_{output}\) values and the output potentials of each neuron, without storing all intermediate values, thus significantly saving memory. The intermediate activation values are recovered during the backpropagation process through our inverse calculation equations. In this example, our method is able to save 90.21% of the memory used for activation values. The exact amount of memory saved by our method is shown in the Experiment section. ### Reversible SNN Forward Calculation Our forward algorithm is shown in the upper section of Fig. 2. The various input states \(S=(X,V)\) of each neuron are evenly divided into two groups along the last dimension, namely \(S=[S_{1},S_{2}]\). **Step 1**: Calculate the first part of the output, \(Y_{1}\): \[M_{1}^{t}=V_{1}^{t-1}+\frac{1}{\tau}\cdot\left(X_{1}^{t}-V_{1}^{t-1}\right) \tag{3}\] \[Y_{1}^{t}=H\left(M_{1}^{t}-V_{th}\right)+\beta\cdot X_{2}^{t} \tag{4}\] \(M_{1}^{t}\) is the membrane potential of the first half of the neurons at time \(t\). \(V_{1}^{t-1}\) is the input potential of the first half of the neurons at time \(t-1\). \(\tau\) is the time constant. \(X_{2}^{t}\) is the input to the second half of the neurons at time \(t\). \(V_{th}\) is the threshold voltage of the neurons. \(H()\) is the Heaviside step function. \(\beta\) is a scaling factor for the input. The term \(\beta\cdot X_{2}^{t}\) lets \(Y_{1}^{t}\) carry information about the second half of the input into the next step. Then calculate the first part of the output potential, \(V_{1}^{t}\): \[V_{1}^{t}=\left(1-Y_{1}^{t}\right)\odot M_{1}^{t}+Y_{1}^{t}\cdot V_{res}+\alpha\cdot V_{1}^{t-1} \tag{5}\] \(V_{1}^{t}\) is the output potential of the first half of the neurons at time \(t\). \(V_{res}\) is the reset voltage of the neurons. \(\alpha\) is a scaling factor for the membrane potential. Figure 1: Activation value memory comparison between the original SNN network and our reversible SNN network, using VGG13 with a timestep of ten as an example. (Legend icons: SNN node; saved to memory; NOT saved to memory.)
**Step 2**: Use the first part of the output, \(Y_{1}\), to calculate the second part, \(Y_{2}\): \[M_{2}^{t}=V_{2}^{t-1}+\frac{1}{\tau}\left(Y_{1}^{t}-V_{2}^{t-1}\right) \tag{6}\] \[Y_{2}^{t}=H\left(M_{2}^{t}-V_{th}\right)+\beta\cdot X_{1}^{t} \tag{7}\] \(M_{2}^{t}\) is the membrane potential of the second half of the neurons at time \(t\). \(Y_{2}^{t}\) is the output of the second half of the neurons at time \(t\). Then calculate the second part of the output potential, \(V_{2}^{t}\): \[V_{2}^{t}=\left(1-Y_{2}^{t}\right)\odot M_{2}^{t}+Y_{2}^{t}\cdot V_{res}+\alpha\cdot V_{2}^{t-1} \tag{8}\] \(V_{2}^{t}\) is the output potential of the second half of the neurons at time \(t\). **Step 3**: Combine all the output states \(S_{output}=([Y_{1},Y_{2}],[V_{1}^{t},V_{2}^{t}])\) along the last dimension. ### Reversible SNN Inverse Calculation The purpose of the inverse calculation is to use the output results to obtain the unsaved input values, i.e., to use \(Y\) and \(V_{output}\) to calculate \(X\) and \(V\). Our inverse algorithm is shown in the lower section of Fig. 2. **Step 1**: Divide all the output states \(S_{output}=(Y,V_{output})\) into two groups along the last dimension, in the same way as in the first step of the forward calculation, namely \(S_{output}=[S_{output,1},S_{output,2}]\). **Step 2**: Calculate \(V_{2}^{t-1}\) by combining Eq. 6 and Eq. 8 and simplifying: \[V_{2}^{t-1}=\frac{V_{2}^{t}-(1-Y_{2})\cdot\frac{1}{\tau}\odot Y_{1}-Y_{2}\cdot V_{res}}{(1-Y_{2})\cdot(1-\frac{1}{\tau})+\alpha} \tag{9}\] Calculate \(X_{1}^{t}\) by combining Eq. 6 and Eq. 7 and simplifying: \[X_{1}^{t}=\left(Y_{2}^{t}-H\left(M_{2}^{t}-V_{th}\right)\right)\div\beta \tag{10}\] **Step 3**: Calculate \(V_{1}^{t-1}\) by combining Eq. 3 and Eq. 5 and simplifying: \[V_{1}^{t-1}=\frac{V_{1}^{t}-(1-Y_{1})\cdot\frac{1}{\tau}\odot X_{1}^{t}-Y_{1}\cdot V_{res}}{(1-Y_{1})\cdot(1-\frac{1}{\tau})+\alpha} \tag{11}\] Figure 2: This reversibility demo uses a \(2\times 2\) toy input as an example and shows our forward and inverse calculations, with markers indicating the origin of the equations in the inverse process. Calculate \(X_{2}^{t}\) by combining Eq. 3 and Eq. 4 and simplifying: \[X_{2}^{t}=\left(Y_{1}^{t}-H\left(M_{1}^{t}-V_{th}\right)\right)\div\beta \tag{12}\] **Step 4**: Combine all the input states \(S=([X_{1},X_{2}],[V_{1}^{t-1},V_{2}^{t-1}])\) along the last dimension.
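As a sanity check on Eqs. (3)–(12), the following NumPy sketch (our own toy reproduction, with arbitrarily chosen constants \(\tau\), \(\alpha\), \(\beta\)) runs the forward pass on random inputs and verifies that the inverse pass recovers \(X\) and \(V^{t-1}\), provided the denominators in Eqs. (9) and (11) are nonzero.

```python
import numpy as np

tau, alpha, beta = 2.0, 0.6, 1.0   # illustrative constants (assumptions)
V_th, V_res = 0.5, 0.0
H = lambda x: (x >= 0).astype(float)  # Heaviside step

def forward(X1, X2, V1_prev, V2_prev):
    M1 = V1_prev + (X1 - V1_prev) / tau                  # Eq. (3)
    Y1 = H(M1 - V_th) + beta * X2                        # Eq. (4)
    V1 = (1 - Y1) * M1 + Y1 * V_res + alpha * V1_prev    # Eq. (5)
    M2 = V2_prev + (Y1 - V2_prev) / tau                  # Eq. (6)
    Y2 = H(M2 - V_th) + beta * X1                        # Eq. (7)
    V2 = (1 - Y2) * M2 + Y2 * V_res + alpha * V2_prev    # Eq. (8)
    return Y1, Y2, V1, V2

def inverse(Y1, Y2, V1, V2):
    V2_prev = (V2 - (1 - Y2) * Y1 / tau - Y2 * V_res) / ((1 - Y2) * (1 - 1 / tau) + alpha)  # Eq. (9)
    M2 = V2_prev + (Y1 - V2_prev) / tau
    X1 = (Y2 - H(M2 - V_th)) / beta                      # Eq. (10)
    V1_prev = (V1 - (1 - Y1) * X1 / tau - Y1 * V_res) / ((1 - Y1) * (1 - 1 / tau) + alpha)  # Eq. (11)
    M1 = V1_prev + (X1 - V1_prev) / tau
    X2 = (Y1 - H(M1 - V_th)) / beta                      # Eq. (12)
    return X1, X2, V1_prev, V2_prev

rng = np.random.default_rng(0)
X1, X2, V1p, V2p = (rng.random((2, 2)) for _ in range(4))
rec = inverse(*forward(X1, X2, V1p, V2p))
assert all(np.allclose(a, b) for a, b in zip(rec, (X1, X2, V1p, V2p)))
```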
## 4 Inverse Gradient Calculation Although our reversible architecture significantly reduces memory usage, it does extend computation time for two primary reasons: (i) it necessitates the recalculation of the activation values that weren't originally stored; (ii) many of the previous reversible layer architectures have inherited the backpropagation method from checkpointing [30; 6]. This method requires using the recalculated intermediate activation values to rerun the forward equation, thereby constructing a forward computational graph. This graph is then used to derive the corresponding gradients. This step of rerunning the forward equation introduces additional computational overhead, which extends the overall computation time. This scenario is prevalent across all existing architectures of reversible layers, including Reversible GNNs [15], Reversible RNNs [17], Reversible Transformers [18], and so on. To reduce the training time, we have designed a new algorithm, called the inverse gradient calculation method, which is able to substantially decrease the number of FLOPs during the backpropagation process compared to the original reversible architecture. Our design is shown in Fig. 3. The left diagram illustrates the original forward and backward processes. The middle diagram depicts the original calculation process for reversible layers. It contains four steps:

1. The input \(X\) passes through the forward function to compute the output \(Y\), without storing the input data, to conserve memory.
2. For each layer \(n\): the output \(X^{n}\) of the layer passes through the inverse function to compute the input \(X^{n-1}\) of the layer. This process starts with the final output \(Y\).
3. For each layer \(n\): the input \(X^{n-1}\) passes through the forward function again to reconstruct the forward computational graph, which facilitates gradient computation.
4. For each layer \(n\): compute the gradient \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the forward computational graph.

The right diagram is our design; it contains three steps:

1. The input \(X\) passes through the forward function to compute the output \(Y\), without storing the input data, to conserve memory.
2. For each layer \(n\): the output \(X^{n}\) of the layer passes through the inverse function to compute the input \(X^{n-1}\) of the layer, constructing an inverse computational graph.
3. For each layer \(n\): compute the gradient \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the inverse computational graph.

Below is the specific calculation formula for \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the inverse computational graph; the derivation process is in the Appendix. \[\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}=\frac{\theta}{2+\left(\pi\cdot\theta\cdot\left(M_{1}^{t}-V_{th}\right)\right)^{2}}\cdot\frac{1}{\tau}\odot\left(1+\frac{\theta}{2+\left(\pi\cdot\theta\cdot\left(M_{2}^{t}-V_{th}\right)\right)^{2}}\cdot\frac{1}{\tau}\right)+\beta \tag{13}\] \[\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}=\frac{\theta}{2+\left(\pi\cdot\theta\cdot\left(M_{2}^{t}-V_{th}\right)\right)^{2}}+\beta \tag{14}\] All the variables in Eq. 13 and Eq. 14 have the same meaning as the variables in Eq. 3–Eq. 12, and \(\theta\) is an adjustable constant parameter. The ability to perform inverse computation on the computational graph in our algorithm rests on the fact that our forward function is symmetric with the inverse computation function. For the original reversible network: \[FLOPs_{backward}^{ori}=FLOPs_{inverse}+FLOPs_{forward}+FLOPs_{\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}} \tag{15}\] For our reversible network: \[FLOPs_{backward}^{our}=FLOPs_{inverse}+FLOPs_{part\;of\;\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}} \tag{16}\] Compared to the standard reversible network, our method reduces FLOPs by 23%. The FLOPs analysis is shown in the Appendix and the detailed time measurements are shown in the Experiment section.
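For context, the checkpointing-style baseline that steps 1–4 of the middle diagram describe can be written generically in PyTorch as below. This is our sketch of the baseline scheme (recover the input via the inverse, rerun the forward under `enable_grad`, then backpropagate), not of the improved inverse-graph method; all names are placeholders.

```python
import torch

class ReversibleBaseline(torch.autograd.Function):
    """Checkpointing-style backprop for an invertible layer:
    store only the output, recover the input with `inverse_fn`,
    then rerun `forward_fn` to rebuild the forward graph."""
    @staticmethod
    def forward(ctx, x, forward_fn, inverse_fn):
        with torch.no_grad():
            y = forward_fn(x)          # step 1: no input saved
        ctx.fns = (forward_fn, inverse_fn)
        ctx.save_for_backward(y)
        return y

    @staticmethod
    def backward(ctx, grad_y):
        forward_fn, inverse_fn = ctx.fns
        (y,) = ctx.saved_tensors
        with torch.no_grad():
            x = inverse_fn(y)          # step 2: recover the input
        x = x.detach().requires_grad_(True)
        with torch.enable_grad():
            y_again = forward_fn(x)    # step 3: rebuild forward graph
        (grad_x,) = torch.autograd.grad(y_again, x, grad_y)  # step 4
        return grad_x, None, None

# Toy invertible pair: y = 2x + 1  <->  x = (y - 1) / 2
f = lambda x: 2 * x + 1
f_inv = lambda y: (y - 1) / 2
x = torch.randn(4, requires_grad=True)
ReversibleBaseline.apply(x, f, f_inv).sum().backward()
print(x.grad)  # all 2s, as expected for y = 2x + 1
```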
## 5 Experiment We first compare our design with the SOTA memory-efficient methods for SNN training on several datasets. Subsequently, we incorporate our reversible SNN node into different architectures across various datasets. The goal is twofold: firstly, we want to demonstrate that, compared to the SNN node currently in use, our reversible version is able to offer substantial memory savings; secondly, we aim to show that, when compared to the existing reversible layer backpropagation method, our reversible SNN node backpropagation design is able to considerably reduce the time spent in the backpropagation process, thereby accelerating the training phase. In the final part, we conduct an ablation study to evaluate the influence of various parameters within our equations and the impact of the number of groups into which the input is divided on the performance of our model. All experiments were conducted on a Quadro RTX6000 GPU equipped with 24GB of memory, using PyTorch 1.13.1 with CUDA 11.4, and an Intel(R) Xeon(R) Gold 6244 CPU running at 3.60GHz. To ensure that the values we obtain through the inverse calculation are the same as those from the original forward calculation, we use `torch.allclose(rtol=1e-06, atol=1e-10)` to compare all the inverse-calculated values with the original forward-calculated values. All the results return true, as they should. Detailed hyperparameter settings for each experiment are provided in the Appendix. ### Comparison with the SOTA Methods We conducted a comparison of our approach with the current SOTA methods in memory efficiency during the SNN training process on the CIFAR10 and CIFAR100 datasets. To verify the universality of our work, we apply our designed reversible SNN node to the current SOTA sparse training work for SNNs. A comparison was then made between these two methods on the Tiny-ImageNet dataset; the results are shown in Table 1. Our approach (RevSNN) achieves a \(\mathbf{2.54\times}\) memory reduction on the CIFAR10 dataset and a \(\mathbf{3.13\times}\) memory reduction on the CIFAR100 dataset compared to the current SOTA memory-efficient SNN training method. At the same time, we also maintain a high level of accuracy. On the Tiny-ImageNet dataset, we only replaced the original SNN node with our designed reversible SNN node, keeping all other conditions consistent (RevND). As a result, the accuracy of our VGG-16 model structure is 0.28 percentage points higher than that of the original dense model, and it saves \(\mathbf{1.87\times}\) more memory than the original work at 99% sparsity. On the ResNet-19 model, our accuracy is 0.31 percentage points higher than the dense model, and we save \(\mathbf{2.06\times}\) more memory than the original work at 99% sparsity. ### Memory Consumption Evaluation To investigate whether our newly designed reversible SNN node achieves the expected memory savings compared to the original spiking neural node, we incorporated our node into a range of architectures including VGG-11, 13, 16, 19, and ResNet-19, 34, 50, 101. For the VGG architectures, we examined the corresponding memory usage for timesteps ranging from 1 to 20, while for the ResNet architectures, we scrutinized the memory usage for timesteps from 1 to 10. These tests were conducted on the CIFAR-10 dataset. For all the experiments, we keep the batch size = 128. The SNN node memory comparison results are shown in Fig. 4. For the VGG architectures, even when employing the most memory-intensive VGG-19 architecture with a timestep of 20, the cumulative memory usage for all the reversible SNN nodes within the entire network remains below 200MB. In contrast, using conventional SNN nodes demands a substantial amount of memory, up to 9032MB. For the ResNet architectures, the ResNet-101 architecture with a timestep of 10 needs about 28993MB using conventional SNN nodes, but only 1382MB using our reversible SNN node. As the number of model layers and the timestep value increase, the memory savings achieved by our reversible SNN node become more pronounced. Specifically, when utilizing the VGG-19 architecture with a timestep of 20, our reversible SNN node enjoys a \(\mathbf{58.65\times}\) memory reduction compared to the original SNN node. The specific data values are shown in the Appendix. These experimental results align with our theoretical analysis in Section 3.1, further validating that our design is able to significantly reduce memory usage.
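Per-architecture numbers of the kind plotted in Fig. 4 can be gathered with PyTorch's built-in CUDA memory counters; the sketch below shows the generic measurement pattern we have in mind (the model and data names are placeholders, not the paper's code).

```python
import torch

def peak_training_memory_mb(model, batch, target, loss_fn):
    """Peak GPU memory (MB) for one forward+backward pass."""
    model.cuda()
    batch, target = batch.cuda(), target.cuda()
    torch.cuda.reset_peak_memory_stats()
    loss = loss_fn(model(batch), target)   # forward: activations allocated here
    loss.backward()                        # backward: gradients allocated here
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1024**2

# Example usage with a placeholder model:
# mem = peak_training_memory_mb(vgg19_snn, images, labels,
#                               torch.nn.functional.cross_entropy)
# print(f"peak memory: {mem:.0f} MB")
```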
### Training Time Evaluation To investigate the time efficiency of our designed backpropagation architecture in comparison with the traditional reversible layer backpropagation method, we employ two sets of backpropagation architectures for our reversible SNN node. The first set utilizes the original reversible layer backpropagation method, while the second set incorporates our newly designed backpropagation architecture. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Dataset** & **Method** & **Architecture** & **Time-steps** & **Accuracy** & **Memory (GB)** \\ \hline \multirow{8}{*}{CIFAR10} & OITT [32] & VGG(6/8V8) & 6 & 93.52\% & 4 \\ & S2A-STSU [29] & ResNet-17 & 5 & 92.75\% & 27.93 \\ & IDE-LIF [33] & CIFARNet-F & 30 & 91.74\% & 2.8 \\ & Hybrid [23] & VGG-16 & 100 & 91.13\% & 9.36 \\ & Tandem [31] & CifarNet & 8 & 89.04\% & 4.2 \\ & Skipper [27] & VGG-5 & 100 & 87.44\% & 4.6 \\ \cline{2-6} & **RevSNN(Ours)** & ResNet-18 & 4 & 91.87\% & **1.101** \\ \hline \multirow{8}{*}{CIFAR100} & IDE-LIF [33] & CIFARNet-F & 30 & 71.56\% & 3.51* \\ & OITT [32] & VGG(8/8V5) & 6 & 71.05\% & 4.04* \\ & S2A-STSU [29] & VGG-13 & 4 & 68.96\% & 31.05 \\ & Skipper [27] & VGG-5 & 100 & 66.48\% & 4.6 \\ \cline{2-6} & **RevSNN(Ours)** & ResNet-18 & 4 & 71.13\% & **1.12** \\ \hline \multirow{8}{*}{Tiny-ImageNet} & ND(Dense) [12] & VGG-16 & 5 & 39.45\% & 3.99 \\ & ND(90\% Sparsity) [12] & VGG-16 & 5 & 39.12\% & 3.78 \\ \cline{1-1} & ND(99\% sparsity) [12] & VGG-16 & 5 & 33.84\% & 3.76 \\ \cline{1-1} \cline{2-6} & **RevSNN(Ours)** & VGG-16 & 5 & 39.73\% & **2.01** \\ \cline{1-1} \cline{2-6} & ND(Dense) [12] & ResNet-19 & 5 & 50.32\% & 5.29 \\ \cline{1-1} & ND(90\% Sparsity) [12] & ResNet-19 & 5 & 49.25\% & 5.11 \\ \cline{1-1} & ND(99\% sparsity) [12] & ResNet-19 & 5 & 41.96\% & 5.09 \\ \cline{1-1} \cline{2-6} & **RevSNN(Ours)** & ResNet-19 & 5 & 50.63\% & **2.47** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of our work with the SOTA methods in memory efficiency during the SNN training process. For all the works: batch size = 128. *: They did not provide the memory data directly for training CIFAR100; we estimate it based on their memory usage for training CIFAR10 and their parameter data. We employ VGG-11, VGG-13, VGG-16, and VGG-19 architectures with timesteps ranging from 1 to 10. We compare the time required for one iteration of training using the original SNN node, the reversible SNN node with the original reversible layer backpropagation method, and the reversible SNN node with our backpropagation architecture on the CIFAR-10 dataset. We perform all the experiments on an empty RTX6000 GPU and keep the batch size = 64. The reported times for each forward and backward pass are averages taken over all iterations within the first five epochs. Fig. 5 presents our measurement of the training time when the number of timesteps is set to 4, 6, and 8. The forward computation times for the three methods are virtually identical. The shortest backward processing time is exhibited by the original SNN node, primarily because it records all intermediate values throughout the computation, thus eliminating the need for recalculations.
Comparatively, among the two reversible SNN nodes, our backpropagation design achieves a \(\mathbf{20}\%-\mathbf{30}\%\) increase in speed over the previous reversible layer backpropagation method during the backward process. As the network expands, the superiority of our backpropagation design becomes increasingly evident. Specifically, under the VGG-19 architecture with a timestep of 8, our designed node is able to save \(\mathbf{23.8}\%\) of the total training time compared to the reversible node using the original reversible layer backpropagation method. This aligns well with our theoretical predictions in Section 4. Data for the other timesteps are shown in the Appendix. ### Ablation Study #### Effects of parameters \(\alpha\) and \(\beta\) in our equations In Eq. 4 and Eq. 5, we have two parameters: \(\alpha\) and \(\beta\). The optimal setting for the parameter \(\beta\) is 1, as this maximizes the preservation of the original features of the data. We conduct experiments to assess the impact of the \(\alpha\) parameter on the model's performance. We vary the \(\alpha\) parameter from 0.05 to 0.8 and then employ the VGG-19, VGG-16, VGG-13, and VGG-11 architectures to evaluate the accuracy on the CIFAR100 dataset. The results are shown on the left of Fig. 6. We observe that varying \(\alpha\) within the range of 0.05 to 0.8 impacts the final accuracy by approximately 1%. Generally, the model exhibits optimal performance when \(\alpha\) is set between 0.1 and 0.2. Figure 4: Memory comparison between the normal SNN node and our reversible SNN node. Figure 5: Training time analysis. Solid lines: backward process's duration; dashed lines: forward process's duration; red lines: training time for the original SNN; green lines: training time for the reversible SNN using the original reversible layer backpropagation method; blue lines: training time for the reversible SNN employing our proposed backpropagation architecture. **Effects of number of groups for the various states** In Section 3.2, we introduce a method of splitting the various input states into two groups along the last dimension. Nonetheless, this method might encounter issues under specific circumstances. For instance, if the last dimension of a tensor is an odd number, it cannot be evenly divided into two groups. To address this, we enhance the original algorithm: we divide the various input states into \(n\) groups according to the number of elements \(n\) in the last dimension. Eqs. 3–5 are then executed sequentially for each group. This enhancement further improves the universality of our algorithm. To evaluate the impact of the number of groups on the model, we modified part of the fully connected layers in the original ResNet-19, ResNet-18, VGG-16 and VGG-13 networks from 128 activations to 144 activations, so that the layer width has a wider variety of factors. We then evaluate the model's performance with the number of groups set to 2, 3, 6, 12, 24, 48, 72, and 144, respectively, on the CIFAR100 dataset. The results are shown on the right of Fig. 6. We observe that the training accuracy improves as the number of groups increases. When the number of groups approaches the number of elements \(n\) in the last dimension, the accuracy typically surpasses that of the original SNN node. This is attributed to a larger number of groups yielding a higher-fidelity representation of the original data. ## 6 Conclusion and Discussion This work addresses a fundamental bottleneck of current deep SNNs: their high GPU memory consumption.
We have designed a novel reversible SNN node that is able to reduce memory complexity from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(1)\). Specifically, our reversible SNN node allows our SNN network to achieve \(\mathbf{2.54}\) times greater memory efficiency than the current SOTA SNN memory-efficient work on the CIFAR10 dataset, and \(\mathbf{3.13}\) times greater on the CIFAR100 dataset. Furthermore, to tackle the prolonged training time caused by the need to recalculate intermediate values during backpropagation within our reversible SNN node, we have devised a new backpropagation approach specifically suited for reversible architectures. This method, when compared to the original reversible layer architecture, achieves a substantial reduction in overall training time of \(\mathbf{23.7}\%\). As a result, we are able to train over-parameterized networks that significantly outperform current models on standard benchmarks while consuming less memory. Figure 6: **Left Figure**: Tests of the VGG-19, VGG-16, VGG-13 and VGG-11 models on the CIFAR100 dataset using different \(\alpha\) settings. **Right Figure**: The number of activations is changed from 128 to 144 for some fully connected layers inside ResNet-19, ResNet-18, VGG-16 and VGG-13, and model performance is tested for different numbers of groups on CIFAR100. Rev.: Reversible SNN node. Ori.: Original SNN node. Mo.: Modified network (some fully connected layers changed).
2307.10683
Fractional Denoising for 3D Molecular Pre-training
Coordinate denoising is a promising 3D molecular pre-training method, which has achieved remarkable performance in various downstream drug discovery tasks. Theoretically, the objective is equivalent to learning the force field, which is revealed helpful for downstream tasks. Nevertheless, there are two challenges for coordinate denoising to learn an effective force field, i.e. low coverage samples and isotropic force field. The underlying reason is that molecular distributions assumed by existing denoising methods fail to capture the anisotropic characteristic of molecules. To tackle these challenges, we propose a novel hybrid noise strategy, including noises on both dihedral angle and coordinate. However, denoising such hybrid noise in a traditional way is no more equivalent to learning the force field. Through theoretical deductions, we find that the problem is caused by the dependency of the input conformation for covariance. To this end, we propose to decouple the two types of noise and design a novel fractional denoising method (Frad), which only denoises the latter coordinate part. In this way, Frad enjoys both the merits of sampling more low-energy structures and the force field equivalence. Extensive experiments show the effectiveness of Frad in molecular representation, with a new state-of-the-art on 9 out of 12 tasks of QM9 and on 7 out of 8 targets of MD17.
Shikun Feng, Yuyan Ni, Yanyan Lan, Zhi-Ming Ma, Wei-Ying Ma
2023-07-20T08:20:12Z
http://arxiv.org/abs/2307.10683v3
# Fractional Denoising for 3D Molecular Pre-training ###### Abstract Coordinate denoising is a promising 3D molecular pre-training method, which has achieved remarkable performance in various downstream drug discovery tasks. Theoretically, the objective is equivalent to learning the force field, which is revealed helpful for downstream tasks. Nevertheless, there are two challenges for coordinate denoising to learn an effective force field, i.e. low sampling coverage and isotropic force field. The underlying reason is that molecular distributions assumed by existing denoising methods fail to capture the anisotropic characteristic of molecules. To tackle these challenges, we propose a novel hybrid noise strategy, including noises on both dihedral angle and coordinate. However, denoising such hybrid noise in a traditional way is no more equivalent to learning the force field. Through theoretical deductions, we find that the problem is caused by the dependency of the input conformation for covariance. To this end, we propose to decouple the two types of noise and design a novel fractional denoising method (Frad), which only denoises the latter coordinate part. In this way, Frad enjoys both the merits of sampling more low-energy structures and the force field equivalence. Extensive experiments show the effectiveness of Frad in molecular representation, with a new state-of-the-art on 9 out of 12 tasks of QM9 and on 7 out of 8 targets of MD17\({}^{1}\). Footnote 1: The code is released publicly at [https://github.com/fengshikun/Frad](https://github.com/fengshikun/Frad) ## 1 Introduction Molecular representation learning is fundamental for various tasks in drug discovery, such as molecular property prediction (Schutt et al., 2018, 2021; Liu et al., 2022b), drug-drug interaction prediction (Asada et al., 2018; Rohani and Eslahchi, 2019), and de novo molecular generation (Gebauer et al., 2019; Luo and Ji, 2022). Inspired by the success of self-supervised learning in natural language processing (NLP) (Dai and Le, 2015; Devlin et al., 2018) and computer vision (CV) (Simonyan and Zisserman, 2014; Dosovitskiy et al., 2020), various molecular pre-training methods have been proposed to tackle the lack of labeled data in this area. Among them, most early approaches treat molecular data as 1D SMILES strings (Wang et al., 2019; Honda et al., 2019; Chithrananda et al., 2020; Zhang et al., 2021; Xue et al., 2021; Guo et al., 2022) or 2D graphs (Rong et al., 2020; Li et al., 2020; Zhang et al., 2021; Li et al., 2021; Zhu et al., 2021; Wang et al., 2022b; Fang et al., 2022b; Lin et al., 2022), and utilize sequence-based or graph-based pre-training methods to obtain molecular representations. However, the 3D geometric structure is crucial for a molecule, since it largely determines the energy function and thus the corresponding physical and chemical properties (Schutt et al., 2018). Therefore, recently, more and more pre-training methods (Liu et al., 2021; Li et al., 2022; Zhu et al., 2022; Fang et al., 2022a; Stark et al., 2022a) have been proposed to exploit 3D molecular data (see Appendix D). In 3D molecular pre-training, the coordinate denoising approach (Zaidi et al., 2022; Luo et al., 2022; Zhou et al., 2023; Jiao et al., 2022; Liu et al., 2022a) is a promising one and has achieved remarkable performance. Specifically, given the equilibrium molecular structure, some independent and identically distributed noise is added to the corresponding atomic coordinates, and the model is trained to reconstruct the input.
Compared with other self-supervised learning methods, coordinate denoising methods have the ability to capture the fine-grained 3D geometry information. More importantly, this approach enjoys a physical interpretation of learning a molecular force field (Zaidi et al., 2022). Force field learning has been proven effective for downstream tasks. Theoretically, force field and potential energy are fundamental physical quantities that have a close relation with several downstream tasks (Chmiela et al., 2017). Empirically, Zaidi et al. (2022); Jiao et al. (2022); Liu et al. (2022); Luo et al. (2022) have demonstrated that learning the force field or energy will produce remarkable performance for various downstream tasks. To further validate this issue, we conduct additional experiments detailed in Appendix B.3, where we employ the prediction of the force field as the pre-training task; our results show the effectiveness of the force field learning approach. Considering the equivalence between denoising and learning the force field, denoising could be a powerful pre-training method for molecular representation. However, two challenges prevent the current coordinate denoising methods from learning an accurate force field. * Low Sampling Coverage. In existing coordinate denoising methods, the noise level is usually set very small, to avoid generating irrational substructures, e.g. distorted aromatic rings. It is observed in experiments that if the noise level is large, the performance can decrease dramatically (Zaidi et al., 2022). A similar phenomenon is also found in Appendix B.1. Though the existing noise sampling strategy can avoid unwanted rare noisy structures, the produced structures could hardly cover common structures with low energy, which can be crucial for various downstream tasks. Therefore, the existing coordinate denoising methods have limitations in learning an accurate force field at other common low-energy structures, beyond the given equilibrium structures. * Isotropic Force Field. In existing coordinate denoising methods, the noise is assumed to have an isotropic covariance, meaning the slope of the energy function is the same in all directions around the local minimum. However, the energy function of a molecule is intrinsically not isotropic, in the sense that there can be both rigid and flexible parts in a molecule. As illustrated in Figure 1, the structures of rings, double bonds, and triple bonds are usually fixed in low-energy conformations, while some single bonds can be rotated without causing radical energy changes. All these different structures are very common in practice. Therefore, the existing methods fail to depict the anisotropic energy landscape, leading to an inaccurate learned force field. To tackle the aforementioned challenges, we propose a novel hybrid noise strategy to capture the characteristics of the molecular distribution. Unlike pure coordinate noise, we first introduce Gaussian noise on the dihedral angles of the rotatable bonds and then add traditional noise to the coordinates of the atoms. In this way, the dihedral angle noise scale can be set large to search the energy landscape, which may cover more meaningful low-energy structures without generating invalid noisy structures. Under the setting of hybrid noise, the corresponding conformation distribution has an anisotropic covariance. In particular, the covariance of the flexible parts is large through the perturbation of the rotatable dihedral angles with a large noise level.
In contrast, the covariance of the rigid parts is small, since only a small level of coordinate noise is added to them. Although the hybrid noise strategy well addresses the above two challenges, unlike traditional coordinate denoising, learning to directly recover the hybrid noise is no longer equivalent to learning the force field. Through a meticulous mathematical deduction, we find that the bottleneck is the dependency of the covariance on the input conformation. Confronted with the difficulty of denoising the dihedral angles, we decouple the two types of noise and design a novel fractional denoising method. The main idea is adding the hybrid noise while only denoising the latter coordinate part. We can prove that this new denoising method, namely fractional denoising with hybrid noise (Frad), is equivalent to learning an anisotropic force field, inheriting all the merits of the hybrid noise. The main contribution of this work is the introduction of a new hybrid noise strategy and the design of a novel fractional denoising method for 3D molecular pre-training. Theoretically, we prove that the new denoising method is equivalent to learning a force field with an anisotropic covariance, which captures an important characteristic of molecules. Empirically, we conduct experiments by pre-training on a large dataset, PCQM4MV2 (Nakata and Shimazaki, 2017), and fine-tuning on two widely-used datasets, i.e. QM9 (Ramakrishnan et al., 2014; Ruddigkeit et al., 2012) and MD17 (Chmiela et al., 2017). Experimental results show that our method achieves a new state-of-the-art on 9 out of 12 tasks of QM9 and on 7 out of 8 targets of MD17, as compared with previous state-of-the-art denoising pre-training baselines and other approaches tailored for property prediction. Comprehensive ablation studies manifest the effectiveness of our design in both pre-training and fine-tuning. ## 2 Preliminary In this section, we clarify the widely-applied assumptions and notations in denoising pre-training and introduce the coordinate denoising method. Figure 1: An illustration of the anisotropy of molecular structures. In low-energy conformations of aspirin, the structures of the benzene ring and the carbon–oxygen double bonds are almost fixed, while some single bonds can rotate flexibly. **Boltzmann Distribution.** From prior knowledge in statistical physics, the occurrence probability of molecular conformations is described by the Boltzmann distribution (Boltzmann, 1868), \(p_{physical}(\tilde{x})\propto exp(-E_{physical}(\tilde{x}))\), where \(E_{physical}(\tilde{x})\) is the (potential) energy function, \(\tilde{x}\in\mathbb{R}^{3N}\) is the position of the atoms, i.e. the conformation, and \(N\) is the number of atoms in the molecule. More details are in Appendix C.1. **Gaussian Assumption.** The goal is to learn the molecular force field \(-\nabla_{\tilde{x}}E(\tilde{x})\). From the Boltzmann distribution, we have \(\nabla_{\tilde{x}}\log p(\tilde{x})=-\nabla_{\tilde{x}}E(\tilde{x})\), where \(\nabla_{\tilde{x}}\log p(\tilde{x})\) is the score function of the conformation \(\tilde{x}\). However, both the energy function \(E_{physical}\) and the distribution \(p_{physical}\) are unknown, and we only have access to a set of \(n\) equilibrium conformations \(x_{1},\cdots,x_{n}\) during pre-training, which are local minima of the energy and local maxima of the probability distribution.
Accordingly, the conformation distribution can be approximated by a mixture of Gaussians centered at the equilibriums (Zaidi et al., 2022): \[p_{physical}(\tilde{x})\approx p(\tilde{x})=\sum_{i=1}^{n}p_{N}(\tilde{x}|x_{i})p_{0}(x_{i}), \tag{1}\] where \(p_{N}(\tilde{x}|x_{i})\sim\mathcal{N}(x_{i},\Sigma),i=1,\cdots,n\) are Gaussian distributions, \(n\) is the number of equilibriums, \(\tilde{x}\in\mathbb{R}^{3N}\) is any conformation of the molecule, and \(p_{0}\) is the probability of the equilibriums. Then the approximate energy function is \(E(\tilde{x})\approx E_{physical}(\tilde{x})\), which satisfies \(p(\tilde{x})\propto exp(-E(\tilde{x}))\). The Gaussian mixture degenerates into a single Gaussian distribution when the equilibrium is unique, and this is the case in our pre-training dataset. It is worth noting that the existing methods adopt a Gaussian distribution with isotropic diagonal covariance \(\Sigma=\tau^{2}I_{3N}\), leading to an isotropic quadratic energy function. However, this is not the case in the real world. This is exactly one of our motivations to propose a new method in section 3 that provides an anisotropic covariance matrix which better fits \(p_{physical}\). **Molecular Force Field Learning.** It can be proved that denoising is an optimizing objective equivalent to learning the approximate force field under the assumptions above. \[\begin{split}&E_{p(\tilde{x})}\|GNN_{\theta}(\tilde{x})-(-\nabla_{\tilde{x}}E(\tilde{x}))\|^{2}\quad\text{(a)}\\ =&E_{p(\tilde{x})}\|GNN_{\theta}(\tilde{x})-\nabla_{\tilde{x}}\log p(\tilde{x})\|^{2}\quad\text{(b)}\\ =&E_{p(\tilde{x}|x_{i})p(x_{i})}\|GNN_{\theta}(\tilde{x})-\nabla_{\tilde{x}}\log p(\tilde{x}|x_{i})\|^{2}+T\quad\text{(c)}\\ =&E_{p(\tilde{x}|x_{i})p(x_{i})}\|GNN_{\theta}(\tilde{x})-\frac{x_{i}-\tilde{x}}{\tau^{2}}\|^{2}+T\quad\text{(d)},\end{split} \tag{2}\] where \(GNN_{\theta}(\tilde{x})\) denotes a graph neural network with parameters \(\theta\) which takes the conformation \(\tilde{x}\) as input and returns node-level noise predictions, \(T\) is a constant independent of \(\theta\), and \(-\nabla_{\tilde{x}}E(\tilde{x})\) is referred to as the molecular force field, indicating the force on each atom. In (2), the first equation uses the Boltzmann distribution; the second equation is proved in Vincent (2011) and Proposition A.8, showing that training a neural network to estimate the score function (b) is equivalent to perturbing the data with a pre-specified noise \(p(\tilde{x}|x_{i})\) and training a neural network to estimate the conditional score function (c). The first two equations hold for any distribution \(p(\tilde{x})\), while the last equation employs the Gaussian assumption with an isotropic diagonal covariance \(\Sigma=\tau^{2}I_{3N}\). Since the coefficient \(-\frac{1}{\tau^{2}}\) does not rely on the input \(\tilde{x}\), it can be absorbed into \(GNN_{\theta}\) (Zaidi et al., 2022). So we conclude that the typical denoising loss and the force field fitting loss are equivalent, i.e. \(\min_{\theta}E_{p(\tilde{x}|x_{i})p(x_{i})}\|GNN_{\theta}(\tilde{x})-(\tilde{x}-x_{i})\|^{2}\simeq\min_{\theta}E_{p(\tilde{x})}\|GNN_{\theta}(\tilde{x})-(-\nabla_{\tilde{x}}E(\tilde{x}))\|^{2}\), where \(\simeq\) denotes equivalent optimization objectives for the GNN. This proof helps to comprehend the content in sections 3.2 and 3.3. More details are in Appendix C.2.
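As a concrete rendering of the equivalence in Eq. (2), a single coordinate-denoising pre-training step looks roughly like the following PyTorch sketch; the `gnn` interface, shapes, and the value of `tau` are our own placeholders, not the paper's code.

```python
import torch

def coordinate_denoising_step(gnn, pos_eq, tau=0.04):
    """One pre-training step of standard coordinate denoising.
    pos_eq: equilibrium atom coordinates x_i, shape [N, 3].
    The GNN is trained to predict the added noise (x_tilde - x_i),
    which by Eq. (2) is equivalent, up to a constant factor, to
    fitting the force field at the noisy structure."""
    noise = tau * torch.randn_like(pos_eq)   # x_tilde - x_i ~ N(0, tau^2 I)
    pos_noisy = pos_eq + noise
    pred = gnn(pos_noisy)                    # node-level noise prediction, [N, 3]
    return ((pred - noise) ** 2).sum(dim=-1).mean()

# Example with a trivial stand-in "GNN":
gnn = torch.nn.Linear(3, 3)
loss = coordinate_denoising_step(gnn, torch.randn(5, 3))
loss.backward()
```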
## 3 Method In this section, we clarify the two challenges faced by traditional coordinate denoising in detail and elaborate on how we tackle them by designing dihedral angle noise and hybrid noise in section 3.1. Then, in section 3.2, we provide a mathematical description of the two types of noise and explain why the force field interpretation does not hold for directly denoising the hybrid noise. Finally, to eliminate this limitation, a new kind of denoising task is proposed in section 3.3. ### Hybrid Noise In fact, the fundamental cause of the two challenges is the inadequate distribution assumption. The distribution in existing denoising methods fails to capture an important molecular characteristic. To be specific, for 3D molecular modeling, it is essential to notice that molecules have rigid parts and flexible parts. The structures of the rigid parts, such as rings, double and triple bonds, are almost fixed, whereas some single bonds can flexibly rotate. In other words, a small coordinate perturbation on the rigid parts brings about a high-energy conformation, while altering the dihedral angles of the rotatable bonds does not cause sharp energy changes. For convenience, we refer to this rule as the anisotropy of molecules, or chemical constraints (Stark et al., 2022). Now we introduce this important chemical knowledge into the denoising framework. #### 3.1.1 Enlarging Sampling Coverage As briefly discussed in section 1, unless the noise scale is very small, isotropic coordinate noise will generate structures that violate the chemical constraints and thus are ineffective for downstream tasks. However, a small noise scale hinders sampling more structures with low energy. Meanwhile, it is hard to mathematically define a kind of noise that both satisfies the chemical constraints and can be easily implemented in the denoising framework. Inspired by a technique utilized in the molecular docking (Meng et al., 2011; Stark et al., 2022b) and generation (Wang et al., 2022a; Jing et al., 2022) communities, which searches low-energy structures by generating the dihedral angles of the rotatable bonds, we propose dihedral angle noise that naturally obeys the chemical constraints. In chemistry, a dihedral angle refers to the angle formed by the intersection of two half-planes through two sets of three atoms, where two of the atoms are shared between the two sets. Specifically, we search all the rotatable single bonds in the molecule and perturb the corresponding dihedral angles by Gaussian noise. These operations can be efficiently performed using RDKit, a fast cheminformatics tool (Landrum et al., 2013; Riniker and Landrum, 2015). Therefore, we can adjust the noise scale without generating invalid structures, enlarging the sampling coverage. In addition, since the dihedral angle noise can generate structures with low energy, adding dihedral angle noise can also be viewed as data augmentation.
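A minimal RDKit sketch of this perturbation is given below. It is our illustration of the general recipe (identify rotatable bonds, jitter each dihedral with Gaussian noise); the neighbor-picking, the aspirin example, and the noise scale are our choices, with \(\sigma=2\) borrowed from the experimental setting reported later.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms
from rdkit.Chem.Lipinski import RotatableBondSmarts

def add_dihedral_noise(mol, sigma=2.0, seed=0):
    """Perturb the dihedral angle of every rotatable bond by
    Gaussian noise (sigma in radians), in place on conformer 0."""
    rng = np.random.default_rng(seed)
    conf = mol.GetConformer()
    for j, k in mol.GetSubstructMatches(RotatableBondSmarts):
        # Pick one neighbor on each side of the rotatable bond j-k
        i = next(a.GetIdx() for a in mol.GetAtomWithIdx(j).GetNeighbors()
                 if a.GetIdx() != k)
        l = next(a.GetIdx() for a in mol.GetAtomWithIdx(k).GetNeighbors()
                 if a.GetIdx() != j)
        angle = rdMolTransforms.GetDihedralRad(conf, i, j, k, l)
        rdMolTransforms.SetDihedralRad(conf, i, j, k, l,
                                       angle + rng.normal(0.0, sigma))
    return mol

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
AllChem.EmbedMolecule(mol, randomSeed=0)   # generate an initial 3D conformer
add_dihedral_noise(mol, sigma=2.0)
```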
#### 3.1.2 Approximating Anisotropic Force Field Since perturbing different parts of the molecular structure can cause varying scales of effect on the energy, we assume the conformation distribution \(p(\tilde{x})\) has an anisotropic covariance, i.e. the slope of the energy function is not the same in all directions around the local minimum. Intuitively, the energy function should be sharp in the direction of perturbing the positions of atoms in the rigid parts, while smooth in that of the flexible parts. Correspondingly, the covariance of the noise should be small on rigid parts and large on flexible parts. Following the method in section 3.1.1, we perturb the flexible parts by turning the dihedral angles of the rotatable bonds and then perturb the whole molecule by a small level of coordinate noise, resulting in the hybrid noise, as shown in Figure 2a. In this way, the covariance of the hybrid noise is larger for the flexible parts and smaller for the rigid parts, leading to an anisotropic conformation distribution and correspondingly an anisotropic energy function that meets the chemical constraints. Therefore, the approximate conformation distribution of the hybrid noise is more accurate than that of traditional coordinate noise, especially after carefully tuning the noise scales. Consequently, the approximate force field corresponding to the hybrid noise is more accurate than that of traditional coordinate noise. This is supported by our experiment in Appendix B.2. Moreover, since the scale of the coordinate noise is kept small, the hybrid noise still maintains the sample validity mentioned in section 3.1.1. ### Difficulties to Learn the Force Field of Hybrid Noise A challenge with the hybrid noise is that direct denoising is not equivalent to learning a force field. To better understand this difficulty, we provide the coordinate form of the dihedral angle noise and the hybrid noise under certain conditions. Before providing theoretical results, we first clarify some notations. \(x_{i}\) denotes the equilibrium conformation, \(x_{a}\) denotes the conformation after being perturbed by the dihedral angle noise, \(\tilde{x}\) denotes the conformation after being perturbed by the hybrid noise, \(x_{i}\), \(x_{a}\), \(\tilde{x}\in\mathbb{R}^{3N}\), and \(N\) is the number of atoms in the molecule. If there are \(m\) rotatable bonds in the molecule, the dihedral angles of the rotatable bonds are represented by \(\psi=(\psi_{1},...,\psi_{m})\in[0,2\pi)^{m}\). Denote \(\psi_{i}\), \(\psi_{a}\), \(\tilde{\psi}\) as the dihedral angles of \(x_{i}\), \(x_{a}\), \(\tilde{x}\), respectively. The notations are consistent in Figure 2. Figure 2: An overview of our method Frad. **a**: During pre-training, the hybrid noise, combining dihedral angle noise and coordinate noise, is applied to the equilibrium conformation. **b**: The GNN is trained to predict the coordinate noise, which is a fraction of the hybrid noise. This process is named Frad (Fractional Denoising) and is proved to be equivalent to learning an approximate force field. **c**: We apply Frad during fine-tuning on the MD17 dataset. Specifically, fractional denoising is added as an auxiliary task, which is optimized simultaneously with the primary property prediction task. **Proposition 3.1** (Noise Type Transformation).: _Consider adding dihedral angle noise \(\Delta\psi\in[0,2\pi)^{m}\) on the input structure \(x_{i}\). The corresponding coordinate change \(\Delta x=x_{a}-x_{i}\in\mathbb{R}^{3N}\) is approximately linear with respect to the dihedral angle noise, when the scale of the dihedral angle noise is small:_ \[\|\Delta x-C\Delta\psi\|_{2}^{2}\leq\sum_{j=1}^{m}D_{j}\mathcal{E}(\Delta\psi_{j}) \tag{3}\] _where \(C\) is a \(3N\times m\) matrix that is dependent on the input conformation and \(\{D_{j},j=1\cdots m\}\) are constants dependent on the input conformation. \(\lim_{\Delta\psi_{j}\to 0}\mathcal{E}(\Delta\psi_{j})=0,\forall j=1\cdots m\), indicating that the linear approximation error is small when the scale of the dihedral angle noise is small._ The proposition in full form and its proof are in Appendix A.
Proposition 3.1 provides the approximate linear relationship between the two types of noise. When the scale of the dihedral angle noise is sufficiently small, \(\Delta x\) is sufficiently close to \(C\Delta\psi\), and we obtain the approximate conformation distributions of \(x_{a}\) and \(\tilde{x}\). **Proposition 3.2** (The Conformation Distribution Corresponding to Dihedral Angle Noise).: _If \(p(\psi_{a}|\psi_{i})\sim\mathcal{N}(\psi_{i},\sigma^{2}I_{m})\), i.e. Gaussian dihedral angle noise is added on the equilibrium conformation, then the approximate conformation distribution of the noisy structure \(x_{a}\) conditioned on the equilibrium structure \(x_{i}\) is \(p(x_{a}|x_{i})\sim\mathcal{N}(x_{i},\Sigma_{\sigma})\), where \(\Sigma_{\sigma}=\sigma^{2}CC^{T}\)._ **Proposition 3.3** (The Conformation Distribution Corresponding to Hybrid Noise).: _If \(p(\psi_{a}|\psi_{i})\sim\mathcal{N}(\psi_{i},\sigma^{2}I_{m})\) and \(p(\tilde{x}|x_{a})\sim\mathcal{N}(x_{a},\tau^{2}I_{3N})\), i.e. the hybrid noise is added on the equilibrium conformation, then the approximate conformation distribution of the noisy structure \(\tilde{x}\) conditioned on the equilibrium structure \(x_{i}\) is \(p(\tilde{x}|x_{i})\sim\mathcal{N}(x_{i},\Sigma_{\sigma,\tau})\), where \(\Sigma_{\sigma,\tau}=\tau^{2}I_{3N}+\sigma^{2}CC^{T}\)._ We summarize the conditional conformation distributions and the corresponding conditional score functions under different noise types in Table 1. Compared to traditional coordinate noise, the covariance of the hybrid noise is indeed anisotropic. In addition, the covariances in Propositions 3.2 and 3.3 are dependent on the input equilibrium structure \(x_{i}\). Substituting them into the third row of (2), we have \(\min_{\theta}E_{p(\tilde{x})}\|GNN_{\theta}(\tilde{x})-(-\nabla_{\tilde{x}}E(\tilde{x}))\|^{2}\simeq\min_{\theta}E_{p(\tilde{x},x_{i})}\|GNN_{\theta}(\tilde{x})-\Sigma^{-1}(x_{i}-\tilde{x})\|^{2}\). However, \(\min_{\theta}E_{p(\tilde{x},x_{i})}\|GNN_{\theta}(\tilde{x})-\Sigma^{-1}(x_{i}-\tilde{x})\|^{2}\neq\min_{\theta}E_{p(\tilde{x},x_{i})}\|GNN_{\theta}(\tilde{x})-(x_{i}-\tilde{x})\|^{2}\) for \(\Sigma=\Sigma_{\sigma}\) and \(\Sigma=\Sigma_{\sigma,\tau}\), i.e. neither denoising the dihedral angle noise nor denoising the hybrid noise is equivalent to learning the force field, because the coefficient \(\Sigma\) relies on the input conformation and cannot be absorbed into \(GNN_{\theta}\). Although the theories require the noise scale to be sufficiently small, this is enough to show the difficulty of learning the force field of the hybrid noise, because the equivalence should hold in all noise scale settings. ### Fractional Denoising Method From the discussion above, we conclude that the last equation in (2) does not hold because the covariance is no longer isotropic and depends on the input structure. Note that the problem lies in the dihedral part of the hybrid noise. In order to decouple the two types of noise, we design a clever denoising method, namely fractional denoising, that adds the hybrid noise while reconstructing only the coordinate part of the noise, as illustrated in Figure 2b. An exciting result is that the fractional denoising task is equivalent to learning the anisotropic force field of the hybrid noise. The result is summarized below. The proof is in Proposition A.7 in the appendix.
\begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Noise Type**} & **Conformation** & **Score Function** \\ & **Distribution**\(p(\cdot|x_{i})\) & \(\nabla\log p(\cdot|x_{i})\) \\ \hline Coordinate & \(x_{cd}\sim\mathcal{N}(x_{i},\tau^{2}I_{3N})\) & \(\frac{1}{\tau^{2}}(x_{i}-x_{cd})\) \\ Dihedral Angle & \(x_{a}\sim\mathcal{N}(x_{i},\Sigma_{\sigma})\) & \(\Sigma_{\sigma}^{-1}(x_{i}-x_{a})\) \\ Hybrid & \(\tilde{x}\sim\mathcal{N}(x_{i},\Sigma_{\sigma,\tau})\) & \(\Sigma_{\sigma,\tau}^{-1}(x_{i}-\tilde{x})\) \\ \hline \hline \end{tabular} \end{table} Table 1: The conditional distributions of the conformation perturbed by various noise types given the clean (equilibrium) conformation \(x_{i}\), and the corresponding score functions. **Proposition 3.4** (Fractional Denoising Score Matching).: _If \(p(\tilde{x}|x_{a})\sim\mathcal{N}(x_{a},\tau^{2}I_{3N})\) and \(p(x_{a}|x_{i})\) can be an arbitrary distribution, we have_ \[\begin{split}&E_{p(\tilde{x}|x_{a})p(x_{a}|x_{i})p(x_{i})}\|GNN_{\theta}(\tilde{x})-(\tilde{x}-x_{a})\|^{2}\\ &\simeq E_{p(\tilde{x})}\|GNN_{\theta}(\tilde{x})-\nabla_{\tilde{x}}\log p(\tilde{x})\|^{2},\end{split} \tag{4}\] _where \(\simeq\) denotes the equivalence as optimization objectives. \(\nabla_{\tilde{x}}\log p(\tilde{x})=-\nabla_{\tilde{x}}E(\tilde{x})\) is the anisotropic force field of the hybrid noise, because \(p(\tilde{x})=\sum_{i=1}^{n}p(\tilde{x}|x_{i})p_{0}(x_{i})\) and \(p(\tilde{x}|x_{i})\) is given by the hybrid noise and has an anisotropic covariance._ Proposition 3.4 indicates that the fractional denoising objective is equivalent to learning an anisotropic force field. Additionally, though we only denoise the coordinate part, Frad does not suffer from the sampling challenge because the samples \(\tilde{x}\) are generated by the hybrid noise. Besides, \(p(x_{a}|x_{i})\) can be an arbitrary distribution, leaving room for designing more accurate energy functions in future work. In particular, fractional denoising is implemented as follows: our model takes \(\tilde{x}\) as input and predicts \((\tilde{x}-x_{a})\) as the denoising target. We train our network \(GNN_{\theta}\) to minimize the pre-training loss function defined in Equation 5. For a complete description of our method's pipeline, please refer to Algorithm 1 in the appendix. \[\mathcal{L}_{pre-training}=E_{p(\tilde{x}|x_{a})p(x_{a}|x_{i})p(x_{i})}\|GNN_{\theta}(\tilde{x})-(\tilde{x}-x_{a})\|_{2}^{2}. \tag{5}\]
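Putting Eq. (5) together with the hybrid noise of section 3.1, one Frad pre-training step can be sketched as follows. This is PyTorch-style pseudocode on our part: `perturb_dihedrals` stands for an RDKit-style dihedral perturbation like the one sketched earlier, the `gnn` interface is a placeholder, and the scales \(\sigma=2\), \(\tau=0.04\) are borrowed from the settings reported in the experiments.

```python
import torch

def frad_step(gnn, pos_eq, perturb_dihedrals, sigma=2.0, tau=0.04):
    """One Frad pre-training step (Eq. 5).
    pos_eq: equilibrium coordinates x_i, shape [N, 3].
    perturb_dihedrals: callable applying Gaussian dihedral noise
    (scale sigma) to the rotatable bonds, returning x_a."""
    pos_a = perturb_dihedrals(pos_eq, sigma)       # x_a: dihedral part of the hybrid noise
    coord_noise = tau * torch.randn_like(pos_a)    # x_tilde - x_a ~ N(0, tau^2 I)
    pos_tilde = pos_a + coord_noise                # x_tilde: hybrid-noised input
    pred = gnn(pos_tilde)                          # node-level predictions, [N, 3]
    # Fractional denoising: regress only the coordinate part of the noise.
    return ((pred - coord_noise) ** 2).sum(dim=-1).mean()
```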
First, we decouple the denoising task and the downstream task to keep the input of the downstream task unperturbed. The two tasks are trained simultaneously by optimizing a weighted sum of their losses. Second, we substitute our hybrid noise and fractional denoising for the coordinate denoising in Noisy Nodes, so that the benefit of force field learning is also inherited. Specifically, Equation 6 defines the optimization goal of our modified Noisy Nodes. \[\begin{split}\mathcal{L}_{fine-tuning}=&\lambda_{p}\mathcal{L}_{PropertyPrediction}\\ &+\lambda_{n}\mathcal{L}_{FractionalDenoising}\end{split} \tag{6}\] where \(\mathcal{L}_{FractionalDenoising}=E_{p(\tilde{x}|x_{a})p(x_{a}|x_{i})p(x_{i})}\|\Delta x_{i}^{pred}-(\tilde{x}-x_{a})\|_{2}^{2}\), \(\Delta x_{i}^{pred}=\mathrm{NoiseHead}_{\theta_{n}}(GNN_{\theta}(\tilde{x}))\) denotes the prediction of the noise, \(\mathcal{L}_{PropertyPrediction}=PropertyPredictionLoss(y_{i}^{pred},y_{i})\), which can take different forms for various downstream tasks, \(y_{i}^{pred}=\mathrm{LabelHead}_{\theta_{t}}(GNN_{\theta}(x_{i}))\) represents the predicted label of \(x_{i}\), and \(\lambda_{p}\) and \(\lambda_{n}\) represent the loss weights of property prediction and Noisy Nodes respectively. The \(\mathrm{NoiseHead}_{\theta_{n}}\) module takes the representation of \(\tilde{x}\) as its input and generates a predicted node-level noise for each atom, while \(\mathrm{LabelHead}_{\theta_{t}}\) employs the representation of \(x_{i}\) to forecast the graph-level label of \(x_{i}\). The full optimization pipeline can be found in Algorithm 3. The ablation study in Section 4.3.2 validates that both modifications contribute to better performance. Our modified Noisy Nodes may further benefit tasks that are sensitive to the input conformations, such as ligand generation, affinity prediction, and so on. We leave this as future work. ## 4 Experiments ### Settings #### 4.1.1 Datasets We leverage a large-scale molecular dataset, PCQM4Mv2 (Nakata and Shimazaki, 2017), as our pre-training dataset. It contains 3.4 million organic molecules, each with one equilibrium conformation and one label calculated by density functional theory (DFT). We do not use the label, since our method is self-supervised. As for downstream tasks, we adopt two popular 3D molecular property prediction datasets: MD17 (Chmiela et al., 2017) and QM9 (Ruddigkeit et al., 2012; Ramakrishnan et al., 2014).
QM9 is a quantum chemistry dataset including geometric, energetic, electronic and thermodynamic properties for 134k stable small organic molecules made up of CHONF atoms. Each molecule has one equilibrium conformation and 12 labels calculated by density functional theory (DFT). The QM9 dataset is split into a training set with 110,000 samples and a validation set with 10,000 samples, leaving 10,831 samples for testing. This splitting is commonly applied in the literature. As usually done on QM9, we fine-tune a separate model for each of the 12 downstream tasks, starting from the same pre-trained model. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Models & \(\mu\) (D) & \(\alpha\) (\(a_{0}^{3}\)) & \(\epsilon_{HOMO}\) (meV) & \(\epsilon_{LUMO}\) (meV) & \(\Delta\epsilon\) (meV) & \(<R^{2}>\) (\(a_{0}^{2}\)) & ZPVE (meV) & \(U_{0}\) (meV) & \(U\) (meV) & \(H\) (meV) & \(G\) (meV) & \(C_{v}\) (\(\frac{cal}{mol\,K}\)) \\ \hline SchNet & 0.033 & 0.235 & 41.0 & 34.0 & 63.0 & 0.07 & 1.70 & 14.00 & 19.00 & 14.00 & 14.00 & 0.033 \\ E(n)-GNN & 0.029 & 0.071 & 29.0 & 25.0 & 48.0 & 0.11 & 1.55 & 11.00 & 12.00 & 12.00 & 12.00 & 0.031 \\ DimeNet++ & 0.030 & 0.043 & 24.6 & 19.5 & 32.6 & 0.33 & 1.21 & 6.32 & 6.28 & 6.53 & 7.56 & 0.023 \\ PaiNN & 0.012 & 0.045 & 27.6 & 20.4 & 45.7 & 0.07 & 1.28 & 5.85 & 5.83 & 5.98 & 7.35 & 0.024 \\ SphereNet & 0.027 & 0.047 & 23.6 & 18.9 & 32.3 & 0.29 & **1.120** & 6.26 & 7.33 & 6.40 & 8.00 & 0.022 \\ TorchMD-NET & 0.011 & 0.059 & 20.3 & 18.6 & 36.1 & **0.033** & 1.840 & 6.15 & 6.38 & 6.16 & 7.62 & 0.026 \\ \hline Transformer-M & 0.037 & 0.041 & 17.5 & 16.2 & **27.4** & 0.075 & 1.18 & 9.37 & 9.41 & 9.39 & 9.63 & 0.022 \\ SE(3)-DDM & 0.015 & 0.046 & 23.5 & 19.5 & 40.2 & 0.122 & 1.31 & 6.92 & 6.99 & 7.09 & 7.65 & 0.024 \\ 3D-EMGP & 0.020 & 0.057 & 21.3 & 18.2 & 37.1 & 0.092 & 1.38 & 8.60 & 8.60 & 8.70 & 9.30 & 0.026 \\ DP-TorchMD-NET (\(\tau=0.04\)) & 0.012 & 0.0517 & 17.7 & 14.3 & 31.8 & 0.4496 & 1.71 & 6.57 & 6.11 & 6.45 & 6.91 & **0.020** \\ \hline Frad (\(\sigma=2,\tau=0.04\)) & **0.010** & **0.0374** & **15.3** & **13.7** & 27.8 & 0.3419 & 1.418 & **5.33** & **5.62** & **5.55** & **6.19** & **0.020** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance (MAE, lower is better) on QM9. The best results are in bold. MD17 is a dataset of molecular dynamics trajectories, containing 8 small organic molecules with conformations, total energy and force labels computed by an electronic structure method. For each molecule, 150k to nearly 1M conformations are provided. Therefore, compared to QM9, the property prediction task of MD17 is more sensitive to the input conformations.
We note that the force prediction task is more discriminative and widely used than the energy prediction task, so we choose force prediction as the downstream task. Regarding data splitting, the approaches diverge between large (9500) and small (950 or 1000) training set sizes. As the size of the training dataset affects force prediction significantly, we perform Frad with both splittings for fair comparisons. More settings are summarized in Appendix B.5. #### 4.1.2 Baselines In terms of 3D pre-training approaches, our baselines cover the current SOTA methods known to us, including DP-TorchMD-NET (Zaidi et al., 2022), 3D-EMGP (Jiao et al., 2022), SE(3)-DDM (Liu et al., 2022), and Transformer-M (Luo et al., 2022). DP-TorchMD-NET is the baseline we are most interested in, because it is a typical coordinate denoising pre-training method and shares the same backbone with Frad, so their relative performance directly reflects the comparison between coordinate denoising and fractional denoising. As equivariant denoising methods, 3D-EMGP and SE(3)-DDM are also important baselines for judging whether the prior knowledge we incorporate, and the way we incorporate it, are better. As for Transformer-M, it is a competitive model consisting of denoising and energy prediction pre-training tasks. We exclude Uni-mol (Zhou et al., 2023) and ChemRL-GEM (Fang et al., 2021) since they only provide the average performance of 3 energy tasks in QM9. We also adopt representative approaches designed for property prediction to test our ability as a property prediction model. These approaches are not pre-trained and comprise TorchMD-NET (Thölke and De Fabritiis, 2022), SchNet (Schütt et al., 2018), E(n)-GNN (Satorras et al., 2021), DimeNet (Gasteiger et al., 2020), DimeNet++ (Klicpera et al., 2020), SphereNet (Liu et al., 2022), and PaiNN (Schütt et al., 2021). Among them, we employ TorchMD-NET as our backbone, which is an equivariant Transformer architecture for 3D inputs. For a fair comparison with coordinate denoising, we use the publicly available code from Zaidi et al. (2022) to produce results for DP-TorchMD-NET. The result of TorchMD-NET with 9500 training samples of MD17 is reported by Jiao et al. (2022). Other results are taken from the referred papers. ### Main Experimental Result #### 4.2.1 Results on QM9 In this section, we evaluate the models on QM9 and verify whether Frad can consistently achieve competitive results.
The performance is measured by mean absolute error (MAE) for each property, and the results are summarized in Table 2. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Training & Models & Aspirin & Benzene & Ethanol & Malonaldehyde & Naphthalene & Salicylic Acid & Toluene & Uracil \\ \hline 9500 & TorchMD-NET & 0.1216 & 0.1479 & 0.0492 & 0.0695 & 0.0390 & 0.0655 & 0.0393 & 0.0484 \\ & 3D-EMGP & 0.1560 & 0.1648 & 0.0389 & 0.0737 & 0.0829 & 0.1187 & 0.0619 & 0.0773 \\ & 3D-EMGP (TorchMD-NET) & 0.1124 & 0.1417 & 0.0445 & 0.0618 & 0.0352 & 0.0586 & 0.0385 & 0.0477 \\ & DP-TorchMD-NET (\(\tau=0.04\)) & 0.0920 & 0.1397 & 0.0402 & 0.0661 & 0.0544 & 0.0790 & 0.0495 & 0.0507 \\ & Frad (\(\sigma=2,\tau=0.04\)) & **0.0680** & 0.1606 & **0.0332** & **0.0427** & **0.0277** & **0.0410** & **0.0305** & **0.0323** \\ \hline 1000 & SphereNet & 0.430 & 0.178 & 0.208 & 0.340 & 0.178 & 0.360 & 0.155 & 0.267 \\ & SchNet & 1.35 & 0.31 & 0.39 & 0.66 & 0.58 & 0.85 & 0.57 & 0.56 \\ & DimeNet & 0.499 & 0.187 & 0.230 & 0.383 & 0.215 & 0.374 & 0.216 & 0.301 \\ & SE(3)-DDM & 0.453 & **0.051** & 0.166 & 0.288 & 0.129 & 0.266 & 0.122 & 0.183 \\ \hline 950 & PaiNN & 0.338 & 0.052* & 0.224 & 0.319 & 0.077 & 0.195 & 0.094 & 0.139 \\ & TorchMD-NET & 0.2450 & 0.2187 & 0.1067 & 0.1667 & 0.0593 & 0.1284 & 0.0644 & 0.0887 \\ & Frad (\(\sigma=2,\tau=0.04\)) & **0.2087** & 0.1994 & **0.0910** & **0.1415** & **0.0530** & **0.1081** & **0.0540** & **0.0760** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance (MAE, lower is better) on MD17 force prediction. The best results are in bold. *This result is reported by SE(3)-DDM; PaiNN does not provide a result on Benzene. In general, we achieve a new state-of-the-art for 9 out of 12 targets. The models in the upper half of the table are property prediction baselines without pre-training. We exceed them on most of the tasks. It is worth mentioning that we make remarkable improvements over the backbone TorchMD-NET on 11 targets, indicating the effectiveness of our method. As for the outlier \(<R^{2}>\), we observe the same phenomenon in DP-TorchMD-NET. We speculate that this is because the optimal noise scale of \(<R^{2}>\) is different from that of the other targets. We also have an evident advantage over the denoising pre-training methods in the lower half of the table. In particular, our Frad matches or surpasses the results of the coordinate denoising approach DP-TorchMD-NET on all 12 tasks, revealing that chemical constraints cannot be neglected in denoising. Here DP-TorchMD-NET is trained with the hyperparameters in the code of Zaidi et al. (2022). A comparison between coordinate denoising and Frad under strictly aligned settings is in Section 4.3.1. #### 4.2.2 Results on MD17 Compared with QM9, tasks in MD17 are more sensitive to molecular geometry and contain nonequilibrium conformations, which bring new challenges to the models. Theoretically, denoising can directly benefit downstream force learning, since it has learnt an approximate force field as a reference. As we expected, Frad achieves a new SOTA; the results are in Table 3. In both large and small training data scenarios, Frad outperforms the corresponding pre-trained and non-pre-trained baselines on 7 out of 8 molecules.
Especially when comparing with 3D-EMGP (TorchMD-NET) and DP-TorchMD-NET, which utilize the same backbone as ours, our superiority is evident, showing the necessity of correcting denoising methods with chemical constraints. Regarding Benzene, we observe overfitting when fine-tuning Frad, which is not found for the other molecules. This may be caused by the relatively fixed structure of benzene, leading to low-dimensional features which are easy to overfit. In addition, we can see from the table that the best result on Benzene, achieved by SE(3)-DDM, may be mainly attributed to the backbone PaiNN. Correspondingly, the inferior performance of Frad may come from the backbone TorchMD-NET rather than from denoising. ### Ablation Study The Frad technique can be applied in the pre-training phase as a training target and in the fine-tuning phase via Noisy Nodes. How much does each part contribute to the final result? Our ablation study validates each part respectively. #### 4.3.1 Frad in Pre-training To verify Frad as an effective pre-training target, we evaluate Frad and coordinate denoising on 6 tasks in QM9. The settings for the two approaches, including hyperparameters for optimization, network structure and Noisy Nodes, are strictly aligned in each task. The results are displayed in Table 4. Frad surpasses coordinate denoising on all 6 tasks, indicating the significance of chemical constraints in force field learning. Note that QM9 contains multiple categories of equilibrium properties, including thermodynamic properties, the spatial distribution of electrons and the states of the electrons. We speculate that accurate force field learning can not only assist energy prediction, but may enhance atomic charge prediction and its related properties as well. #### 4.3.2 Frad in Fine-tuning Next, to validate our improvements on Noisy Nodes, we use the same model pre-trained by Frad (\(\sigma=2\), \(\tau=0.04\)) and fine-tune it on the Aspirin task in MD17 with distinct Noisy Nodes settings. The results are in Table 5. The analysis is threefold. Settings 2-5 vs. setting 1: the traditional Noisy Nodes fails to converge, while our modifications fix the problem. Settings 3-4 vs. setting 2: setting 3 can converge because the dihedral angle noise has less influence on the energy. Additionally, decoupling the inputs of the different tasks ensures an unperturbed input for property prediction and fundamentally corrects the mapping, allowing setting 4 to work effectively. Setting 5 vs. setting 4: fractional denoising further promotes the performance of Noisy Nodes. Combined with the experiments in Section B.2, we can infer that learning a more accurate force field indeed contributes to downstream tasks. ## 5 Conclusion This paper is concerned with the coordinate denoising approach for 3D molecular pre-training. We find that existing coordinate denoising methods have two major limitations, i.e. low sampling coverage and an isotropic force field, which prevent the current methods from learning an accurate force field. To tackle these challenges, we propose a novel denoising method, namely Frad.
By introducing hybrid noise on both dihedral angles and coordinates, Frad has the ability to sample more low-energy conformations. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \(\mu\) (D) & \(\alpha\) (\(a_{0}^{3}\)) & \(\epsilon_{HOMO}\) (meV) & \(\epsilon_{LUMO}\) (meV) & \(\Delta\epsilon\) (meV) & \(<R^{2}>\) (\(a_{0}^{2}\)) \\ \hline \(\tau=0.04\) & 0.0120 & 0.0517 & 17.7 & 14.3 & 31.8 & 0.4496 \\ \hline \(\sigma=2\), \(\tau=0.04\) & **0.0118** & **0.0486** & **15.3** & **13.7** & **27.8** & **0.4265** \\ \hline \hline \end{tabular} \end{table} Table 4: The performance (MAE) of coordinate denoising and Frad on QM9. The top results are in bold. Besides, by denoising only the coordinate noise, Frad is proven to be equivalent to learning a more reasonable anisotropic force field. Consequently, Frad achieves a new SOTA on QM9 and MD17 compared with existing coordinate denoising methods. Ablation studies show the superiority of Frad over coordinate denoising in terms of both pre-training and fine-tuning. Our work suggests several potential directions. Firstly, Proposition 3.4 holds without limiting the angle noise type, suggesting fractional denoising could be a general technique worth in-depth investigation. Secondly, another point of view for understanding Frad is that the dihedral angle noise is a data augmentation strategy for searching more low-energy structures, while the fractional denoising method aims to learn an effective molecular representation insensitive to the coordinate noise. This perspective may inspire new pre-training methods based on both contrastive learning and denoising. Thirdly, how to design a denoising method which better captures the characteristics of molecules to learn a more accurate force field is still an open question. ## Acknowledgements This work is supported by National Key R&D Program of China No.2021YFF1201600, Vanke Special Fund for Public Health and Health Discipline Development, Tsinghua University (NO.20221080053) and Beijing Academy of Artificial Intelligence (BAAI). We acknowledge Cheng Fan, Han Tang and Bo Qiang for chemical knowledge consultation.
2310.02648
Long-Term Dynamic Window Approach for Kinodynamic Local Planning in Static and Crowd Environments
Local planning for a differential wheeled robot is designed to generate kinodynamically feasible actions that guide the robot to a goal position along the navigation path while avoiding obstacles. Reactive, predictive, and learning-based methods are widely used in local planning. However, few of them can fit static and crowd environments while satisfying kinodynamic constraints simultaneously. To solve this problem, we propose a novel local planning method. The method applies a long-term dynamic window approach to generate an initial trajectory and then optimizes it with graph optimization. The method can plan actions under the robot's kinodynamic constraints in real time while allowing the generated actions to be safer and less jittery. Experimental results show that the proposed method adapts well to crowd and static environments and outperforms most SOTA approaches.
Zhiqiang Jian, Songyi Zhang, Lingfeng Sun, Wei Zhan, Nanning Zheng, Masayoshi Tomizuka
2023-10-04T08:13:44Z
http://arxiv.org/abs/2310.02648v1
# Long-Term Dynamic Window Approach for Kinodynamic Local Planning in Static and Crowd Environments ###### Abstract Local planning for a differential wheeled robot is designed to generate kinodynamically feasible actions that guide the robot to a goal position along the navigation path while avoiding obstacles. Reactive, predictive, and learning-based methods are widely used in local planning. However, few of them can fit static and crowd environments while satisfying kinodynamic constraints simultaneously. To solve this problem, we propose a novel local planning method. The method applies a long-term dynamic window approach to generate an initial trajectory and then optimizes it with graph optimization. The method can plan actions under the robot's kinodynamic constraints in real time while allowing the generated actions to be safer and less jittery. Experimental results show that the proposed method adapts well to crowd and static environments and outperforms most state-of-the-art approaches. ## I Introduction Differential wheeled robot planning can be achieved by global and local planners. The global planner generates a navigation path to a goal point. The local planner continuously generates actions that guide the robot to follow the navigation path until it reaches the goal point. During this process, the actions generated by the local planner must meet the kinodynamic constraints and keep the robot safe and jitterless. Tab. I shows several key features that local planning methods should satisfy for application to differential wheeled robots. First, planning methods need to obey the differential constraints and the acceleration limitation. Then, these methods should fit static environments, which means they need to be able to deal with irregular borders. Meanwhile, they should also fit crowd environments, which means they should be able to interact with multiple moving agents. Moreover, planning methods should be long-sighted, which means they need to produce a long-horizon planning result. Finally, planning methods should be able to track the navigation path to ensure that the robot converges to the destination. However, as shown in Tab. I, most current local planning methods can only meet some of the above-mentioned features. Therefore, we propose a novel local planning method satisfying all the features in Tab. I. First, our method constructs time-varying distance fields [1] from the agents and the occupancy grid map. Then, a Long-Term Dynamic Window Approach (LT-DWA) is proposed to generate a long-horizon state-cost tree. Finally, the path with the least cost in the tree is selected and optimized using the Elastic-Band Model Predictive Control (EB-MPC) method to obtain the planned trajectory. In conclusion, this paper has the following contributions: * The LT-DWA is proposed to generate the initial state sequence. * Time-varying distance fields are combined with the MPC [10] to formulate the planning problem, and the EB method [11] is applied to solve it. * The proposed local planner is open-sourced1. It can be applied to static and crowd environments and outperforms current planning methods. Footnote 1: [https://github.com/flztiii/LT_DWA](https://github.com/flztiii/LT_DWA) ## II Related Work The local planner can be achieved by reactive, predictive, and learning-based methods. Reactive methods directly build the mapping from the robot's current state to action, including the Dynamic Window Approach (DWA) [2], Reciprocal Velocity Obstacles (RVO) [12], and RouteGAN [13].
Predictive methods generate a continuous sequence of actions based on the robot's current state and predicted future conditions. For example, the Timed Elastic Band (TEB) method proposed by Rosmann _et al._[6], the Dynamic Channel (DC) method proposed by Cao _et al._[8], and the Timed-ESDF method proposed by Zhu _et al._[9] are all predictive methods. Learning-based methods use large amounts of data to map from the robot's current state to its action through imitation learning or reinforcement learning. Learning-based methods such as SARL [14], RGL [15], DSRNN [16], and ESA [4] show state-of-the-art performance in crowd environments by considering the crowd's interaction [17, 18]. **MPC-formed planning:** Modeling the local planning problem in MPC form and then solving the problem through optimization is an effective approach [19, 10, 20]. The mathematical models established by these methods are often non-convex, so the initial guess significantly influences the planning result. However, obtaining a feasible initial guess in a crowd environment is challenging. For example, Brito _et al._'s method [10] uses the expansion of the previous planning result as the initial guess for the next planning episode, which makes it difficult to guarantee the feasibility of the initial guess in a crowd environment. Therefore, in our method, we introduce a robust method for generating a feasible initial guess and combine it with the MPC-formed planning method. **DWA methods:** The DWA has many applications as a planning method that considers kinodynamic and environmental constraints simultaneously. Its fundamental idea is to sample in the feasible control space and then evaluate in the state space. Brock _et al._[21] improve the DWA's evaluation function. Ogren _et al._[22] combine the DWA with global planning and prove the global convergence of their method. However, none of their methods can address the short-sightedness of the DWA in a single planning episode. A solution is to use a multi-step DWA, but another problem then arises: the exponential expansion of the state space. If the exponential expansion of the state space of the multi-step DWA can be solved, it can be applied to generating the feasible initial guess, which is what our method does. **Distance field:** The distance field adequately represents the environment and is widely used by various planners [1, 23, 24]. Oleynikova _et al._'s method [23] introduces the distance field in the static environment. Chen _et al._'s and Ngo _et al._'s methods [1, 24] construct the distance field in the crowd environment and prove its effectiveness. Therefore, we introduce the distance field into the MPC-formed planning method to improve the effectiveness of the planner. ## III Problem Formulation The state and system dynamics of the robot are defined as follows. \[\begin{split}\mathbf{s}&=(x,y,\theta,v,\omega)^{T},\quad\mathbf{u}=(a_{v},a_{\omega})^{T},\\ \dot{\mathbf{s}}&=f(t,\mathbf{s},\mathbf{u})=(v\cos\theta,v\sin\theta,\omega,a_{v},a_{\omega})^{T},\end{split} \tag{1}\] where, \(x\) and \(y\) indicate the position in 2D space, \(\theta\) is the orientation, \(v\) and \(\omega\) are the linear and angular velocities, and \(a_{v}\) and \(a_{\omega}\) are the linear and angular accelerations. The robot's radius is defined as \(R\). The robot's state at time \(t_{0}\) is defined as \(\mathbf{s}_{\mathrm{init}}\).
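For concreteness, a minimal sketch of one forward-Euler propagation step of the dynamics in Eq. 1 is given below; the function name and array layout are illustrative only. This is the discrete transition that reappears as the constraint \(\mathbf{s}_{i+1}=\mathbf{s}_{i}+\Delta Tf(\cdot)\) in Eq. 2 below.

```python
import numpy as np

def propagate(s, u, dt):
    # s = (x, y, theta, v, omega), u = (a_v, a_omega); one Euler step of Eq. 1.
    x, y, theta, v, omega = s
    a_v, a_omega = u
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * omega,
                     v + dt * a_v,
                     omega + dt * a_omega])
```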
Since all the inputs can be transformed into the coordinate system with the robot state as the origin, without loss of generality, it can be assumed that \(\mathbf{s}_{\mathrm{init}}=\mathbf{0}^{T}\times v_{\mathrm{init}}\times w_{\mathrm{init}}\). The navigation path connecting the robot's current and goal points is defined as \(\mathcal{P}=\left\{\mathbf{p}_{p}\right\}_{p=0}^{P}\), where \(\mathbf{p}_{p}=(x_{\mathbf{p}_{p}},y_{\mathbf{p}_{p}},\theta_{\mathbf{p}_{p}})^{T}\) indicates the \(p_{\mathrm{th}}\) point's position and orientation on the navigation path. The agents are defined as \(\mathcal{O}=\left\{\mathbf{o}_{o}\right\}_{o=0}^{O}\), where \(\mathbf{o}_{o}=(x_{\mathbf{o}_{o}},y_{\mathbf{o}_{o}},vx_{\mathbf{o}_{o}},vy_{\mathbf{o}_{o}},r_{\mathbf{o}_{o}})^{T}\) indicates the \(o_{\mathrm{th}}\) agent's center position, velocity, and radius. The occupancy grid map is defined as \(\mathcal{B}=\left\{\mathbf{b}_{b}\right\}_{b=0}^{B}\), where \(\mathbf{b}_{b}=(x_{\mathbf{b}_{b}},y_{\mathbf{b}_{b}})^{T}\) indicates the \(b_{\mathrm{th}}\) occupied grid's position. In this way, the local planning problem can be described as follows. Local planning is to calculate the state sequence \(\mathbf{s}(t),t\in[t_{0},t_{0}+T]\) within a fixed time \(T\), so that \(\mathbf{s}(t)\) minimizes the cost function \(c(\mathbf{s}(t),\mathcal{P},\mathcal{O},\mathcal{B},T)\) while satisfying Eq. 1. For simplification, the problem is discretized in the time domain, where \(T\) is divided into \(N\) time segments of length \(\Delta T\). Then, \(\mathbf{s}(t)\) can be defined as \(\mathcal{S}=\left\{\mathbf{s}_{i}\right\}_{i=0}^{N}\) (\(\mathbf{s}_{i}=\mathbf{s}(t_{0}+i\Delta T)\)), and the local planning problem can be defined as follows. \[\begin{split}\min_{\mathcal{S}}& c(\mathcal{S},\mathcal{P},\mathcal{O},\mathcal{B})\\ \mathrm{s.t.}&\left\{\begin{array}{l}\mathbf{s}_{0}=\mathbf{0}^{T}\times v_{\mathrm{init}}\times w_{\mathrm{init}},\\ \mathbf{s}_{i+1}=\mathbf{s}_{i}+\Delta Tf(t_{0}+i\Delta T,\mathbf{s}_{i},\mathbf{u}_{i}),\\ \qquad\qquad\qquad\forall i\in[0,N-1],\\ \mathbf{s}_{i}\in\mathrm{SE}(2)\times[v_{\mathrm{min}},v_{\mathrm{max}}]\times[\omega_{\mathrm{min}},\omega_{\mathrm{max}}],\\ \mathbf{u}_{i}\in[a_{v}^{\mathrm{min}},a_{v}^{\mathrm{max}}]\times[a_{\omega}^{\mathrm{min}},a_{\omega}^{\mathrm{max}}],\end{array}\right.\end{split} \tag{2}\] where, the function \(f(\cdot)\) is the same as in Eq. 1, \(v_{\mathrm{min}}\) and \(v_{\mathrm{max}}\) are the robot's minimum and maximum linear velocities, \(\omega_{\mathrm{min}}\) and \(\omega_{\mathrm{max}}\) are the robot's minimum and maximum angular velocities, \(a_{v}^{\mathrm{min}}\) and \(a_{v}^{\mathrm{max}}\) are the robot's minimum and maximum linear accelerations, and \(a_{\omega}^{\mathrm{min}}\) and \(a_{\omega}^{\mathrm{max}}\) are the robot's minimum and maximum angular accelerations. In our method, the cost function \(c(\cdot)\) is the weighted summation of the collision risk cost \(c_{\mathrm{c}}(\cdot)\), the navigation following cost \(c_{\mathrm{n}}(\cdot)\), and the jitter cost \(c_{\mathrm{j}}(\cdot)\), which are defined in Section IV. The collision risk cost aims to improve the robot's safety, which means a lower collision rate and a longer distance to the borders. The navigation following cost penalizes the robot for deviating from the navigation path. The jitter cost aims to reduce the robot's jitter, which means a smaller change in the orientation, linear velocity, and angular velocity. ## IV Method ### _Framework_
The framework shown in Fig. 1 is proposed to solve the local planning problem. In step one (Section B), the local planner generates the time-varying distance fields \(\{d_{i}(\cdot)\}_{i=0}^{N}\) (\(d_{i}(\cdot):\mathbb{R}^{2}\rightarrow\mathbb{R}\) is a mapping from the Cartesian plane to the real numbers). Then, the planner calculates the reference navigation path \(\{\mathbf{p}_{i}\}_{i=0}^{N}\) (\(\mathbf{p}_{i}=(x_{\mathbf{p}_{i}},y_{\mathbf{p}_{i}},\theta_{\mathbf{p}_{i}})^{T}\in\mathrm{SE}(2)\)). \(\{\mathbf{p}_{i}\}_{i=0}^{N}\) and \(\{d_{i}(\cdot)\}_{i=0}^{N}\) are required in the following state cost and optimization objective function calculations. In step two (Section C), the planner applies the LT-DWA to generate an initial state sequence. In step three (Section D), the planner uses the EB-MPC method to optimize the initial state sequence. According to the optimized state sequence, the controller generates control commands. The local planner updates at a fixed frequency until the robot reaches the goal point. ### _Reference Navigation Path and Time-Varying Distance Fields_ The reference navigation path \(\{\mathbf{p}_{i}\}_{i=0}^{N}\) is obtained according to the navigation path \(\mathcal{P}\). The point on the navigation path which has the minimum Euclidean distance to the robot's current state \(\mathbf{0}^{T}\) on the Cartesian plane is selected as the reference navigation path's beginning point \(\mathbf{p}_{0}\). Then, the \(i_{\mathrm{th}}\) point \(\mathbf{p}_{i}\) on the reference navigation path is the point on the navigation path whose arc length to \(\mathbf{p}_{0}\) along the navigation path is \(i\Delta Tv_{\max}\max(\cos\theta_{\mathbf{p}_{0}},0)\), where \(\theta_{\mathbf{p}_{0}}\) is the orientation gap between \(\mathbf{p}_{0}\) and \(\mathbf{0}^{T}\). This expression expects that when the difference between the orientation of the robot and that of the navigation path is significant, the robot should adjust its orientation first. Otherwise, it should follow the navigation path at the maximum speed. The time-varying distance fields \(\{d_{i}(\cdot)\}_{i=0}^{N}\) are calculated according to the agents \(\mathcal{O}\) and the occupancy grid map \(\mathcal{B}\). The \(i_{\mathrm{th}}\) frame distance field \(d_{i}(\cdot)\) is determined by the agent distance field \(d_{i}^{\mathcal{O}}(\cdot)\) and the occupancy grid map distance field \(d_{i}^{\mathcal{B}}(\cdot)\), whose calculation is as follows. \[d_{i}(\cdot)=\max(w_{\mathrm{do}}d_{i}^{\mathcal{O}}(\cdot),w_{\mathrm{dlo}}d_{i}^{\mathcal{B}}(\cdot)),\] where, \(w_{\mathrm{do}}\) and \(w_{\mathrm{dlo}}\) are preset weights. \(d_{i}^{\mathcal{O}}(\cdot)\) is calculated as follows, referring to Chen _et al._'s method [1].
\[d_{i}^{\mathcal{O}}(x,y)=\max_{o\in[0,O]}\exp\!\left(-\left(\frac{l^{o}_{x}(x,y,i)^{2}}{2\sigma_{x}^{2}}+\frac{l^{o}_{y}(x,y,i)^{2}}{2\sigma_{y}^{2}}\right)\right),\] where, \[l^{o}(x,y,i)=\Big{\|}\begin{pmatrix}x\\ y\end{pmatrix}-\begin{pmatrix}x_{\mathbf{o}_{o}}+i\Delta Tvx_{\mathbf{o}_{o}}\\ y_{\mathbf{o}_{o}}+i\Delta Tvy_{\mathbf{o}_{o}}\end{pmatrix}\Big{\|},\] \[\alpha=\mathrm{atan}2\Big{(}\frac{y-y_{\mathbf{o}_{o}}-i\Delta Tvy_{\mathbf{o}_{o}}}{x-x_{\mathbf{o}_{o}}-i\Delta Tvx_{\mathbf{o}_{o}}}\Big{)}-\mathrm{atan}2\Big{(}\frac{vy_{\mathbf{o}_{o}}}{vx_{\mathbf{o}_{o}}}\Big{)},\] \[l^{o}_{x}(x,y,i)=l^{o}(x,y,i)\cos\alpha,\quad l^{o}_{y}(x,y,i)=l^{o}(x,y,i)\sin\alpha,\] \[\sigma_{x}=\left\{\begin{array}{ll}\frac{1}{3}\Big{(}r_{\mathbf{o}_{o}}+R+\eta\beta\Big{\|}\begin{pmatrix}vx_{\mathbf{o}_{o}}\\ vy_{\mathbf{o}_{o}}\end{pmatrix}\Big{\|}\Big{)},&-\frac{\pi}{2}<\alpha<\frac{\pi}{2},\\ \frac{1}{3}\big{(}r_{\mathbf{o}_{o}}+R+\eta\big{)},&\mathrm{else},\end{array}\right.\] \[\sigma_{y}=\frac{1}{3}(r_{\mathbf{o}_{o}}+R+\eta),\] where, \(\eta\) and \(\beta\) are preset parameters. As shown in Fig. 2, this distance field comprises multiple two-dimensional normal distributions with offsets. The normal distributions' centers are the \(i_{\mathrm{th}}\) frame agents' centers, their ranges are determined by \(\eta\), and their offsets are determined by the agents' velocities and \(\beta\). \(d_{i}^{\mathcal{B}}(\cdot)\) is calculated as follows. \[d_{i}^{\mathcal{B}}(x,y)=(\mathrm{Relu}(\eta-\delta))^{2},\] \[\delta=\min_{b}\mathrm{Relu}\Big{(}\Big{\|}\begin{pmatrix}x-x_{\mathbf{b}_{b}}\\ y-y_{\mathbf{b}_{b}}\end{pmatrix}\Big{\|}-R\Big{)},\quad\forall b\in[0,B],\] where, \(\eta\) is the same preset parameter. As shown in Fig. 2, this distance field is a quadratically decreasing function of the distance to the boundary of the occupancy grid map. The value of this distance field is zero if the distance to the boundary of the occupancy grid map is larger than \(\eta+R\). Fig. 1: This figure shows the local planner’s framework. The example of the time-varying distance fields is shown in the upper left. The example of the reference navigation path is shown in the bottom left, where the green line indicates the navigation path, the blue line indicates the reference navigation path, the colorful circles indicate agents, and the red arrows indicate the agents’ velocities. The example of the LT-DWA is shown at the bottom, where the left side indicates the expanded states in the different frames, the blue curves on the right side indicate the projection of the state-cost tree on the Cartesian plane, and the red curve indicates the selected state sequence. The example of EB-MPC is shown on the right, where the upper part indicates the connection relationship of a node in the graph optimization, and the green curve in the lower part indicates the optimized state sequence. Fig. 2: In this figure, the left part shows the scenario and the right part shows the first frame of its corresponding time-varying distance fields. The colorful circles indicate agents, the red arrows indicate agents’ velocities, and the black region indicates the occupancy grid map.
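As an illustration, here is a minimal sketch of evaluating the \(i_{\mathrm{th}}\) frame agent distance field \(d_{i}^{\mathcal{O}}\) at a query point, following the formulas above; the function signature is illustrative and not taken from the released implementation.

```python
import numpy as np

def agent_distance_field(x, y, agents, i, dT, R, eta, beta):
    # agents: iterable of (x_o, y_o, vx_o, vy_o, r_o); returns d_i^O(x, y).
    value = 0.0
    for xo, yo, vxo, vyo, ro in agents:
        cx, cy = xo + i * dT * vxo, yo + i * dT * vyo      # constant-velocity prediction
        dx, dy = x - cx, y - cy
        l = np.hypot(dx, dy)
        alpha = np.arctan2(dy, dx) - np.arctan2(vyo, vxo)  # angle w.r.t. agent velocity
        lx, ly = l * np.cos(alpha), l * np.sin(alpha)
        if -np.pi / 2 < alpha < np.pi / 2:                 # in front: elongate with speed
            sx = (ro + R + eta * beta * np.hypot(vxo, vyo)) / 3.0
        else:
            sx = (ro + R + eta) / 3.0
        sy = (ro + R + eta) / 3.0
        value = max(value, np.exp(-(lx**2 / (2 * sx**2) + ly**2 / (2 * sy**2))))
    return value
```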
### _Long-Term Dynamic Window Approach_ After obtaining the reference navigation path \(\{\mathbf{p}_{i}\}_{i=0}^{N}\) and the time-varying distance fields \(\{d_{i}(\cdot)\}_{i=0}^{N}\), the next step is to build a state-cost tree \(\mathcal{T}\) to obtain the initial state sequence \(\mathcal{S}_{\mathrm{init}}\) for the following optimization. The tree is a set of \(N\) layers, and the tree's \(i_{\mathrm{th}}\) layer \(\mathcal{T}_{i}\) is the set of \(i_{\mathrm{th}}\) frame nodes. Each node \(n\) has three attributes: state (State\([n]\)), cost (Cost\([n]\)), and parent node (Parent\([n]\)). The construction of the state-cost tree is shown in Alg. 1. ``` Input: Reference Navigation Path \(\{\mathbf{p}_{i}\}_{i=0}^{N}\), Time-Varying Distance Fields \(\{d_{i}(\cdot)\}_{i=0}^{N}\). Output: State-Cost Tree \(\mathcal{T}\). 1: Add Layer \(\mathcal{T}_{0}=\{\)Node(\(\mathbf{0}^{T}\), 0, null)\(\}\) into empty Tree \(\mathcal{T}\). 2: for \(i=1,2,\cdots,N\) do 3: \(\mathcal{T}_{i}\leftarrow\varnothing\). 4: for Node \(n\in\) Layer \(\mathcal{T}_{i-1}\) do 5: \(\mathcal{S}_{\mathrm{c}}\leftarrow\) expandStates(\(n\)). 6: for State \(\mathbf{s}\in\mathcal{S}_{\mathrm{c}}\) do 7: Push Node(\(\mathbf{s}\), 0, \(n\)) to Layer \(\mathcal{T}_{i}\). 8: if len(\(\mathcal{T}_{i}\)) \(>K^{\prime}\) then 9: \(\mathcal{T}_{i}\leftarrow\) voxelSampling(\(\mathcal{T}_{i}\)). 10: for Node \(n\in\) Layer \(\mathcal{T}_{i}\) do 11: Cost\([n]\) = Cost[Parent\([n]\)] + calcCost(State\([n]\), \(\mathbf{p}_{i}\), \(d_{i}(\cdot)\), \(i\)). 12: if \(\mathcal{T}_{i}\) is not \(\varnothing\) then 13: Add Layer \(\mathcal{T}_{i}\) into Tree \(\mathcal{T}\). 14: else 15: break ``` **Algorithm 1** Long-Term Dynamic Window Approach The root layer \(\mathcal{T}_{0}\) can be obtained according to the robot's current state. Then, \(\mathcal{T}_{i}\) is obtained as in the exemplary diagram in Fig. 3. All the nodes in the previous layer \(\mathcal{T}_{i-1}\) are traversed. For each node \(n\), state expansion is performed according to \(n\)'s state State\([n]\). In this way, an expanded state set \(\mathcal{S}_{\mathrm{c}}\) can be generated. The state expansion is achieved by the DWA. In detail, given a state \(\mathbf{s}\), the velocity space boundary of the states in the next frame can be determined according to the robot's velocity and acceleration limitations. The limited velocity space is uniformly sampled, and \(V\times V\) samples can be obtained. For each sample, an expanded state can be calculated according to Eq. 1. Collision-free states are selected from these expanded states, and they make up \(\mathcal{S}_{\mathrm{c}}\). A new node without cost is added to \(\mathcal{T}_{i}\) for each state in \(\mathcal{S}_{\mathrm{c}}\). When the states of all nodes in the previous frame have been expanded, the number of nodes in \(\mathcal{T}_{i}\) is significant. Assuming that the DWA can expand \(K\) states from one state each time, the time complexity of calculating a tree with \(N\) frames is \(O(K^{N})\). When \(N\) is large, this time complexity is undoubtedly unacceptable. In order to solve this problem, after using the DWA to expand all the states in the current frame, we perform voxel sampling to ensure that the total number of states in the next frame is always around \(K^{\prime}\). In this way, the time complexity is \(O(K^{\prime}N)\). The voxel sampling process is as follows.
The \(\mathrm{SE}(2)\) space boundary of all nodes in \(\mathcal{T}_{i}\) is calculated, and subsequently, the \(\mathrm{SE}(2)\) space within the boundary is voxelized into \(W\times W\times W\) voxels. Each node is located in one voxel. A node is randomly sampled in each voxel, and the sampling result is obtained. There are two reasons why voxelization is performed in the \(\mathrm{SE}(2)\) space instead of the state space. The first is to reduce the space dimension and achieve lower computational complexity. The second is that the attributes of the state in the \(\mathrm{SE}(2)\) space are more important than those in the velocity space: the former have a direct relationship to the robot's safety, while the latter only have an indirect relationship. Fig. 4 shows an example of the robot's state distribution and the sampled state distribution in the \(\mathrm{SE}(2)\) space at the end of the long horizon. It can be seen from the figure that the blue point distribution is consistent with the red point distribution. Therefore, the representation of the robot's \(\mathrm{SE}(2)\) properties at the horizon's end can be achieved using voxel sampling. After voxel sampling, the number of nodes in \(\mathcal{T}_{i}\) will be acceptable for real-time performance. At this time, the cost of each node \(n\) in \(\mathcal{T}_{i}\) can be calculated, which includes two parts. One is the cost of its parent node, and the other is its state \(\mathbf{s}\)'s cost. When calculating the cost of \(\mathbf{s}\), only the collision risk cost \(c_{\mathrm{c}}(\cdot)\) and the navigation following cost \(c_{\mathrm{n}}(\cdot)\) are considered. The reason is that \(c_{\mathrm{n}}(\cdot)\) and \(c_{\mathrm{c}}(\cdot)\) are related to \(\mathbf{s}\)'s \(\mathrm{SE}(2)\) space attributes, while the jitter cost \(c_{\mathrm{j}}(\cdot)\) is only related to the velocity space attributes. In the previous step, the sampling is performed in the \(\mathrm{SE}(2)\) space, so the calculation of \(c_{\mathrm{j}}(\cdot)\) does not make much sense here. Fig. 3: This figure shows an exemplary diagram of how to construct a layer of the tree. The diagram includes state expansion, voxel sampling, and cost calculation. Fig. 4: This figure shows an example of the robot’s state distribution and the sampled state distribution in the \(\mathrm{SE}(2)\) space at the end of the long horizon. The red points represent the robot’s state distribution, and the blue points represent the sampled state distribution using voxel sampling. In conclusion, given the \(i_{\mathrm{th}}\) frame point on the reference navigation path \(\mathbf{p}_{i}=(x_{\mathbf{p}_{i}},y_{\mathbf{p}_{i}},\theta_{\mathbf{p}_{i}})^{T}\) and the \(i_{\mathrm{th}}\) frame time-varying distance field \(d_{i}(\cdot)\), the cost of the \(i_{\mathrm{th}}\) frame state \(\mathbf{s}=(x_{\mathbf{s}},y_{\mathbf{s}},\theta_{\mathbf{s}},v_{\mathbf{s}},\omega_{\mathbf{s}})^{T}\) can be calculated as follows. \[\mathrm{calcCost}(\mathbf{s},\mathbf{p}_{i},d_{i}(\cdot),i)=\gamma^{i}(c_{\mathrm{c}}(\mathbf{s},d_{i}(\cdot))+c_{\mathrm{n}}(\mathbf{s},\mathbf{p}_{i})), \tag{3}\] \[c_{\mathrm{c}}(\mathbf{s},d_{i}(\cdot))=w_{\mathrm{c}}d_{i}(x_{\mathbf{s}},y_{\mathbf{s}}),\] \[c_{\mathrm{n}}(\mathbf{s},\mathbf{p}_{i})=w_{\mathrm{no}}c_{\mathrm{no}}(\mathbf{s},\mathbf{p}_{i})+w_{\mathrm{na}}c_{\mathrm{na}}(\mathbf{s},\mathbf{p}_{i})+w_{\mathrm{nt}}c_{\mathrm{nt}}(\mathbf{s},\mathbf{p}_{i}),\] where, \[c_{\mathrm{no}}(\mathbf{s},\mathbf{p}_{i})=((x_{\mathbf{s}}-x_{\mathbf{p}_{i}})\cos\theta_{\mathbf{p}_{i}}+(y_{\mathbf{s}}-y_{\mathbf{p}_{i}})\sin\theta_{\mathbf{p}_{i}})^{2}, \tag{4}\] \[c_{\mathrm{na}}(\mathbf{s},\mathbf{p}_{i})=(-(x_{\mathbf{s}}-x_{\mathbf{p}_{i}})\sin\theta_{\mathbf{p}_{i}}+(y_{\mathbf{s}}-y_{\mathbf{p}_{i}})\cos\theta_{\mathbf{p}_{i}})^{2},\] \[c_{\mathrm{nt}}(\mathbf{s},\mathbf{p}_{i})=(1-\cos(\theta_{\mathbf{s}}-\theta_{\mathbf{p}_{i}}))^{2},\] \(\gamma\) is the decline rate, and \(w_{\mathrm{c}}\), \(w_{\mathrm{no}}\), \(w_{\mathrm{na}}\), and \(w_{\mathrm{nt}}\) are preset weights. The reason for setting a decline rate is the uncertainty in the movement of agents in the crowd environment: costs in the long-term future have low reliability. The cost \(c_{\mathrm{no}}(\cdot)\) penalizes the longitudinal distance between \(\mathbf{s}\) and \(\mathbf{p}_{i}\), and the cost \(c_{\mathrm{na}}(\cdot)\) penalizes the lateral distance between \(\mathbf{s}\) and \(\mathbf{p}_{i}\). The combination of the two costs can be used to evaluate the distance between \(\mathbf{s}\) and the reference navigation path in the Cartesian plane, which is more flexible than directly calculating the distance between \(\mathbf{s}\) and the reference navigation path [25]. The cost \(c_{\mathrm{nt}}(\cdot)\) penalizes the orientation gap between \(\mathbf{s}\) and \(\mathbf{p}_{i}\). After \(\mathcal{T}\) is built, the nodes in the \(N_{\mathrm{th}}\) layer \(\mathcal{T}_{N}\) of the tree are traversed, and the node with the minimum cost is selected. We iteratively backtrack through the node's parents until the tree's root node is reached. Finally, the initial state sequence \(\mathcal{S}_{\mathrm{init}}\) can be obtained. In complex environments, the \(N^{\prime}_{\mathrm{th}}\) layer of the tree may sometimes be empty when it is built. In this case, building a complete \(N\) layer tree is given up and an \(N^{\prime}\!-\!1\) layer tree is obtained. Then, \(\mathcal{S}_{\mathrm{init}}\) also degenerates from \(N\) frames to \(N^{\prime}\!-\!1\) frames. In the worst case, this method degenerates into the DWA. ### _Elastic-Band Model Predictive Control_ After obtaining the initial state sequence \(\mathcal{S}_{\mathrm{init}}\), the next step is to optimize it using the EB-MPC method. We define the state sequence optimization problem in the MPC form [10] and solve the problem using the EB method [11]. The optimization model used has been given in Eq. 2, which is an MPC-formed model. The expression of the objective function \(c(\mathcal{S},\mathcal{P},\mathcal{O},\mathcal{B})\) is as follows. \[c(\mathcal{S},\mathcal{P},\mathcal{O},\mathcal{B})=\sum_{i=1}^{N}\gamma^{i}(c_{\mathrm{c}}(\mathbf{s}_{i},d_{i}(\cdot))+c_{\mathrm{n}}(\mathbf{s}_{i},\mathbf{p}_{i})+c_{\mathrm{j}}(\mathbf{s}_{i},\mathbf{s}_{i-1})),\] where, \(\gamma\) and \(c_{\mathrm{c}}(\cdot)\) are the same as in Eq. 3.
\(c_{\mathrm{n}}(\cdot)\) adds an additional cost \(c_{\mathrm{nv}}(\cdot)\) on the basis of Eq. 3, whose expression is as follows. \[c_{\mathrm{nv}}(\mathbf{s},\mathbf{p}_{i})=w_{\mathrm{nv}}(v_{\mathbf{s}}-\min(v_{\mathrm{max}},\sqrt{2a_{\mathrm{max}}\epsilon_{\mathbf{p}_{i}}}))^{2},\] where, \(w_{\mathrm{nv}}\) is a preset weight, \(v_{\mathbf{s}}\) is \(\mathbf{s}\)'s linear velocity, and \(\epsilon_{\mathbf{p}_{i}}\) is the arc length between \(\mathbf{p}_{i}\) and the navigation path's endpoint. This expression aims to make the robot's speed tend to the maximum value when it is far away from the goal point; otherwise, the speed tends to zero. \(c_{\mathrm{j}}(\cdot)\) is the jitter cost, whose expression is as follows. \[c_{\mathrm{j}}(\mathbf{s}_{i},\mathbf{s}_{i-1})=w_{\omega}\omega_{i}^{2}+w_{a_{v}}\Big{(}\frac{v_{i}-v_{i-1}}{\Delta T}\Big{)}^{2}+w_{a_{\omega}}\Big{(}\frac{\omega_{i}-\omega_{i-1}}{\Delta T}\Big{)}^{2},\] where, \(w_{\omega}\), \(w_{a_{v}}\), and \(w_{a_{\omega}}\) are preset weights, and \(\mathbf{s}_{i}=(x_{i},y_{i},\theta_{i},v_{i},\omega_{i})^{T}\). There are three items in the jitter cost. The first item penalizes high angular velocities to reduce the shaking of the robot's orientation. The second item penalizes high linear accelerations to reduce the jitter of the robot's speed. The third item penalizes high angular accelerations to reduce the vibration of the robot's angular velocity. The above three items work together to reduce the jitter of the robot. After obtaining the complete definition of the optimization model, it can be seen that the optimization model Eq. 2 is sparse in the optimization variable \(\mathcal{S}\), so the EB method can be used to solve it [26], whose process is as follows. Each state \(\mathbf{s}_{i}\) in the optimization variable \(\mathcal{S}\) can be regarded as a node, and the objective function and constraints in the optimization model Eq. 2 can be regarded as edges. In this way, a graph can be constructed, as shown on the right side of Fig. 1. According to Eq. 2, there are only unary and binary edges in the constructed graph. Subsequently, \(\mathcal{S}_{\mathrm{init}}\) is applied to initialize the graph, and the g2o framework [27] is used to perform graph optimization. The algorithm used for graph optimization is Levenberg-Marquardt. At last, the optimized state sequence \(\mathcal{S}_{\mathrm{opt}}\) can be obtained. Compared with \(\mathcal{S}_{\mathrm{init}}\), \(\mathcal{S}_{\mathrm{opt}}\) makes the robot have less jitter. ## V Experimental Results The experiments are conducted in crowd, static, and hybrid environments to verify our method in different scenarios. In addition, we design an ablation study to verify the effectiveness of the submodules. The testing robot is designed as a differential wheeled robot. The robot's shape is set as a 0.3 m radius circle, and its sensing range is limited to 3.5 m. The robot's linear velocity is set from 0 to 1 m/s, its angular velocity from -1 rad/s to 1 rad/s, its linear acceleration from -1 m/s\({}^{2}\) to 1 m/s\({}^{2}\), and its angular acceleration from -1 rad/s\({}^{2}\) to 1 rad/s\({}^{2}\). ### _Crowd Environment Tests_ In this experiment, the ORCA simulated scenarios, as in the ESA [4], and the pedestrian trajectory dataset [28] are both used for testing. The testing scenarios are updated at a frequency of 5 Hz. For the ORCA simulated scenarios, 10, 15, and 20 agents are set in the environment, respectively.
Each agent is a circle with a radius of 0.3 m, its moving policy is the ORCA, and its maximum speed is 1 m/s. The robot is invisible to all the agents [4, 14, 15]. In each test, the agents are on a circle with a radius of 5 m and move to their targets, which are their opposite positions on the circle with a disturbance. Meanwhile, the robot is also on the circle and regards the opposite position as the goal point, as shown in Fig. 5. For the trajectory dataset, the range of the pedestrian trajectories in the dataset is recorded, and each pedestrian is also regarded as a circle with a radius of 0.3 m. In each test, the center points of the trajectory range's upper and lower boundaries are used as the starting and goal points. A starting time is randomly selected to broadcast pedestrian trajectories based on the dataset, ensuring that the pedestrians do not collide with the starting point at the starting time. In the above scenarios, the robot uses the proposed method, ORCA, LSTM-RL [29], SARL, and ESA methods for planning, respectively. LSTM-RL, SARL, and ESA are all reinforcement-learning-based methods and focus on considering the interaction of the crowd. For each scenario, 300 tests are conducted and the robot's success rate is counted. A test succeeds if the robot reaches the destination, and fails if it collides with the agents, moves out of bounds, or fails to reach the end within the specified time. The LSTM-RL, SARL, and ESA methods are retrained in the same environment as the ESA; the difference is that the robot is changed from a holonomic robot to a differential robot during training. When using the proposed method for planning, the navigation path is the connecting line between the robot's current and goal points. The testing results are shown in Tab. II. According to the results in Tab. II, it can be seen that the proposed method improves the success rate for all the testing scenarios compared with the current methods. In particular, the proposed method's success rate does not significantly decrease as the environment becomes more complex, which indicates that the proposed method has higher reliability in complex environments. In addition, it can be found that learning-based methods, such as SARL, perform well in the ORCA environment while testing poorly on the pedestrian trajectory dataset. The reason is that these learning-based methods are trained in the ORCA environment, and the data distributions of the pedestrian trajectory dataset and the ORCA environment are quite different, so these methods cannot fit the pedestrian trajectory dataset environment. In contrast, the proposed method does not have the above problems and has better generalization ability. Furthermore, examples that use the proposed method to achieve crowd navigation in different environments are shown in Fig. 5. This figure describes the movement process of the robot in crowds using the proposed method. According to Fig. 5(c), the robot first moves to the right to avoid agents and then moves forward for a while. At about 13 seconds, the robot slows down and turns towards the goal point, and it finally arrives at the goal point at 23 seconds. ### _Static Environment Tests_ In the static environment tests, the testing scenario is shown in Fig. 6. The start and goal points are randomly selected within the scenario and are guaranteed to keep a certain distance from the occupied grid points.
The A* algorithm with the Douglas-Peucker algorithm [30] generates the navigation path. Then, our method and the TEB method are applied to carry out local planning along the navigation path, respectively. Each method conducts 300 tests. Success rate, safety, jitter, time consumption for a single plan, and navigation time from the start point to the goal point are used as comparison metrics. The success rate is measured by the number of tests in which the robot reaches its destination divided by the total number of tests. Safety is measured by the minimum distance between the robot and the occupied grid points. Jitter is measured by the robot's angular velocity and its linear and angular accelerations. Fig. 5: This figure shows four examples of the robot navigating in different crowds using the proposed method. The dark green circles indicate the robot, the colorful hollow circles indicate other agents, the curves of different colors indicate the moving trajectories of the corresponding agents, and the values near the circles indicate the corresponding time. The more transparent the circle, the earlier the time. Finally, the testing results are shown in Tab. III. It can be seen from Tab. III that the proposed method has a 7.7 \(\%\) improvement in the success rate, a 50 \(\%\) improvement in safety, and a 10.7 \(\%\) decline in terms of angular acceleration. This result shows that our method has advantages in safety and jitter compared with the TEB method. Regarding navigation time, the proposed method spends 14.1 \(\%\) more time than the TEB method on average; part of the reason is that the proposed method tends to choose longer trajectories that keep their distance from the occupied grid points in order to ensure safety. Regarding time consumption for a single plan, the TEB method consumes 18.2 \(\%\) less time than the proposed method on average. However, the TEB method's time consumption is unstable. When the number of occupied grid points is large, its time consumption increases significantly. The TEB method's maximum time consumption reaches 630.1 ms in the experiment, which cannot meet the real-time requirement. In contrast, the proposed method's time consumption is relatively stable, and its maximum time consumption is 146 ms in the experiment, which fully meets the real-time requirement. In Fig. 6, a static environment testing example is shown. It can be clearly seen from the figure that, compared with the red curve, the green curve is farther away from the occupied grid points and smoother. This figure demonstrates that, compared with the TEB method, the proposed method can effectively improve safety and reduce jitter. ### _Hybrid Environment Demonstration_ The proposed method is also tested in an environment with both static and dynamic constraints, as shown in Fig. 7. The agents in the environment follow the ORCA policy, as in Liu _et al._'s method [5]. In the figure, the black areas indicate the obstacles, the colored hollow circles indicate the other agents, and the green circles indicate the robot. It can be seen that the robot successfully avoided the obstacles and agents and reached the goal point using the proposed method. It can also be seen from the right side of Fig. 7 that, during this process, the robot's linear velocity, angular velocity, linear acceleration and angular acceleration did not exceed the limits.
### _Ablation Study_ The scenario used in the ablation study is basically the same as in the ORCA environment tests, with the difference that 25 agents are set up. In this experiment, the difficulty of the test environment is increased to make the comparison of results more distinguishable. In the ablation study, the robot first uses the proposed method without EB-MPC optimization (No Opt.) to plan in the testing scenario. Then, the voxel sampling is further replaced by random sampling (Rand.) for planning. Finally, a traditional distance field similar to [31] is used for planning. Each variant is tested 300 times, and the success rate, jitter, and time consumption for a single plan are recorded. The testing results are shown in Tab. IV. According to the testing results, the success rate is increased by \(6.4\%\) when the proposed time-varying distance fields are used instead of the traditional distance field. Voxel sampling also improves the success rate by \(10.6\%\) compared with random sampling. These results prove the effectiveness of both in complex crowd environments. Furthermore, the robot's jitter can be significantly reduced almost without reducing the success rate when the optimization method is added. Especially in linear and angular accelerations, there are reductions of \(49.1\%\) and \(45.7\%\), respectively. In terms of time consumption, replacing the distance fields and using voxel sampling hardly cause an increase in time consumption. Although the optimization method increases the time consumption by \(41.7\%\), it can still guarantee the real-time performance of the method. Fig. 6: This figure shows an example of the static environment test. The green curve is the robot’s moving trajectory using the proposed method and the red curve is the robot’s moving trajectory using the TEB method. Fig. 7: This figure shows a demonstration in the environment with both static and dynamic constraints. The left picture shows the robot’s traveling process. The right pictures show the robot’s linear velocity, angular velocity, linear acceleration, and angular acceleration during the process. In conclusion, all parts of the proposed method are proven effective as expected. ## VI Conclusions This paper proposes a long-term dynamic window approach local planning method for differential wheeled robots. This method can be applied to both crowd and static environments, and the state sequences it plans in real time ensure the safety of the robot and reduce its jitter while satisfying the kinodynamic constraints. A limitation of the method is that it does not consider the interaction within the crowd or the interaction between the robot and the crowd. In future work, we will consider integrating the prediction of other agents [32, 33] and the interaction within the crowd into the planning to further improve the performance.
2303.02770
Universal distribution of the empirical coverage in split conformal prediction
When split conformal prediction operates in batch mode with exchangeable data, we determine the exact distribution of the empirical coverage of prediction sets produced for a finite batch of future observables, as well as the exact distribution of its almost sure limit when the batch size goes to infinity. Both distributions are universal, being determined solely by the nominal miscoverage level and the calibration sample size, thereby establishing a criterion for choosing the minimum required calibration sample size in applications.
Paulo C. Marques F
2023-03-05T20:46:01Z
http://arxiv.org/abs/2303.02770v2
# On the universal distribution of the coverage ###### Abstract Two additional universal properties are established in the split conformal prediction framework. In a regression setting with exchangeable data, we determine the exact distribution of the coverage of prediction sets for a finite horizon of future observables, as well as the exact distribution of its almost sure limit. The results hold for finite training and calibration samples, and both distributions are determined solely by the nominal miscoverage level and the calibration sample size. **Keywords: Nonparametric regression; Prediction sets; Split conformal prediction; Exchangeability; Polya's urn scheme; de Finetti's theorem.** ## 1 Introduction Conformal prediction [1, 2, 3, 4, 5, 6], a technique developed to address the confidence in the forecasts made by general predictive models, is quickly moving the field of machine learning [7, 8, 9, 10] from a period dominated by point predictions, to a new stage in which inferences about the future are summarized by prediction sets with statistical guarantees. Several features make conformal prediction appealing for use with contemporary machine learning algorithms: it is universal (distribution-free), able to handle high-dimensional data, model agnostic, and its properties hold for finite samples. This paper strengthens the universal properties of the most readily applicable variation on the conformal prediction idea: the split conformal prediction algorithm [2, 5], whose implementation attains a good balance between the predictive goals and the computational complexity of the procedure. In a regression context, the main results of the paper are the identification for exchangeable data of the exact distribution of the coverage of prediction sets for a finite horizon of future observables (future coverage, for short), and the determination of the exact distribution of its almost sure limit when the horizon tends to infinity. Both distributions are universal in the sense that they are determined exclusively by the nominal miscoverage level and the calibration sample size. The two results hold for finite training and calibration data. We begin the formal description of the split conformal prediction procedure in Section 2, paying special attention to how the symmetries of the data exchangeability assumption are preserved through the whole construction, firstly in the exchangeability inherited by the sequence of conformity scores, and secondly in the implied exchangeability of the sequence of indicators whose average defines the future coverage. The meaning of the probability bounds determined by the marginal validity property is analyzed at the end of Section 2 and contrasted with the interpretation of the random variable which defines the future coverage. Establishing a correspondence between Polya's urn scheme and the elements that make up the split conformal prediction procedure, we obtain in Section 3 the exact distribution of the future coverage for a finite horizon. The sequence of symmetry arguments ends with the use of de Finetti's representation theorem to determine the exact distribution of the almost sure limit of the future coverage when its horizon tends to infinity. We finish the paper in Section 4 discussing practical issues related to the choice of the calibration sample size. ## 2 Split conformal prediction In this section we formalize the split conformal prediction procedure [2, 5]. 
We state the data exchangeability assumption in Section 2.1 and establish the notation for the splitting of the data sequence into training sample, calibration sample, and future observables. Conformity scores are characterized in Section 2.2, in which we give examples of commonly used scores and prove that under the data exchangeability assumption the sequence of conformity scores is also exchangeable. With the additional minimalistic assumption that the conformity scores are almost surely distinct, we obtain in Section 2.3 the marginal validity property of split conformal prediction. In Section 2.4 we discuss the interpretation of the marginal validity property, contrasting it with the properties of the distribution of a properly defined future coverage, whose realizations, as illustrated by the simulation in Example 5, can violate substantially the bounds determined by the marginal validity property. ### Exchangeability assumption and data splitting Let \((\Omega,\mathscr{F},P)\) denote the underlying probability space from which we induce the distributions of all random objects considered in the paper. **Definition 1**.: A sequence of random objects \(\{O_{i}\}_{i\geq 1}\) is _exchangeable_ if, for every \(n\geq 1\), and every permutation \(\pi:\{1,\ldots,n\}\xrightarrow{\cong}\{1,\ldots,n\}\), the random tuples \((O_{1},\ldots,O_{n})\) and \((O_{\pi(1)},\ldots,O_{\pi(n)})\) have the same distribution. In a regression setting, where the task is to predict the value of a quantitative response variable from the value of a vector of predictors, we have a sequence of random pairs \[\boxed{(X_{-r+1},Y_{-r+1}),\,\ldots,\,(X_{0},Y_{0})},\boxed{(X_{1},Y_{1}),\, \ldots,\,(X_{n},Y_{n})},(X_{n+1},Y_{n+1}),\,\ldots\] in which each \(X_{i}\) is a \(d\)-dimensional random vector and each \(Y_{i}\) is a real valued random variable. This _data sequence_ of random pairs is modeled by us as being exchangeable. At the beginning of the sequence we have the _training_ sample \(((X_{-r+1},Y_{-r+1}),\ldots,(X_{0},Y_{0}))\), of size \(r\geq 1\), followed by the _calibration_ sample \(((X_{1},Y_{1}),\ldots,(X_{n},Y_{n}))\), of size \(n\geq 1\), and the sequence of _future_ observables \(\{(X_{n+i},Y_{n+i})\}_{i\geq 1}\). In this notation, we rely on the data exchangeability assumption to deliberately place the training set at the beginning of the sequence. This choice simplifies the form of our statements and signals that the training set plays a secondary role in our analysis. In fact, in what follows it is possible to dispense with the training sample altogether. We avoid doing this to keep our description closer to how split conformal prediction is actually implemented in practice. ### Conformity scores Let \(\mathscr{T}=\sigma((X_{-r+1},Y_{-r+1}),\ldots,(X_{0},Y_{0}))\) be the smallest sub-\(\sigma\)-field of \(\mathscr{F}\) with respect to which the training sample is measurable. **Definition 2**.: A _conformity function_ is a mapping \(\rho:\mathbb{R}^{d}\times\mathbb{R}\times\Omega\to\mathbb{R}_{+}\) such that \(\rho(x,y)=\rho(x,y,\,\boldsymbol{\cdot}\,)\) is \(\mathscr{T}\)-measurable for every \(x\in\mathbb{R}^{d}\) and every \(y\in\mathbb{R}\). The sequence of _conformity scores_\(\{R_{i}\}_{i\geq 1}\) associated with \(\rho\) is defined by \(R_{i}(\omega)=\rho(X_{i}(\omega),Y_{i}(\omega),\omega)\). 
**Example 1**.: Let \(\hat{\psi}:\mathbb{R}^{d}\times\Omega\to\mathbb{R}\) be a regression function estimator, such that \(\hat{\psi}(x)=\hat{\psi}(x,\,\boldsymbol{\cdot}\,)\) is \(\mathscr{T}\)-measurable for every \(x\in\mathbb{R}^{d}\). The standard choice [2, 5] is to use the conformity function \(\rho(x,y,\omega)=|y-\hat{\psi}(x,\omega)|\), whose corresponding conformity scores are the absolute residuals \(R_{i}=|Y_{i}-\hat{\psi}(X_{i})|\). As pointed out in Example 4, these standard conformity scores end up producing prediction intervals of fixed width for all future observables. The intuition here is that the conformity scores measure the ability of the model to make accurate predictions for the calibration sample, which was never touched by the model during its training process. Later, the assumed data sequence exchangeability transfers this information about the model's prediction capability from the calibration sample to the sequence of future observables, producing prediction sets with the form to be determined in Proposition 2. In the next two examples we leave the expression of the underlying conformity functions implicit. **Example 2**.: One choice that generates prediction intervals whose widths vary with the values of the corresponding vectors of predictors is the locally weighted conformity score [5]. The idea is to construct an estimator \(\hat{\sigma}:\mathbb{R}^{d}\times\Omega\to\mathbb{R}_{+}\) for the variability of the predictions of the response variable, such that \(\hat{\sigma}(x)=\hat{\sigma}(x,\,\boldsymbol{\cdot}\,)\) is \(\mathscr{T}\)-measurable for every \(x\in\mathbb{R}^{d}\). Given this \(\hat{\sigma}\) and the \(\hat{\psi}\) of Example 1, we define the conformity scores as the weighted absolute residuals \(R_{i}=|Y_{i}-\hat{\psi}(X_{i})|/\hat{\sigma}(X_{i})\). See Section 5.2 of [5] for a simple general way to construct \(\hat{\sigma}\). **Example 3**.: Conformalized quantile regression [11] is another way to define conformity scores which produce prediction intervals with variable width. For some \(p\in[0,1]\), let \(\xi_{p}(x)=\inf\left\{y\in\mathbb{R}:P(Y_{1}\leq y\mid X_{1}=x)\geq p\right\}\) be the \(p\)th conditional quantile function, and for an estimator \(\hat{\xi}_{p}:\mathbb{R}^{d}\times\Omega\to\mathbb{R}\) of \(\xi_{p}\), such that \(\hat{\xi}_{p}(x)=\hat{\xi}_{p}(x,\ \boldsymbol{\cdot}\ )\) is \(\mathscr{T}\)-measurable for every \(x\in\mathbb{R}^{d}\), define \(R_{i}=\max\left\{\hat{\xi}_{p_{\mathrm{lo}}}(X_{i})-Y_{i},Y_{i}-\hat{\xi}_{p_{\mathrm{hi}}}(X_{i})\right\}\). The choice of \(0<p_{\mathrm{lo}}\leq p_{\mathrm{hi}}<1\) is discussed in [11]. Notice that if we choose \(p_{\mathrm{lo}}=p_{\mathrm{hi}}=0.5\), we go back to the standard conformity score of Example 1, with \(\hat{\psi}\) replaced by an estimator of the conditional median of the response variable. **Remark 1**.: Conformity scores are agnostic to the choice of the specific models used to construct \(\hat{\psi}\), \(\hat{\sigma}\), and \(\hat{\xi}_{p}\) in the former three examples. Random Forests [12], Gradient Boosting [13], Deep Neural Networks [10], and Quantile Regression Forests [14] are some contemporary methods of choice. Naturally, models which generalize poorly will end up producing wide and not very informative prediction intervals. 
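To make the construction concrete, here is a minimal Python sketch of split conformal prediction with the standard absolute-residual score of Example 1; the resulting fixed-width intervals anticipate Example 4 and the prediction-set definition of Proposition 2 below. The paper provides no code, so this is our own illustration: `model` stands for any regression estimator with scikit-learn-style `fit`/`predict` methods.

```python
import numpy as np

def split_conformal_intervals(model, X_train, y_train, X_cal, y_cal, X_new, alpha):
    # Train on the training sample only (the calibration sample is untouched).
    model.fit(X_train, y_train)
    # Calibration conformity scores R_1, ..., R_n (standard score of Example 1).
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Order statistic R_(k) with k = ceil((1 - alpha)(n + 1)); this requires
    # k <= n, i.e. alpha > 1/(n + 1).
    k = int(np.ceil((1 - alpha) * (n + 1)))
    s_hat = np.sort(scores)[k - 1]
    # Fixed-width prediction intervals for the new points.
    mu = model.predict(X_new)
    return mu - s_hat, mu + s_hat
```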
**Proposition 1**.: _Under the data exchangeability assumption, the sequence of conformity scores \(\{R_{i}\}_{i\geq 1}\) is exchangeable._ Proof: Let \(T(\omega)=\left((X_{-r+1}(\omega),Y_{-r+1}(\omega)),\ldots,(X_{0}(\omega),Y_{ 0}(\omega))\right)\). Since the conformity function \(\rho\) is such that \(\rho(x,y)\) is \(\mathscr{T}\)-measurable for every \(x\in\mathbb{R}^{d}\) and every \(y\in\mathbb{R}\), Doob-Dynkin's lemma (see [15], Theorem A.42) implies that there is a measurable function \[h:\underbrace{(\mathbb{R}^{d}\times\mathbb{R})\times\cdots\times(\mathbb{R}^{ d}\times\mathbb{R})}_{r+1\text{ times}}\to\mathbb{R}\] such that \(\rho(X_{i}(\omega),Y_{i}(\omega),\omega)=h((X_{i}(\omega),Y_{i}(\omega)),T( \omega))\), for every \(\omega\in\Omega\). Hence, for Borel sets \(B_{1},\ldots,B_{n}\), we have \[P\big{(}\,\cap_{i=1}^{n}\left\{\omega:R_{i}(\omega)\in B_{i} \right\}\big{)} =P\big{(}\,\cap_{i=1}^{n}\left\{\omega:\rho(X_{i}(\omega),Y_{i}( \omega),\omega)\in B_{i}\right\}\big{)}\] \[=P\big{(}\,\cap_{i=1}^{n}\left\{\omega:h((X_{i}(\omega),Y_{i}( \omega)),T(\omega))\in B_{i}\right\}\big{)}\] \[=P\big{(}\,\cap_{i=1}^{n}\left\{\omega:((X_{i}(\omega),Y_{i}( \omega)),T(\omega))\in h^{-1}(B_{i})\right\}\big{)}.\] For any permutation \(\pi:\{-r+1,\ldots,0,1,\ldots,n\}\xrightarrow{\cong}\{-r+1,\ldots,0,1,\ldots,n\}\), define \(T_{\pi}(\omega)=\left((X_{\pi(-r+1)}(\omega),Y_{\pi(-r+1)}(\omega)),\ldots,(X_ {\pi(0)}(\omega),Y_{\pi(0)}(\omega))\right)\). The data exchangeability assumption implies that \(((X_{i},Y_{i}),T)\) and \(((X_{\pi(i)},Y_{\pi(i)}),T_{\pi})\) have the same distribution. If we restrict our choice to permutations such that \(\pi(j)=j\), for \(-r+1\leq j\leq 0\), then \(T_{\pi}=T\) and \[P\big{(}\,\cap_{i=1}^{n}\left\{\omega:R_{i}(\omega)\in B_{i} \right\}\big{)} =P\big{(}\,\cap_{i=1}^{n}\left\{\omega:((X_{\pi(i)}(\omega),Y_{ \pi(i)}(\omega)),T(\omega))\in h^{-1}(B_{i})\right\}\big{)}\] \[=P\big{(}\,\cap_{i=1}^{n}\left\{\omega:R_{\pi(i)}(\omega)\in B_{i }\right\}\big{)}\] The result follows, since this restriction still permits an arbitrary permutation of the conformity scores \(R_{1},\ldots,R_{n}\). ### Marginal validity **Proposition 2**.: _For exchangeable data, if the conformity scores are almost surely distinct, the marginal validity property_ \[1-\alpha\leq P\big{\{}Y_{n+i}\in D_{n}^{(1-\alpha)}(X_{n+i})\big{\}}\leq 1- \alpha+\frac{1}{n+1}\] _holds for \(i\geq 1\), in which the random prediction set \(D_{n}^{(1-\alpha)}:\mathbb{R}^{d}\times\Omega\to\mathscr{R}\) is defined by_ \[D_{n}^{(1-\alpha)}(x,\omega)=\big{\{}y\in\mathbb{R}:\rho(x,y,\omega)<R_{(\lceil( 1-\alpha)(n+1)\rceil)}(\omega)\big{\}}\,,\] _with \(D_{n}^{(1-\alpha)}(x)=D_{n}^{(1-\alpha)}(x,\,\mathbf{\cdot}\,)\), and \(\mathscr{R}\) denotes the Borel \(\sigma\)-field._ Proof: Due to the data sequence exchangeability it is enough to prove the result for \(i=1\). Let \(\mathscr{S}_{n+1}\) denote the set of all permutations \(\pi:\{1,\ldots,n+1\}\xrightarrow{\cong}\{1, \ldots,n+1\}\). For each \(\pi\in\mathscr{S}_{n+1}\), define \(A_{\pi}=\{R_{\pi(1)}<R_{\pi(2)}<\cdots<R_{\pi(n)}<R_{\pi(n+1)}\}\). Since by Proposition 1 the sequence of conformity scores is exchangeable, supposing that the conformity scores are almost surely distinct, the \((n+1)!\) events \(\{A_{\pi}\}_{\pi\in\mathscr{S}_{n+1}}\) are mutually exclusive and equiprobable, with \(\sum_{\pi\in\mathscr{S}_{n+1}}P(A_{\pi})=1\), so that \(P(A_{\pi})=1/(n+1)!\) holds for every permutation \(\pi\in\mathscr{S}_{n+1}\). 
For \(k=1,\ldots,n+1\), the event that \(R_{n+1}\) occupies rank \(k\) among \(R_{1},\ldots,R_{n+1}\) is the union of the events \(A_{\pi}\) for which \(\pi(k)=n+1\). Since there are \(n!\) permutations \(\pi\in\mathscr{S}_{n+1}\) such that \(\pi(k)=n+1\), the probability that \(R_{n+1}\) occupies rank \(k\) among \(R_{1},\ldots,R_{n+1}\) is equal to \(n!/(n+1)!=1/(n+1)\). Let \(R_{(1)}<R_{(2)}<\cdots<R_{(n)}\) denote the ordered conformity scores for the calibration sample. For \(k=1,\ldots,n\), we have that \(R_{n+1}<R_{(k)}\) if and only if the rank of \(R_{n+1}\) among \(R_{1},\ldots,R_{n+1}\) assumes one of the mutually exclusive values \(1,\ldots,k\). Thus, \(P\{R_{n+1}<R_{(k)}\}=k/(n+1)\), for \(k=1,\ldots,n\). Choosing \(k=\lceil(1-\alpha)(n+1)\rceil\), in which \(\lceil t\rceil\) denotes the smallest integer greater than or equal to the real number \(t\), since \((1-\alpha)(n+1)\leq\lceil(1-\alpha)(n+1)\rceil\leq(1-\alpha)(n+1)+1\), we get \[1-\alpha\leq P\{R_{n+1}<R_{(\lceil(1-\alpha)(n+1)\rceil)}\}\leq 1-\alpha+\frac{1}{ n+1}.\] Recalling that \(R_{n+1}(\omega)=\rho(X_{n+1}(\omega),Y_{n+1}(\omega),\omega)\), the result follows from the definition of \(D_{n}^{(1-\alpha)}\) in the proposition statement. **Remark 2**.: The content of Proposition 2 is the same as that of Theorem 2 of [5]. The term _marginal_ here is used to contrast the proposition statement to some possible statement about the conditional probability \(P(Y_{n+1}\in D_{n}^{(1-\alpha)}(X_{n+1})\mid X_{n+1}=x_{n+1})\). See Section 2.2 of [16] for a discussion on the intrinsic limitations for finite samples of the so-called conditional validity. **Example 4**.: Let \(\hat{s}=R_{(\lceil(1-\alpha)(n+1)\rceil)}(\omega)\), for an outcome \(\omega\in\Omega\). It follows from Proposition 2 and the definitions in Examples 1, 2 and 3, that for a vector of predictors \(x_{n+1}\in\mathbb{R}^{d}\) the observed prediction sets have the forms: \((\hat{\psi}(x_{n+1})-\hat{s},\hat{\psi}(x_{n+1})+\hat{s})\), for the standard conformity score; \((\hat{\psi}(x_{n+1})-\hat{s}\times\hat{\sigma}(x_{n+1}),\hat{\psi}(x_{n+1})+ \hat{s}\times\hat{\sigma}(x_{n+1}))\), for the locally weighted conformity score; and \((\hat{\xi}_{p_{\rm lo}}(x_{n+1})-\hat{s},\hat{\xi}_{p_{\rm hi}}(x_{n+1})+\hat {s})\) for conformalized quantile regression. ### Interpretations and future coverage Split conformal prediction properties have a frequentist nature and are tied to the idea of potential replications of an idealized experiment. A straightforward interpretation of the marginal validity property established in Proposition 2 in terms of a Monte Carlo experiment would go like this: for some data generating process, we produce independent replications of a finite portion of the exchangeable data sequence, recording, for each outcome \(\omega\in\Omega\), and some \(i\geq 1\), the values of \((T(\omega),A(\omega),(X_{n+i}(\omega),Y_{n+i}(\omega)))\), in which \(T(\omega)\) and \(A(\omega)\) denote realizations of the training and calibration samples, respectively. Together, the marginal validity property and the strong law of large numbers imply that, in the limit of infinite independent replications, the fraction of replications in which \(Y_{n+i}(\omega)\) belongs to the corresponding conformal prediction set \(D_{n}^{(1-\alpha)}(X_{n+i}(\omega),\omega)\) converges almost surely to a number between \(1-\alpha\) and \(1-\alpha+1/(n+1)\). 
Let us emphasize that in this interpretation each replication involves the simulation of new training and calibration samples, and not just the pair \((X_{n+i}(\omega),Y_{n+i}(\omega))\). A natural extension of this Monte Carlo experiment would be to record, for each outcome \(\omega\in\Omega\), the values of \((T(\omega),A(\omega),(X_{n+1}(\omega),Y_{n+1}(\omega)),\ldots,(X_{n+m}(\omega),Y_{n+m}(\omega)))\), for some horizon \(m\geq 1\) of future observables, and to investigate the properties of a properly defined notion of coverage of the corresponding split conformal prediction sets. **Definition 3**.: The _coverage of the prediction sets for a horizon of \(m\geq 1\) future observables_ is the random variable \(C_{m}^{(n,\alpha)}=\frac{1}{m}\sum_{i=1}^{m}Z_{i}\), in which we defined the indicator \(Z_{i}=1\), if \(Y_{n+i}\in D_{n}^{(1-\alpha)}(X_{n+i})\), and \(Z_{i}=0\), otherwise, for \(i\geq 1\). We may refer to \(C_{m}^{(n,\alpha)}\) simply by _future coverage_, or even just _coverage_, if this creates no ambiguity. A noteworthy aspect of the split conformal prediction procedure, which may surprise a casual reader of the marginal validity property in Proposition 2, is that in this second Monte Carlo experiment, for each independent data replication, the value of the corresponding observed future coverage \(C_{m}^{(n,\alpha)}(\omega)\) is not constrained to stay between the bounds determined by the marginal validity property, even if we consider an arbitrarily large horizon \(m\) of future observables. In fact, as illustrated by the next example, for any realization of the split conformal procedure, in which the algorithm receives as input a single dataset of a certain size, and outputs conformal prediction sets for a very large number of future observations, the observed coverage of the corresponding prediction sets can violate substantially the marginal validity property bounds. **Example 5**.: We generate \(10\,000\) independent replications of a finite portion of the exchangeable data sequence using a slightly modified version of the Friedman process discussed in [18]. For each independent data sequence replication we draw a single value from a random variable \(W\) with \(\operatorname{Exp}(1)\) distribution, and, for each \(i=-r+1,\ldots,0,1,\ldots,n+m\), we have ten independent predictors \(X_{i,1},\ldots,X_{i,10}\) with \(\operatorname{U}[0,1]\) distribution, and an independent random variable \(\epsilon_{i}\) with standard normal distribution. The response variable is defined as \(Y_{i}=10\sin(\pi X_{i,1}X_{i,2})+20(X_{i,3}-1/2)^{2}+10X_{i,4}+5X_{i,5}+W+ \epsilon_{i}\), implying that the response is not related to the last five predictors, which act as noise in the data. It is easy to show that the pairs \((X_{i},Y_{i})\) are not independent due to the presence of \(W\) in the definition of the \(Y_{i}\)'s, and that \(\{(X_{i},Y_{i})\}_{i=-r+1}^{n+m}\) is exchangeable. A Random Forest [12] of 500 trees is used as our regression model. For each independent replication, we use a training sample of size \(r=100\), a calibration sample of size \(n=10\), a horizon of \(m=1\,000\) observations, and a nominal coverage level \(1-\alpha=0.8\). Figure 1 depicts the results of the simulation. The black dashed lines indicate the lower (0.8) and upper (0.891) bounds of the marginal validity property in Proposition 2. On replication 8 550 we observe the lowest future coverage value: 0.229. 
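The simulation of Example 5 can be reproduced along the following lines. This is a minimal sketch under our own assumptions: scikit-learn's `RandomForestRegressor` stands in for the Random Forest of [12], the seed is arbitrary, and we use far fewer replications than the paper's \(10\,000\) to keep the runtime modest.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
r, n, m, alpha = 100, 10, 1000, 0.2   # sizes and miscoverage from Example 5

def friedman_batch(size, w):
    # Modified Friedman process: the shared draw w makes the pairs
    # exchangeable but not independent.
    X = rng.uniform(size=(size, 10))
    y = (10 * np.sin(np.pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2
         + 10 * X[:, 3] + 5 * X[:, 4] + w + rng.standard_normal(size))
    return X, y

coverages = []
for _ in range(500):                   # the paper uses 10 000 replications
    w = rng.exponential(1.0)
    X_tr, y_tr = friedman_batch(r, w)
    X_cal, y_cal = friedman_batch(n, w)
    X_f, y_f = friedman_batch(m, w)
    model = RandomForestRegressor(n_estimators=500).fit(X_tr, y_tr)
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
    s_hat = scores[int(np.ceil((1 - alpha) * (n + 1))) - 1]  # rank 9 of 10
    coverages.append(np.mean(np.abs(y_f - model.predict(X_f)) < s_hat))
# A histogram of `coverages` approximates the Beta distribution of Theorem 2.
```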
Figure 1: Simulation of the future coverage discussed in Example 5. The histogram on the right side of Figure 1 approximates the density of the beta distribution to be identified in Theorem 2. Definition 3 seems to imply that to determine the distribution of the future coverage \(C_{m}^{(n,\alpha)}\) we would need to model the joint distribution of the indicators \((Z_{1},\ldots,Z_{m})\), which are, in general, dependent random variables, due to the way that the random prediction set \(D_{n}^{(1-\alpha)}\) introduced in Proposition 2 uses the information contained in the whole training and calibration samples. However, generally it is not possible to model the dependencies between the \(Z_{i}\)'s without making additional assumptions about the data generating process. In the next section we show, without imposing additional constraints on the data generating process, how the general nonparametric distribution of the future coverage \(C_{m}^{(n,\alpha)}\) is determined as a consequence of the distributional symmetries implied by the fact to be proved in Proposition 3 that the sequence of indicators \(\{Z_{i}\}_{i\geq 1}\) is exchangeable under our current assumptions. ## 3 Coverage distribution This section presents the two main results of the paper. We begin Section 3.1 by establishing that the sequence of indicators \(\{Z_{i}\}_{i\geq 1}\) introduced in Definition 3 is exchangeable under our current assumptions. Using this result, a simple symmetry argument connects the marginal validity property with the attributes of the distribution of the future coverage \(C_{m}^{(n,\alpha)}\), showing that its expectation \(\mathrm{E}[C_{m}^{(n,\alpha)}]\) does satisfy the bounds determined by the marginal validity property in Proposition 2. A connection between the elements that make up the split conformal prediction procedure and Polya's urn scheme allows us to determine the exact distribution of the coverage \(C_{m}^{(n,\alpha)}\) for a finite horizon of \(m\) future observables. In Section 3.2, the exchangeability of the sequence of indicators \(\{Z_{i}\}_{i\geq 1}\) and de Finetti's representation theorem work together to identify the distribution of the almost sure limit of the future coverage \(C_{m}^{(n,\alpha)}\) when the horizon \(m\) tends to infinity. ### Universal distribution for a finite horizon **Proposition 3**.: _For exchangeable data, if the conformity scores are almost surely distinct, the sequence of indicators \(\{Z_{i}\}_{i\geq 1}\) is exchangeable._ Proof: Let \(k\) be such that \(R_{k}\) is the calibration conformity score ranked at the position \(\lceil(1-\alpha)(n+1)\rceil\) among the calibration conformity scores \(R_{1},\ldots,R_{n}\). For \(i\geq 1\), we know from Proposition 2 that \(Y_{n+i}\in D_{n}^{(1-\alpha)}(X_{n+i})\) if and only if \(R_{n+i}<R_{k}\). Define \(A=\{t\in\mathbb{R}^{2}:t_{1}<t_{2}\}\) and let \(B_{0}=\mathbb{R}^{2}\setminus A\), and \(B_{1}=A\). For \(z_{1},\ldots,z_{m}\in\{0,1\}\), it follows that \[P\big{(}\cap_{i=1}^{m}\{Z_{i}=z_{i}\}\big{)}=P\big{(}\cap_{i=1}^{m}\{I_{\{R_{n +i}<R_{k}\}}=z_{i}\}\big{)}=P\big{(}\cap_{i=1}^{m}\{(R_{n+i},R_{k})\in B_{z_{i} }\}\big{)}.\] By Proposition 1, the sequence of conformity scores is exchangeable. Therefore, for \(i=1,\ldots,m\), the random vectors \((R_{n+i},R_{k})\) and \((R_{\pi(n+i)},R_{\pi(k)})\) have the same distribution for every permutation \(\pi:\{1,\ldots,n+m\}\xrightarrow{\cong}\{1, \ldots,n+m\}\). 
Considering only permutations such that \(\pi(j)=j\) for \(1\leq j\leq n\), we have that \(R_{\pi(k)}=R_{k}\) and \[P\big{(}\cap_{i=1}^{m}\{Z_{i}=z_{i}\}\big{)} =P\big{(}\cap_{i=1}^{m}\{(R_{\pi(n+i)},R_{k})\in B_{z_{i}}\}\big{)} =P\big{(}\cap_{i=1}^{m}\{I_{\{R_{\pi(n+i)}<R_{k}\}}=z_{i}\}\big{)}\] \[=P\big{(}\cap_{i=1}^{m}\{Z_{\pi(i)}=z_{i}\}\big{)}.\] Since this subclass of permutations is rich enough to express an arbitrary permutation of the indicators \(Z_{1},\ldots,Z_{m}\), the result follows. This exchangeability of the sequence \(\{Z_{i}\}_{i\geq 1}\) implies that the \(Z_{i}\)'s are identically distributed, since, for every distinct \(i,j\geq 1\), we have \((Z_{i},Z_{j})\sim(Z_{j},Z_{i})\), so that \(Z_{i}\sim Z_{j}\). Hence, using Definition 3, by symmetry, \(\mathrm{E}[C_{m}^{(n,\alpha)}]=\mathrm{E}[Z_{1}]=P\{Y_{n+1}\in D_{n}^{(1- \alpha)}(X_{n+1})\}\), and \(1-\alpha\leq\mathrm{E}[C_{m}^{(n,\alpha)}]\leq 1-\alpha+1/(n+1)\), by Proposition 2. Therefore, the marginal validity property brings partial information about the distribution of the future coverage, establishing lower and upper bounds for its expectation. In what follows we go beyond this initial characterization, determining the distribution of the future coverage \(C_{m}^{(n,\alpha)}\) in full exact form. Consider the particular case depicted in Figure 2, in which we have a calibration sample of size \(n=4\), the nominal miscoverage level \(\alpha=0.45\) (so that \(\lceil(1-\alpha)(n+1)\rceil=3\)), and the horizon \(m=5\). Let \(b=\lceil(1-\alpha)(n+1)\rceil\) and \(g=n-\lceil(1-\alpha)(n+1)\rceil+1\). Since by Proposition 1 the sequence of conformity scores is exchangeable, \(R_{5}\) has the same probability of falling into one of the \(b+g=n+1=5\) intervals defined by the ordered calibration conformity scores \(R_{(1)},\ldots,R_{(4)}\) (see the first line in Figure 2). Now, by Definition 3, \(Z_{1}=1\) if and only if \(R_{5}\) falls into one of the \(b=3\) black intervals to the left of \(R_{(3)}\) (second line in Figure 2). Hence, \(P(Z_{1}=1)=b/(n+1)\). Given that \(Z_{1}=1\), \(R_{6}\) has, by exchangeability, the same probability of falling into one of the \((b+1)+g=n+2=6\) intervals defined by the conformity scores \(R_{(1)},\ldots,R_{(4)},R_{5}\), and \(Z_{2}=0\) if and only if \(R_{6}\) falls into one of the \(g=2\) gray intervals to the right of \(R_{(3)}\) (third line in Figure 2). Therefore, \(P(Z_{2}=0\mid Z_{1}=1)=g/(n+2)\). Following this reasoning, the product rule yields \[P(Z_{1}=1,Z_{2}=0,Z_{3}=0,Z_{4}=1,Z_{5}=1)=\frac{b}{n+1}\cdot\frac{g}{n+2} \cdot\frac{g+1}{n+3}\cdot\frac{b+1}{n+4}\cdot\frac{b+2}{n+5},\] which is manifestly exchangeable, as expected from Proposition 3. This is a Polya urn scheme [19, 20] with outcome BGGBB in which we started with \(b\) black balls (B) and \(g\) gray balls (G), and after drawing a ball from the urn we put it back adding one ball of the same color. The exchangeability of the vector of indicators \((Z_{1},Z_{2},Z_{3},Z_{4},Z_{5})\) implies that the event \(\{Z_{1}+Z_{2}+Z_{3}+Z_{4}+Z_{5}=2\}\) is the union of \(\binom{m}{2}\) mutually exclusive and equiprobable events of the form \(\{Z_{1}=z_{1},Z_{2}=z_{2},Z_{3}=z_{3},Z_{4}=z_{4},Z_{5}=z_{5}\}\), in which 2 of the \(z_{i}\)'s are equal to 1, and \(m-2=3\) of the \(z_{i}\)'s are equal to 0. 
Therefore, \[P(Z_{1}+Z_{2}+Z_{3}+Z_{4}+Z_{5}=2)=\binom{m}{2}\left(\frac{b}{n+1}\cdot\frac{ g}{n+2}\cdot\frac{g+1}{n+3}\cdot\frac{b+1}{n+4}\cdot\frac{b+2}{n+5}\right).\] For real \(t\), let \(\lfloor t\rfloor\) denote the largest integer smaller than or equal to \(t\). Considering that \(\lceil-t\rceil=-\lfloor t\rfloor\), and \(\lceil k+t\rceil=k+\lceil t\rceil\) for every integer \(k\), we can rewrite the initial number of gray balls in the urn as \(g=\lfloor\alpha(n+1)\rfloor\). With the usual convention that an empty product is equal to 1, a simple induction argument gives the following result. Figure 2: A connection between split conformal prediction and Polya’s urn scheme. **Theorem 1**.: _Under the data exchangeability assumption, for every nominal miscoverage level \(0<\alpha<1\), every calibration sample size \(n\geq 1\), and every horizon \(m\geq 1\), if the conformity scores are almost surely distinct, the distribution of the future coverage is given by_ \[P(C_{m}^{(n,\alpha)}=k/m)=\binom{m}{k}\frac{\left(\prod_{i=1}^{k}(\lceil(1- \alpha)(n+1)\rceil+i-1)\right)\left(\prod_{i=1}^{m-k}(\lfloor\alpha(n+1) \rfloor+i-1)\right)}{\prod_{i=1}^{m}(n+i)},\] _for \(k=0,1,\ldots,m\)._ ### Almost sure limit There is a rich literature on exchangeability and its consequences for the foundations of a subjectivistic theory of probability and inference [21, 22, 23, 24, 25, 15, 26]. Here, in the frequentist context of split conformal prediction, we use one of the earliest fundamental results of exchangeability theory as a tool to determine the distribution of the almost sure limit of the future coverage. For a sequence \(\{Z_{i}\}_{i\geq 1}\) of random variables taking values in \(\{0,1\}\), _de Finetti's representation theorem_[27, 28] states that \(\{Z_{i}\}_{i\geq 1}\) is exchangeable if and only if there is a random variable \(\Theta:\Omega\to[0,1]\) such that, given that \(\Theta=\theta\), the random variables \(\{Z_{i}\}_{i\geq 1}\) are conditionally independent and identically distributed with distribution \(\text{Bernoulli}(\theta)\). Furthermore, the distribution \(\mu_{\Theta}\) of \(\Theta\) is unique, and \((1/m)\sum_{i=1}^{m}Z_{i}\) converges almost surely to \(\mathbb{E}[Z_{1}\mid\Theta]=\Theta\), when \(m\) tends to infinity. **Theorem 2**.: _For exchangeable data, if the conformity scores are almost surely distinct, the future coverage \(C_{m}^{(n,\alpha)}\) converges almost surely when the horizon \(m\) tends to infinity to a random variable \(C_{\infty}^{(n,\alpha)}\) with distribution \(\text{Beta}(\lceil(1-\alpha)(n+1)\rceil,\lfloor\alpha(n+1)\rfloor)\), for every nominal miscoverage level \(0<\alpha<1\), and every calibration sample size \(n\geq 1\)._ Proof: By Proposition 3, the sequence of indicators \(\{Z_{i}\}_{i\geq 1}\) is exchangeable, and de Finetti's theorem gives us the representation \[P\{Z_{1}=z_{1},\ldots,Z_{m}=z_{m}\}=\int_{[0,1]}\theta^{\sum_{i=1}^{m}z_{i}}(1 -\theta)^{m-\sum_{i=1}^{m}z_{i}}\,d\mu_{\Theta}(\theta).\] For \(k=0,1,\ldots,m\), the event \(\{\sum_{i=1}^{m}Z_{i}=k\}\) is the union of \(\binom{m}{k}\) mutually exclusive and, by exchangeability, equiprobable events of the form \(\{Z_{1}=z_{1},\ldots,Z_{m}=z_{m}\}\), in which \(k\) of the \(z_{i}\)'s are equal to \(1\), and \(m-k\) of the \(z_{i}\)'s are equal to \(0\). 
Therefore, it follows from the integral representation above that \[P(C_{m}^{(n,\alpha)}=k/m)=P(\sum_{i=1}^{m}Z_{i}=k)=\binom{m}{k}\int_{[0,1]} \theta^{k}(1-\theta)^{m-k}\,d\mu_{\Theta}(\theta).\] Let \(\mu_{\Theta}\) be dominated by Lebesgue measure \(\lambda\) with Radon-Nikodym derivative \[(d\mu_{\Theta}/d\lambda)(\theta)=\left(\frac{n!}{(b-1)!(g-1)!}\right)\theta^{ b-1}(1-\theta)^{g-1}\,I_{[0,1]}(\theta),\] up to almost everywhere \([\lambda]\) equivalence, in which \(b=\lceil(1-\alpha)(n+1)\rceil\) and \(g=\lfloor\alpha(n+1)\rfloor\). This is a version of the density of a random variable with \(\operatorname{Beta}(b,g)\) distribution. Using the Leibniz rule for Radon-Nikodym derivatives (see [15], Theorem A.79), we have that \[P(C_{m}^{(n,\alpha)}=k/m) =\binom{m}{k}\left(\frac{n!}{(b-1)!(g-1)!}\right)\int_{[0,1]} \theta^{k+b-1}(1-\theta)^{m-k+g-1}\,d\lambda(\theta)\] \[=\binom{m}{k}\left(\frac{n!}{(b-1)!(g-1)!}\right)\left(\frac{(k+b -1)!(m-k+g-1)!}{(m+n)!}\right)\] \[=\binom{m}{k}\frac{\left(\prod_{i=1}^{k}(b+i-1)\right)\left(\prod _{i=1}^{m-k}(g+i-1)\right)}{\prod_{i=1}^{m}(n+i)}.\] Since de Finetti's theorem states that \(\mu_{\Theta}\) is unique and that \(C_{m}^{(n,\alpha)}\) converges almost surely to \(\Theta\), when \(m\) tends to infinity, the result follows by inspection of the distribution of the future coverage \(C_{m}^{(n,\alpha)}\) in Theorem 1. As corollaries, we note that the usual properties of the beta distribution yield \[\operatorname{E}[C_{\infty}^{(n,\alpha)}]=\frac{\lceil(1-\alpha)(n+1)\rceil} {n+1}=1-\alpha+O\!\left(\frac{1}{n}\right);\] \[\operatorname{Var}[C_{\infty}^{(n,\alpha)}]=\frac{\lceil(1-\alpha)(n+1) \rceil\times\lfloor\alpha(n+1)\rfloor}{(n+1)^{2}(n+2)}=\frac{\alpha(1-\alpha )}{n}+O\!\left(\frac{1}{n^{2}}\right).\] Furthermore, using the expression of the beta density, a simple application of Scheffe's theorem [29] tells us that \[\sqrt{n}(C_{\infty}^{(n,\alpha)}-(1-\alpha))\xrightarrow[n\to\infty]{d} \operatorname{N}(0,\alpha(1-\alpha)).\] ## 4 Concluding remarks For a nominal miscoverage level \(0<\alpha<1\), we may want to choose the calibration sample size \(n\) in order to control the distribution of \(C_{\infty}^{(n,\alpha)}\). Define the cumulative distribution function \(H_{n,\alpha}(t)=P(C_{\infty}^{(n,\alpha)}\leq t)\), for \(t\in\mathbb{R}\). Fixing some \(\epsilon>0\) and \(0<\gamma<1\), the calibration sample size can be chosen as \(\min\left\{n\geq 1:H_{n,\alpha}(1-\alpha+\epsilon)-H_{n,\alpha}(1-\alpha- \epsilon)\geq\gamma\right\}\). For instance, with \(1-\alpha=0.9\), \(\epsilon=0.02\), and \(\gamma=0.95\), using this criterion, we get a calibration sample size \(n=860\).
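The criterion above is straightforward to evaluate numerically. Below is a small sketch (our code, not the paper's) that searches for the minimum calibration sample size using the Beta distribution of Theorem 2 via SciPy; with \(1-\alpha=0.9\), \(\epsilon=0.02\), and \(\gamma=0.95\) it should reproduce a sample size close to the \(n=860\) quoted above.

```python
from math import ceil, floor
from scipy.stats import beta

def min_calibration_size(alpha, eps, gamma, n_max=10**5):
    """Smallest n with H_{n,alpha}(1-alpha+eps) - H_{n,alpha}(1-alpha-eps) >= gamma,
    where C_inf ~ Beta(ceil((1-alpha)(n+1)), floor(alpha(n+1))) by Theorem 2."""
    for n in range(1, n_max):
        b = ceil((1 - alpha) * (n + 1))
        g = floor(alpha * (n + 1))
        if g < 1:   # both Beta parameters must be positive
            continue
        H = beta(b, g).cdf
        if H(1 - alpha + eps) - H(1 - alpha - eps) >= gamma:
            return n
    return None

print(min_calibration_size(alpha=0.1, eps=0.02, gamma=0.95))
```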
2305.06698
On-Site Production of Quasi-Continuous Ultra-High Vacuum Pipes
We present a design study for a new production technology for ultra-high vacuum pipes. The pipes are produced in a fully automatised process in sections of hundreds of meters directly in the later location of usage. We estimate the effort for such a production and show that it might be substantially lower than the effort for an off-site production of transportable sections.
Matthias Angerhausen, Guido Buchholz, Jef Hoste, Marion Purrio, Achim Stahl, Lars Stein, Patrick Toussaint
2023-05-11T10:13:58Z
http://arxiv.org/abs/2305.06698v1
# On-Site Production of Quasi-Continuous Ultra-High Vacuum Pipes ###### Abstract We present a design study for a new production technology for ultra-high vacuum pipes. The pipes are produced in a fully automatised process in sections of hundreds of meters directly in the later location of usage. We estimate the effort for such a production and show that it might be substantially lower than the effort for an off-site production of transportable sections. + Footnote †: journal: Vacuum ## 1 Motivation The Einstein Telescope is a gravitational wave detector of the third generation in preparation in Europe [1]-[4]. It is based on six Michelson interferometers enhanced by Fabry-Perot arm cavities, with an arm length of 10 km each. The interferometers will be located in underground tunnels in the shape of a 10 km equilateral triangle. The laser beams will travel in ultra-high-vacuum (UHV) pipes. A diameter of roughly 1 m is required to enclose the beams. Plain pipe sections of 500 m length are envisioned, connected to each other by pumping stations. In total 120 km of these UHV pipes will be required plus an additional 10 to 20 km of smaller pipes. It will be the largest UHV system ever built. The Einstein Telescope will measure gravitational waves through the changes in the length of the arms caused by the passing waves. The telescope will be able to detect changes down to \(10^{-20}\) m. Fluctuations in the pressure inside the UHV pipes along the path of the lasers induce fluctuations in the index of refraction. This might lead to variations in the phase of the laser waves, mimicking a passing wave. A vacuum level of \(10^{-10}\) mbar at room temperature is required to limit the impact of this effect. Furthermore, a vacuum at this level reduces the probability of laser photons scattered from the mirror surfaces being rescattered from the rest gas back into the laser beam. These photons would be added incoherently to the beam and would create noise, too. The cost of the Einstein Telescope is driven by the cost of the infrastructure, mainly the tunnels and the vacuum system. The observatory is seen as an infrastructure hosting the Einstein Telescope as its initial instrument. It might be replaced after a decade of operation by a new instrument with even higher sensitivity. This instrument will use the same tunnels and vacuum system. Its specifications on the vacuum are still unknown. To serve the next generation, the vacuum system should be able to reach \(10^{-11}\) mbar, if this is achievable with a reasonable effort. Conventionally, beam pipes for gravitational wave detectors and similarly for particle accelerators are built from pipe sections prefabricated off-site and welded together in the tunnels. The length of the prefabricated sections is limited to about 20 m by transportation issues. We investigated the possibility of transporting coils of sheet metal into the tunnels and fabricating 500 m-long pipe sections in a continuous process directly in the tunnels. In this paper we summarise our findings. We present a highly reliable, fully automated production procedure. With a single robot the full length of pipes of 120 km can be produced in about two years with potentially large savings in cost. ## 2 Pipe Production Technology ### Requirements The exact diameter of the pipes is not known yet, but it will be approximately 1 m. It is determined by the diameter of the laser beams. We plan for 1 m pipes. 
The interferometers will include arm-cavities of 10 km length plus around 100 m of distance between the beam splitter and the cavities. The mirrors of the three low-frequency interferometers will be cooled to cryogenic temperature (\(\approx 10\) K), while the arms will be kept at room temperature. The sections around the cold mirror will need special attention. Here we consider only the room-temperature pipes. Approximately 120 km are required. In addition tens of kilometres of UHV pipes of smaller diameter will be needed for auxiliary optics, which we ignore here. To reach a vacuum of \(10^{-10}\) mbar or better, hydrocarbons have to be removed from the inner surfaces by thorough cleaning. The pipes have to be baked under vacuum at a temperature of at least \(120^{\circ}\) C to detach water from the walls. To ensure the surface quality, the pipes are built from stainless steel. For example 304L would be suitable. Some investigations into new materials are ongoing, but the baseline is 304L. We stick to the baseline. Vacuum firing [5]-[8] might be necessary to remove hydrogen from the bulk of the pipe walls. Vacuum firing would have to be done prior to the production of the pipes. It has no direct impact on the production technology. The pipes have to withstand the environmental pressure. Including a reasonable safety margin, a wall thickness of 7 mm to 8 mm is necessary. We follow an approach which is used by the current gravitational wave detectors LIGO and VIRGO [9; 10]. We adopt walls of 3 mm to 4 mm thickness reinforced by stiffener rings. This approach reduces the weight and therefore simplifies the handling, reduces the cost of material, and facilitates the bending of the initial material into pipes. Those stiffener rings are not included in our production scheme. They will have to be added later. Reliability is probably the most critical requirement for the pipe production. The occurrence of a leak will require manual intervention, with a correspondingly large effort for localising and repairing the leak. We try to limit these interventions to a few for the whole project. There will be four pipes running in parallel in each tunnel. Fig. 1 shows a typical tunnel cross section. The UHV-pipes will be built in sections of 500 m length. They will be connected to each other through pumping stations. The pumping stations include the pumps, but also instrumentation for diagnosis and valves. ### Production Concept #### 2.2.1 State of the art in tube and pipeline construction Conventionally the vacuum pipes would be built from prefabricated sections of 15 m to 20 m in length. The sections would be bent from sheet metal, welded, leak checked, and cleaned in a factory, then transported on site, lowered through one of the access shafts into the tunnels, carefully aligned to each other, clamped and butt welded into the final pipe. The vacuum system of the current generation of gravitational wave detectors was built in this way. The length of the sections is limited by transportation, if they are produced in a remote factory, or by the space restrictions in lowering the pipes into the tunnels for an on-site production facility. For the Einstein-Telescope we would need between \(6\,000\) and \(8\,000\) sections. The cost of the vacuum system will be dominated by the personnel required for the handling, especially for the welding in the tunnels. 
We wanted to avoid the thousands of welding joints between pipe sections and improve the reliability of the welding of the pipes by a single seam weld along the length of the pipes. This was the starting point of our project. #### 2.2.2 An alternative approach We propose to transport rolls of sheet metal into the tunnels and to develop a fully automatised robot that produces pipe sections of up to 500 m length in the tunnels in the position where the pipes will be installed eventually. **Raw material.** For UHV applications the standard material is stainless steel of type 304L or 316L. We base our concept on this material. The wall thickness will be 3 mm to 4 mm, just sufficient to withstand the atmospheric pressure. Stiffener rings around the outer circumference of the pipes will be added later to stabilise the pipes. The sheet metal will be produced by a steel company in coils (see fig. 2). Figure 2: Roll of sheet metal as produced by a steel company. Typical dimensions of the coils are an inner diameter of 610 mm and an outer diameter of 1 715 mm. At a thickness of 4 mm a length of the sheets of 510 m is possible. In standard production processes these coils are produced in widths of up to 2 m. For a 1 m pipe a width of about 3.14 m is needed, so we will use two coils of half this width and weld them together to the full width prior to the underground production. Figure 1: A typical layout of the vacuum pipes in a tunnel with 6.5 m diameter. The yellow area around the pipes indicates free space for access to the pipes and for insulation. The weight of a 1.57 m wide coil is approximately 25 t. About 500 of these coils are needed. **Preparation on Surface.** Upon arrival on site, a few steps of preparation need to be done. These require an un-coiling and re-coiling of the material. The edges of the coil need to be prepared for the welding seam. We might want to vacuum-fire the material to reduce the outgassing of hydrogen. Then, two coils will be welded together to create a coil with the full width of 3.14 m. Finally the material is precleaned, bagged, and transported underground with dedicated transport machinery. The elevators are easily capable of taking a 50 t coil. **Pipe Fabrication Underground.** In the tunnel, material from a full coil is fed into a fully automatised robot. There are two basic options for welding pipes: a spiral weld or a longitudinal seam. We decided on a longitudinal seam. It simplifies the welding and avoids having to rotate the bulky pipe sections around their axis. Fig. 3 shows a sketch of the bending scheme. We will use dry bending to avoid contamination of the pipe sections with hydrocarbons. The cross section is determined by the size of the coils. The machinery for bending and welding will fit behind this cross section. We estimated a length of the robot of about 30 m. The pipe is welded from inside out. This has two advantages: The cleaner surface will face the vacuum while the root of the weld is on the outside, where defects, if they do occur, mostly will be located. The seam is located at the position for welding. To arrive at the position of the weld, the welding head is attached to the end of a cantilever, which reaches into the forming pipe from the direction of the coil. The seam between the original half-coils will be located on the top of the pipe. It will have its root on the outside, too. We investigated a number of different welding technologies. We think gas metal arc welding is a viable option, but we decided on laser welding under vacuum. 
Figure 3: Bending of the metal sheet into a pipe. There are several arguments in favour of welding in a vacuum: A very stable weld pool can be achieved in a vacuum, leading to a significantly improved seam quality. Lowering the atmospheric pressure during the weld introduces less hydrogen into the weld and avoids oxidation of the weld area. With an effective energy input and lower vacuum requirements than, for example, electron beam welding, a very stable process can be achieved. For the application in our concept a mobile vacuum is required. Fig. 4 shows the welding head of such a system. Figure 4: Example of a head for laser welding in vacuum (source: U. Reisgen et al., ”Laser beam in mobile vacuum”. Proceedings: Lasers in Manufacturing - LIM 2017, Munich, Germany). Strong pumps reduce the pressure to a few mbar underneath the metal cup surrounding the laser head. In our case the rim of the cup will have to be adapted to the shape of the pipe and the cup will have to slide along the seam as the welding progresses. We believe that a pressure of 30 mbar can be reached in this configuration. But neither the non-flat geometry nor the sliding of the cup on a pipe has been tested yet. The production speed will be limited by the welding. A speed of 1 m/min should be possible, which allows a 500 m section to be manufactured in a single day. There are three potential options for the position of the robot:

* The robot can be mounted stationary in a cavern at the end of the tunnel. The pipe section will be pushed from the machine into the tunnel as its length increases. A support of the pipe is required that allows the pipe to slide along the tunnel with sufficiently low friction. Once a section is completed, it will be pushed further along the tunnel into its final position.
* The robot is located on rails in the tunnel. It produces the pipe sections directly in their envisaged position along the tunnel. It starts at the end of the section and slowly moves away from this position as the pipe section grows in length. The pipe section is put on a hydraulic stand, which allows it to be lifted into its final position in the tunnel.
* A combination of the first two options: The robot is movable on rails between productions, but stationary during the production of a single section. It is moved to the end of the section and pushes the section on sliding fixtures into its final position as it is produced. This is the most complicated option, but it avoids sliding pipe sections over kilometres and has fewer constraints on space. It produces all four pipe sections from a position in the empty part of the tunnel while it fills the tunnel behind with four pipes.

The second option is our preferred solution, but the other two options are viable, too. After the individual sections are completed they have to be connected into the final pipe and cleaned from inside with a detergent-spraying robot. A 500 m pipe section will extend during bake-out by roughly 1.5 m. We assume that each section will be fixed in the tunnel at its centre and extend towards the ends. Every position except for the centre must be able to slide in its fixtures. In between sections a bellows must be installed, capable of absorbing 1.5 m. To install pumps, gauges, etc., holes will be extruded from the pipe walls in certain positions and flanges welded to them. We expect pumping to be necessary every 500 m. This is a preliminary concept. A few aspects are not taken into account yet. 
These are:

* The addition of stiffener rings on the outer circumference of the pipes.
* Several methods are available to control the quality of the seam weld, for example ultrasonic or X-ray imaging to show inner defects, seam tracking to detect geometrical intolerances, or eddy current measurements. We did not implement any in our concept yet. Eventually a leak test of the completed pipe sections is required.
* We did not work out the details of the cleaning robot yet.
* Baffles need to be installed along the pipes to absorb scattered light and to prevent its back-scattering into the laser beam. On the order of 100 to 200 of these baffles will be required in each 10 km arm. The installation procedure is still unclear. We envisage a manual installation after cleaning of the completed pipes.
* The pipes need to be baked at temperatures between 80 and 120\({}^{\circ}\) C to remove water molecules from the inner surfaces. We assume that the same procedures can be applied as for conventionally produced pipes, but no details are worked out yet.
* Despite the high resistance of stainless steel against corrosion, some corrosion protection of the outside of the pipes will be needed. They have to survive for fifty years in the humid atmosphere of the tunnels. The pipes will have to be painted or coated. The procedure still needs to be worked out.

#### 2.2.3 Evaluation **Risk Assessment.** Our concept is based on a new production technology. Extensive prototyping will be necessary to verify the validity of the new concept, especially as it includes a welding technology that is state-of-the-art but has not been used for pipe welding before. Should it turn out that laser welding under vacuum is not feasible, GMA-Hybrid welding or gas metal arc welding would be alternatives. The partners of the project are preparing for the prototyping. The largest risk in production is vacuum leaks. Let us first remember that we are minimising the amount of welding with our concept. The highest risk in a conventional system is the connection welds, where pipes that are never perfectly round and seldom have a constant diameter have to be aligned carefully for each connection. Specialised clamping devices are used to force the pipe ends into the same shape and to a precise distance between the weld faces. Deviations from the ideal weld preparation (which are not unlikely due to the complicated alignment process) can lead to weld defects that must be detected and repaired. With our concept we reduce the number of these welds from around \(6\,000\) to \(8\,000\) to only \(240\). The longitudinal weld along the seam of the pipe sections reduces the length of the welding further, compared to the commonly used spiral welding. But the largest risk is still welding defects. We will use a fully automatised continuous welding process. Most defects appear at the beginning of the welding process when parameters are not stable yet. The long welds reduce the number of these ramp-up phases and therefore also the risk of defects. State-of-the-art monitoring of the quality will be applied to identify potential welding problems. Smaller defects can be repaired in situ with the usual techniques. Unfortunately, if defective sections must be replaced, this procedure is difficult and needs a large effort. Leak detection can only be done in the tunnel, after a section is completed. **Production Time.** The speed of production will be limited by the continuous welding of the longitudinal seam. 
At least 0.5 m/min can be achieved, but also 1 m/min is plausible, so that a 500 m section can be produced in one day within two shifts. Prior to the production, the machine has to be prepared. For example, it has to be loaded with all the materials. After the production of a section, some service and maintenance will be required on the machine. Altogether, we assume that it will take three days for each section. We estimate that a team of five people is needed for the operations. In total, 240 sections of 500 m are needed. They can be produced in two to three years with a single machine. Parallel production in different tunnels is possible if more than one machine and the corresponding personnel are made available. **Interference with other Installations.** During the production of pipe sections in a tunnel, this tunnel will be mostly blocked for other installations. Installation of the completed sections in their final position and joining of the sections should be possible, though. Some space must be reserved in an adjacent cavern for logistics. The tunnel must be kept free for transportation of the materials. Above ground a space will be needed for the reception of the coils and other materials, their uncoiling, pre-welding, cleaning, etc. Space will also be needed for the storage of materials. **Cost.** The density of the material is 7.93 g/cm\({}^{3}\) (304L or 316L). With a wall thickness of 4 mm and a diameter of 1 m, the weight of the pipes is 99.6 kg/m or about 50 t per section. The cost of the raw material in October 2022 was about 3.5 EUR/kg for 304L and 5 EUR/kg for 316L. This gives a total cost of the raw material between 42 mio. EUR and 61 mio. EUR. It should be noted that prices are subject to fluctuations and that the estimates are only up-to-date on a daily basis. For the whole system 480 coils of 25 t each need to be transported to the site. The cost of transportation depends on the distance from the steel manufacturer to the site. The coils could be transported by train or by truck. For any realistic distance the cost of transportation will be small compared to the cost of the material. We will ignore it in the following. We estimate that the cost of the main machine (coil forming and welding) is about 5 mio. EUR. The cost of the machine for the pre-fabrication above ground is estimated at another 2.5 mio. EUR. Special machines for the transportation of the coils will be needed. We estimate 1.25 mio. EUR for five of those machines. The elevator will have to be modified for the transportation of the coils and the machinery. This might cost another 1 mio. EUR. Overall the basic investment will be around 10 mio. EUR. Teams of five people will operate the production. For a continuous production three of these teams will be needed at a cost of about 1.1 mio. EUR annually. A back office at a cost of 0.75 mio. EUR per year has to be added. This sums up to around 5 mio. EUR for personnel for a production time of two to three years. Please keep in mind that this is a preliminary concept. We only laid out the most important production steps. The cost of these is dominated by the cost of the material. We believe that the missing cost for stiffeners, cleaning, etc. will not change this statement significantly. 
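The material figures above are easy to cross-check with the thin-wall approximation for the pipe mass. The short sketch below is our own illustration, not part of the study; it reproduces the quoted ~99.6 kg/m and ~50 t per section, and yields roughly 42 to 60 mio. EUR of raw material cost, consistent with the 42-61 mio. EUR range quoted above.

```python
import math

rho = 7930.0          # kg/m^3, density of 304L/316L stainless steel
D, t = 1.0, 0.004     # pipe diameter and wall thickness in m
L_total = 120_000.0   # total length of UHV pipe in m
L_section = 500.0     # length of one section in m

mass_per_m = math.pi * D * t * rho                 # thin-wall approximation
print(f"mass per metre: {mass_per_m:.1f} kg")      # ~99.6 kg/m
print(f"mass per section: {mass_per_m * L_section / 1000:.0f} t")  # ~50 t

for grade, eur_per_kg in [("304L", 3.5), ("316L", 5.0)]:
    cost = mass_per_m * L_total * eur_per_kg / 1e6
    print(f"raw material ({grade}): {cost:.0f} mio. EUR")
```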
## 3 Conclusions and Next Steps We developed a new production technology for long vacuum pipes. It is based on a fully automatised machine moving through the tunnels, producing continuous vacuum pipes from a coil of raw material at a speed of about 1 meter per minute. We propose to use laser welding under vacuum to close the longitudinal seam of the pipe sections. The new production concept looks promising. Effort and cost for transportation and production are potentially lower compared to an off-site production of transportable sections. The fully automatised long welding seams promise an improved reliability of the production. Here we described the core of the new production technology. Many details, such as the mounting of the stiffeners, the cleaning of the pipes, or the installation of baffles, are not worked out yet. The partners of this project are now preparing to work out all the details and to produce a first prototype. ## Acknowledgement This project was executed as a cross-border cooperation of SMEs within the Interreg project ET2SMEs ([https://et2smes.eu/](https://et2smes.eu/)). The ET2SMEs project is carried out under the Interreg V-A Euregio Meuse-Rhine Programme, with financial support from the European Regional Development Fund (ERDF).
2302.02459
A High Performance Compiler for Very Large Scale Surface Code Computations
We present the first high performance compiler for very large scale quantum error correction: it translates an arbitrary quantum circuit to surface code operations based on lattice surgery. Our compiler offers an end to end error correction workflow implemented by a pluggable architecture centered around an intermediate representation of lattice surgery instructions. Moreover, the compiler supports customizable circuit layouts, can be used for quantum benchmarking and includes a quantum resource estimator. The compiler can process millions of gates using a streaming pipeline at a speed geared towards real-time operation of a physical device. We compiled within seconds 80 million logical surface code instructions, corresponding to a high precision Clifford+T implementation of the 128-qubit Quantum Fourier Transform (QFT). Our code is open-sourced at \url{https://github.com/latticesurgery-com}.
George Watkins, Hoang Minh Nguyen, Keelan Watkins, Steven Pearce, Hoi-Kwan Lau, Alexandru Paler
2023-02-05T19:06:49Z
http://arxiv.org/abs/2302.02459v3
# A High Performance Compiler for Very Large Scale Surface Code Computations ###### Abstract We present the first high performance compiler for very large scale quantum error correction: it translates an arbitrary quantum circuit to surface code operations based on lattice surgery. Our compiler offers an end to end error correction workflow implemented by a pluggable architecture centered around an intermediate representation of lattice surgery instructions. Moreover, the compiler supports customizable circuit layouts, can be used for quantum benchmarking and includes a quantum resource estimator. The compiler can process millions of gates using a streaming pipeline at a speed geared towards real-time operation of a physical device. We compiled within seconds 80 million logical surface code instructions, corresponding to a high precision Clifford+T implementation of the 128-qubit Quantum Fourier Transform (QFT). Our code is open-sourced at [https://github.com/latticesurgery-com](https://github.com/latticesurgery-com). ## I Introduction Applying surface quantum error correcting codes (QECCs) efficiently to large computations is challenging in terms of the classical computing resources necessary for the compilation process. Compilers tailored for QECC are only starting to appear, often with significant limitations with respect to the scale of the circuits that can be handled or the compilation time. Large scale QECC compilation is a necessity, because practical algorithms, like Shor's and Grover's, assume high-quality qubits with a very low error rate [50], but we are unlikely to obtain hardware (physical) qubits with such fidelity in the near future [44]. QECCs solve this issue by using a large number of error prone physical qubits to encode higher fidelity _logical_ qubits. For example, a quantum factoring algorithm needs roughly 1000 qubits and millions of gates to factor a 1000-bit number [42, 15]. Consequently, practical algorithms require very large scale quantum computers, while only some carefully crafted examples of problems where quantum hardware has an advantage with small devices exist [4]. Surface codes are a family of QECCs that require low qubit connectivity and tolerate a reasonably high hardware error rate (such as between 0.1% and 1%) to create good logical (computational) qubits [55, 14, 52], and only require degree four nearest neighbour connectivity. These properties make them a promising option for error correcting devices with a couple hundred logical qubits. Physical devices with compatible layouts have already been made or proposed, albeit on a small scale [2, 30, 4, 6]. Examples of larger scale quantum circuits protected by surface QECCs were compiled manually in [16, 17]. The complexity of optimising surface code circuits has been shown to be related to NP-hardness [19, 56]. We present and demonstrate the extremely high scalability of our efficient QECC compiler. This is a step forward for quantum software: we create a streaming pipeline and a compilation environment for the compilation and optimisation of very large scale QECCs. Our high performance pipeline makes it possible to process extremely large circuits (which would not fit in memory). We can compile directly, in a streaming process, by reading and writing to mass storage. Streaming enables the real-time operation of our compiler, meaning that this tool may be integrated in the classical control software necessary to operate quantum computers [38]. This paper is organised as follows: In Sec.
II we introduce the concepts necessary for presenting the compilation methods and workflow. Sec. III describes the two-stage compilation pipeline that consists of gate level processing (Sec. III.2) and logical operation routing (Sec. III.3). The latter also includes a fast method to perform state vector simulation that takes into account the entangling and disentangling action of the lattice surgery operations. Finally, Sec. IV illustrates the performance of our compiler. We compile within seconds a high-precision 128-qubit Quantum Fourier Transform (QFT) [36]. To the best of our knowledge, this is the largest-scale compilation of this kind. Figure 1: Example output of our compiler: discrete time steps of a surface code computation. The time axis runs towards the back of the image. The discrete time steps are depicted as a sequence of slices, evolving over time. Each patch (colored square) holds an error-corrected logical qubit. Quantum operations are implemented by merging and splitting patches. ## II Background This section introduces the necessary background details for describing the compilation process. The application of error correction to quantum circuits resembles the process well known to classical computer scientists of program compilation: the _compiler_ reads code in a programming language (higher level quantum gates) and outputs machine instructions (lattice surgery quantum gates). We opted for flexibility and developed a compiler with a well-defined intermediate representation to separate circuit pre-processing from surface code instruction layout. Compiling surface code instructions for large scale computations is at present interesting for at least two purposes: one is being able to produce reliable resource estimates, and the other is to start preparing for when we will have such devices, so that hardware engineers can start designing devices with instruction sets for error correction in mind. We assume the reader is familiar with the basic concepts of quantum computing and quantum information [35, 36]. We assume the conventional meaning for common quantum gates (Phase gates S and T, Hadamard gate H, CNOT, Toffoli - to a large extent, the standard gates as listed in Table 2 in the Appendix) and the Pauli matrices (I, X, Y, Z). By the phase rotation gate \(R_{Z}(\theta)\) we mean: \[R_{Z}(\theta)=\begin{bmatrix}1&0\\ 0&e^{i\theta}\end{bmatrix}\] We will frequently use Pauli product rotations, for which we assume the following: given an axis P (which may be a Pauli matrix or a tensor product of Pauli matrices) we denote \(P(\theta)=\exp(-i\theta P)=\cos(\theta)I-i\sin(\theta)P\). Note that under this convention, \(R_{P}(\theta)=P(\frac{\theta}{2})\) for \(P=X,Z\). Also, when the Pauli matrices appear with sub-indices, e.g. \(Z_{1}Z_{2}Z_{3}Z_{4}\) in Fig. 2, we mean the tensor product of the Pauli matrices applied to the indexed qubits; for instance, \(X_{1}Z_{2}\) denotes \(X\otimes Z\) applied to qubits 1 and 2. Similarly, we use the gate \(R_{X}(\theta)=HR_{Z}(\theta)H\). ### Surface Codes A major challenge with the current generation of quantum computers is the occurrence of errors while performing computations. Errors may occur because of control system faults or stray interactions with the environment. A proposed solution for avoiding errors is Quantum Error Correcting Codes (QECCs). These codes add some degree of fault tolerance to computations by using many _physical qubits_ to form fewer but more reliable abstract _logical qubits_[51].
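As a quick numerical sanity check of the rotation conventions above, the following NumPy sketch (ours) verifies the closed form of \(P(\theta)\) and the relation \(R_{Z}(\theta)=Z(\frac{\theta}{2})\) up to a global phase.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_rotation(P, theta):
    """P(theta) = exp(-i*theta*P) = cos(theta) I - i sin(theta) P (P^2 = I)."""
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

theta = 0.37
# The matrix exponential agrees with the closed form:
assert np.allclose(expm(-1j * theta * Z), pauli_rotation(Z, theta))

# R_Z(theta) = diag(1, e^{i*theta}) equals Z(theta/2) up to a global phase:
RZ = np.diag([1, np.exp(1j * theta)])
assert np.allclose(RZ, np.exp(1j * theta / 2) * pauli_rotation(Z, theta / 2))

# Multi-qubit axis, e.g. X_1 Z_2 = X (x) Z acting on qubits 1 and 2:
XZ = np.kron(X, Z)
assert np.allclose(pauli_rotation(XZ, theta),
                   np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * XZ)
print("rotation conventions verified")
```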
Surface codes are a family of QECCs that aim at improving computational fidelity by entangling physical qubits in a _physical lattice_[47, 36]. This kind of code, with topological properties, was first theorized in connection with exotic particles known as "anyons" [29]. Surface codes are appealing because they are well understood and feature a high error threshold. In the near future, quantum computing hardware with thousands of qubits might be realized [45, 46, 6], which would be able to operate a surface code cycle on a lattice of qubits. The key step of surface code error detection is stabilizer measurement, as shown by the shaded squares in Fig. 2. These measurements act as parity checks on bit flips or phase flips of a square lattice of _data qubits_. The surface code and its cycle (the sequence of quantum gates applied for enforcing the code constraints) only tell us how to protect a lattice from error. The surface code distance indicates how much error is tolerated [9]. ### Logical Qubits and Logical Operations Logical (computational) qubits are encoded by "cutting out" portions of a device's physical lattice into _patches_, which are cluster states error corrected by the surface code cycle. This encoding of logical qubits is known as the _planar code_[11, 9]. Patches have boundaries outside of which they don't interact, except when performing certain logical operations (Sec. II.2). Fig. 3 outlines how patches relate to the surface code. The patch-based approach has been shown to be a resource-efficient choice for quantum error correction [32, 21, 43]. We will be looking at square patches with two kinds of boundaries that encode a single qubit. Patch size is proportional to code distance and the performance of the decoding algorithm (e.g. [14; 22]). For our intents, it suffices to know that the size of the patches will depend on the physical error rate of the device, the length of the computation and the desired success rate of the logical computation. In Sec. IV we estimate the resources [20] necessary to execute the compiled output. Having obtained logical qubits, we require a method to perform operations between them. Table 1 offers an overview of all the surface code operations supported by our compiler at the logical level. Some logical operations are performed directly on patches: Pauli X and Z [8], and Hadamard gates [21], can be implemented in this way and are called _transversal_ operations. It is also possible to directly initialize a patch in the \(\ket{0}\) or \(\ket{+}\) states and to measure in the X or Z basis [32]. For the remaining operations needed to complete a universal gate set we use lattice surgery [21]. Figure 2: A graphical depiction of a surface code layout. The white circles are _data_ qubits protected from errors by measuring stabilizers around them. The squares in the lighter and darker shades of yellow represent stabilizer measurements. For example, the squares marked with \(Z\) and \(X\) represent the \(Z_{1}Z_{2}Z_{3}Z_{4}\) and \(X_{5}X_{6}X_{7}X_{8}\) stabilizer measurements respectively. If an error occurs in a data qubit, such as a phase flip occurring on qubit 9, the \(X\) stabilizers around it will pick it up by changing outcome (_syndromes_, highlighted in purple). There are advanced methods to decode sets of errors (e.g. [22, 14, 53]). Errors can either be corrected on the spot or tracked classically by inverting later readouts. This cycle of detecting, decoding and correcting is referred to as the _surface code cycle_.
This protocol achieves entangling multibody measurements by merging and splitting patches. We use these measurements along with prepared ancilla states to implement CNOT as shown in [21], and the S and the T gates (Fig. 12 in the Appendix). T gates utilize a _magic state_, \(\ket{m}=\frac{1}{\sqrt{2}}(\ket{0}+e^{i\frac{\pi}{4}}\ket{1})\), which in the surface code cannot be initialized directly with a high fidelity. These states have to be prepared by _distillation_. There are several protocols for magic state distillation [7; 33], but for our compilation purposes it suffices to acknowledge the fact that these distillations occupy some amount of space on the device's lattice and that they have a certain duration in time: distillation regions are described by their _bounding box_, which includes a time axis for how long it will take to produce the next magic state. ## III Methods We address the problem of taking a circuit specified in a machine readable format, and converting the circuit to the surface code operations outlined in Table 1. For small circuits it is easy enough to perform such a conversion by hand, but automation is necessary for large scale circuits. Our _compiler_ is a computer program that reads text in a _source_ formal language and outputs machine code in another language, called the _target_. In our case the source is a quantum circuit in a subset of OpenQASM 2.0 [10], while the target is a JSON encoding of the logical lattice instructions (Sec. II.2 and Fig. 1). \begin{table} \begin{tabular}{|l|l|} \hline **Operation** & **Method** \\ \hline \hline Patch initialization in the \(\ket{0}\) and \(\ket{+}\) states & Direct initialization \\ \hline Single patch measurements & Direct measurement of data qubits [21] \\ \hline Pauli X and Z & Transversal in surface codes [14] \\ \hline Hadamard gates & Transversal in planar code patches [21] \\ \hline Entangling multi body measurements & Lattice surgery merges and splits, mediated by ancillas for routing [32] \\ \hline Boundary rotation & Patch deformation [32] \\ \hline S gates & Lattice surgery with twist defects [34] \\ \hline Preparation of magic states \(\ket{m}=\frac{1}{\sqrt{2}}(\ket{0}+e^{i\frac{\pi}{4}}\ket{1})\) & Distillation in dedicated regions [33] \\ \hline \end{tabular} \end{table} Table 1: The list of logical surface code operations supported by the compiler. The operations are formalized into _logical lattice instructions_ (LLI), which serve as a central intermediate representation of our compiler. LLI decouples the pre-processing to surface code instructions from laying them out on an abstract lattice. Figure 4: Lattice surgery of patches. Patches are _merged_ by activating the stabilizer measurements with the data qubits between them (blue regions). This operation causes the two patches to become one, hence losing a degree of freedom and projecting the logical state into a subspace. After stopping the stabilizer measurements and measuring the mediating data qubits, the patches are _split_. Overall, this operation is equivalent to a logical multi body measurement [21]. The observable depends on the boundaries: rough for \(X\) and smooth for \(Z\). This figure shows measurements of the observables \(Z\otimes Z\) (top) and \(Z\otimes X\) (bottom). Figure 3: Abstracting physical qubits to patches. We omit the details of stabilizers and data qubits that make up patches, and instead represent distance-independent features. It is always possible to compute back these details about stabilizers from the output format and code distance.
The picture to the left shows how this abstract representation relates to the physical implementation, and to the right there is a fully abstract patch, which has its own logical state. The different stabilizers on the boundaries yield two different kinds of boundaries, which are often referred to as _rough_ and _smooth_. There have been other compilers that take circuits and compile them to instruction sets for error correction. For example, OpenSurgery [40] leverages lattice surgery as well. Our compiler's source and target are similar to OpenSurgery's, but it improves on the compilation time performance, offers new optimizations, adds the ability to customize the layout, and handles parallel magic state distillation. The error correcting compiler SurfBraid [37] achieves a similar goal for braided surface codes. There are also compilers that focus specifically on routing long range surface code interactions [23, 5]. While we do tackle such a problem, as it is necessary for our overall compilation goal, the focus of this project is broader in scope and we may integrate existing tools at a later stage. Our compiler offers the flexibility to design custom very large scale layouts with a layout specification and to automatically map large-scale circuits to them. A small scale procedure for mapping algorithms onto surface code architectures, exploring the trade-offs of different layouts, has been presented in [31]. Manually obtained surface code layouts with techniques such as the AutoCCZ for optimizing ripple carry adders were presented for example by [16]. Finally, one last approach to quantum compilers worth mentioning are variational compilers [57], which share with our project the challenges of circuit pre-processing. ### The Compilation Pipeline The compiler operates a two-stage pipeline (Fig. 5): 1) a pre-processing stage, and 2) a layout and routing stage. The two stages communicate through an intermediate representation we refer to as _logical lattice instructions_ (LLI from Table 1). The LLI contains all the information about the logical operations happening on the lattice, but none about the physical locations of the patches, or about routing and distillation regions. The physical qubit lattice will be operated according to the LLI instructions (Table 1). The first stage, the _gate level processing stage_, operates mostly at the logical circuit level. We resort to a universal gate set based on surface code operations. We gradually process the input circuit's gates to align with our surface code instructions. Once the circuit is in a suitable format (only Clifford+T gates or certain Pauli rotations), the circuit maps 1-to-1 with surface code operations and is written down as LLI; a minimal example of such a stream is sketched below. The second stage is the _slicer_. Herein, the LLI are combined with a layout specification in space and time (Fig. 7). The LLI language is circuit layout agnostic; the mapping of the logical qubits to the physical lattice may thus have a great impact on the efficiency of the compiled circuit. The result of these steps is a "sequence" of _slices_ of the physical lattice. The slices depict the state of the computation at each point in time, as shown in Fig. 1. We offer two such slicers: one written in Python, geared towards the verification of small scale circuits (Sec. III.3.1), and a high performance one written in C++ for large scale circuits (Sec. III.3.2).
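To make the intermediate representation concrete, here is a toy LLI stream and a skeleton of a streaming consumer in the spirit described above. The exact JSON schema lives in the compiler repository; the field and operation names below are illustrative assumptions only, loosely modeled on Table 1.

```python
import json

# Hypothetical LLI records; field names are illustrative, not the real schema.
lli_stream = [
    {"op": "Init", "patch": 0, "state": "|0>"},
    {"op": "Init", "patch": 1, "state": "|+>"},
    {"op": "MultiBodyMeasure", "patches": {"0": "Z", "1": "X"}},
    {"op": "MeasureSinglePatch", "patch": 1, "basis": "Z"},
]

def slice_stream(instructions):
    """Toy streaming slicer loop: instructions are consumed one at a time,
    so only O(1) state is held in memory, mirroring the pipeline design."""
    for slice_id, line in enumerate(instructions):
        inst = json.loads(line) if isinstance(line, str) else line
        # A real slicer would route patches and update the lattice here.
        yield slice_id, inst

for sid, inst in slice_stream(lli_stream):
    print(f"slice {sid}: {inst['op']}")
```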
### Gate Level Processing The first stage takes a logical circuit specified in a subset of OpenQASM 2.0 [10]. We offer two ways to pre-process the circuit: 1) with Pauli rotations and Pauli product measurements, and 2) directly with higher level quantum gates such as Toffoli gates. In both cases, we first parse the circuit into a list of gates, using either Qiskit [13], PyZX [28], a custom parser, or a combination of the three depending on the circuit. The gate list expression of the input circuit might use gates which are not supported by the error-correction procedure. In this step we reduce the gate set so that it easily translates to LLI. Our custom parser is able to break down very small angle rotations, such as \(Z(\frac{\pi}{2^{128}})\), by symbolic processing of the argument. These rotations are needed to compile, for example, a 128-qubit quantum Fourier transform (QFT) circuit. After parsing, the list of gates is passed through the pipeline to the next stage. First, controlled gates are broken down to CNOTs and single qubit rotations using the identity in Fig. 13. The circuit now only has single qubit Clifford gates, CNOTs and single qubit rotations. At the last stage of pre-processing in the gate model, single qubit rotations smaller than \(\frac{\pi}{4}\) are approximated by single qubit Clifford+T gates. It is possible to convert controlled-rotation gates to Clifford operations plus some small angle \(Z(\theta)\) rotations (Figure 13 in the Appendix). The latter are not Clifford+T and are difficult to perform in a fault-tolerant way [25]. We achieve arbitrary \(Z(\theta)\) rotations by approximating them with Clifford+T gates, for which we leverage the Gridsynth package [48], which outputs approximations constituted of sequences of H, X, Z, S and T gates. Figure 5: The pipeline as implemented in the compiler. The T gates are performed by consuming magic states, which are prepared in dedicated _distillation regions_[8; 33]. We utilize two methods to convert the Gridsynth approximation to LLI. The first is to directly apply the gates with the methods of Table 1: H, X and Z transversally, S with a twist and T as a \(Z(\frac{\pi}{8})\) rotation as shown in Fig. 12. The second approach, which we refer to as _Pauli rotation compression_, is shown in Fig. 6, and consists of interpreting the gate sequences returned by the Gridsynth approximations as a sequence of Pauli rotations of varying angles. The direct application of gates is simpler and results in the same Clifford corrective terms for every rotation. With Pauli rotation compression the Clifford corrective terms change for every angle, thus more complex classical control would be required by a downstream stage. In the Appendix we present an algorithm for Clifford gate optimization. ### Slices and Routing To overcome the logistical challenges of structuring a computation on a surface code device, we arrange the computation in space and time. Space structure is given by partitioning the physical lattice into square _cells_. A cell may or may not hold a patch, may be part of a distillation region, or may be used for routing, but patches, distillation regions and routing areas are always placed in accordance with cell boundaries (Figs. 1, 3 and 7). Time structure is given by thinking of the computation in terms of _slices_. Surface code computations can be viewed as 3D structures in space-time [16; 39; 40], and a slice is a plane through the structure at a fixed time value (Fig. 1).
In a nutshell, a slice is a temporally discretized partition of the computation (_clock timesteps_ in Litinski [32] or _moments_ in Google Cirq [12] terminology, for example). Each slice represents a snapshot of the LLIs that are happening simultaneously on the lattice - slice duration is given by the duration of the slowest LLI. _Routing_ is the problem of deciding how the cells of a slice are allocated to patches or reserved for other purposes. Finding optimal layouts has a great impact on the depth of the computation. Different layouts can for example be used to trade off space for time [31]. Layouts may need to change depending on the task during an algorithm. For example, the oracle in Grover's search algorithm may be very different from the implementation of the diffusion operator [24]. We defined our own _configurable layout specification_ (Fig. 7). The compiler reads a text file containing the layout specification and produces slices with patches arranged accordingly. #### iii.3.1 On-the-fly, Functionally Verified Slicer Our first slicer supports real-time, on-the-fly functional verification of correctness. This slicer can be used as a preliminary verification of smaller scale lattice surgery circuits. The slicer and the simulation operate on an array of patches of variable length and assume that all magic states have been prepared ahead of time. The verified slicer is very powerful when it comes to understanding the details of a small computation, and we used it in the development of the compiler. The simulator, called the _Lazily Tensored State-vector Simulation_ (LTSvS), has the major feature of being able to simulate patch states at the LLI instruction level, such as simulating multi body measurements and Pauli operator gates. LTSvS tensors at the matrix level only when strictly required; otherwise it just tracks the fact that the global state is given by a tensor product of sub-vectors. LTSvS offers a great performance advantage over naive state vector simulation: our simulator doesn't expand the full state vector of all logical qubits on the lattice. In particular, qubits that are known to not be entangled, because they were just initialized or measured, are automatically tracked in separate sub-state vectors. Qubits may be entangled within a sub-state vector. An example of unentangled qubits is the array of magic states waiting to be used or ancilla patches. Figure 6: Pauli rotation compression of Clifford+T approximations of small angle rotations, e.g. \(R_{Z}(\frac{\pi}{2^{128}})\): gate sequences obtained from Gridsynth are interpreted in the Pauli frame by breaking them into subsequences. For instance, the sequence \(HSHTSHX\) would be split as \(HSH\), \(TS\), \(H\), \(X\) and would become the sequence \(X_{\frac{\pi}{8}}Z_{\frac{\pi}{8}}HX\). Figure 7: ASCII specification for patch layouts. Q indicates a patch holding a logical qubit, r marks cells that are reserved for routing (the cyan "snakes" of Fig. 1). Numbers 0 to 9 are used to identify distillation regions. The boundaries of the distillation regions are computed by a connected components search for same numbers, so it is possible to have more than 10 distillation regions. Magic states produced by these regions are queued in the r cells neighbouring a distillation block. Finally, \(\blacktriangle\) marks cells reserved for allocating new ancilla patches, such as the \(|+\rangle\) states that mediate CNOTs or the places for the \(Y\) eigenstates used by \(\frac{\pi}{8}\) Pauli rotations.
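The following toy sketch (ours) illustrates the lazily tensored bookkeeping idea: sub-state vectors are kept separate and combined with a Kronecker product only when an operation spans two of them. It is a drastically simplified model of the strategy described above, not the actual LTSvS code.

```python
import numpy as np

class LazyState:
    """Track disjoint groups of qubits as separate sub-state vectors,
    tensoring only when an operation spans two groups."""
    def __init__(self, n):
        # Every qubit starts in |0>, unentangled from all the others.
        self.groups = [[q] for q in range(n)]
        self.vecs = [np.array([1.0, 0.0], dtype=complex) for _ in range(n)]

    def _group_of(self, q):
        return next(i for i, g in enumerate(self.groups) if q in g)

    def merge(self, q1, q2):
        """Combine the sub-vectors holding q1 and q2; np.kron happens only here."""
        i, j = self._group_of(q1), self._group_of(q2)
        if i == j:
            return
        self.vecs[i] = np.kron(self.vecs[i], self.vecs[j])
        self.groups[i] += self.groups[j]
        del self.vecs[j]
        del self.groups[j]

s = LazyState(6)          # six logical qubits, six 2-dim vectors
s.merge(0, 1)             # an entangling measurement touches qubits 0 and 1
print([len(v) for v in s.vecs])   # [4, 2, 2, 2, 2] -- never a 64-dim vector
```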
Methodologically, the LTSvS simulator is very similar to matrix-product-state (MPS) simulation techniques [54], which are efficient on circuits with low counts of entangling gates. Compared to MPS simulators, e.g. from Qiskit [13], ours is fine tuned for computations with many ancillae and measurements, can handle classical control and can be executed in parallel with the compilation process. #### iii.2.2 High Performance Slicer The main goal of our compiler is to handle very large scale circuits with thousands of logical qubits and millions of LLIs. At this scale every CPU clock cycle and every byte are precious. Our high performance compiler is written in C++ because it comes with zero-cost abstractions [49]. The first step of the slicer is to read a layout file (Fig. 7) in order to create an abstract layout representation that describes the device layout. The layout is used to initialize a slice template which will be reused for the routing. The template will be recomputed when the layout dictates it. Our implementation of slice processing keeps memory usage to a minimum because only \(O(1)\) slices are ever kept in memory by the slicer itself. Moreover, this representation is stored in a high performance data structure based on bitstreams and hash-maps. The representation will be used for computing routes using a variant of Dijkstra's algorithm. The slicer streams LLI from text or standard input, updating the slice with each instruction, evolving the slices over time. Since the slicer can also stream read from standard input and write to standard output, it's possible to implement external programs (e.g. Python scripts or other command line tools) that visit slices by reading from standard input. Given the capability of evolving the lattice state, the slice processing functionality is implemented by defining a C++ functor to visit all slices. The streamed evolution of slices includes managing distillation, queuing magic states [39; 41], initializing ancillas and LLI operations. A user may collect statistics on slices, such as magic state queue and routing space usage, in seconds, without having to store terabytes of slices that would slow down the processing. At the same time this functor approach has the advantage of hiding the implementation details from the client so that they can focus on the processing functionality. To place routing regions, we used our own implementation of Dijkstra's algorithm, which works _in place_, such that our tool can search the lattice without constructing a graph of it. Our implementation has close to zero overheads with respect to the memory and CPU instructions needed to translate back and forth between the lattice layout and the graph needed for performing Dijkstra's algorithm. To further speed up routing, we employ a _cached routing_ technique where previously computed routes are saved and reused later; a toy version of this idea is sketched below. ## IV Results We implemented our compiler and the source code is open sourced at [https://github.com/latticesurgery-com](https://github.com/latticesurgery-com). The compiler is continuously tested and verified for functional correctness with modern continuous integration, while practical performance plays a significant role in its design. Our compiler offers a wide range of configuration options, ranging from optimization heuristics and intermediate representations of the computations to flexible layouts. We present results for compiling very large quantum circuits, and focus on scalability and resource estimation.
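Before turning to the results, here is a toy version (ours) of the cached routing idea from the previous section: a shortest path between two cells of a free/blocked grid, with previously computed routes memoised. The production implementation is the in-place C++ version described above, so everything here, including the cell encoding, is illustrative.

```python
import heapq

def dijkstra_route(free, src, dst, cache={}):  # default-arg dict = route cache
    """Shortest route between grid cells via Dijkstra, with memoisation."""
    if (src, dst) in cache:
        return cache[(src, dst)]
    rows, cols = len(free), len(free[0])
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc] \
                    and d + 1 < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = d + 1
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (d + 1, (nr, nc)))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    cache[(src, dst)] = path[::-1]
    return cache[(src, dst)]

# 1 = routable cell (an 'r' cell of Fig. 7), 0 = occupied by a patch/distillation.
lattice = [[1, 1, 1],
           [0, 0, 1],
           [1, 1, 1]]
print(dijkstra_route(lattice, (0, 0), (2, 0)))  # routes around the blocked cells
```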
### 128-Qubit QFT To validate the performance of our compiler and high performance slicer we took a circuit that has widespread use and presents technical fault-tolerant execution challenges. The Quantum Fourier Transform (QFT) is a crucial component providing quantum speedup to algorithms such as Shor's algorithm and quantum phase estimation [36]. The fault-tolerant implementation of the QFT is challenging because of the presence of small angle controlled rotations. For the QFT to retain the desired level of precision, these have to be approximated by a long sequence of Clifford+T gates, which results in a very long computation. We set Gridsynth's precision to \(10^{-41}\) for the Clifford+T approximations of small angle rotations, which results in thousands of gates. Such a number was chosen as it is 3 orders of magnitude smaller than the smallest angle rotation in our circuit, \(\frac{\pi}{2^{128}}\approx 10^{-38}\), after expanding out the controlled rotations (Sec. III.2). Figure 8: Time taken to compile a QFT with the C++ slicer on a laptop (Intel i5350U, 8GB RAM) and number of LLI instructions for different QFT sizes. The Clifford+T implementation of the QFT requires thousands of gates for each controlled rotation, to retain the rotation accuracy we set (\(10^{-41}\)). The number of controlled rotations increases quadratically with the number of qubits the QFT is applied to. Thus, at 128 qubits and after small angle rotation approximation, the QFT circuit has more than 80 million LLI without gate to Pauli compression. The number of LLI includes Clifford corrective terms that are meant to be applied depending on measurement outcomes. Thanks to concurrent magic state distillation, there are no idle slices waiting for magic states to be produced. We used the high performance slicer to compile the 128-qubit QFT: for example, laying out the slices for the roughly 80 million LLI of the 128-qubit QFT takes less than 15 minutes on an ordinary laptop. The generation of the LLI of a QFT on 128 qubits takes negligible time (under 10 s on a laptop). Fig. 8 illustrates the performance of the C++ slicer for the QFT128 circuit. ### Resource Estimation A challenging problem in the community of fault-tolerant QC is determining the amount of physical resources necessary to carry out a logical computation with a certain degree of precision. Such resources are often quantified by physical qubits over time - often called a _space-time volume_. The depth of the circuit and the required magic state fidelity affect the code distance, which in turn affects the number of physical qubits required. Moreover, the degree of parallelization achieved at the routing stage will affect the computation depth. Our compiler includes a prototypical resource estimator for surface code computations. We use the Qentiana [18] software to estimate such values, and computed some code distances for randomized circuits of H, T and CNOT gates (Fig. 9).
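As a back-of-envelope cross-check of the figures above, the sketch below (ours) reproduces the LLI scaling. The assumed average of 10,000 LLI per controlled rotation is back-solved from the quoted totals, not a measured value, and the T-count rule of thumb for Gridsynth-style approximations is taken as an assumption here.

```python
import math

def qft_lli_estimate(n_qubits, lli_per_rotation=10_000):
    """Back-of-envelope LLI count for an n-qubit Clifford+T QFT:
    n(n-1)/2 controlled rotations; lli_per_rotation is an assumed average
    (approximation length plus corrective terms), not a measured value."""
    controlled_rotations = n_qubits * (n_qubits - 1) // 2
    return controlled_rotations * lli_per_rotation

n = 128
print(f"{n}-qubit QFT: {n*(n-1)//2} controlled rotations")   # 8128
print(f"~{qft_lli_estimate(n)/1e6:.0f} M LLI")               # ~81 M, cf. ~80 M above
eps = 1e-41
print(f"~{3*math.log2(1/eps):.0f} T gates per rotation "     # rule of thumb
      f"at precision eps={eps:g}")                           # ~3*log2(1/eps)
```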
## V Conclusion We introduced and described a compiler for lattice surgery quantum circuits and showcased some of the results achieved with it. We motivated the design choices behind our two stage pipeline. The first stage covers how input circuits are parsed, pre-processed, reduced to Clifford+T and viewed as Pauli rotations. The second stage focuses on laying out circuits on physical devices, which presents substantial performance challenges. We demonstrated the compiler's performance by compiling a 128-qubit QFT. We believe this is a notable achievement: despite its widespread appearance in algorithms, to the best of our knowledge, no other surface code compiler is able to handle such large scale circuits. We also showcased the compiler's ability to estimate resource requirements, in particular patch code distance, which is promising in the perspective of quantum benchmarking. Our project is laying the foundation for a full-stack quantum circuit compilation framework. Future work will leverage hybrid classical and quantum instruction sets such as LLVM/QIR to program high performance classical control while integrating the Quantum Processing Unit (QPU) instructions. ## Acknowledgements George Watkins and Alexandru Paler were supported with funding from the Defense Advanced Research Projects Agency [under the Quantum Benchmarking (QB) program, award no. HR00112230007 and HR001121S0026 contracts]. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Varun Seshadri, Hoang Minh Nguyen, Keelan Watkins and George Watkins have been supported by the Unitary Fund. Hoi-Kwan Lau acknowledges support from the Canada Research Chairs program.
2307.15923
Charge-Spin Conversion in Two-Subband Quantum Wells with Conventional and Unconventional Rashba Spin-Orbit Coupling
The reciprocal interconversion between spin polarization and charge current (CSC) is the focus of intensive theoretical and experimental investigation in spintronics research. Its physical origin stems from the Rashba spin-orbit coupling (SOC) induced by the breaking of the structure inversion symmetry. The steady-state interconversion efficiency is the result of the non-trivial spin textures of the electric-field distorted Fermi surface. Its full understanding and evaluation requires the consideration of disorder-induced relaxation effects in the presence of spin-orbit induced band splitting. In this paper the additional effect of the orbital degree of freedom is analyzed in a two-subband quantum well with both conventional and unconventional Rashba SOC in the presence of disorder impurity scattering. The latter is treated at the level of the Born approximation in the Green's function self-energy and with the inclusion of vertex corrections in the linear response functions for the charge current and the spin polarization. By explicitly considering the symmetry properties of the Hamiltonian the matrix structure of the correlation functions is shown to decompose in independent blocks of symmetry-related physical observables. We find that the inclusion of vertex corrections is important for the correct estimate of the CSC efficiency, which also depends on the position of the Fermi level. We also find that the relative sign of the Rashba SOC in the two subbands plays a key role in determining the behavior of the CSC. Finally, we point out how the two-subband model compares with the standard single-band two-dimensional electron gas.
Gerson J. Ferreira, Boyu Wang, Jiyong Fu, Roberto Raimondi
2023-07-29T07:39:59Z
http://arxiv.org/abs/2307.15923v1
Charge-Spin Conversion in Two-Subband Quantum Wells with Conventional and Unconventional Rashba Spin-Orbit Coupling ###### Abstract The reciprocal interconversion between spin polarization and charge current (CSC) is the focus of intensive theoretical and experimental investigation in spintronics research. Its physical origin stems from the Rashba spin-orbit coupling (SOC) induced by the breaking of the structure inversion symmetry. The steady-state interconversion efficiency is the result of the non-trivial spin textures of the electric-field distorted Fermi surface. Its full understanding and evaluation requires the consideration of disorder-induced relaxation effects in the presence of spin-orbit induced band splitting. In this paper the additional effect of the orbital degree of freedom is analyzed in a two-subband quantum well with both conventional and unconventional Rashba SOC in the presence of disorder impurity scattering. The latter is treated at the level of the Born approximation in the Green's function self-energy and with the inclusion of vertex corrections in the linear response functions for the charge current and the spin polarization. By explicitly considering the symmetry properties of the Hamiltonian, the matrix structure of the correlation functions is shown to decompose in independent blocks of symmetry-related physical observables. We find that the inclusion of vertex corrections is important for the correct estimate of the CSC efficiency, which also depends on the position of the Fermi level. We also find that the relative sign of the Rashba SOC in the two subbands plays a key role in determining the behavior of the CSC. Finally, we point out how the two-subband model compares with the standard single-band two-dimensional electron gas. Double Quantum Well, Rashba Spin-orbit Coupling, Spin Relaxation Further author information: (Send correspondence to Roberto Raimondi.) Roberto Raimondi: E-mail: [email protected] ## 1 Introduction The spin-orbit (SO) interaction couples the electron spin and its momentum, affording electric control and manipulation of the magnetic degrees of freedom, the spin, in quantum spintronics [1, 2]. Also, the SO effects underlie novel topological phenomena in diverse fields of quantum condensed matter such as topological insulators [3], Majorana fermions [4, 5], van der Waals heterostructures [6, 7] and Weyl semimetals [8, 9]. Recent proposals of a persistent skyrmion lattice [10], a stretchable spin helix [11] as well as a helix-stretch based orbit (pseudospin) filter [12], which can be realized by fine tuning the SO strengths, also indicate the important role of SO effects in semiconductor nanostructures. Further, the SO field is the key ingredient leading to charge-to-spin conversion by the direct Rashba-Edelstein effect (REE) and spin-to-charge conversion by the inverse Rashba-Edelstein effect (IREE) [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. The reciprocal interconversion between spin polarization and charge current plays a crucial role in modern spintronics, and accordingly is the focus of intensive theoretical and experimental investigations for spintronic applications. In semiconductor nanostructures, the SO effects usually have two dominant contributions, i.e., the Rashba [24] and Dresselhaus [25] terms, arising from the structural and bulk inversion asymmetries, respectively.
While the Dresselhaus coupling mainly depends on the quantum confinement (e.g., the well width) [11, 26, 27], the Rashba coupling can be electrically controlled by using an external bias, thus facilitating coherent spin manipulation [11, 28, 29, 30]. As a consequence, the Rashba effect is often used in proposed spintronic devices, e.g., spin-field [31, 32, 33] and spin-Hall effect [34, 35] transistors as well as spin-charge conversion based applications. Extensive studies have been devoted to coherent spin control by resorting to the Rashba SO coupling in semiconductor heterostructures with only one occupied electron subband [36, 37, 38, 11]. However, for the case of single-subband occupancy, the two lifted spin branches of the energy dispersion feature opposite chiralities, greatly suppressing the efficiency of charge-spin conversion in spintronic applications. Recently, since additional orbital degrees of freedom may offer more intriguing possibilities for SO control, e.g., band crossing and anticrossing assisted spin manipulation, the spin features in two-band Rashba quantum systems have also attracted growing interest, with both intra- and interband SO terms [39, 40, 41, 42, 43, 11]. In general, the intra- and interband SO terms have the same symmetry, leading to the _conventional_ two-band Rashba model [44, 45, 27, 36]. Within the conventional model, it has been shown that the coupling between subbands can be used to control the spin lifetime in the persistent spin helix regime [40, 41, 42, 11, 10]. Experimentally, measurements of spin dynamics in two-subband GaAs quantum wells have shown long and anisotropic spin lifetimes [46, 47, 48]. Recently, Song _et al._ also proposed an _unconventional_ two-band Rashba model in two-dimensional (2D) systems [49], where the intra- and interband SO terms have different symmetries, so that the Fermi circles of spin-lifted subbands have chirality with the same sign, thus providing high efficiency of spin to charge conversion. However, so far, detailed SO features involving the energy dispersion of the four distinct spin branches and the corresponding spin textures for the conventional and unconventional SO models, which are essential for spintronic applications [50, 51], still remain obscure. Further, the effect of the vertex correction in momentum space, which is important for the correct charge-spin conversion (CSC) [13], particularly near the crossing or avoided crossing points of the two bands, is largely unexplored. Here, we explore the SO features of the unconventional Rashba model and make a comparison to the conventional one for a QW with two subbands with both intra- and interband SOC. We demonstrate avoided crossings of the energy dispersion and _intertwined_ spin textures, in stark contrast to the conventional Rashba model. And, near the avoided crossings in momentum space, there may even exist vanishing SO fields, triggered by the interband coupling. This can be used as a handle to suppress spin-relaxation mechanisms for electrons in a controllable manner. Furthermore, we take into account the disorder (e.g., impurity) scattering within the Born approximation in the self-energy of the Green's function and include the vertex correction of spin textures in the linear response functions for the charge current and spin polarization, to estimate the CSC efficiency. We find that two distinct regimes can be identified as a function of the Fermi energy, depending on whether one or two subbands are occupied.
When both bands are occupied, the vertex corrections are important and to a large extent can be understood in a way similar to the behavior of the single-band case. In the regime with only one band occupied, the relevance of the vertex corrections is controlled by the strength of the disorder, e.g. impurity scattering. Our analysis is based on the standard diagrammatic impurity technique, here generalized to the case of two bands. Symmetry considerations are exploited in order to simplify the algebraic structure of the vertex corrections arising from both orbital and spin degrees of freedom and to support the numerical evaluation. Within linear response to a d.c. external electric field we evaluate the Kubo formula for both the induced spin polarization and the electrical current, from whose ratio we measure the CSC efficiency. One byproduct of the algebraic structure of the vertex corrections equation is the information about the spin relaxation times, which appear to behave differently in the conventional and unconventional models. This paper is organized as follows. In Sec. 2, we first present the conventional and unconventional two-band Rashba models, as well as the corresponding energy dispersion with avoided crossings and novel spin textures. The Green's and response functions, involving the self-energy, the vertex equation, and the symmetry analysis, are introduced in Sec. 3. We present and discuss numerical results for the CSC in Sec. 4. We summarize our main findings in Sec. 5. ## 2 Conventional and unconventional Rashba models We consider two-dimensional electron systems (2DES) confined in semiconductor quantum wells with two occupied subbands, with parabolic dispersion and both intra- and inter-subband Rashba SO couplings. Following the notation introduced in Ref. [49], we consider the cases of _conventional_ and _unconventional_ Rashba SOC. The conventional model follows from the usual two-subband GaAs zincblende quantum wells grown along the \(z=[001]\) direction [27, 36], where electrons are confined to the \(xy\) plane with \(x=[110]\) and \(y=[1\bar{1}0]\). Assuming structural inversion asymmetry (SIA), the effective Hamiltonian of the conventional model at the \(\Gamma\) point must be invariant under the \(C_{2V}=\{C_{2}(z),M_{y}\}\) point group. In contrast, the unconventional model occurs in 2D systems that transform under the \(C_{3V}=\{C_{3}(z),M_{y}\}\) point group at the \(\Gamma\) point. The derivation of both the conventional \(H_{C}\) and the unconventional \(H_{U}\) Hamiltonians is shown in Appendix A. There, we see that \(H_{C}\) and \(H_{U}\) are strikingly similar, differing only by the intersubband Rashba terms \(\eta_{C}\) and \(\eta_{U}\). Consequently, it is useful to recast both Hamiltonians into a generic form, \[H=\begin{pmatrix}\varepsilon_{1}&-i\alpha_{1}k_{-}&0&-i\eta^{*}k_{-}\\ i\alpha_{1}k_{+}&\varepsilon_{1}&i\eta k_{+}&0\\ 0&-i\eta^{*}k_{-}&\varepsilon_{2}&-i\alpha_{2}k_{-}\\ i\eta k_{+}&0&i\alpha_{2}k_{+}&\varepsilon_{2}\end{pmatrix}. \tag{1}\] For the conventional case, the intersubband Rashba SO coupling \(\eta\equiv\eta_{C}\) is real, ensuring that the intra- \((-i\alpha_{j}k_{-})\) and intersubband \((-i\eta_{C}k_{-})\) terms have the same symmetry. In contrast, for the unconventional case, \(\eta\equiv-i\eta_{U}\) is imaginary, indicating that the intra- \((-i\alpha_{j}k_{-})\) and intersubband \((\eta_{U}k_{-})\) terms have different symmetries in spin space.
The subband basis is labeled as \(\ket{j,\sigma}\), where \(j=\{1,2\}\) refer to the subbands, and \(\sigma=\{\uparrow,\downarrow\}\) to the spin along \(z\). We will use two sets of Pauli matrices \(\lambda_{0},\lambda_{x},\lambda_{y},\lambda_{z}\) and \(\sigma_{0},\sigma_{x},\sigma_{y},\sigma_{z}\) to represent the matrix structure in the subband and spin degrees of freedom, respectively. In both cases of intersubband Rashba SO coupling, the basis \(\ket{j,\sigma}\) is sorted as \(\{\ket{1\uparrow},\ket{1\downarrow},\ket{2\uparrow},\ket{2\downarrow}\}\), and for each subband \(\varepsilon_{j}=\varepsilon_{j}^{0}+\frac{\hbar^{2}}{2m}k^{2}\), where \(\varepsilon_{j}^{0}\) are the band edges, the effective mass \(m\) is assumed to be the same in both subbands, \(\alpha_{j}\) is the intra-subband Rashba SO coupling, \(\mathbf{k}=(k_{x},k_{y})\) is the in-plane quasi-momentum, and \(k_{\pm}=k_{x}\pm ik_{y}\). Unless otherwise specified, we use a set of parameters similar to those in Ref. [49], i.e., \(\varepsilon_{1}^{0}=0\) meV, \(\varepsilon_{2}^{0}=160\) meV, \(m=0.365m_{0}\), \(|\alpha_{1}|=|\alpha_{2}|=78\) meV nm, \(|\eta_{C}|=|\eta_{U}|=126\) meV nm, where \(m_{0}\) is the bare electron mass. ### Subband anti-crossings and spin textures in k-space The different cases above for \(\eta\equiv\eta_{C}\) or \(\eta\equiv-i\eta_{U}\) yield similar band structures, but they imply different conditions for anti-crossings between the subbands, and distinct spin textures in k-space, as shown in Fig. 1. To see this, first notice that at \(k_{y}=0\), and with \(\eta=0\), we have \([H,\sigma_{y}]=0\). Thus, the wave-functions are eigenstates of \(\sigma_{y}\) with energies \(E_{j,\pm}=\varepsilon_{j}\pm\alpha_{j}k_{x}\). If \(\alpha_{1}/\alpha_{2}>0\), the crossing subbands will have opposite \(\sigma_{y}\), while for \(\alpha_{1}/\alpha_{2}<0\) they have the same \(\sigma_{y}\). Now, for the conventional case, if we consider a finite \(\eta\equiv\eta_{C}\) as a perturbation of the type \(H^{\prime}=\eta_{C}\lambda_{x}\otimes\sigma_{y}\,k_{x}\), it couples crossing subbands only if they have the same \(\sigma_{y}\) (i.e., \(\alpha_{1}/\alpha_{2}<0\)). In contrast, for the unconventional case with \(\eta\equiv-i\eta_{U}\), the perturbation would be \(H^{\prime}=\eta_{U}\lambda_{x}\otimes\sigma_{x}\,k_{x}\), which couples crossings with opposite \(\sigma_{y}\) (i.e., \(\alpha_{1}/\alpha_{2}>0\)). The four possible scenarios lead to the four distinct helical spin textures shown in Fig. 1. For the conventional cases shown in Fig. 1(a-b), the intersubband SOC \(\eta_{C}\) does not mix the spin components, as argued above, and affects only the band dispersion. In contrast, the unconventional \(\eta_{U}\) leads to significant spin admixture, as shown by the color code of Fig. 1(c-d). The \(U^{+}\) case with \(\alpha_{1}/\alpha_{2}>0\), shown in Fig. 1(c), is a particularly interesting scenario, since the spin admixture induced by \(\eta_{U}\) leads to a regime where both spin branches of the lower subband have the same helicity. Consequently, one would expect that this case should lead to a higher charge-spin interconversion, as proposed in Ref. [49]. However, as we will present below, this is not necessarily the case.
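For concreteness, a minimal numerical sketch (ours) that builds the generic Hamiltonian of Eq. (1) with the parameter set quoted above and extracts the four branches \(E_{s,n}\) along \(k_{y}=0\); the value of \(\hbar^{2}/2m_{0}\approx 38.1\) meV nm\({}^{2}\) is the standard constant, and the plotting of Fig. 1 itself is omitted.

```python
import numpy as np

# Parameters quoted in the text (energies in meV, couplings in meV nm, k in 1/nm).
eps1_0, eps2_0 = 0.0, 160.0
hbar2_over_2m = 38.1 / 0.365       # hbar^2/(2 m), with m = 0.365 m_0
alpha1, alpha2 = 78.0, 78.0
eta_C, eta_U = 126.0, 126.0

def H(kx, ky, eta, a1=alpha1, a2=alpha2):
    """Generic two-subband Rashba Hamiltonian, Eq. (1).
    Conventional case: eta = eta_C (real); unconventional: eta = -1j*eta_U."""
    km, kp = kx - 1j * ky, kx + 1j * ky
    e1 = eps1_0 + hbar2_over_2m * (kx**2 + ky**2)
    e2 = eps2_0 + hbar2_over_2m * (kx**2 + ky**2)
    return np.array([
        [e1,                      -1j * a1 * km,            0,                -1j * np.conj(eta) * km],
        [1j * a1 * kp,             e1,                      1j * eta * kp,     0],
        [0,                       -1j * np.conj(eta) * km,  e2,               -1j * a2 * km],
        [1j * eta * kp,            0,                       1j * a2 * kp,      e2],
    ])

# The four branches E_{s,n} along k_y = 0, conventional vs unconventional:
ks = np.linspace(-1.5, 1.5, 301)
E_conv = np.array([np.linalg.eigvalsh(H(k, 0.0, eta_C)) for k in ks])
E_unco = np.array([np.linalg.eigvalsh(H(k, 0.0, -1j * eta_U)) for k in ks])
print(E_conv.shape, E_unco.shape)   # (301, 4) each
```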
## 3 The diagrammatic analysis for the Green's and response functions ### Statement of the problem Our aim is the evaluation of the Kubo formula for the response functions of an observable in the presence of an external field. In particular, we will be interested in the spin polarization response to an applied electric field (REE) by allowing electron scattering from random impurities. We will consider a standard scalar (i.e. with no dependence on subband and spin degrees of freedom) disorder potential \(U({\bf r})\) with delta-like correlated impurities such that \(\langle U({\bf r})U({\bf r}^{\prime})\rangle=u_{0}^{2}\delta({\bf r}-{\bf r}^{\prime})\), where \(u_{0}^{2}=n_{\rm imp}v_{0}^{2}\), with \(n_{\rm imp}\) the impurity concentration and \(v_{0}\) the single-impurity scattering amplitude. At the level of the Born approximation, the retarded self-energy \(\Sigma^{R}(\omega,{\bf k})\) is obtained by the so-called rainbow diagram, as shown in Fig. 2(a), and it is given in terms of the single-particle electron Green's function as \[\Sigma^{R}(\omega,{\bf k})=u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}G^{R}(\omega,{\bf k}^{\prime})=u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}\sum_{s,n}\frac{P_{s,n}({\bf k}^{\prime})}{\hbar\omega-E_{s,n}({\bf k}^{\prime})+i0^{+}}, \tag{2}\] where \(E_{s,n}({\bf k})\) and \(P_{s,n}({\bf k})\) indicate the eigenvalues and the corresponding projection operators of the generic Hamiltonian from Eq. (1). The summation runs over the possible combinations of the indexes \(n=\pm\) and \(s=\pm\), which will be defined later on. Notice that with the adopted model of disorder the self-energy does not depend on the external momentum \({\bf k}\). Once the self-energy has been evaluated and inserted into the Green's function, the Kubo formula, say, for the y-axis spin polarization in response to an x-axis applied electric field may be evaluated by the diagram shown in Fig. 2(b) and reads [13, 52] (below \(e>0\) is the unit charge and a factor \(\hbar/2\) accounts for the spin value and dimensions) \[\chi_{yx}=\frac{\hbar}{2\pi}\frac{(-e)\hbar}{2}\int\frac{d^{2}k}{(2\pi)^{2}}\mathrm{Tr}\left[S_{y}^{0}G^{R}(\omega,{\bf k})J_{x}({\bf k})G^{A}(\omega,{\bf k})\right], \tag{3}\] where \(S_{y}^{0}=\lambda_{0}\otimes\sigma_{y}\) is the _bare_ spin-density vertex (in units of \(\hbar/2\)) and \(J_{x}({\bf k})\) is the _dressed_ charge-current vertex (more precisely, the number-current vertex, because we have taken out the charge \(-e\)). The latter is obtained by considering the ladder diagrams represented in Fig. 2(c), which yield the vertex corrections mirroring the self-energy corrections due to the rainbow diagram. In explicit terms, \(J_{x}({\bf k})\) is obtained by solving the equation [53, 54, 55] \[J_{x}({\bf k})=J_{x}^{0}({\bf k})+u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}G^{R}(\omega,{\bf k}^{\prime})J_{x}({\bf k}^{\prime})G^{A}(\omega,{\bf k}^{\prime}), \tag{4}\] with \(J_{x}^{0}(\mathbf{k})\) being the _bare_ charge-current vertex. Figure 1: (a1–d1) Band structure for the two-subband models of the conventional (\(C^{\alpha}\)) and unconventional (\(U^{\alpha}\)) types, with \(\alpha=\pm\) indicating the sign of \(\alpha_{1}/\alpha_{2}\) in each panel. The energies \(E_{n,s}\) are calculated at \(k_{y}=0\) and the color code refers to the mean value of the spin operator \(\sigma_{y}\) (i.e., \(\langle\sigma_{y}\rangle>0\) in blue, and \(\langle\sigma_{y}\rangle<0\) in red shades). (a2–d2) Spin textures in k-space corresponding to the band structures and energy (dashed line) from the top panels. The arrows and their shade indicate the direction (helicity) and intensity of the spin vector \(\langle\mathbf{\sigma}\rangle\). At \(k_{y}=0\), the color code matches the one from the top panels.
The set of Eqs. (2,3,4) is well known [13, 54, 53] and has been applied to the single-subband model with Rashba SOC. Its application to the generic Hamiltonian from Eq. (1) will be considered in this paper. The presence of two subbands makes the analytic treatment more involved, and it is useful to exploit the symmetries of the model to make the solution of Eqs. (2,3,4) simpler and physically more transparent. This will be carried out in the remainder of this section, whereas the discussion of the results will be postponed to a subsequent section. ### The diagonalization of the Hamiltonian and the structure of the self-energy According to Eq. (2), the evaluation of the self-energy requires the knowledge of the eigenvalues and projection operators of the generic Hamiltonian from Eq. (1). We now make the useful observation that the Hamiltonian can be made block-diagonal by introducing the Rashba eigenstates of each subband separately, considering that their spinor structure is independent of the strength of the two Rashba SOC couplings \(\alpha_{1}\) and \(\alpha_{2}\). We then introduce the new basis by the following unitary transformation \[S(\theta)=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-ie^{-i\theta}&0&0\\ 0&0&1&ie^{-i\theta}\\ 1&ie^{-i\theta}&0&0\\ 0&0&1&-ie^{-i\theta}\end{pmatrix}, \tag{5}\] where the angle \(\theta\) identifies the wave vector momentum \(\mathbf{k}=(k\cos\theta,k\sin\theta)\). In this new basis, where all quantities will be denoted by an upper index \(S\), the Hamiltonian reads \[H^{S}\equiv S(\theta)HS^{-1}(\theta)=\begin{pmatrix}\varepsilon_{1}+\alpha_{1}k&i\text{Im}(\eta)k&0&\text{Re}(\eta)k\\ -i\text{Im}(\eta)k&\varepsilon_{2}-\alpha_{2}k&-\text{Re}(\eta)k&0\\ 0&-\text{Re}(\eta)k&\varepsilon_{1}-\alpha_{1}k&-i\text{Im}(\eta)k\\ \text{Re}(\eta)k&0&i\text{Im}(\eta)k&\varepsilon_{2}+\alpha_{2}k\end{pmatrix}. \tag{6}\] The Hamiltonian is block-diagonal if we restrict it either to the conventional (\(\text{Im}(\eta)=0\)) or to the unconventional (\(\text{Re}(\eta)=0\)) case. Notice that the interband SOC couples equal (opposite) chiralities in the conventional (unconventional) case. We then label the two blocks by \[h_{2}^{s}=\begin{pmatrix}\lambda_{1}^{s}&\beta_{s}\\ \beta_{s}^{\prime}&\lambda_{2}^{s}\end{pmatrix},\ s=\pm 1, \tag{7}\] where \(\lambda_{j}^{s}=\varepsilon_{j}+s\bar{\alpha}_{j}k\), with \(\bar{\alpha}_{1}=\alpha_{1}\), \(\bar{\alpha}_{2}=t\alpha_{2}\), and the sign \(t=\pm\) on \(\alpha_{2}\) refers to the conventional and unconventional cases, respectively. Furthermore, \(\beta_{s}=s\eta k\) and \(\beta_{s}^{\prime}=s\eta^{*}k\). In the S basis, the eigenvalues and projection operators are easily found to be \[E_{sn}=\frac{1}{2}\left(\lambda_{1}^{s}+\lambda_{2}^{s}+n\sqrt{(\lambda_{1}^{s}-\lambda_{2}^{s})^{2}+4\beta_{s}\beta_{s}^{\prime}}\right),\ P_{sn}^{S}=\frac{1}{N_{sn}^{2}}\begin{pmatrix}|\beta_{s}|^{2}&\beta_{s}\delta_{sn}^{*}\\ \beta_{s}^{*}\delta_{sn}&|\delta_{sn}|^{2}\end{pmatrix},\ n=\pm 1, \tag{8}\] where \(\delta_{sn}=E_{sn}-\lambda_{1}^{s}\) and \(N_{sn}^{2}=|\beta_{s}|^{2}+|\delta_{sn}|^{2}\). Figure 2: (a) Self-energy _rainbow_ diagram for the Born approximation. The solid and dashed lines represent the electron Green's function and the impurity average, respectively. (b) _Bubble_ diagram for the response function. The gray and black dots represent the bare and dressed vertices. (c) _Ladder_ diagram for the dressed vertex.
In the above, the index \(n=\pm 1\) labels the eigenvalues for each block \(s\). By transforming the projection operators \(P_{sn}^{S}\) back to the original basis one finds \[P_{sn}(\theta)=S^{-1}(\theta)P_{sn}^{S}S(\theta)=\frac{1}{2N_{sn}^{2}}\begin{pmatrix}|\beta_{s}|^{2}&-ise^{-i\theta}|\beta_{s}|^{2}&\beta_{s}\delta_{sn}^{*}&-itse^{-i\theta}\beta_{s}\delta_{sn}^{*}\\ ise^{i\theta}|\beta_{s}|^{2}&|\beta_{s}|^{2}&ise^{i\theta}\beta_{s}\delta_{sn}^{*}&t\beta_{s}\delta_{sn}^{*}\\ \beta_{s}^{*}\delta_{sn}&-ise^{-i\theta}\beta_{s}^{*}\delta_{sn}&|\delta_{sn}|^{2}&-itse^{-i\theta}|\beta_{s}|^{2}\\ itse^{i\theta}\beta_{s}^{*}\delta_{sn}&t\beta_{s}^{*}\delta_{sn}&itse^{i\theta}|\beta_{s}|^{2}&|\delta_{sn}|^{2}\end{pmatrix}. \tag{9}\] Because the eigenvalues \(E_{sn}\), as shown in Eq. (8), only depend on the absolute value of \(\mathbf{k}\), when the projector \(P_{sn}(\theta)\) is inserted into Eq. (2) for the self-energy, the latter acquires the simple structure \[\Sigma^{R}(\omega,\mathbf{k})=-i\pi u_{0}^{2}\sum_{sn}D_{sn}\langle P_{sn}(\theta)\rangle\equiv-i\pi u_{0}^{2}\sum_{sn}D_{sn}P_{sn}^{0}(k_{sn}), \tag{10}\] where \(\langle P_{sn}(\theta)\rangle=P_{sn}^{0}(k_{sn})\) denotes the angle average and \(D_{sn}\) is the density of states in the band with quantum numbers \(s,n\), which reads \[D_{sn}=\frac{k}{2\pi}\frac{dk}{dE_{sn}(k)}\Big{|}_{k=k_{sn}(\mu)}, \tag{11}\] \(\mu\) being the Fermi energy, and \(k_{sn}\equiv k_{sn}(\mu)\) is the Fermi momentum for each band. We notice that the summation over \(s,n\) actually runs only over the occupied subbands. When taking the angle average of the projector \(P_{sn}(\theta)\), the self-energy reduces to only three independent parameters for the conventional and unconventional cases, greatly simplifying the numerical evaluation, and yields \[\Sigma^{R}\approx\begin{pmatrix}\Sigma_{11}&0&\Sigma_{12}&0\\ 0&\Sigma_{11}&0&t\Sigma_{12}\\ \Sigma_{21}&0&\Sigma_{22}&0\\ 0&t\Sigma_{21}&0&\Sigma_{22}\end{pmatrix}, \tag{12}\] where the sign \(t=\pm\) refers to the conventional and unconventional cases. In the first Born approximation, the Hermitian part of \(\Sigma^{R}\) vanishes and \(\Sigma^{R}\) becomes purely anti-Hermitian; thus, it does not introduce renormalizations to \(H\). Namely, we get \(\operatorname{Re}\Sigma_{11}=\operatorname{Re}\Sigma_{22}=0\) in all cases. For the conventional case, \(\operatorname{Re}\Sigma_{12}=\operatorname{Re}\Sigma_{21}=0\) and \(\operatorname{Im}\Sigma_{12}=\operatorname{Im}\Sigma_{21}\). For the unconventional case, \(\operatorname{Im}\Sigma_{12}=\operatorname{Im}\Sigma_{21}=0\) and \(\operatorname{Re}\Sigma_{12}=-\operatorname{Re}\Sigma_{21}\). Although the above matrix structure has been obtained at the level of the Born approximation, it actually has a general validity granted by the symmetry properties of the model, as will be shown in the following. In Fig. 3 we show the behavior of the self-energy as a function of the Fermi energy. The self-energy is given in units of \(\hbar/2\tau_{0}=\pi u_{0}^{2}D_{0}\), \(D_{0}\) being the density of states of the two-dimensional quadratic dispersion. Hence, \(\pi u_{0}^{2}D_{0}\) is the self-energy for the two-dimensional electron gas. In Fig. 3 one sees that, when all bands are occupied, the self-energy reduces to a unit matrix with \(\Sigma_{11}=\Sigma_{22}\) and \(\Sigma_{12}=0\). When only one band is occupied all three parameters are different from zero. In particular, the self-energy in the empty subband is nonzero due to the mixing of the inter-band SOC.
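As a numerical cross-check of Eqs. (10)-(12), a sketch along the following lines (ours) reproduces the structure of the self-energy. It reuses the H(kx, ky, eta) helper and the eta_C value from the earlier snippet, locates the Fermi momenta on a radial cut (the bands are isotropic), and angle-averages the band projectors; it is valid away from accidental degeneracies at the Fermi level, and \(u_{0}^{2}\) is set to one.

```python
import numpy as np

def self_energy_born(mu, eta, u0_sq=1.0, kmax=2.5, nk=4000, ntheta=64):
    """Sigma^R = -i*pi*u0^2 * sum_{sn} D_sn <P_sn>, Eq. (10), evaluated
    numerically for Fermi energy mu (meV) and intersubband coupling eta."""
    ks = np.linspace(1e-4, kmax, nk)
    bands = np.array([np.linalg.eigvalsh(H(k, 0.0, eta)) for k in ks])  # (nk, 4)
    sigma = np.zeros((4, 4), dtype=complex)
    for b in range(4):                      # the four branches E_{sn}
        E = bands[:, b]
        for i in np.where(np.diff(np.sign(E - mu)) != 0)[0]:
            # Linear interpolation for the Fermi momentum k_sn and |dE/dk|:
            dEdk = (E[i + 1] - E[i]) / (ks[i + 1] - ks[i])
            kf = ks[i] + (mu - E[i]) / dEdk
            D = kf / (2 * np.pi * abs(dEdk))                # Eq. (11)
            # Angle average <P_sn(theta)> of the band projector:
            P0 = np.zeros((4, 4), dtype=complex)
            for th in np.linspace(0.0, 2 * np.pi, ntheta, endpoint=False):
                _, V = np.linalg.eigh(H(kf * np.cos(th), kf * np.sin(th), eta))
                P0 += np.outer(V[:, b], V[:, b].conj()) / ntheta
            sigma += -1j * np.pi * u0_sq * D * P0
    return sigma

# With only the lower subband occupied, Sigma_11, Sigma_22 and Sigma_12 are all
# nonzero; deep in the two-subband regime Sigma tends to a multiple of the identity.
print(np.round(self_energy_born(mu=100.0, eta=eta_C), 4))
```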
### The equation for the dressed vertex In this subsection we solve the vertex equation (4) by using the _dressed_ Green's function with the self-energy obtained in the previous section. We begin by introducing the _bare_ current vertex \[J_{x}^{0}(\mathbf{k})=\frac{1}{\hbar}\frac{\partial H(\mathbf{k})}{\partial k _{x}}=\frac{\hbar k_{x}}{m}+J_{SOC,x}, \tag{13}\] where \(J_{SOC,x}\) is a \(\mathbf{k}\)-independent matrix arising from the linear-in-momentum terms describing the SOC in the Hamiltonian from Eq. (1). To solve Eq. (4) one proceeds by iteration i.e., perturbatively in the disorder correlations represented by dashed lines in the diagrams of Fig. 2. In the first step of iteration, when the bare vertex from Eq. (13) (the gray dot in Fig. 2 (c)) is inserted in the momentum integral over \(\mathbf{k}^{\prime}\) in the right hand side of Eq. (4), one obtains the first vertex correction, which turns out to be independent on the external momentum \(\mathbf{k}\). This suggests that a general solution must have the form \[J_{x}(\mathbf{k})=\frac{\hbar k_{x}}{m}+\Gamma_{x}, \tag{14}\] where \(\Gamma_{x}\) is a momentum independent matrix. By using the ansatz from Eq. (14) into the vertex equation (4), one readily obtains an _algebraic_ equation for \(\Gamma_{x}\) \[\Gamma_{x}=\bar{\Gamma}_{x}+u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}G^{ R}(\omega,\mathbf{k}^{\prime})\Gamma_{x}(\mathbf{k}^{\prime})G^{A}(\omega, \mathbf{k}^{\prime}), \tag{15}\] where \(\bar{\Gamma}_{x}\) is an _effective_ bare vertex obtained by combining the _pure_ SOC-induced vertex and the single-line impurity line dressed ordinary current vertex \[\bar{\Gamma}_{x}=J_{SOC,x}+u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}G^{ R}(\omega,\mathbf{k}^{\prime})\frac{\hbar k_{x}}{m}G^{A}(\omega,\mathbf{k}^{ \prime}). \tag{16}\] The algebraic structure of Eq. (15) can be made explicit by introducing the basis \(\zeta_{a}=\lambda_{a_{1}}\otimes\sigma_{a_{2}}\) (subband, spin) in the space of four by four matrices. The indices \((a_{1},a_{2})\) are defined as \(a_{1}=\lfloor a/4\rfloor\), \(\lfloor\cdots\rfloor\) being the integer part, and \(a_{2}=a\) Mod 4. Table 1 provides all the indices \(a\) for each pair \((\lambda,\sigma)\). By introducing the decompositions \(\Gamma_{x}=\sum_{a}\Gamma_{x,a}\zeta_{a}\) and similarly for \(\bar{\Gamma}_{x}\), one finds \[\sum_{b}\left(\delta_{ab}-L_{ab}\right)\Gamma_{x,b}=\bar{\Gamma}_{x,a}, \tag{17}\] with the \(L\)-matrix given by \[L_{ab}=u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}\frac{1}{4}\text{Tr} \left[\zeta_{a}\ G^{R}(\omega,\mathbf{k}^{\prime})\zeta_{b}\ G^{A}(\omega, \mathbf{k}^{\prime})\right]. \tag{18}\] Figure 3: (a1-d1) Self-energy components [i.e., \(\Sigma_{11}\), \(\Sigma_{22}\), \(\Sigma_{12}\), see Eq. (12)] as function of the Fermi energy \(\mu\). Each panel corresponds to the conventional (\(C^{\alpha}\)) or unconventional (\(U^{\alpha}\)) cases, as indicated, and \(\alpha=\pm\) labels the relative sign of \(\alpha_{1}/\alpha_{2}\). The gray area for \(\mu>160\) meV indicates the region where all subbands are occupied. (a2-d2) Imaginary part \(\Gamma_{sn}=-\operatorname{Im}\{E_{sn}\}\) of the self-energy in the eigenstates basis considering only occupied subbands for each \(\mu\). Given the four-dimensional structure of the Hamiltonian from Eq. (1), the basis \(\zeta_{a}\) is formed by sixteen matrices, making a purely analytical approach to Eq. (17) almost impossible. 
However, even though a numerical approach is possible, it is useful to investigate how the symmetries of the model allow to reduce the algebraic system of the vertex equations in a set of independent systems of equations of lesser size. Because each of the matrix \(\zeta_{a}\) can be connected to an observable, such a reduction allows also to elucidate which physical observables are connected to one another. ### The symmetry analysis To illustrate the usefulness of the symmetry analysis, we begin by considering the evaluation of the effective bare vertex \(\bar{\Gamma}_{x}\) defined in Eq. (16). The \(\zeta\)-matrices decomposition of the matrix part \(J_{SOC,x}\) is easily obtained and reads \[J_{SOC,x}=\frac{\alpha_{1}+\alpha_{2}}{2\hbar}\zeta_{2}-\frac{\eta_{U}}{\hbar} \zeta_{5}+\frac{\eta_{C}}{\hbar}\zeta_{6}+\frac{\alpha_{1}-\alpha_{2}}{2 \hbar}\zeta_{14}. \tag{19}\] Notice that by restricting to the cases \(\alpha_{1}=\pm\alpha_{2}\), the matrix part of \(J_{SOC,x}\) has only two components associated to intra- and interband SOC. In order to obtain the full \(\bar{\Gamma}_{x}\) we need to evaluate the momentum integral of the second term in the right hand side of Eq. (16). After expressing the Green's functions in terms of their spectral decomposition, one gets for that integral \[u_{0}^{2}\sum_{sn}\int\frac{d^{2}k}{(2\pi)^{2}}\ \frac{k_{x}P_{sn}({\bf k})}{( \hbar\omega-E_{sn}({\bf k})+i0^{+})(\hbar\omega-E_{sn}({\bf k})-i0^{+})}, \tag{20}\] where the orthogonality of the projectors has been used \(P_{sn}({\bf k})P_{s^{\prime}n^{\prime}}({\bf k})=\delta_{ss^{\prime}}\delta_{ nn^{\prime}}P_{sn}({\bf k})\). According to the explicit expression of the projector in Eq. (9), we may represent the angle dependence of \(P_{sn}({\bf k})\) as \[P_{sn}({\bf k})=P_{sn}^{0}(k)+e^{i\theta}P_{sn}^{+}(k)+e^{-i\theta}P_{sn}^{-} (k). \tag{21}\] The angle integration in Eq. (20), due to the \(\cos\theta\) factor of \(k_{x}\), yields \(P_{sn}^{0}(k)\) and \(P_{sn}^{1}(k)=(1/2)(P_{sn}^{+}(k)+P_{sn}^{-}(k))\), whose \(\zeta\)-matrices decomposition is readily obtained by inspection looking at Eq. (9), yielding \[P_{sn}^{0}(k) = \frac{1}{2N_{s}^{2}}\left(\frac{|\beta_{s}|^{2}+|\delta_{sn}|^{2 }}{2}\zeta_{0}+\frac{|\beta_{s}|^{2}-|\delta_{sn}|^{2}}{2}\zeta_{12}+\frac{ \beta_{n}\delta_{sn}^{*}+\beta_{n}^{*}\delta_{sn}}{2}\zeta_{4}+i\frac{\beta_{ n}\delta_{sn}^{*}-\beta_{n}^{*}\delta_{sn}}{2}\zeta_{8}\right) \tag{22}\] \[P_{sn}^{1}(k) = \frac{1}{4N_{s}^{2}}s\left(\frac{|\beta_{s}|^{2}+|\delta_{sn}|^{ 2}}{2}\zeta_{2}+\frac{|\beta_{s}|^{2}-|\delta_{sn}|^{2}}{2}\zeta_{14}+\beta_{ n}\delta_{sn}^{*}(\zeta_{6}+i\zeta_{10})-\beta_{n}^{*}\delta_{sn}(\zeta_{6}-i \zeta_{10})\right) \tag{23}\] in the conventional case and \[P_{sn}^{0}(k) = \frac{1}{2N_{s}^{2}}\left(\frac{|\beta_{s}|^{2}+|\delta_{sn}|^{ 2}}{2}\zeta_{0}+\frac{|\beta_{s}|^{2}-|\delta_{sn}|^{2}}{2}\zeta_{12}+\frac{ \beta_{n}\delta_{sn}^{*}+\beta_{n}^{*}\delta_{sn}}{2}\zeta_{7}+i\frac{\beta_{ n}\delta_{sn}^{*}-\beta_{n}^{*}\delta_{sn}}{2}\zeta_{11}\right) \tag{24}\] \[P_{sn}^{1}(k) = \frac{1}{4N_{s}^{2}}s\left(\frac{|\beta_{s}|^{2}+|\delta_{sn}|^{ 2}}{2}\zeta_{14}+\frac{|\beta_{s}|^{2}-|\delta_{sn}|^{2}}{2}\zeta_{2}+\beta_{ n}\delta_{sn}^{*}(i\zeta_{5}-\zeta_{9})-\beta_{n}^{*}\delta_{sn}(-i\zeta_{5}+ \zeta_{9})\right) \tag{25}\] for the unconventional case. By comparing Eq. (19) with Eqs. 
(23, 25) one sees that only the sets \((\zeta_{2},\zeta_{14},\zeta_{6},\zeta_{10})\) and \((\zeta_{2},\zeta_{14},\zeta_{5},\zeta_{9})\) are involved in the conventional and unconventional cases, respectively. This suggests that in the evaluation of the full vertex \(\Gamma_{x}\) one has to deal with an algebraic system of dimension four, which is a \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline \((\lambda,\sigma)\) & 0 & 1 & 2 & 3 \\ \hline \hline 0 & 0 & 1 & 2 & 3 \\ \hline 1 & 4 & 5 & 6 & 7 \\ \hline 2 & 8 & 9 & 10 & 11 \\ \hline 3 & 12 & 13 & 14 & 15 \\ \hline \end{tabular} \end{table} Table 1: For each pair of \((\lambda,\sigma)\) matrices, the table gives the value of the index \(a\). great simplification with respect to the dimension sixteen expected on general grounds. Finally, by looking at Eqs. (22, 24) one sees that it involves only the sets \((\zeta_{0},\zeta_{12},\zeta_{4},\zeta_{8})\) and \((\zeta_{0},\zeta_{12},\zeta_{7},\zeta_{11})\) for the conventional and unconventional cases, respectively. These are precisely the sets that determine the most general structure of the self-energy obtained in Eq. (10). We now show that the above reduction of the observables in independent sets is dictated by the symmetry properties of the model. #### 3.4.1 The conventional SOC case In the conventional case, the symmetry group of the model is \(C_{2V}\) with generators given by twofold rotations about the z axis \(C_{2}(z)=-i\lambda_{0}\otimes R_{2}(z)\), mirror reflection through the y axis \(M_{y}=-\lambda_{0}\otimes R_{2}(y)\) and time reversal \(T=-i\lambda_{0}\otimes\sigma_{y}K\), where \(R_{n}(\hat{u})=\exp(i(\pi/n)\hat{u}\cdot\mathbf{\sigma})\) is the n-fold spin rotation around the axis given by the unit vector \(\hat{u}\) and \(K\) is complex conjugation. Under any of these symmetry operations, each \(\zeta_{a}\) matrix transforms as \(\zeta_{a}\to\pm\zeta_{a}\). It is then possible to derive the parity eigenvalues for each \(\zeta_{a}\) matrix and the result is shown in Table 2. We see that the observables \(\zeta_{a}\) divide in four groups referring to the charge and the different spin polarization degrees of freedom. One may see that the third group, corresponding to the \(y\)-axis spin polarization includes exactly the set of \(\zeta_{a}\) matrices identified in the analysis of the single-impurity line diagram analyzed in Eq. (23). #### 3.4.2 The unconventional SOC case In the unconventional case, the symmetry group of the model is \(C_{3V}\) with generators given by threefold rotations about the z axis \(C_{3}(z)=-i\lambda_{0}\otimes R_{3}(z)\), mirror reflection through the y axis \(M_{y}=\lambda_{z}\otimes i\sigma_{y}\) and time reversal \(T=\lambda_{0}\otimes i\sigma_{y}K\). In this case, under the \(M_{y}\) and \(T\) symmetry operations above, each \(\zeta_{a}\) matrix transforms as \(\zeta_{a}\to\pm\zeta_{a}\) and we obtain the parity eigenvalues table which is shown in Table 2. Under the \(C_{3}(z)\) symmetry operations, instead, the \(\zeta_{a}\) matrices with spin polarization along the x and y axis transform into one another and do not have a defined parity. Nonetheless, as it is clear from the Table 2, the \(\zeta_{a}\) matrices split again in four distinct groups. The last two groups are related to the x and y axes spin polarization, although in a manner completely different from the conventional case. 
Indeed, the intraband observables (with the band index \(\lambda=0,3\)) correspond to the given spin polarization components, whereas the interband ones (with the band index \(\lambda=1,2\)) correspond to the other in-plane spin component, responsible for spin admixture. Similarly, the first two groups in Table 2 describe the charge and z-axis spin polarization, again showing a different symmetry in intra- and interband terms. Finally, as in the case of the conventional case, the set \((\zeta_{2},\zeta_{5},\zeta_{14},\zeta_{9})\), which is the third group in Table 2, corresponds to the set which appears in the single-impurity diagram for the effective bare vertex found in Eq. (25). #### 3.4.3 Analysis of the \(L_{ab}\) -matrix The solution of the vertex equation (15) is determined by the \(L_{ab}\) matrix, whose entries are given by the momentum integral in Eq. (18). We notice that the \(L_{ab}\) matrix is dimensionless and must be of order one, no matter what is the strength of disorder. Consider the case, for instance, when disorder is vanishingly small (i. \(u_{0}^{2}\to 0\)). One could naively assume that the \(L_{ab}\) should vanish. This is not so because for vanishing disorder, the poles of the retarded and advanced Green's functions tend to merge towards the real axis and the momentum integral diverges as \(u_{0}^{-2}\), thus compensating exactly the factor \(u_{0}^{2}\) in front of the integral. In this case the Green's functions become those for the clean system and one can analyze the symmetry properties by relying on the parity of the \(\zeta_{a}\) matrices. Under a unitary symmetry transformation the Green's functions remain invariant, whereas the \(\zeta_{a}\) matrices transforms accordingly to Table 2. Thus, for the symmetry operations \(S\) with well defined parities, the \(L\) matrix transform as \(L_{ab}\to p_{a}p_{b}L_{ab}\), where \(p_{a}\) and \(p_{b}\) are the parity eigenvalues determined in the Table 2. It is clear that the entry \(L_{ab}\) may only differ from zero if and only if \(p_{a}p_{b}>0\), i.e. if the two \(\zeta\) matrices transform with the same sign of parity. This implies, according to the analysis carried in the previous subsections, that the \(L\) matrix decomposes into independent blocks. Additionally, under time-reversal symmetry \(G^{R}\leftrightarrow G^{A}\), thus \(L_{ab}\to p_{a}p_{b}L_{ba}\). Finally, the \(C_{3}(z)\) symmetry introduces further constraints to the third and forth blocks from Table 2, but this analysis is not necessary to identify the four split blocks. The \(L\) matrix or better the combination \(\mathbf{1}-L\), which appears in the vertex equation (17) can be numerically evaluated. In Fig. 4 we plot the entries of the \(L\) matrix using a color code to illustrate the block structure. We consider two typical and distinct cases. In Figs. 4(e-h) the chemical potential lies in the second subband and both subbands are occupied, whereas in Figs. 4(a-d) only the lowest subband is occupied. The numbers on the axes refer to the \(a\) and \(b\) indices of the \(\zeta\) matrices. Clearly the blocks confirm the symmetry analysis carried out before. The symmetry analysis of the \(L\)-matrix can be connected to the general structure of the self-energy we obtained in Eq. (12). By using the identity [54]\(G^{R}-G^{A}=G^{R}(\Sigma^{R}-\Sigma^{A})G^{A}\) in Eq. (2) we get \[\Sigma^{R}-\Sigma^{A}=u_{0}^{2}\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}G^{R}( \omega,{\bf k}^{\prime})(\Sigma^{R}-\Sigma^{A})G^{A}(\omega,{\bf k}^{\prime}). 
\tag{26}\] Since the self-energy does not depend on the momentum, the above equation can be transformed into an algebraic one as \[(\Sigma^{R}-\Sigma^{A})_{a}=L_{ab}(\Sigma^{R}-\Sigma^{A})_{b}. \tag{27}\] This shows that the skew-Hermitian part of the self-energy, which forms a vector of scattering rates with components \(a=0,\ldots,15\), has a vanishing eigenvalue for the matrix \({\bf 1}-L\) discussed earlier, i.e. it belongs to the nullspace of \({\bf 1}-L\). When the self-energy is proportional to the identity matrix \(\zeta_{0}\) the null space reduces to the total charge sector and the vanishing eigenvalue of \({\bf 1}-L\) is the manifestation of charge conservation. This is what happens in the absence of SOC. In the presence of SOC, by considering Fig. 4, we may clearly distinguish the two different regimes corresponding to one (upper row, energy \(\mu=100\) meV) or two (lower row, energy \(\mu=200\) meV) bands occupied. When only one band is occupied there is a coupling among the set \(\zeta_{0}\), \(\zeta_{4}\) and \(\zeta_{12}\) for the conventional model, or set \(\zeta_{0}\), \(\zeta_{11}\) and \(\zeta_{12}\) for the unconventional models. This implies that the null space of the \({\bf 1}-L\) has components in the \(\zeta\) matrices of the above sets, i.e. the total charge fluctuating mode is coupled to the charge transfer modes between the subbands. In the case of two bands occupied the total charge sector decouples from the other charge transfer modes and has a zero eigenvalue as shown in the color code of Fig. 4. Indeed, this is what is expected based on the self-energy evaluation when the latter reduces to \(\zeta_{0}\). The two \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Conventional: \(C_{2V}\)} \\ \hline Set & \(a\) & \((\lambda,\sigma)\) & \(C_{2}(z)\) & \(M_{y}\) & \(T\) & Class \\ \hline \multirow{4}{*}{1} & 0 & (0,0) & + & + & \(+\) & \(A_{1}(s)\) \\ \cline{2-5} & 4 & (1,0) & + & + & \(+\) & \(A_{1}(s)\) \\ \cline{2-5} & 12 & (3,0) & + & + & \(+\) & \(A_{1}(s)\) \\ \cline{2-5} & 8 & (2,0) & + & + & - & \(A_{1}(s)\) \\ \hline \multirow{4}{*}{2} & 3 & (0,3) & + & - & - & \(A_{2}(R_{z})\) \\ \cline{2-5} & 7 & (1,3) & + & - & - & \(A_{2}(R_{z})\) \\ \cline{2-5} & 15 & (3,3) & + & - & - & \(A_{2}(R_{z})\) \\ \cline{2-5} & 11 & (2,3) & + & - & + & \(A_{2}(R_{z})\) \\ \cline{2-5} & 2 & (0,2) & - & + & - & \(B_{1}(x)\) \\ \cline{2-5} & 6 & (1,2) & - & + & - & \(B_{1}(x)\) \\ \cline{2-5} & 14 & (3,2) & - & + & - & \(B_{1}(x)\) \\ \cline{2-5} & 10 & (2,2) & - & + & + & \(B_{1}(x)\) \\ \hline \multirow{4}{*}{4} & 1 & (0,1) & - & - & - & \(B_{2}(y)\) \\ \cline{2-5} & 5 & (1,1) & - & - & - & \(B_{2}(y)\) \\ \cline{2-5} & 13 & (3,1) & - & - & - & \(B_{2}(y)\) \\ \cline{2-5} & 9 & (2,1) & - & - & + & \(B_{2}(y)\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Unconventional: \(C_{3V}\)} \\ \hline \(a\) & \((\lambda,\sigma)\) & \(C_{3}(z)\) & \(M_{y}\) & \(T\) & Class \\ \hline 0 & (0,0) & + & + & + & \(A_{1}(s)\) \\ \hline 11 & (2,3) & + & + & + & \(A_{1}(s)\) \\ \hline 12 & (3,0) & + & + & + & \(A_{1}(s)\) \\ \hline 7 & (1,3) & + & + & - & \(A_{1}(s)\) \\ \hline 3 & (0,3) & + & - & - & \(A_{2}(R_{z})\) \\ \hline 8 & (2,0) & + & - & - & \(A_{2}(R_{z})\) \\ \hline 15 & (3,3) & + & - & - & \(A_{2}(R_{z})\) \\ \hline 4 & (1,0) & + & - & + & \(A_{2}(R_{z})\) \\ \hline 2 & (0,2) & \(-\frac{\sqrt{3}}{2}\zeta_{1}-\frac{1}{2}\zeta_{2}\) & + & - & \(E(x,y)\) \\ 
\cline{2-5} 5 & (1,1) & \(+\frac{\sqrt{3}}{2}\zeta_{6}-\frac{1}{2}\zeta_{5}\) & + & - & \(E(x,y)\) \\ \hline 14 & (3,2) & \(-\frac{\sqrt{3}}{2}\zeta_{13}-\frac{1}{2}\zeta_{14}\) & + & - & \(E(x,y)\) \\ \cline{2-5} 9 & (2,1) & + \(\frac{\sqrt{3}}{2}\zeta_{10}-\frac{1}{2}\zeta_{9}\) & + & + & \(E(x,y)\) \\ \hline 1 & (0,1) & \(+\frac{\sqrt{3}}{2}\zeta_{2}-\frac{1}{2}\zeta_{1}\) & - & - & \(E(x,y)\) \\ \hline 6 & (1,2) & \(-\frac{\sqrt{3}}{2}\zeta_{5}-\frac{1}{2}\zeta_{6}\) & - & - & \(E(x,y)\) \\ \hline 13 & (3,1) & \(+\frac{\sqrt{3}}{2}\zeta_{14}-\frac{1}{2}\zeta_{13}\) & - & - & \(E(x,y)\) \\ \hline 10 & (2,2) & \(-\frac{\sqrt{3}}{2}\zeta_{9}-\frac{1}{2}\zeta_{10}\) & - & + & \(E(x,y)\) \\ \hline \end{tabular} \end{table} Table 2: Parity of the \(\zeta_{a}\) matrices under the symmetry transformations for the conventional and unconventional Rashba models. For the conventional model (\(C_{2V}\) group), the matrices divide into four sets corresponding to charge \(\sigma=0\) and spin polarization along the i-th axis \(\sigma=1\), \(i=1,2,3\). The case of the \(y\) polarization, \(i=2\), selects the matrices \(\zeta_{2}\), \(\zeta_{6}\), \(\zeta_{14}\) and \(\zeta_{10}\). For the unconventional model (\(C_{3V}\) group), the first and second groups mix charge (\(\sigma=0\)) and spin-z (\(\sigma=3\)) components, while the third and fourth groups mix in-plane spin components. The \(C_{3}(z)\) symmetry partners of the irrep \(E\) are split into the last two groups. charge transfer modes in this regime remain coupled among themselves. This is evidenced in the small coloured square in the top left square in each panel of the bottom row in Fig. 4. Another noticeable feature in Fig. 4 are the diagonal blocks, which describe the in-plane spin modes. For instance, in the case of the conventional model, the third diagonal block corresponding to the sets of matrices \(\zeta_{2},\zeta_{6},\zeta_{14},\zeta_{10}\) includes the spin mode with y-axis polarization. One sees that in the case of only one band occupied (top row, panels (a) and (b)) the spin mode with y-axis polarization is coupled to the spin transfer mode between the bands with the same y-axis polarization. On the other hand, when both bands are occupied, the spin mode with y-axis polarization decouples and behaves in a way similar to the single-band case. In the case of the unconventional model we focus again on third diagonal block with the matrices \(\zeta_{2},\zeta_{5},\zeta_{14},\zeta_{9}\). In the regime when only one band is occupied, the spin mode with \(y\)-axis polarization is coupled to two spin transfer modes between the bands with both \(y\)-axis (\(\zeta_{14}\))and the x-axis polarization (\(\zeta_{5}\)), in sharp contrast with the conventional case. Even when both bands are occupied the spin mode with \(y\)-axis polarization remains coupled with the mode with \(x\)-axis polarization (\(\zeta_{5}\)). ### Numerical evaluation of the current vertex The numerical evaluation of the charge current vertex begins with the evaluation of the effective bare vertex \(\bar{\Gamma}_{x}\), which is the sum of the bare vertex \(J_{soc,x}\) (19) and of the diagram with a single impurity line corresponding to the first iteration and whose integral has the matrix structure reported in Eq. (20). In Fig. 5 we show the non zero components as function of the Fermi energy. As in the case of the self-energy, two distinct regimes appear depending whether one or two bands are occupied. 
In the first case, the finite components are the ones that appear in the \(J_{soc,x}\) decomposition, Eq. (19), and are coupled by symmetry. Figure 4: Entries of the matrix \(\delta_{ab}-L_{a,b}\) with a color scale. (a-d) Top panels are calculated for \(\mu=100\) meV, such that the chemical potential lies in the first subband and hence only one band is occupied. (e-h) For the bottom panels we use \(\mu=200\) meV, such that both subbands are occupied. In the bottom row the white square at the \((0,0)\) position signal the vanishing eigenvalue associate to the total charge mode. On the other hand when both bands are occupied, the effective bare vertex vanishes, as it happens in the single-band case [53, 55]. The vanishing of the effective bare vertex implies also the vanishing of the dressed vertex and the full current vertex reduces only to the first term in the right hand side of Eq. (14). In the regime when only one band is occupied, the effective bare vertex shows a remarkable non-monotonous behavior when bands coupling is allowed, both for the conventional and unconventional cases. This non-monotonous behavior could be expected in connection with the one of the density of states, which has a peak in the lowest band, when the dispersion flattens a bit around 50 meV. The black dashed line in Fig. 5 refers to the sum of components \(a=2\) (\(\lambda_{0}\otimes\sigma_{2}\)) and \(a=14\) (\(\lambda_{3}\otimes\sigma_{2}\)), which describes the spin polarization along the y-axis in the single-subband block. As a final general remark on the importance of vertex corrections one may state the following. When both bands are occupied, the vertex corrections have a dramatic impact in producing the vanishing of the matrix structure of the current vertex, no matter how weak the disorder scattering may be. This phenomenon is completely similar to what happens in the single-band case and remains true for both conventional and unconventional models as a result of the Rashba SOC. On the other hand, when only one band is occupied, the relevance of the vertex corrections mainly depends on the strength of the disorder scattering. The most important correction is due to the diagram with a single impurity line, which yields the effective bare vertex \(\bar{\Gamma}_{x}\). In Figs. 5(a1-d1) we plot \(\bar{\Gamma}_{x}\) in units of \(\alpha_{1}/\hbar\), which is a typical scale for \(J_{soc,x}\), which clearly shows that the vertex corrections typically leads to \(|\bar{\Gamma}_{x}|<|J_{soc,x}|\). Interestingly, the vertex corrections might even flip the sign of the effective vertex with respect to \(J_{soc,x}\) as a function of the Fermi energy. Because we are in the regime of weak scattering the infinite resummation of ladder diagrams leading to the full vertex \(\Gamma_{x}\) does not have a big impact on its numerical value, thus \(\Gamma_{x}\approx\bar{\Gamma}_{x}\), as shown in Fig. 5(a2-d2). ## 4 Charge-Spin Conversion The current-induced spin polarization (CISP) is given by the response function \(\chi_{yx}\) defined in Eq. (3). In terms of the \(\zeta_{a}\) matrices we have \(S_{y}^{0}=\zeta_{2}\). Then by recalling the structure of the charge current vertex Eq. 
(14), the response function reduces to two contributions \[\chi_{yx}=\chi_{yx}^{(1)}+\chi_{yx}^{(2)}, \tag{28}\] where the first contribution is the simple bubble diagram with the spin vertex and the spin-independent velocity \[\chi_{yx}^{(1)}=\frac{\hbar}{2\pi}\frac{(-e)\hbar}{2}\int\frac{d^{2}k}{(2\pi)^{ 2}}\mathrm{Tr}\left[\zeta_{2}G^{R}(\omega,\mathbf{k})\frac{\hbar k_{x}}{m}G^{ R}(\omega,\mathbf{k})\right], \tag{29}\] and the second contribution originates from the spin-dependent part of the charge current vertex \[\chi_{yx}^{(2)}=\frac{\hbar}{2\pi}\frac{(-e)\hbar}{2}\sum_{b}\int\frac{d^{2}k} {(2\pi)^{2}}\Gamma_{x,b}\mathrm{Tr}\left[\zeta_{2}G^{R}(\omega,\mathbf{k}) \zeta_{b}G^{R}(\omega,\mathbf{k})\right]. \tag{30}\] Both integrals in Eqs. (29-30) can be evaluated with the technique previously developed and actually reduced to integrals already encountered in the derivation of the vertex corrections. The integral appearing in the first term \(\chi_{yx}^{(1)}\) has been already found in Eq. (16) for the effective bare vertex \(\bar{\Gamma}_{x}\). On the other hand the integral appearing in the second term \(\chi_{yx}^{(2)}\) can be expressed in terms of the matrix elements of the \(L\)-matrix, which carries the algebraic vertex corrections in Eq. (17). As a result we get \[\chi_{yx}=-\frac{e\hbar}{\pi u_{o}^{2}}\left(\bar{\Gamma}_{x,2}-J_{SOC,x,2} \right)-\frac{e\hbar}{\pi u_{0}^{2}}\sum_{b}L_{2b}\Gamma_{x,b}=-\frac{e\hbar}{ \pi u_{o}^{2}}\left(\Gamma_{x,2}-J_{SOC,x,2}\right). \tag{31}\] This is one of the main results of this paper and shows a remarkably compact expression, where only appear the dressed and bare vertices. Notice that to have the expression without the vertex corrections it is enough to make the replacement \(\Gamma_{x}\to J_{SOC,x}\) before the last equality, where the relation from Eq. (17) between the dressed \(\Gamma_{x}\) and effective bare \(\bar{\Gamma}_{x}\) has been used. To appreciate the importance of the vertex corrections, consider that, in the single-sub-band case, the dressed vertex vanishes exactly and hence the second term \(\chi_{yx}^{(2)}\) is only different from zero when there are no vertex corrections. This implies that in the absence of vertex corrections one obtains a result which is smaller by a factor of two. In the present case, the second term of Eq. (28) may differ from zero even in the presence of vertex corrections, but only when one subband is occupied. The numerical evaluation of the CISP is shown in Fig. 6 and it shows that the vertex corrections are always important yielding a factor around 2 between the dressed and bare vertex case. In the regime when both subbands are occupied, the dressed vertex vanishes (cf. Fig. 5) and then only the first term in Eq. (31) contributes. Furthermore, by considering the last equality in Eq. (31), the CISP is expressed only in terms of the bare vertex component \(J_{SOC,x,2}\), which vanishes (see Eq. (13)) when \(\alpha_{1}=-\alpha_{2}\) as it is evident in panels (b) and (d) in Fig. 6. In this regime, for general values of the intra-band Rashba SOC one obtains the simple result \[\chi_{yx}=-\frac{e\hbar}{\pi u_{o}^{2}}\left(-J_{SOC,x,2}\right)=e\tau D_{0} \frac{\alpha_{1}+\alpha_{2}}{\hbar}, \tag{32}\] where the we have used the expression \(\tau^{-1}=2\pi D_{0}u_{0}^{2}/\hbar\) for the scattering time, and the explicit expression from Eq. (13) of the bare vertex component \(J_{SOC,x,2}\). Hence, Eq. 
(32) generalizes to two bands the well known Edelstein result[13] with exactly a factor of 2 between the dressed and bare vertex case. We stress that this result is crucially obtained by considering the occupations of both bands. At lower Fermi energies, when only the first sub-band is occupied, the second term \(\chi_{yx}^{(2)}\) contributes and the CISP is smaller and ratio between the dressed and bare vertex case varies around 2. As predicted by the last equality in Eq. (31), the behavior of the CISP is controlled by the dressed vertex \(\Gamma_{x,2}\). In particular, the non-monotonous behavior observed in the vertex of Fig. 5 is clearly reproduced in Fig. 6, for panels (b) and (c). Such non-monotonous behavior can be traced back,as already remarked, to the one observed in density of states in Fig. 1, where in columns two and three there is a flattening of the dispersion. In order to evaluate the charge-to-spin conversion efficiency, we need to evaluate the longitudinal electrical conductivity by the charge current-charge current response function \[\sigma=\frac{\hbar e^{2}}{2\pi}\int\frac{d^{2}k}{(2\pi)^{2}}\mathrm{Tr}\left[J _{x}^{0}(\mathbf{k})G^{R}(\omega,\mathbf{k})J_{x}(\mathbf{k})G^{A}(\omega, \mathbf{k})\right], \tag{33}\] Figure 8: Dressed and bare IEE relative efficiencies \(\sigma/\chi_{yx}\) normalized by the ratio in the uncoupled limit (\(\eta=0\)). Both dressed and bare efficiencies are normalized by the dressed ratio with \(\eta=0\). Figure 6: Current-induced spin polarization with (\(\chi_{yx}\)) and without (\(\chi_{yx}^{0}\)) vertex corrections, in units of its single subband limit, as function of the Fermi energy. In all cases we find \(\chi_{yx}\approx 2\chi_{yx}^{0}\) due to the vertex corrections. Figure 7: Dressed (solid) and bare conductivity \(\sigma\) as a function of the Fermi energy. where \(J_{x}^{0}({\bf k})\) and \(J_{x}({\bf k})\) are given by Eqs. (13) and (14), respectively. By evaluating the integrals and using again the relation between the dressed, bare and effective bare vertex (cf. Eqs. (16, 17)), one finds \[\sigma =e^{2}\sum_{sn}\frac{\hbar^{2}k_{sn}^{2}(\mu)D_{sn}}{4m^{2}\Gamma_ {sn}/\hbar}+4e^{2}\tau D_{0}\left[\sum_{a}(\bar{\Gamma}_{x,a}-J_{SOC,x,a})( \Gamma_{x,a}+J_{SOC,x,a})+\sum_{ab}J_{SOC,x,a}L_{ab}\Gamma_{x,a}\right], \tag{34}\] \[\sigma =e^{2}\sum_{sn}\frac{\hbar^{2}k_{sn}^{2}(\mu)D_{sn}}{4m^{2}\Gamma_ {sn}/\hbar}+4e^{2}\tau D_{0}\sum_{a}\left(\Gamma_{x,a}\bar{\Gamma}_{x,a}-J_{ SOC,x,a}J_{SOC,x,b}\right). \tag{35}\] Above, the first expression is general, while to get the second expression we use the relation from Eq. (17), which is valid only when we account for vertex corrections. To obtain the conductivity without vertex corrections, one simply has to replace \(\Gamma_{x,a}\to J_{SOC,x,a}\) in the first expression for \(\sigma\) above. In both cases, the first term is the usual Drude expression summed over all the bands, as it can be seen by identifying \((\hbar^{2}k_{sn}^{2}(\mu)/m^{2})D_{sn}/(4\Gamma_{sn}/\hbar)\) as the diffusion coefficient of band \(sn\). The second term in Eq. (35) shows again a remarkable simplicity when expressed in terms of the dressed, bare and effective bare vertices. From this expression is also clear that this second term is at least quadratic in the SOC. The numerical results for \(\sigma\) are shown in Fig. 7, which clearly shows significant effects from the vertex corrections. While Eqs. (31, 35) provide clearly the spin polarization response and the electrical current response to an applied d.c. 
electric field, the issue of the charge-to-spin conversion efficiency is much less obvious. In the pioneering experiment by Sanchez et al.,[56] where a spin current was pumped from a ferromagnetic metal into a bismuth-silver bi-layer, the dimensional efficiency \(\lambda_{IEE}=j_{c}^{2D}/j_{s}^{3D}\) was introduced as the ratio of the induced two-dimensional interface charge current and the three-dimensional spin current flowing perpendicular to the interface of the bi-layer. We finally notice that, for the sake of convenience, the spin current is defined as carrying a unit charge \(e\) instead of carrying units of \(\hbar/2\). In the work by Song et al.,[49] a different dimensionful definition of conversion efficiency \(\lambda_{IEE}=j_{c}/j_{s}\) is adopted, where both the charge and spin current are two-dimensional quantities and \(j_{s}=e\langle S_{y}\rangle/\tau_{F}\). Also in this case the efficiency has the dimension of length due to the extra velocity factor in the definition of the charge current and the inverse momentum relaxation time factor in the spin current. In our opinion the choice of what conversion efficiency \(\lambda_{IEE}\) to adopt cannot be decided in general, but must be selected depending on what experimental setting is under consideration. In our Fig. 8 we have adopted as a measure of the efficiency the ratio \(\sigma/\chi_{yx}\) normalized by the ratio to the uncoupled-band limit (\(\eta=0\)). To conclude this section, we report in Fig. 9 the spin relaxation times. These are provided by the eigenvalues of the \(({\bf 1}-L)\)-matrix of the algebraic vertex equation (17). To appreciate this, consider the vertex equation at finite external frequency \(\Omega\), as due, for instance, to an a.c. electromagnetic field. The retarded and advanced Green's function acquire different frequencies, \(\omega\to\omega+\Omega/2\) and \(\omega\to\omega-\Omega/2\), respectively. When performing the momentum integral defining the matrix elements of the \(L\)-matrix, (cf. Eq. (18)) one obtains a term proportional to \(\Omega\tau_{0}\ll 1\), which would give rise, after Fourier antitransforming back to real times, to a time derivative. Hence the eigenvalues of the \(({\bf 1}-L)\)-matrix would yield the relaxation rates in units of the inverse scattering time \(\tau_{0}\). In Fig. 9 we show the eigenvalues of the \(({\bf 1}-L)^{-1}\) relative to the block to whom the spin polarization \(\zeta_{2}=\lambda_{0}\otimes\sigma_{2}\) belongs. The figure uses a color code such that each observable has a finite weight in each eigenvalue. For instance in the panel (a) of Fig. 9 there is an eigenvalue with a lifetime around 2, which mostly coincides with the spin polarization \(\lambda_{0}\otimes\sigma_{2}\). Such a value of 2 can be understood by recalling the single-band case. In this latter case, in the presence of the Rashba SOC, the spin relaxation time, \(\tau_{s}\) for the Edelstein polarization is the well known Dyakonov-Perel spin relaxation time in the diffusive regime.[57] In the deep diffusive regime, when the disorder broadening is larger than the spin splitting due to the SOC, the spin relaxation time is much longer than the quasiparticle relaxation time \(\tau_{0}\), i.e. \(\tau_{s}\gg\tau_{0}\). This is due to the fact that several impurity scattering events are necessary to relax the initial spin orientation. 
For weak scattering, as it is considered here, spin precession may occur in between two impurity scattering events, and \(\tau_{s}\) becomes of the order of the relaxation time \(\tau_{0}\). In this, almost Elliot-Yafet regime, the single-band model predicts the exact relation \(\tau_{s}=2\tau_{0}\).[57] In the conventional model (see panels (a) and (b) of Fig. 9) a spin mode lifetime, consisting mostly of \(\lambda_{0}\otimes\sigma_{2}\) with the value of 2 in units of \(\tau_{0}\) is always present, signalling that the spin dynamics of the two-band model behaves similarly to the single-band case. This is not surprising considering the spin texture for this case as shown in Fig. 1 in the corresponding panels. In the unconventional model, on the other hand, there is a again a spin mode with lifetime of 2, but its composition is markedly different, consisting mostly of an interband spin mode \(\lambda_{3}\otimes\sigma_{2}\) with spin polarization along the y-axis, but staggered between subbands. This is an evident signal of a completely different spin dynamics, a result of the spin admixture of the energy bands. ## 5 Conclusions In this paper we have studied the CISP and the charge-to-spin conversion efficiency in the two-band model in the presence of both intra- and interband Rashba SOC. We have considered both conventional and unconventional interband Rashba coupling with the aim to analyze whether non-trivial spin texture may produce a more efficient charge-to-spin conversion. The CISP is a non-equilibrium phenomenon which is affected by the impurity disorder scattering relying on the deformation of the Fermi circle in the presence on an applied electric field. For this reason we have taken into account disorder scattering by means of the standard impurity diagrammatic technique. The evaluation of the rainbow diagram for the self-energy and the ladder diagram for the vertex has been carried out by fully exploiting the symmetry properties of the model, which allow to reduce the general algebraic vertex equation from dimension 16 by 16 to 4 by 4 blocks. We have found that generically vertex corrections, at the level of the Born approximation, are important when both bands are occupied, whereas they are typically a small correction at lower Fermi energies when only one band is occupied. We have found that the conventional or unconventional character of the inter-band SOC plays a role together with the relative signs of the intra-band SOC in the two subbands. Furthermore, for both types of inter-band SOC, it is important to take into account the occupation of all the bands for a consistent treatment of disorder scattering. Hence, from the point of view of the efficiency, the unconventional model does not show itself better that the conventional one. On the other hand, it is also true that the unconventional model hosts a far richer spin dynamics that cannot be reduced to the one of the single-band case. This is a marked difference among the two models. We finally conclude by pointing out possible developments of the present analysis. One direction is to consider stronger disorder, which is expected to require a fully self-consistent Born approximation, which can be carried out along the lines shown here, but is expected to be computationally more demanding. A second interesting direction is to expand the investigation of the complex spin dynamics of the unconventional model. We leave these promising paths to future work. 
## Appendix A Derivation of the Models The conventional case refers to a model that can be derived from two-subbands GaAs quantum wells grown along the [001] zincblende direction [36]. If the quantum well is symmetric, its crystal lattice is invariant under the Figure 9: The relaxation times for the various observables in the block associated to spin polarization. In red the relaxation rate for the total spin polarization \(\zeta_{2}=\lambda_{0}\otimes\sigma_{2}\). In all panels, the black line has \(\tau_{S}/\tau_{0}=1\), but it was shifted downwards for clarity. point group. But in general, for an asymmetric well or in the presence of external fields, the structural inversion asymmetry (SIA) yields the \(C_{2V}=\{C_{2}(z),M_{y}\}\) point group. Here \(C_{2}(z)\) and \(M_{y}\) are, respectively, the group generators referring to a \(\pi\) rotation around the \(z\) axis, and the mirror \(y\rightarrow-y\), with \(x=[110]\) and \(y=[1\bar{1}0]\). In this case, both subband envelope functions transform as the \(A_{1}(S)\oplus A_{1}(Z)\) irreps, and including spin we get \(2A_{1}\otimes D_{1/2}=2\Gamma_{5}\), where \(\Gamma_{5}=D_{1/2}\) is the pure spinor irrep. Therefore, we can label the basis set \(\left|j,\sigma\right\rangle\) and order it as \(\{\left|1\uparrow\right\rangle,\left|1\downarrow\right\rangle,\left|2\uparrow \right\rangle,\left|2\downarrow\right\rangle\}\). Here \(j=\{1,2\}\) labels the subbands, and \(\sigma=\{\uparrow,\downarrow\}\) the spin. From these states, we obtain the representations for the group generators \(C_{2}(z)=\lambda_{0}\otimes R_{2}(z)\), \(M_{y}=-\lambda_{0}\otimes R_{2}(y)\), and for time-reversal symmetry \(T=-i\lambda_{0}\otimes\sigma_{y}K\). Here \(\lambda_{0},\lambda_{x},\lambda_{y},\lambda_{z}\) are the identity and Pauli matrices in the subband space and \(R_{n}(\hat{u})=\exp(i\frac{\pi}{n}\hat{u}\cdot\mathbf{\sigma})\) is the spin rotation by \(2\pi/n\) over the unit vector \(\hat{u}\). From these representations, using the method of invariants via the Qsymm code,[58] we obtain the conventional case Hamiltonian \(H_{C}\) as \[H_{C}=\begin{pmatrix}\varepsilon_{1}&-i\alpha_{1}k_{-}&0&-i\eta_{C}k_{-}\\ i\alpha_{1}k_{+}&\varepsilon_{1}&i\eta_{C}k_{+}&0\\ 0&-i\eta_{C}k_{-}&\varepsilon_{2}&-i\alpha_{2}k_{-}\\ i\eta_{C}k_{+}&0&i\alpha_{2}k_{+}&\varepsilon_{2}\end{pmatrix}. \tag{36}\] Here, for each subband \(j\), \(\varepsilon_{j}=\varepsilon_{j}^{0}+\frac{h^{2}}{2m}k^{2}\), \(\varepsilon_{j}^{0}\) are the band edges, the effective mass \(m\) is assumed to be the same in both subbands, \(\alpha_{j}\) is the intra-subband Rashba SOC, \(\eta_{C}\) is the inter-subband Rashba SOC, \(\mathbf{k}=(k_{x},k_{y})\) is the in-plane quasi-momentum, and \(k_{\pm}=k_{x}\pm ik_{y}\). In contrast, the unconventional case occurs in 2D materials that transform as the \(C_{3V}\) group at the \(\Gamma\) point as, for instance, the monolayer OsBi\({}_{2}\) discussed in Ref. [49]. There, the orbitals of the two relevant bands transform as the \(E\) irrep of the \(C_{3V}\) single group, and including spin it splits into \(2E\otimes D_{1/2}=2(\Gamma_{4}\oplus\Gamma_{5}\oplus\Gamma_{6})\), where \(\Gamma_{4}=D_{1/2}\) is the pure spinor irrep, and \(\Gamma_{5}\oplus\Gamma_{6}\) are 1D irreps that form Kramer's pairs under time-reversal symmetry. More specifically, the single group orbital representations \(E^{j}\) for each subband \(j=\{1,2\}\) are \(E^{1}=\left|X\pm Z\right\rangle\) and \(E^{2}=\left|XY\pm\frac{i}{2}(X^{2}-Y^{2})\right\rangle\), where \(X_{\pm}=X\pm iY\). 
Including spin, the \(E^{1}\) orbitals splits into \(\Gamma_{5}^{1}\oplus\Gamma_{6}^{1}=\{\left|X_{+}Z\uparrow\right\rangle,\left| X_{-}Z\downarrow\right\rangle\}\), and \(\Gamma_{4}^{1}=\{\left|X-Z\uparrow\right\rangle,\left|X+Z\downarrow\right\rangle\}\). For the other subband, \(E^{2}\) splits into \(\Gamma_{5}^{2}\oplus\Gamma_{6}^{2}=\{\left|XY+\frac{i}{2}(X^{2}-Y^{2})\uparrow \right\rangle,\left|XY-\frac{i}{2}(X^{2}-Y^{2})\downarrow\right\rangle\}\), and \(\Gamma_{4}^{2}=\{\left|XY-\frac{i}{2}(X^{2}-Y^{2})\uparrow\right\rangle,\left| XY+\frac{i}{2}(X^{2}-Y^{2})\downarrow\right\rangle\}\). Interestingly, similarly to the conventional case, here the unconventional model also arises from the spinor irreps \(\Gamma_{4}^{1}\oplus\Gamma_{4}^{2}\), but under different constraints from the \(C_{3V}\) group. Sorting this basis set as \(\Gamma_{4}^{1}\oplus\Gamma_{4}^{2}=\{\left|X_{-}Z\uparrow\right\rangle,\left| X_{+}Z\downarrow\right\rangle,\left|XY-\frac{i}{2}(X^{2}-Y^{2})\uparrow \right\rangle,\left|XY+\frac{i}{2}(X^{2}-Y^{2})\downarrow\right\rangle\} \equiv\{\left|1\uparrow\right\rangle,\left|1\downarrow\right\rangle,\left|2 \uparrow\right\rangle,\left|2\downarrow\right\rangle,\left|2\downarrow\right\rangle\}\), the group generators read as \(C_{3}(z)=-\lambda_{0}\otimes R_{3}(z)\), \(M_{y}=\lambda_{z}\otimes(i\sigma_{y})\), and the time-reversal operator is \(T=\lambda_{0}\otimes(i\sigma_{y})K\). These lead to the unconventional Hamiltonian \(H_{U}\), which read as \[H_{U}=\begin{pmatrix}\varepsilon_{1}&-i\alpha_{1}k_{-}&0&\eta_{U}k_{-}\\ i\alpha_{1}k_{+}&\varepsilon_{1}&\eta_{U}k_{+}&0\\ 0&\eta_{U}k_{-}&\varepsilon_{2}&-i\alpha_{2}k_{-}\\ \eta_{U}k_{+}&0&i\alpha_{2}k_{+}&\varepsilon_{2}\end{pmatrix}. \tag{37}\] Here \(\eta_{U}\) is the intersubband Rashba SOC for the unconventional case, and the other quantities match the definitions from \(H_{C}\) above. ## Appendix B The "Bubble" integrals When evaluting the vertex corrections and the response functions, one encounters integrals of the type (see Eq. (18)) \[L_{ab}=u_{0}^{2}\int\frac{d^{2}k}{(2\pi)^{2}}\text{Tr}\left[\zeta_{a}G^{R}( \omega,\mathbf{k})\zeta_{b}G^{A}(\omega,\mathbf{k})\right]. \tag{38}\] By setting \(g_{sn}^{R(A)}(\omega,k)=(\omega-E_{sn}(k))^{-1}\) and recalling the spectral decomposition shown in Eq. (2) one obtains \[L_{ab}=u_{0}^{2}\sum_{sns^{\prime}n^{\prime}}\int\frac{d^{2}k}{(2\pi)^{2}}g_{sn}^ {R}(\omega,k)g_{s^{\prime}n^{\prime}}^{A}(\omega,k)\text{Tr}\left[\zeta_{a}P_{ sn}(\mathbf{k})\zeta_{b}P_{s^{\prime}n^{\prime}}(\mathbf{k})\right]. \tag{39}\] We notice that all the angle dependence is within the projectors. We may then use the decomposition from Eq. (21) and perform at once the integral over \(\theta\) \[P^{ab}_{sns^{\prime}n^{\prime}}(k)\equiv\int_{0}^{2\pi}\frac{d\theta}{2\pi}{\rm Tr }\left[\zeta_{a}P_{sn}({\bf k})\zeta_{b}P_{s^{\prime}n^{\prime}}({\bf k}) \right]={\rm Tr}\left[\zeta_{a}P_{sn}^{0}\zeta_{b}P_{s^{\prime}n^{\prime}}^{0}+ \zeta_{a}P_{sn}^{+}\zeta_{b}P_{s^{\prime}n^{\prime}}^{-}+\zeta_{a}P_{sn}^{-} \zeta_{b}P_{s^{\prime}n^{\prime}}^{+}\right], \tag{40}\] where we have omitted for brevity the dependence on the absolute value of the momentum \(P_{sn}^{0}\equiv P_{sn}^{0}(k)\). 
The matrix elements \(L_{0a}=L_{a0}\) acquire a simpler expression because the closeness of the two Green's functions under the trace symbol allows to exploit the property of the projectors and one gets \[L_{0a}=u_{0}^{2}\sum_{sn}\int\frac{d^{2}k}{(2\pi)^{2}}g_{sn}^{R}(\omega,k)g_{ sn}^{A}(\omega,k){\rm Tr}\left[P_{sn}({\bf k})\zeta_{a}\right]=u_{0}^{2} \sum_{sn}\int\frac{kdk}{2\pi}g_{sn}^{R}(\omega,k)g_{sn}^{A}(\omega,k){\rm Tr }\left[P_{sn}^{0}(k)\zeta_{a}\right]. \tag{41}\] The above matrix element can only differ from zero when the index \(a\) belongs to the set of \(\zeta\) matrices appearing in the decomposition of \(P_{sn}^{0}(k)\) as shown in Eqs. (22, 24). To evaluate the matrix \(L_{ab}\) we further need the integral over the absolute value of the momentum \[{\cal G}^{(2)}_{sns^{\prime}n^{\prime}}=\int\frac{kdk}{2\pi}g_{sn}^{R}(\omega, k)g_{s^{\prime}n^{\prime}}^{A}(\omega,k). \tag{42}\] The above integral depends on the comparison between the energy differences \(E_{sn}-E_{s^{\prime}n^{\prime}}\) and the disorder-induced broadening \(\Gamma_{sn}=-{\rm Im}\Sigma_{sn}^{R}\), \(\Gamma_{s^{\prime}n^{\prime}}=-{\rm Im}\Sigma_{s^{\prime}n^{\prime}}^{R}\), \(\Sigma_{sn}^{R}\) being the elements of the self-energy in the diagonal basis. For the case \(E_{sn}-E_{s^{\prime}n^{\prime}}\ll\Gamma_{sn}\sim\Gamma_{s^{\prime}n^{\prime}}\), one has \[{\cal G}^{(2)}_{sns^{\prime}n^{\prime}}=\frac{\pi}{2}\left(\frac{D_{sn}}{ \Gamma_{sn}}+\frac{D_{s^{\prime}n^{\prime}}}{\Gamma_{s^{\prime}n^{\prime}}} \right). \tag{43}\] In the opposite case of large energy separation \(E_{sn}-E_{s^{\prime}n^{\prime}}\gg\Gamma_{sn}\sim\Gamma_{s^{\prime}n^{\prime}}\) we have instead \[{\cal G}^{(2)}_{sns^{\prime}n^{\prime}}=-i\pi\left(\frac{D_{sn}}{\Delta_{sn}} +\frac{D_{s^{\prime}n^{\prime}}}{\Delta_{s^{\prime}n^{\prime}}}\right), \tag{44}\] where \(\Delta_{sn}=E_{sn}-E_{s^{\prime}n^{\prime}}\) with \(E_{sn}\equiv E_{sn}(k_{sn}(\mu))\) and \(E_{s^{\prime}n^{\prime}}\equiv E_{s^{\prime}n^{\prime}}(k_{sn}(\mu))\) and a similar expression for \(\Delta_{s^{\prime}n^{\prime}}\). For the case in Eq. (43), we see that \({\cal G}^{(2)}_{sns^{\prime}n^{\prime}}\propto u_{0}^{-2}\), which exactly cancels the \(u_{0}^{2}\) prefactor in \(L_{ab}\) and gives its leading order contribution. On the other hand, for the case in Eq. (44), the \(u_{0}^{2}\) term appears only in the imaginary part of \(\Delta_{sn}\), which is assumed to be small, and the leading contribution of \({\cal G}^{(2)}_{sns^{\prime}n^{\prime}}\) in this case is independent of \(u_{0}^{2}\). Consequently, its contribution to the \(L_{ab}\) matrix is of order \(u_{0}^{2}\), which we neglect within the first Born approximation. ###### Acknowledgements. G.J.F. acknowledges support from the Brazilian funding agencies CNPq, CAPES and FAPEMIG (Grant PPM-00798-18). J.Y.F. acknowledges support by the National Natural Science Foundation of China (Grants No. 12274256 and No. 11874236) and the Major Basic Program of Natural Science Foundation of Shandong Province (Grant No. ZR2021ZD01). One of the authors (J.Y.F.) thanks Rui Song and Ning Hao for helpful discussions.
2303.16705
Planar 3-way Edge Perfect Matching Leads to A Holant Dichotomy
We prove a complexity dichotomy theorem for a class of Holant problems on planar 3-regular bipartite graphs. The complexity dichotomy states that for every weighted constraint function $f$ defining the problem (the weights can even be negative), the problem is either computable in polynomial time if $f$ satisfies a tractability criterion, or \#P-hard otherwise. One particular problem in this problem space is a long-standing open problem of Moore and Robson on counting Cubic Planar X3C. The dichotomy resolves this problem by showing that it is \numP-hard. Our proof relies on the machinery of signature theory developed in the study of Holant problems. An essential ingredient in our proof of the main dichotomy theorem is a pure graph-theoretic result: Excepting some trivial cases, every 3-regular plane graph has a planar 3-way edge perfect matching. The proof technique of this graph-theoretic result is a combination of algebraic and combinatorial methods. The P-time tractability criterion of the dichotomy is explicit. Other than the known classes of tractable constraint functions (degenerate, affine, product type, matchgates-transformable) we also identify a new infinite set of P-time computable planar Holant problems; however, its tractability is not by a direct holographic transformation to matchgates, but by a combination of this method and a global argument. The complexity dichotomy states that everything else in this Holant class is \#P-hard.
Jin-Yi Cai, Austen Z. Fan
2023-03-29T13:54:50Z
http://arxiv.org/abs/2303.16705v1
# Planar 3-way Edge Perfect Matching Leads to A Holant Dichotomy ###### Abstract We prove a complexity dichotomy theorem for a class of Holant problems on planar 3-regular bipartite graphs. The complexity dichotomy states that for every weighted constraint function \(f\) defining the problem (the weights can even be negative), the problem is either computable in polynomial time if \(f\) satisfies a tractability criterion, or #P-hard otherwise. One particular problem in this problem space is a long-standing open problem of Moore and Robson [14] on counting Cubic Planar X3C. The dichotomy resolves this problem by showing that it is #P-hard. Our proof relies on the machinery of signature theory developed in the study of Holant problems. An essential ingredient in our proof of the main dichotomy theorem is a pure graph-theoretic result: Excepting some trivial cases, every 3-regular plane graph has a planar 3-way edge perfect matching. The proof technique of this graph-theoretic result is a combination of algebraic and combinatorial methods. The P-time tractability criterion of the dichotomy is explicit. Other than the known classes of tractable constraint functions (degenerate, affine, product type, matchgates-transformable) we also identify a new infinite set of P-time computable planar Holant problems; however, its tractability is not by a direct holographic transformation to matchgates, but by a combination of this method and a global argument. The complexity dichotomy states that everything else in this Holant class is #P-hard. Introduction Holant problems are also known as edge-coloring models. They can express a broad class of counting problems, such as counting matchings, perfect matchings (#PM), proper edge-colorings, cycle coverings, and a host of counting orientation problems such as counting Eulerian orientations or the six-vertex model. Every counting constraint satisfaction problem (#CSP) can be expressed as a Holant problem. On the other hand, Freedman, Lovasz and Schrijver [10] proved that the prototypical Holant problem #PM cannot be expressed as a graph homomorphism function (vertex-coloring model) by any real valued constraint function. This is true even for complex valued constraint functions [11]. Some problems are #P-hard in general, yet computable on planar graphs. The problem #PM is such a problem [21, 12]. A most fascinating algorithm--the FKT algorithm [13, 14, 15]--computes #PM in polynomial time (FP, polynomial-time computable functions) for planar graphs. Valiant introduced holographic algorithms which are non-parsimonious reductions to the FKT algorithm, placing many planar counting problems in FP that seemed to be intractable. To understand these algorithms a signature theory was developed and the Holant framework was introduced. Stated in this signature theory, Valiant's holographic algorithms boil down to what constraint functions (signatures) can be realized by the so-called matchgate signatures under a holographic transformation. Delineating the precise boundary of FP tractability for these problems has been a central focus in the classification theory of counting problems [1, 15, 16, 17, 18, 19]. A general theme has emerged: for very broad classes of counting problems, one can classify every problem in the class to be of exactly one of three types: (1) FP, (2) #P-hard in general but FP on planar graphs, or (3) #P-hard on planar graphs. 
Furthermore, for all #CSP on Boolean variables (which includes vertex models), Valiant's holographic algorithm is a universal algorithm [15] that solves problems in (2)1. In this paper, we prove that for a class of bipartite Holant problems, this three-way classification _holds_. However, there are _two methods_ for planar tractability in type (2): In addition to holographic transformations to matchgates, there is another type which combines this transformation with a global argument. Either method alone is not, but _together_ they do form, a universal strategy for planar tractability. Footnote 1: However, this is not true for Holant problems in general [15]. If one recalls that the FKT algorithm solves #PM for planar graphs, which is _the_ prototypical Holant problem but not a vertex model, this is particularly intriguing. We briefly define Holant problems on Boolean variables. An input is a signature grid \(\Omega\) consisting of a graph \(G=(V,E)\) with each vertex \(v\) labeled by a constraint function \(f_{v}\) (also called a signature). The Holant problem is to compute a sum-of-product \(\operatorname{Holant}(\Omega)=\sum_{\sigma:E\rightarrow\{0,1\}}\prod_{v\in V}f_ {v}(\sigma|_{E(v)})\), where \(E(v)\) denotes the incident edges of \(v\). E.g., #PM is the counting problem where each \(f_{v}\) is the 0-1 valued Exact-One function. In planar Holant problems, denoted by Pl-Holant, \(G\) is required to be planar, and \(f_{v}\) takes inputs from \(E(v)\) which is given a cyclic order starting from some edge (specified by \(\Omega\)). In this paper, we study a class of Holant problems whose input graphs are planar, 3-regular and bipartite. More precisely, let \(f(x,y,z)=[f_{0},f_{1},f_{2},f_{3}]\) be any ternary constraint function which takes value \(f_{i}\in\mathbb{Q}\), if the input has Hamming weight \(i\). We allow both positive and negative values. We study Pl-Holant (\(f\mid(=_{3})\)), the Holant problem on planar, 3-regular bipartite graphs where LHS vertices are assigned \(f\) and RHS vertices are assigned a ternary equality (\(=_{3}\)). Without planarity, a complexity dichotomy was proved for these bipartite Holant problem in [15]. Planarity plus regularity add considerable difficulty. One can think of them as counting problems on 3-regular 3-uniform hypergraphs, or set systems where every subset has cardinality 3 and every element appears in 3 subsets. The planarity refers to its (bipartite) incidence graph. These include some well studied problems. One long-standing open problem raised by Moore and Robson in [14] is counting Cubic-Planar-X3C, (X3C stands for Exact-3-Cover), or equivalently Cubic Planar Monotone 1-in-3 SAT. Expressed as a Holant problem it is \(\text{Pl-Holant}\left([0,1,0,0]\mid(=_{3})\right)\), where \([0,1,0,0]\) is the ternary Exact-One function. Schaefer [13] proved that Monotone 1-in-3 SAT is NP-complete. Lichtenstein [16] first considered the complexity of many planar problems, and Laroche [15] proved that Planar Monotone 1-in-3 SAT is NP-complete. Monotone 1-in-3 SAT is the same as X3C. Dyer and Frieze [10] proved the NP-completeness of Planar X3C and 3DM where each element is in either 2 or 3 subsets (of cardinality 3). Moore and Robson [14], in a reduction using ingenious combinatorial gadgets, showed that this problem remains NP-complete when each element is in exactly 3 subsets. 
However, they noted that they were not able to conclude the #P-hardness of its counting version, which is precisely \(\text{Pl-Holant}\left([0,1,0,0]\mid(=_{3})\right)\), while all previous NP-complete proofs listed here do extend to #P-hardness for its counting version. We observe that these proofs are combinatorial, and they become increasingly more delicate with planarity and regularity restrictions. Our proof is carried out using the machinery of signature theory developed in the study of Holant problems. These are algebraic proofs which show that the underlying combinatorial constructions succeed. This machinery demonstrates the power of using _algebraic_ method to prove complexity results which are combinatorial in nature. Indeed, this is exactly in the spirit of Valiant's holographic algorithms which use arithmetic cancellations to achieve reductions that are globally valid for counting, but solutions do not correspond in a 1-1 fashion (i.e., non-parsimonious reductions). One difficulty in working with 3-regular bipartite Holant problems is the severe limitation on the gadgets that can be possibly constructed. One can show that on either side of the bipartite problem, every constructible gadget defines a constraint function having arity a multiple of 3. So in particular, one cannot directly produce unary signatures, or binary signatures on either side. One _can_ produce "straddled" signatures that take some input variables from one side and some from the other. Typically a "degenerate" signature is not very useful in the proof of a dichotomy theorem. A counter-intuitive idea from [11] is to utilize straddled and degenerate signatures, to "virtually" produce unary signatures. This idea led to a complexity dichotomy for these bipartite counting problems in the setting that ignores planarity. But the essence of Valiant's holographic algorithm and the study of Holant problems is to account for planar tractability, and we know there are problems in this class that are #P-hard in general but in FP on planar graphs. In this paper we settle that by proving a planar complexity dichotomy. A major technical challenge is how to "virtually" produce unary signatures in a planar way. We prove a pure graph-theoretic result that says that, except in some trivial cases, every 3-regular plane graph 2 has a planar 3-way edge perfect matching (P3EM). We use it as an essential ingredient to the proof of the dichotomy. This result should be of independent interest. The proof technique to prove this matching theorem is a combination of algebraic and combinatorial methods. This theorem lets us virtually "manufacture" and then "absorb" unary signatures in the #P-hardness reduction. This allows us to carry out the needed #P-hardness reductions in a planar way. Footnote 2: A 3-regular graph is also called a _cubic_ graph. Properties of cubic planar graphs have been studied extensively [13, 1, 12, 14]. Preliminaries and Our Main Theorem A (symmetric) constraint function (a.k.a. signature) of arity \(n\) is \(f=[f_{0},f_{1},\ldots,f_{n}]\), where \(f_{i}\) denotes the function value on inputs of Hamming weight \(i\). E.g., the ternary Exact-One function is \([0,1,0,0]\), and the ternary Equality function (\(=_{3}\)) is \([1,0,0,1]\). In this paper we consider the following set of Holant problems, denoted by \(\text{Pl-Holant}\left(f\mid(=_{3})\right)\), where \(f\) is a ternary function. 
An input is a signature grid \(\Omega\) consisting of a planar 3-regular bipartite graph \(G=(U,V,E)\), where each vertex in \(U\) is assigned \(f=[f_{0},f_{1},f_{2},f_{3}]\) with values \(f_{i}\in\mathbb{Q}\), and each vertex in \(V\) is assigned \((=_{3})\). The Holant problem is to compute

\[\text{Holant}\left(\Omega\right)=\sum_{\sigma:E\rightarrow\{0,1\}}\prod_{u\in U}f\left(\sigma|_{E(u)}\right)\prod_{v\in V}\left(=_{3}\right)\left(\sigma|_{E(v)}\right).\]

For clarity, we shall say that vertices in \(U\) are on the left hand side (LHS) and vertices in \(V\) are on the right hand side (RHS). We can write a signature of arity \(n\) as a vector in \(\mathbb{Q}^{2^{n}}\) indexed in lexicographic order. A (symmetric) signature \(f\) is _degenerate_ if there exists a unary signature \(u\in\mathbb{C}^{2}\) such that \(f=u^{\otimes n}\), the \(n\)th tensor power. The main result of this paper is the following dichotomy theorem:

**Theorem 2.1**.: \(\text{Pl-Holant}\left(f\mid(=_{3})\right)\) _where \(f=[f_{0},f_{1},f_{2},f_{3}]\) and \(f_{i}\in\mathbb{Q}\) \((0\leq i\leq 3)\) is #P-hard except in the following cases, for which the problem is in \(\mathrm{FP}\): (1) \(f\) is degenerate; (2) \(f=[a,0,0,b]\), for some \(a,b\); (3) \(f=[a,0,\pm a,0]\), \([0,a,0,\pm a]\), \([a,-a,-a,a]\), or \([a,a,-a,-a]\) for some \(a\); (4) \(f=[a,b,b,a]\) or \(f=[a,b,-b,-a]\), for some \(a,b\); (5) \(f=[3a+b,-a-b,-a+b,3a-b]\) for some \(a,b\). Without the planar restriction, the problem \(\text{Holant}(f\mid(=_{3}))\) remains in \(\mathrm{FP}\) in cases (1), (2) and (3), but is #P-hard in cases (4) and (5)._

In case (1), the signature \(f\) decomposes into three unary signatures. In case (2), \(f\) is a generalized equality. In case (3), \(f\) is in the affine class. In case (4), the Holant problem is transformable to planar #PM with matchgates (see more details about these tractable classes in [1]). In case (5), the planar P-time tractability is _neither_ by Valiant's holographic reduction alone, _nor_ entirely independent from it; rather, it is obtained by a holographic reduction combined with a global argument. As mentioned in Section 1, counting Cubic-Planar-X3C is just \(\text{Pl-Holant}\left([0,1,0,0]\mid(=_{3})\right)\), the counting problem of Moore and Robson [14]. It clearly belongs to this class. It is also equivalent to Cubic Planar Monotone 1-in-3 SAT. By Theorem 2.1, it is #P-complete. To see that case (5) is planar tractable, we prove that for any \(a\) and \(b\), the value of \(\text{Pl-Holant}\left(f\mid(=_{3})\right)\) for \(f=[3a+b,-a-b,-a+b,3a-b]\) on any planar signature grid exactly equals the value of \(\text{Pl-Holant}\left([0,2a,0,0]\mid[0,1,0,0]\right)\) on the same signature grid, and thus can be computed by the FKT algorithm for counting perfect matchings. Indeed, by a holographic transformation using \(H=\left[\begin{smallmatrix}1&1\\ 1&-1\end{smallmatrix}\right]\) we have the following sequence of equivalences:

\[\text{Pl-Holant}\left(f\mid(=_{3})\right)\;\equiv_{T}\;\text{Pl-Holant}\left(fH^{\otimes 3}\mid(H^{\otimes 3})^{-1}(=_{3})\right)\;\equiv_{T}\;\text{Pl-Holant}\left([0,0,2a,2b]\mid[1,0,1,0]\right)\;\equiv_{T}\;\text{Pl-Holant}\left([0,2a,0,0]\mid[0,1,0,0]\right).\]

Here \(fH^{\otimes 3}=4\,[0,0,2a,2b]\) and \((H^{\otimes 3})^{-1}(=_{3})=\frac{1}{4}[1,0,1,0]\), and the constant factors \(4\) and \(\frac{1}{4}\) cancel in the second equivalence, since a 3-regular bipartite graph has equally many LHS and RHS vertices. The third equivalence follows from the observation that for each nonzero term in the Holant sum, every vertex on the LHS has at least two of its three edges assigned 1 (from \([0,0,2a,2b]\)), while every vertex on the RHS has at most two of its three edges assigned 1 (from \([1,0,1,0]\)). The graph being bipartite and 3-regular, the numbers of vertices on the two sides must be equal, so every vertex has exactly two incident edges assigned 1; flipping 0 and 1 on every edge then yields the last problem. The transformed signatures can be checked symbolically, as in the sketch below.
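The following sympy sketch (ours) verifies the two transformed signatures used above. A symmetric ternary signature \([f_{0},f_{1},f_{2},f_{3}]\) is expanded into its full length-8 vector over \(\{0,1\}^{3}\) before multiplying by \(H^{\otimes 3}\); indices 0, 1, 3, 7 have Hamming weights 0, 1, 2, 3.

```python
import sympy as sp

a, b = sp.symbols('a b')
H = sp.Matrix([[1, 1], [1, -1]])

def kron(A, B):  # Kronecker product, kept explicit for self-containment
    return sp.Matrix(A.rows * B.rows, A.cols * B.cols,
                     lambda i, j: A[i // B.rows, j // B.cols] * B[i % B.rows, j % B.cols])

H3 = kron(H, kron(H, H))

def full(sym):   # symmetric [f0,f1,f2,f3] -> vector indexed by {0,1}^3
    return sp.Matrix([sym[bin(i).count('1')] for i in range(8)])

f = [3*a + b, -a - b, -a + b, 3*a - b]
g = full(f).T * H3                  # row vector f H^{x3}
print([sp.expand(g[i]) for i in (0, 1, 3, 7)])   # [0, 0, 8*a, 8*b] = 4*[0,0,2a,2b]
h = H3.inv() * full([1, 0, 0, 1])   # (H^{x3})^{-1} (=_3)
print([h[i] for i in (0, 1, 3, 7)])              # [1/4, 0, 1/4, 0]
```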
An example of a planar tractable problem that belongs to case (5) is as follows. It can be viewed as a covering problem on 3-uniform hypergraphs of degree 3. We say \((X,\mathcal{S})\) is a 3-regular \(k\)-uniform set system (or 3-regular \(k\)-uniform hypergraph) if \(\mathcal{S}\) consists of a family of sets \(S\subset X\), each of size \(|S|=k\), and every \(x\in X\) is in exactly 3 sets. If \(k=2\) this is just an ordinary 3-regular graph (where the 2-subsets are ordinary edges). We consider 3-regular 3-uniform set systems. We say \(\mathcal{S}^{\prime}\) is a _leafless partial cover_ if every \(x\in\bigcup_{S\in\mathcal{S}^{\prime}}S\) belongs to more than one set \(S\in\mathcal{S}^{\prime}\). We say \(x\) is _lightly covered_ if \(|\{S\in\mathcal{S}^{\prime}:x\in S\}|\) is 2, and _heavily covered_ if this number is 3.

**Problem**: Weighted-Leafless-Partial-Cover.
**Input**: A 3-regular 3-uniform set system \((X,\mathcal{S})\).
**Output**: \(\sum_{\mathcal{S}^{\prime}}(-1)^{l}2^{h}\), where the sum is over all leafless partial covers \(\mathcal{S}^{\prime}\), and \(l\) (resp. \(h\)) is the number of \(x\in X\) that are lightly covered (resp. heavily covered).

Expressed in the Holant framework this problem is just Holant \((f\mid(=_{3}))\), where \(f=[1,0,-1,2]\). This problem belongs to case (5) with \(a=1/2\) and \(b=-1/2\). Figure 1 illustrates a small instance of this problem. Blue dots represent the elements \(x\) and red dots represent the sets \(S\). An element \(x\) is contained in a set \(S\) if and only if the blue dot for \(x\) is connected to the red dot for \(S\). It is not hard to see that there are exactly 6 leafless partial covers: \(\emptyset\), any family of 3 sets (there are 4 of them), and the family of all 4 sets. Therefore, the Holant value of this instance is \(1+4\cdot(-1)^{3}\cdot 2^{1}+2^{4}=9\); a brute-force check of this value is sketched at the end of this discussion. One can also verify that there are exactly 9 distinct perfect matchings in the graph in Figure 1.

Figure 1: A small instance for the problem Weighted-Leafless-Partial-Cover

To summarize, cases (1)–(5) are known to be in FP; the main content of Theorem 2.1 is that all other cases are #P-hard over planar graphs. The cases (4) and (5) capture precisely those problems that are #P-hard on general graphs but in FP on planar graphs; neither case alone does that.
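Here is a short brute-force computation (ours) of the Weighted-Leafless-Partial-Cover value. Since Figure 1 is not reproduced here, we assume its instance is the set system \(X=\{1,2,3,4\}\), \(S_{i}=X\setminus\{i\}\); this is a hypothetical reconstruction, but it is consistent with the counts stated above (4 sets, every element in exactly 3 sets, 6 leafless partial covers, value 9).

```python
from itertools import combinations

# Assumed Figure-1 instance (our reconstruction): X = {1,2,3,4} with the four
# sets S_i = X \ {i}; each element then lies in exactly 3 of the 4 sets.
X = {1, 2, 3, 4}
S = [X - {i} for i in X]

total = 0
for r in range(len(S) + 1):
    for fam in combinations(S, r):
        cover = {x: sum(x in s for s in fam) for x in X}
        counts = [c for c in cover.values() if c > 0]
        if all(c >= 2 for c in counts):       # leafless: no element covered once
            l = sum(c == 2 for c in counts)   # lightly covered elements
            h = sum(c == 3 for c in counts)   # heavily covered elements
            total += (-1) ** l * 2 ** h
print(total)  # 1 + 4*(-1)**3*2 + 2**4 = 9
```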
A gadget in this paper, such as those illustrated in Figure 19 and Figure 26, is a planar 3-regular bipartite graph \(G=(U,V,E_{\text{in}},E_{\text{out}})\) with internal edges \(E_{\text{in}}\) and dangling edges \(E_{\text{out}}\). There can be \(m\) dangling edges internally incident to vertices from \(U\) and \(n\) dangling edges internally incident to vertices from \(V\). These \(m+n\) dangling edges correspond to Boolean variables \(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}\) and the gadget defines a signature

\[f(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n})=\sum_{\sigma:E_{\text{in}}\to\{0,1\}}\prod_{u\in U}f\left(\widehat{\sigma}|_{E(u)}\right)\prod_{v\in V}\left(=_{3}\right)\left(\widehat{\sigma}|_{E(v)}\right),\]

where \(\widehat{\sigma}\) denotes the extension of \(\sigma\) by the assignment on the dangling edges. The variables \(x_{1},\ldots,x_{m}\) (respectively, \(y_{1},\ldots,y_{n}\)) are called LHS (respectively, RHS) variables and are to be connected externally to RHS (respectively, LHS) signatures in \(\text{Pl-Holant}\,(f\mid(=_{3}))\). The gadgets constructible for \(\text{Pl-Holant}\,(f\mid(=_{3}))\) are severely limited due to planarity and bipartiteness. Suppose \(g\) is the signature of a gadget construction with all of its variables on the LHS. Then by a simple counting argument, the arity of \(g\) must be a multiple of 3: each of the \(3|V|\) edge endpoints at the internal RHS vertices is matched by an endpoint at an LHS vertex, so the number of dangling edges is \(3|U|-3|V|\). The same is true for a gadget construction with all of its variables on the RHS. In particular, one cannot hope to produce any unary or binary signature on either side. However, having unary or binary signatures at hand has proven very useful in studying Holant problems. To tackle this difficulty, _straddled_ gadgets were introduced [10], which have both LHS and RHS variables. For example, the gadget \(G_{1}\) in Figure 19, after we place \(f\) on the square vertex and \((=_{3})\) on the circle vertex, has one variable on the LHS (the dangling edge that connects to a square) and one variable on the RHS (the dangling edge that connects to a circle). We list the values of a signature \(f\) in a _signature matrix_ \(M_{f}\), where the row(s) \(R\) and column(s) \(C\) correspond to assignments, in lexicographic order, of the input variables \(X=R\cup C\). We may identify \(f\) with \(M_{f}\). When two signatures \(M_{f}\) and \(M_{g}\) are composed by merging the dangling edges of the column variables of \(f\) with the row variables of \(g\), the signature matrix of the resulting signature is the matrix product \(M_{f}M_{g}\). In our paper, the composition must respect the bipartiteness and planarity. Also, note that if a straddled gadget has \(m\) dangling edges to be connected to the RHS and \(n\) dangling edges to be connected to the LHS, then \(m-n\equiv 0\bmod 3\). One crucial idea in [10] is to interpolate _degenerate_ straddled binary signatures and use them as two unary signatures, one of which is the desired unary while the other is grouped with others to form an easily computable positive constant, which does not affect the complexity. However, the "grouping together" process destroys the planar structure, and thus the reduction fails for planar graphs. Nevertheless, we can make it work for planar graphs if we can group these leftover unaries three at a time within each face. This is where the planar 3-way edge matching (P3EM) comes in; our theorem on P3EM will allow us to do exactly that. More formally, let \(G=(V,E)\) be an undirected plane graph, i.e., a planar graph with a given planar embedding. We allow \(G\) to be a multi-graph, i.e., parallel edges and self-loops are allowed. A planar 3-way edge matching (P3EM) is a partition of \(E\) into a collection \(M\) of 3-edge subsets, \(E=\bigcup_{t\in M}t\), such that we can add one vertex \(v_{t}\) for each \(t\in M\) and connect \(v_{t}\) to the three edges in \(t\) so that the resulting graph is still a plane graph. In Section 3, we prove that a P3EM always exists for any plane 3-regular graph (except for some trivial cases) and, moreover, can be constructed in polynomial time. An often-used technique in dealing with plane graphs is to first take a spanning tree of the dual graph and pick a root node, e.g., the node associated to the outer face. Starting from a leaf, one argues that some invariant property can be "propagated" through the tree until finally reaching the root. This technique is used in [14] as well as in the proofs of previous dichotomies concerning planarity [10, 11]. However, this technique does not work in our case, and new techniques have to be invented. The proof of our P3EM theorem is a mixture of algebra and combinatorics, and it should be of independent interest.

## 3 Planar 3-way Edge Matching (P3EM)

We begin with the following lemma.
**Lemma 3.1**.: _A 3-regular plane graph \(G\) has a P3EM iff there is an assignment that assigns each edge to an adjacent face so that the number of edges assigned to each face is \(0\bmod 3\)._

Proof.: If \(E=\bigcup_{t\in M}t\) is a P3EM, then for every 3-edge subset \(t\in M\), the point \(v_{t}\) belongs to a face adjacent to all three edges in \(t\). This gives the required assignment of \(E\). Conversely, suppose there is such an assignment \(\sigma\) for \(G\), and assume first that \(G\) is a connected plane graph. Then a partition of \(E\) into 3-edge subsets can be obtained by collecting consecutive triples from the edges assigned toward any face \(F\), along a cyclic traversal of the boundary of \(F\). This produces a P3EM for \(G\). Now suppose \(G\) is disconnected, and consider \(G\) as given on the sphere \(S^{2}\). There is a simple closed curve \(S\) (homeomorphic to a circle \(S^{1}\)) disjoint from \(G\), separating \(S^{2}\) into two discs \(D_{1}\) and \(D_{2}\), and separating \(G\) into two nonempty disjoint plane graphs \(G_{1}\cup G_{2}\), with \(G_{i}\subset D_{i}\) \((i=1,2)\). \(S\) is contained in some face \(F\). For \(i=1,2\), every face other than \(F\) in \(D_{i}\) is assigned by \(\sigma\) edges from \(E(G_{i})\) only, and the number of such edges is \(0\bmod 3\). Since \(G_{i}\) is 3-regular, we have \(|E(G_{i})|\equiv 0\bmod 3\). Hence the number of edges from \(E(G_{i})\) assigned to \(F\) is also \(0\bmod 3\). Thus the restriction of \(\sigma\) to \(E(G_{i})\) is an edge assignment for \(G_{i}\) that satisfies the stipulation in the lemma statement. Formally, for \(G_{1}\) we can remove \(G_{2}\); then \(F\) becomes an extended face \(\widehat{F}\) containing \(D_{2}\), and we get an assignment \(\sigma_{1}\) from \(E(G_{1})\) to the set of faces of \(G\) in \(D_{1}\) together with \(\widehat{F}\). Moreover, for \(G_{1}\) we can contract \(D_{2}\) to a single point, and \(\widehat{F}\) becomes essentially the intersection of \(F\) with \(D_{1}\). Similarly we have an assignment \(\sigma_{2}\) for \(G_{2}\). By induction, we have a P3EM for both \(G_{1}\) and \(G_{2}\), made up of triples of edges of \(G_{1}\) and \(G_{2}\) separately. Due to the contraction of \(D_{2}\) for \(G_{1}\), the triples \(t\) assigned to \(F\) from \(G_{1}\) correspond to points \(v_{t}\) inside \(D_{1}\). The same statement is true for \(G_{2}\). This removes any potential interference with planarity when putting the two P3EMs together to form a P3EM for \(G\). Thus to prove the existence of a P3EM of \(G\) we will prove the existence of such an assignment. It also follows that \(G\) has a P3EM iff each connected component of \(G\) does. Plane graphs \(G_{i}=(V_{i},E_{i})\) \((i=1,2)\) are planarly isomorphic if there exists a 1-1 correspondence between \(V_{1}\) and \(V_{2}\) that induces a 1-1 correspondence of the edges and faces by incidence. Clearly, having a P3EM is a property preserved by planar isomorphism. The face-assignment condition of Lemma 3.1 is easy to test by brute force on small graphs, as in the sketch below.
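A toy checker (ours) for the face-assignment condition; it takes the edge-face incidences as input rather than computing a planar embedding. Run on \(K_{4}\), it confirms that no valid assignment exists, matching the fact that \(K_{4}\) is one of the two exceptions in Theorem 3.2 below.

```python
from itertools import product

def has_mod3_assignment(edge_faces):
    """edge_faces[e] = (f1, f2), the two faces bordering edge e (f1 == f2 is
    allowed, e.g. for a bridge).  Returns True iff every edge can be assigned
    to one of its two faces so that each face receives 0 mod 3 edges."""
    faces = {f for pair in edge_faces for f in pair}
    for choice in product((0, 1), repeat=len(edge_faces)):
        load = dict.fromkeys(faces, 0)
        for pair, side in zip(edge_faces, choice):
            load[pair[side]] += 1
        if all(v % 3 == 0 for v in load.values()):
            return True
    return False

# K_4 embedded in the plane: inner triangular faces A, B, C and outer face D;
# the three outer edges border D, the three spokes border two inner faces.
k4 = [('A', 'D'), ('B', 'D'), ('C', 'D'), ('A', 'C'), ('A', 'B'), ('B', 'C')]
print(has_mod3_assignment(k4))  # False: K_4 admits no P3EM
```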
**Theorem 3.2**.: _Every 3-regular plane graph, except for those containing a connected component \(K_{4}\) or the multi-graph \(M_{2,3}\) on 2 vertices with 3 parallel edges, admits a planar 3-way edge matching, and one can be found in polynomial time._

We note that a P3EM indeed does not exist for the two exceptional graphs. Also, up to planar isomorphism there is only one plane embedding for these two graphs, as well as for all the graphs depicted in Figure 2, which will serve as our induction base cases. In Figure 2, edges of the same color form a triple.

Figure 2: Base cases in the proof of Theorem 3.2

Proof.: By Lemma 3.1, it suffices to prove the case when \(G\) is connected, as putting together P3EMs for all connected components gives a P3EM for \(G\). We prove Theorem 3.2 by induction on the number of vertices. Our induction hypothesis is as follows: every 3-regular plane graph with at most \(k\) vertices, for some integer \(k\), that is not one of the two exceptions admits a P3EM. Given a larger graph \(G\), we try to reduce it to a smaller graph \(G^{\prime}\) such that we obtain a P3EM for \(G\) from one of \(G^{\prime}\); whenever the reduction step produces one of the two exceptions, we give a P3EM for the original graph directly. We first show that it suffices to consider simple planar 3-regular graphs. If \(G\) has a self-loop, then locally it has a fragment depicted in Figure 3, unless it is planarly isomorphic to the base case Figure 2(a), for which we give a P3EM directly. Now perform the transformation depicted in Figure 3.

Figure 3: Transforming self-loops

If the resulting graph is one of the two exceptions, then the original graph is planarly isomorphic to the base case Figure 2(b) or Figure 2(c), for which we give P3EMs directly. Otherwise, by the induction hypothesis, there exists a P3EM \(M\) for the resulting graph. If the edge \(e^{*}\) in Figure 3(b) is mapped rightwards (_resp._ leftwards) in \(M\), then we obtain a P3EM for the original graph by letting \(e_{4}\) simulate \(e^{*}\), mapped rightwards (_resp._ leftwards), and connecting \(e_{1}\), \(e_{2}\) and \(e_{3}\). Note that the self-loop transformation is valid regardless of whether there is a self-loop at the vertices \(C\) or \(D\). If \(G\) has parallel edges between two vertices, say \(B\) and \(C\), then since \(G\) is 3-regular and not planarly isomorphic to \(M_{2,3}\), there must be exactly two edges between them. If \(B\) and \(C\) have a common neighbor, say \(A\), we may delete \(B,C\) and their incident edges, and add a self-loop at \(A\). The resulting graph is not one of the exceptions, so by induction it has a P3EM, and one can then easily obtain a P3EM for \(G\). Now suppose the third edges from \(B\) and \(C\) are \(e_{1}=\{A,B\}\) and \(e_{4}=\{C,D\}\) with \(A\neq D\), as depicted in Figure 4. We likewise perform a transformation which "deletes" \(B\) and \(C\) with their double edges, and merges \(e_{1}\) and \(e_{4}\) into a single edge \(e^{*}=\{A,D\}\).

Figure 4: Transforming double edges

If the resulting graph is one of the two exceptions, the original graph is planarly isomorphic to the base case Figure 2(d) or Figure 2(e), for which we give P3EMs directly. Otherwise, by the induction hypothesis, there exists a P3EM \(M\) for the resulting graph. If the edge \(e^{*}\) is mapped upward (_resp._ downward) in \(M\) (Figure 4(b)), then we obtain a P3EM for the original graph by letting \(e_{2}\) (_resp._ \(e_{3}\)) simulate \(e^{*}\) and connecting \(e_{3}\) (_resp._ \(e_{2}\)), \(e_{1}\) and \(e_{4}\). Below we assume \(G\) is simple, i.e., without parallel edges or self-loops. Next we consider the case when \(G\) contains a triangle face, as depicted in Figure 5(a). Since \(G\) is a simple graph, all three edges \(e_{1},e_{2},e_{3}\) are distinct, and \(D,E,F\not\in\{A,B,C\}\).

Figure 5: Transforming triangles

If the vertices \(D,E,F\) are all distinct, then we perform the transformation from Figure 5(a) to Figure 5(b). By the induction hypothesis, the resulting graph admits a P3EM unless it is \(K_{4}\) (in this case, the resulting graph cannot be \(M_{2,3}\) since it has more than two vertices).
If the resulting graph is \(K_{4}\), then the original graph is (or is planarly isomorphic to) the base case Figure 2(f), for which we give a P3EM directly. If the resulting graph is not \(K_{4}\), and hence admits a P3EM \(M\) by our induction hypothesis, then the original graph can simulate \(M\) by connecting the edges of the triangle face internally. We now consider the case when the vertices \(D,E,F\) are not all distinct. Since the original graph is not \(K_{4}\), the vertices \(D,E,F\) are not all the same vertex. Without loss of generality we assume \(D=E\neq F\); see Figure 5(c) for an illustration. \(D\) has an adjacent vertex \(E^{\prime}\neq A,B\). Suppose \(E^{\prime}\neq F\). Then we perform the transformation illustrated in Figure 5(d). Similarly as above, if the resulting graph is not one of the exceptions, then we can easily simulate, in the original graph, the P3EM of the resulting graph given by the induction hypothesis. If the resulting graph of Figure 5(d) is planarly isomorphic to the exceptions \(M_{2,3}\) or \(K_{4}\), then the original graph is planarly isomorphic to the base case Figure 2(g) or Figure 2(h), for which we give a P3EM directly. Finally suppose \(E^{\prime}=F\); then we transform the original graph by deleting the vertices \(A,B,C,D\) and their incident edges, and forming a self-loop at \(E^{\prime}=F\). The resulting graph has fewer vertices (and has a self-loop, so it is not \(M_{2,3}\) or \(K_{4}\)), and so by the induction hypothesis it admits a P3EM. It is easy to verify that a P3EM of the resulting graph, as before, can be simulated in the original graph. In the following we may assume \(G\) is simple without triangle faces. Next we consider the case when the graph contains a bridge, i.e., an edge whose removal disconnects the graph (see Figure 6(a)). This means that the same face \(f\) is on both sides of the bridge \(\{B,E\}\) in \(G\). Perform the transformation illustrated in Figure 6.

Figure 6: Transforming a bridge

The resulting graph has two disconnected components, neither of which is isomorphic to an exception case. Indeed, neither is isomorphic to \(M_{2,3}\) since \(G\) has no parallel edges, and if one were \(K_{4}\) then the original graph \(G\) would contain a triangle face. Thus by induction there are P3EMs, \(M_{1}\) and \(M_{2}\), for the two components respectively. We will use the edges \(\{B,C\}\) and \(\{E,F\}\) to simulate \(\{A,C\}\) and \(\{D,F\}\) respectively, and match the three edges \(\{A,B\},\{B,E\}\) and \(\{E,D\}\) directly. Then we obtain a P3EM for \(G\) from \(M_{1}\) and \(M_{2}\). Note that both \(\{B,C\}\) and \(\{E,F\}\) are on the face \(f\). Since \(\{B,E\}\) is a bridge, regardless of how \(\{A,C\}\) and \(\{D,F\}\) are matched by \(M_{1}\) and \(M_{2}\) respectively, we can substitute \(\{B,C\}\) and \(\{E,F\}\) for them respectively. When viewed in a spherical embedding, we may assume both \(\{A,C\}\) and \(\{D,F\}\) are on the outer face for the two disconnected components. Then in \(G\) the substitution of \(\{B,C\}\) for \(\{A,C\}\), and of \(\{E,F\}\) for \(\{D,F\}\), makes the total number of edges assigned to the face \(f\) in \(G\) the sum of the corresponding numbers assigned by \(M_{1}\) and \(M_{2}\), and thus this total is \(\equiv 0\pmod{3}\). Below we assume the graph has no bridges. We show next that if \(G\) has a square face, then it admits a P3EM. See Figure 7 for an illustration. Since \(G\) is simple, \(E\) is distinct from \(B,D\). Also \(E\neq C\), for otherwise \(\{B,F\}\) or \(\{D,I\}\) would be a bridge.
For the same reason, none of the vertices \(E,F,H,I\) can be from \(\{A,B,C,D\}\). If \(E=H\) we would again have a bridge. Also \(E\neq F,I\), because \(G\) has no triangle face. It follows that \(A,B,C,D,E,F,H,I\) are all distinct. Now we perform the transformation in Figure 7, replacing the square \(ABCD\) by an edge \(e\).

Figure 7: Transforming a square

The resulting graph is clearly not one of the exception graphs (it has at least 6 vertices). By the induction hypothesis, it admits a P3EM \(M\). If \(e\) in the resulting graph is mapped leftwards, then we can simulate \(M\) in \(G\) by mapping \(\{A,B\}\) leftwards, and matching \(\{A,D\}\), \(\{D,C\}\) and \(\{C,B\}\) inside the square. Similarly, if \(e\) is mapped rightwards, then we use \(\{D,C\}\) in its place, and match \(\{A,D\}\), \(\{A,B\}\) and \(\{B,C\}\) inside the square. This gives a P3EM for \(G\). Below we assume \(G\) has no square faces. We now consider the case when the graph contains a chord. Let \(\mathcal{C}\) be the boundary of the external face of the plane graph. Since we can now assume \(G\) is bridgeless, \(\mathcal{C}\) is a simple cycle. We say it contains a chord if there exist two vertices on \(\mathcal{C}\) that are joined by an edge not in \(\mathcal{C}\). See Figure 8(a) for an illustration.

Figure 8: Transforming a chord

Since \(\mathcal{C}\) is the outer boundary, any chord must lie inside \(\mathcal{C}\). Let \(\{A,B\}\) be a chord, and let \(C,E\) and \(D,F\) be their neighbors on \(\mathcal{C}\). We note that there is no edge connecting \(\{C,D\}\) or \(\{E,F\}\), since \(G\) has no square face. In Figure 8(a) we mark the part of \(G\) to the left, respectively to the right, of (but including) the edge \(\{A,B\}\) as region 1, respectively region 2. By planarity, the only edges connecting regions 1 and 2 are those incident to \(A\) or \(B\). Denote the numbers of edges in regions 1 and 2 by \(E_{1}\) and \(E_{2}\) (both including \(\{A,B\}\)). Then \(E_{1}+E_{2}\equiv 1\pmod{3}\). Perform the transformation illustrated in Figure 8; we obtain a resulting graph that is disconnected, and its two components are 3-regular plane graphs. Since \(E\) and \(F\) are not adjacent in the original graph, there exists another vertex distinct from \(E\) and \(F\) in region 2. Thus, the left side component of the resulting graph has at least one vertex fewer than \(G\), and by the induction hypothesis it admits a P3EM \(M_{1}^{\prime}\). Similarly, the right side component also admits a P3EM \(M_{2}^{\prime}\) (it cannot be \(M_{2,3}\) since \(G\) is simple, nor \(K_{4}\) since that would imply \(G\) has a triangle face). The edge \(\{E^{\prime},F^{\prime}\}\) is assigned either inside the triangle \(AE^{\prime}F^{\prime}\) or inside \(BE^{\prime}F^{\prime}\). In the former case, the edges \(\{B,E^{\prime}\}\) and \(\{B,F^{\prime}\}\) must be assigned outside the triangle \(BE^{\prime}F^{\prime}\). In the latter case, the edges \(\{A,E^{\prime}\}\) and \(\{A,F^{\prime}\}\) are assigned outside the triangle \(AE^{\prime}F^{\prime}\). In either case, exactly one of the edges \(\{A,E^{\prime}\}\) or \(\{B,E^{\prime}\}\) is assigned leftwards, and exactly one of the edges \(\{A,F^{\prime}\}\) or \(\{B,F^{\prime}\}\) is assigned outside. Thus, there are \(N_{1}\equiv 2\pmod{3}\) edges in the path \((A,C,\ldots,D,B)\) along the cycle \(\mathcal{C}\) assigned outside. Similarly, let \(N_{2}\) denote the number of edges assigned outside along the external face in \(M_{2}^{\prime}\); then \(N_{2}\equiv 0\pmod{3}\).
We now construct a P3EM in \(G\). In region 1, the edge \(\{A,B\}\) will be assigned leftwards (taking its place as either \(\{A,E^{\prime}\}\) or \(\{B,E^{\prime}\}\), whichever was assigned leftwards in \(M_{1}^{\prime}\)). All other edges will be assigned in the same way as in \(M_{1}^{\prime}\). In region 2 we assign all edges, other than \(\{A,B\}\), as follows. The edge \(\{B,F\}\) will be assigned outside (taking its place as either \(\{A,F^{\prime}\}\) or \(\{B,F^{\prime}\}\), whichever was assigned outside in \(M_{1}^{\prime}\)); \(\{A,E\}\) will be assigned as \(\{E,F\}\) is in \(M_{2}^{\prime}\); all other edges will be assigned the same way as in \(M_{2}^{\prime}\). By doing so, all internal faces are assigned \(0\pmod{3}\) edges, as in the case of \(M_{1}^{\prime}\) and \(M_{2}^{\prime}\). For the external face, note that there are \(N_{1}+1+N_{2}\equiv 0\pmod{3}\) edges assigned to it in total. Thus our construction gives a valid P3EM in \(G\). To summarize, we can now assume that the 3-regular plane graph \(G\) is simple, without triangle and square faces, bridgeless and chordless. We next show that the graph must have a pentagon face. Let \(v,e,f\) denote the number of vertices, edges and faces (including the external one) of \(G\), respectively. Since \(G\) is 3-regular, we have \(3v=2e\). Suppose the minimum number of edges around any face is \(n\); then \(2e\geq nf\). By Euler's formula, we have \(2=v-e+f\leq(2/n-1/3)e\), and thus \(n<6\). Since \(G\) is simple and without triangle and square faces, we have \(n=5\). Now fix a pentagon face \(P\) in the graph with vertices \(\{a_{0},a_{1},\ldots,a_{4}\}\); see Figure 9 for an illustration.

Figure 9: Pentagon in the original graph

Since \(G\) is 3-regular, simple and bridgeless, there is a neighbor \(b_{i}\) of \(a_{i}\) distinct from \(\{a_{0},a_{1},\ldots,a_{4}\}\), for every \(0\leq i\leq 4\). For example, \(b_{0}\neq a_{1}\) by simplicity. If \(b_{0}=a_{2}\), i.e., if \(\{a_{0},a_{2}\}\) were an edge, then there would be a bridge \(\{a_{1},b_{1}\}\). Indeed, the edge \(\{a_{0},a_{2}\}\) must lie outside of the pentagon face \(P\). If one traverses the edges \(\{a_{2},a_{1}\},\{a_{1},a_{0}\}\) with \(P\) to its left, then follows with the edge \(\{a_{0},a_{2}\}\), one gets a cycle which separates the part of \(G\) that contains \(P\) from the part of \(G\) that connects to \(a_{1}\) via the edge \(\{a_{1},b_{1}\}\) (note that all three neighbors of each vertex in \(\{a_{0},a_{1},a_{2}\}\) are accounted for, and thus no other adjacent edge exists to the right of this cycle). So deleting \(\{a_{1},b_{1}\}\) disconnects \(G\), and thus \(\{a_{1},b_{1}\}\) is a bridge. Hence \(b_{0}\neq a_{2}\), as \(G\) is bridgeless. See Figure 10 for an illustration. By symmetry, \(b_{0}\neq a_{3},a_{4}\) as well. By the same reasoning, \(b_{i}\not\in\{a_{0},a_{1},\ldots,a_{4}\}\) for all \(0\leq i\leq 4\). Furthermore, since \(G\) has no triangle or square faces and is bridgeless, we claim that without loss of generality the \(b_{i}\) are all distinct \((0\leq i\leq 4)\). To see this, we first note that \(b_{0}\neq b_{1},b_{4}\), for otherwise there would be a triangle face. Next we deal with the case \(b_{0}=b_{2}\) or \(b_{0}=b_{3}\). By symmetry suppose \(b_{0}=b_{2}\). There is a third adjacent vertex \(b\) of \(b_{0}\), other than \(a_{0},a_{2}\). Consider the cycle \(C=(a_{2},a_{1},a_{0},b_{0}=b_{2},a_{2})\) with \(P\) to its left, which defines two simply connected regions in the spherical embedding of \(G\).
Call the region that contains \(P\) the interior region. If the edge \(\{b_{0},b\}\) is in the interior region, then \(\{a_{1},b_{1}\}\) is a bridge, by the same proof as for \(b_{0}\neq a_{2}\). So we may assume \(\{b_{0},b\}\) lies in the exterior region of the cycle \(C\) (see Figure 11).

Figure 11: \(\{b_{0},b\}\) lies in the exterior region of the cycle \(C\)

Clearly \(b\neq b_{1}\), for otherwise there is a square face. Now there is a face \(\Delta\) bounded by the cycle that contains the edge \(\{a_{1},a_{0}\}\) on the opposite side of \(P\). The bounding cycle contains the path \((b_{1},a_{1},a_{0},b_{0},b)\), followed by a path \(\pi=(b=x_{0},x_{1},\ldots,x_{\ell}=b_{1})\) of \(\ell\geq 1\) edges from \(b\) back to \(b_{1}\). Here the first edge \(\{b,x_{1}\}\) is the right branch we take when we go from \(b_{0}\) to \(b\), and the last edge \(\{x_{\ell-1},b_{1}\}\) is the left branch we take if we go from \(a_{1}\) to \(b_{1}\). Similarly, there is another face \(\Delta^{\prime}\) bounded by the cycle that contains the edge \(\{a_{1},a_{2}\}\) on the opposite side of \(P\). The bounding cycle contains the path \((b_{1},a_{1},a_{2},b_{0},b)\) followed by a path \(\pi^{\prime}\) of \(\ell^{\prime}\geq 1\) edges from \(b\) back to \(b_{1}\) (see Figure 12).

Figure 12: When \(b_{0}=b_{2}\)

We will now define two auxiliary graphs \(G_{1}\) and \(G_{2}\). \(G_{1}\) consists of the cycle \(C\) and its interior region, augmented by a single edge \(e^{*}=\{a_{1},b_{0}\}\). \(G_{2}\) consists of everything in \(G\) properly exterior to the cycle \(C\) (i.e., not containing \(C\) and its interior) with the two edges \(e_{1}=\{a_{1},b_{1}\}\) and \(e_{2}=\{b,b_{0}\}\) replaced by one new edge \(e_{12}=\{a_{1},b_{0}\}\) (see Figure 13).

Figure 13: \(G_{1}\) and \(G_{2}\)

\(G_{1}\) is not one of the exceptional graphs, since it contains a pentagon. If \(G_{2}\) were \(M_{2,3}\), then the two paths \(\pi\) and \(\pi^{\prime}\), denoted by the dotted lines with labels \(\ell\) and \(\ell^{\prime}\), would both consist of a single edge and be present in \(G\), contradicting \(G\) being simple. If \(G_{2}\) were \(K_{4}\), then there would be four triangle faces (in the spherical embedding), two of which must be present in \(G\), contradicting \(G\) having no triangle faces. So, by induction, both \(G_{1}\) and \(G_{2}\) have a P3EM. Note that \(G_{1}\) contains two triangle faces separated by the edge \(e^{*}\). Any P3EM of \(G_{1}\) assigns \(e^{*}\) to one of these two triangle faces, which implies that all three edges of that triangle face must be assigned to it. Thus, in Figure 13a the four edges \(\{a_{1},a_{2}\},\{a_{2},b_{0}\},\{a_{1},a_{0}\},\{a_{0},b_{0}\}\) must be assigned all up or all down, according to whether \(e^{*}\) is assigned down or up, respectively. In \(G_{2}\), the edge \(e_{12}\) is assigned either up or down to the two adjacent faces. If \(e_{12}\) is assigned up, then along the path \(\pi\) (resp. \(\pi^{\prime}\)) of \(\ell\) (resp. \(\ell^{\prime}\)) edges there are \(0\pmod{3}\) (resp. \(2\pmod{3}\)) edges assigned toward the face having \(e_{12}\) on its boundary. If \(e_{12}\) is assigned down, then the opposite happens, i.e., \(2\pmod{3}\) (resp. \(0\pmod{3}\)) edges of \(\pi\) (resp. \(\pi^{\prime}\)) are assigned toward the face having \(e_{12}\) on its boundary. We now define an edge assignment that will be a P3EM for \(G\). Every edge in \(G\) other than \(e_{1}\) and \(e_{2}\) belongs to exactly one of \(G_{1}\) or \(G_{2}\). We assign these edges according to the assignment in \(G_{1}\) or \(G_{2}\), respectively.
This satisfies the requirement of a P3EM for every face other than \(\Delta\) and \(\Delta^{\prime}\) in \(G\). For the assignment of \(e_{1}\) and \(e_{2}\), there are four cases according to how \(e^{*}\) in \(G_{1}\) and \(e_{12}\) in \(G_{2}\) are assigned. The first case is when both \(e^{*}\) in \(G_{1}\) and \(e_{12}\) in \(G_{2}\) are assigned up; then we assign \(e_{1}\) down and \(e_{2}\) up in \(G\). Then there are a total of 3 edges \(e_{1}=\{b_{1},a_{1}\}\), \(\{a_{1},a_{0}\}\) and \(\{a_{0},b_{0}\}\), plus \(0\pmod{3}\) edges of \(\pi\), assigned toward \(\Delta\). Also, there is a total of 1 edge \(e_{2}=\{b_{0},b\}\), plus \(2\pmod{3}\) edges of \(\pi^{\prime}\), assigned toward \(\Delta^{\prime}\). The second case is when \(e^{*}\) and \(e_{12}\) are assigned up and down respectively; then we assign \(e_{1}\) and \(e_{2}\) both down in \(G\). Then there are a total of 4 edges \(e_{1}=\{b_{1},a_{1}\}\), \(\{a_{1},a_{0}\}\), \(\{a_{0},b_{0}\}\) and \(e_{2}=\{b_{0},b\}\), plus \(2\pmod{3}\) edges of \(\pi\), assigned toward \(\Delta\), making it \(0\pmod{3}\) altogether. Also, there are no edges among these four, and \(0\pmod{3}\) edges of \(\pi^{\prime}\), assigned toward \(\Delta^{\prime}\). The other two cases are similar. We have proved that a P3EM exists for \(G\). Hence we may assume that \(b_{0},\ldots,b_{4}\) are all distinct. We claim that there is a simple path connecting \(b_{i}\) and \(b_{i+1}\) for each \(0\leq i\leq 4\) (where \(b_{5}=b_{0}\)), and furthermore that the cycle \((a_{i},b_{i},\ldots,b_{i+1},a_{i+1},a_{i})\) using this path is the boundary of a face. Define an \(a_{i}\)-R path as follows: start from \(a_{i}\) and take the first edge \(\{a_{i},b_{i}\}\), and then at every new vertex (of degree 3) choose the right branch for the next vertex, until we encounter a previously visited vertex on this walk, or one of \(\{a_{0},a_{1},\ldots,a_{4}\}\), and then stop. For notational simplicity we consider the case \(i=0\); all other cases are the same. Suppose the \(a_{0}\)-R path is \((x_{0},x_{1},\ldots,x_{k},\ldots,x_{m})\), where \(x_{0}=a_{0}\) and \(x_{1}=b_{0}\). First we claim \(x_{m}\neq a_{0}\). If it were, then the step before would have been \(a_{1}\), \(a_{4}\) or \(b_{0}=x_{1}\), but then the \(a_{0}\)-R path should have stopped at \(x_{m-1}\). Next we claim that \(x_{m}\in\{a_{1},a_{2},a_{3},a_{4}\}\). Indeed, if \(x_{m}\not\in\{a_{1},a_{2},a_{3},a_{4}\}\), then it is a previously visited vertex \(x_{k}\) on this walk, with \(k\geq 1\). Then \(x_{k-1}\) exists. Moreover, \((x_{k},x_{k+1},\ldots,x_{m})\) is a cycle which is formed by always taking the right branch at the next vertex. The last edge \((x_{m-1},x_{m})\) (which is \((x_{m-1},x_{k})\)) must be the left branch edge when coming from the direction \((x_{k-1},x_{k})\); thus the traversal of the cycle \((x_{k},x_{k+1},\ldots,x_{m})\) is counterclockwise. Thus the edge \(\{x_{k-1},x_{k}\}\) is a bridge, a contradiction. See Figure 14 for an illustration.

Figure 14: Simple paths between the \(b_{i}\)'s

Next we claim that \(x_{m}=a_{1}\), and \(x_{m-1}=b_{1}\) (see Figure 9). We prove this by eliminating the possibilities \(x_{m}\in\{a_{2},a_{3},a_{4}\}\). Suppose \(x_{m}=a_{4}\). It follows from the definition of the \(a_{0}\)-R path that \(x_{m-1}\not\in\{a_{0},a_{1},\ldots,a_{4}\}\), being the step before \(x_{m}\), and then the only way to reach \(x_{m}=a_{4}\) is \(x_{m-1}=b_{4}\).
Since this \(a_{0}\)-R path always takes the right branch, viewing the plane graph in a spherical embedding we can consider the face to the right of this \(a_{0}\)-R path as the outer face, and then the edge \(\{a_{0},a_{4}\}\) is a chord. However, by our assumption \(G\) is chordless. Now suppose \(x_{m}=a_{3}\). Then consider the \(a_{2}\)-R path. By planarity, and the fact that a single face borders the right hand side of the \(a_{0}\)-R path which ends in \(a_{3}\), the \(a_{2}\)-R path cannot end in \(a_{4}\), and therefore it must end in \(a_{1}\). However, considering the indices mod 5, this is exactly the same situation as the \(a_{0}\)-R path ending in \(a_{4}\), another contradiction. Finally, if the \(a_{0}\)-R path ends in \(x_{m}=a_{2}\), then the \(a_{1}\)-R path would violate planarity, or produce a bridge. We conclude that \(x_{m}=a_{1}\). It then follows that \(x_{m-1}=b_{1}\), and we have a face with boundary \((a_{0},b_{0},\ldots,b_{1},a_{1},a_{0})\) from this \(a_{0}\)-R path. The same is true for all \(a_{i}\)-R paths. In other words, we now have a pentagon face \(P\) depicted as in Figure 15.

Figure 15: Pentagon in the original graph

We now perform the transformation illustrated in Figure 16.

Figure 16: Transforming a pentagon

The transformed graph (Figure 16(b), on the right) is not \(M_{2,3}\) or \(K_{4}\), by vertex count. By induction there is a P3EM \(M^{\prime}\) on the transformed graph. We use Boolean variables \(x^{\prime}_{i}\) \((0\leq i\leq 4)\), and \(y^{\prime}_{3}\), \(y^{\prime}_{4}\), to denote the assignment on those 7 edges in Figure 16(b), such that a variable is 1 if the corresponding edge is assigned to the face indicated by its arrow, and is 0 if it is assigned to the face on the other side. We also use nonnegative integer variables \(z^{\prime}_{i}\) \((0\leq i\leq 4)\) to denote the number of edges assigned to the side indicated along the simple path from \(b_{i}\) to \(b_{i+1}\). Now we define a P3EM \(M\) on \(G\) using \(M^{\prime}\) as follows. All edges in \(G\) that are not incident to \(a_{0},a_{1},a_{2},a_{3}\) or \(a_{4}\) retain their assignment from \(M^{\prime}\). These include all edges on the paths from \(b_{i}\) to \(b_{i+1}\) (and all edges beyond these simple paths that are not depicted in Figure 16(a)). In particular, if \(z_{i}\) \((0\leq i\leq 4)\) is the number of edges assigned to the side indicated along the simple path from \(b_{i}\) to \(b_{i+1}\) in \(G\), then \(z_{i}=z^{\prime}_{i}\). For the 10 edges incident to at least one of \(a_{0},a_{1},a_{2},a_{3}\) or \(a_{4}\) in Figure 16(a), we use Boolean variables \(x_{i}\) and \(y_{i}\) \((0\leq i\leq 4)\) to denote the assignment of \(M\) on \(G\), such that a variable is 1 if the corresponding edge is assigned to the face indicated by its arrow, and is 0 otherwise. A moment's reflection will convince the reader that \(M\) is a P3EM on \(G\) iff we can assign Boolean 0-1 values to the variables \(x_{i}\) and \(y_{i}\) \((0\leq i\leq 4)\) so as to satisfy the following equation system \((\Sigma)\), where \(\overline{x}=1-x\in\{0,1\}\) denotes the negation of the Boolean variable \(x\).
\[(\Sigma)\quad\begin{cases}x_{0}+y_{0}+\overline{x_{1}}\equiv x_{0}^{\prime}+\overline{x_{1}^{\prime}}\pmod{3}\\ x_{2}+y_{2}+\overline{x_{3}}\equiv x_{2}^{\prime}+\overline{x_{3}^{\prime}}\pmod{3}\\ x_{3}+y_{3}+\overline{x_{4}}\equiv x_{3}^{\prime}+y_{3}^{\prime}+\overline{x_{4}^{\prime}}\pmod{3}\\ x_{4}+y_{4}+\overline{x_{0}}\equiv x_{4}^{\prime}+y_{4}^{\prime}+\overline{x_{0}^{\prime}}\pmod{3}\\ x_{1}+y_{1}+\overline{x_{2}}\equiv x_{1}^{\prime}+\overline{x_{2}^{\prime}}+\overline{y_{3}^{\prime}}+\overline{y_{4}^{\prime}}\pmod{3}\\ \sum_{i=0}^{4}\overline{y_{i}}\equiv 0\pmod{3}\end{cases}\]

We note that, while this equation system consists of all linear equations mod 3, it is not an ordinary linear equation system over \(\mathbb{Z}_{3}\); the complicating factor is that all the variables must take Boolean values in \(\{0,1\}\). Somewhat miraculously, we show that for any Boolean values of \(x_{i}^{\prime}\) \((0\leq i\leq 4)\) and \(y_{3}^{\prime}\), \(y_{4}^{\prime}\), we can always solve the equation system \((\Sigma)\) for the Boolean variables \(x_{i}\) and \(y_{i}\) \((0\leq i\leq 4)\). If \((y_{3}^{\prime},y_{4}^{\prime})\neq(0,0)\), then we set \(x_{i}=x_{i}^{\prime}\) for \(0\leq i\leq 4\), \(y_{3}=y_{3}^{\prime}\), \(y_{4}=y_{4}^{\prime}\), \(y_{0}=y_{2}=0\), and \(y_{1}=\overline{y_{3}^{\prime}}+\overline{y_{4}^{\prime}}\in\{0,1\}\). (Note that \((y_{3}^{\prime},y_{4}^{\prime})\neq(0,0)\) is used to obtain \(\overline{y_{3}^{\prime}}+\overline{y_{4}^{\prime}}\in\{0,1\}\).) One can check that this assignment solves \((\Sigma)\). Now suppose \((y_{3}^{\prime},y_{4}^{\prime})=(0,0)\). The system of equations becomes

\[(\Sigma^{\prime})\quad\begin{cases}x_{0}+y_{0}+\overline{x_{1}}\equiv x_{0}^{\prime}+\overline{x_{1}^{\prime}}\pmod{3}\\ x_{2}+y_{2}+\overline{x_{3}}\equiv x_{2}^{\prime}+\overline{x_{3}^{\prime}}\pmod{3}\\ x_{3}+y_{3}+\overline{x_{4}}\equiv x_{3}^{\prime}+\overline{x_{4}^{\prime}}\pmod{3}\\ x_{4}+y_{4}+\overline{x_{0}}\equiv x_{4}^{\prime}+\overline{x_{0}^{\prime}}\pmod{3}\\ x_{1}+y_{1}+\overline{x_{2}}\equiv x_{1}^{\prime}+\overline{x_{2}^{\prime}}+2\pmod{3}\\ \sum_{i=0}^{4}\overline{y_{i}}\equiv 0\pmod{3}\end{cases}\]

If \(x_{1}^{\prime}=0\), then we set \(x_{1}=1\), and \(x_{i}=x_{i}^{\prime}\) for \(0\leq i\leq 4\), \(i\neq 1\); and we set \(y_{2}=y_{3}=y_{4}=0\), \(y_{0}=y_{1}=1\). One can check that this assignment solves \((\Sigma^{\prime})\). If \((x_{1}^{\prime},x_{2}^{\prime})=(1,1)\), then we set \(x_{2}=0\), and \(x_{i}=x_{i}^{\prime}\) for \(0\leq i\leq 4\), \(i\neq 2\); and we set \(y_{0}=y_{3}=y_{4}=0\), \(y_{1}=y_{2}=1\). This solves \((\Sigma^{\prime})\). Thus it remains to consider the case \((x_{1}^{\prime},x_{2}^{\prime})=(1,0)\). There remain eight cases, each corresponding to an assignment \((x_{0}^{\prime},x_{3}^{\prime},x_{4}^{\prime})\in\{0,1\}^{3}\). At this point we have \((y_{3}^{\prime},y_{4}^{\prime})=(0,0)\) in addition to \((x_{1}^{\prime},x_{2}^{\prime})=(1,0)\), so we are now in a situation in Figure 16(b) where \(x_{1}^{\prime},x_{2}^{\prime},y_{3}^{\prime}\) and \(y_{4}^{\prime}\) are all pointing into the face bounded by the cycle \((b_{1},a_{0},a_{4},a_{3},b_{2},\dots,b_{1})\). By a reflection along the \(\{a_{4},b_{4}\}\)-axis, we only need to consider four cases, with \(x_{4}^{\prime}=0\). These four cases are explicitly given in Figure 27, where we use a double arrow to indicate an actual assignment of the corresponding edge into the face indicated; the first of them, for example, deals with the case \((x_{0}^{\prime},x_{3}^{\prime},x_{4}^{\prime})=(0,0,0)\). The solvability claim for \((\Sigma)\) is also small enough to confirm exhaustively by machine, as in the sketch below.
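A brute-force confirmation (ours) that \((\Sigma)\) is solvable in Boolean variables for every Boolean right-hand side; it simply enumerates all \(2^{7}\) right-hand sides and, for each, all \(2^{10}\) candidate solutions.

```python
from itertools import product

def neg(v):            # Boolean negation, written as a bar in (Sigma)
    return 1 - v

def solvable(xp, y3p, y4p):
    """Is (Sigma) solvable in Boolean x_i, y_i for this right-hand side?"""
    for xs in product((0, 1), repeat=5):
        for ys in product((0, 1), repeat=5):
            ok = (
                (xs[0] + ys[0] + neg(xs[1]) - xp[0] - neg(xp[1])) % 3 == 0
                and (xs[2] + ys[2] + neg(xs[3]) - xp[2] - neg(xp[3])) % 3 == 0
                and (xs[3] + ys[3] + neg(xs[4]) - xp[3] - y3p - neg(xp[4])) % 3 == 0
                and (xs[4] + ys[4] + neg(xs[0]) - xp[4] - y4p - neg(xp[0])) % 3 == 0
                and (xs[1] + ys[1] + neg(xs[2])
                     - xp[1] - neg(xp[2]) - neg(y3p) - neg(y4p)) % 3 == 0
                and sum(neg(y) for y in ys) % 3 == 0
            )
            if ok:
                return True
    return False

assert all(solvable(xp, y3p, y4p)
           for xp in product((0, 1), repeat=5)
           for y3p in (0, 1) for y4p in (0, 1))
print("(Sigma) has a Boolean solution for every right-hand side")
```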
Finally, we note that the proof is constructive. When smaller graphs are defined for induction purposes, the size of the smaller graph strictly decreases, and in the case when two smaller graphs are needed (as in the case dealing with a chord, or when getting distinct \(b_{i}\)'s) the sum of the sizes of the smaller graphs is approximately that of the original graph. Tracing through the proof, it can easily be verified that a planar 3-way edge matching can be found in polynomial time. This completes the proof of Theorem 3.2.

## 4 Dichotomy Theorem

In this section we start the proof of Theorem 2.1. When \(\neg(f_{0}=f_{3}=0)\) (i.e., it is not the case that both \(f_{0}=0\) and \(f_{3}=0\)), by dividing by a nonzero constant and possibly flipping 0 and 1, neither of which changes the complexity of the Holant problem, we can normalize the signature \([f_{0},f_{1},f_{2},f_{3}]\) to be \([1,a,b,c]\). We first deal with a special case where \(a=b\) and \(c=1\).

**Lemma 4.1**.: \(\mathrm{Pl}\)_-Holant \(([1,a,a,1]\mid(=_{3}))\) is in \(\mathrm{FP}\)._

Proof.: Performing the holographic transformation by the Hadamard matrix \(H=\left[\begin{smallmatrix}1&1\\ 1&-1\end{smallmatrix}\right]\), applied to \([1,a,a,1]\) on the left and to \((=_{3})=[1,0,0,1]\) on the right in the bipartite setting, we get

\[[1,a,a,1]H^{\otimes 3}=[2+6a,0,2-2a,0],\quad\text{and}\quad(H^{\otimes 3})^{-1}[1,0,0,1]=\tfrac{1}{4}[1,0,1,0].\]

Both transformed signatures are matchgate signatures [11], and thus the problem can be solved in polynomial time by the FKT algorithm.

Another special case, where \(b=1\) and \(a=c\), will be needed later.

**Lemma 4.2**.: \(\mathrm{Pl}\)_-Holant \(([1,a,1,a]\mid(=_{3}))\) is #P-hard unless \(a=0\) or \(\pm 1\), in which cases it is in \(\mathrm{FP}\)._

To prove Lemma 4.2, let us define the cross-over signature \(\mathcal{C}\) of arity 4, illustrated in Figure 18(a). It is 0-1 valued, and it takes value 1 iff the two red dangling edges are equal and the two blue dangling edges are equal; furthermore, it is a straddled signature where the two top dangling edges are to be connected to the RHS externally and the two bottom dangling edges are to be connected to the LHS externally. In Figure 18(a) only one vertex is present pictorially. However, when this signature is actually implemented or interpolated by some construction, the internal vertices that the two top dangling edges are incident to are LHS vertices, and the internal vertices that the two bottom dangling edges are incident to are RHS vertices. For later convenience, we will write the signature matrix for \(\mathcal{C}\) with rows (resp. columns) indexed by \((b_{1},b_{2})\in\{0,1\}^{2}\) corresponding to the dangling edges on the left side (resp. right side) as it appears in Figure 18(a) (not the LHS, RHS designation according to the bipartiteness), with \(b_{1}\) for the top edge. The signature matrix is \(\mathcal{C}=\left(\begin{smallmatrix}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{smallmatrix}\right)\). Note that the signature matrix of \(\mathcal{C}\) is invariant under a cyclic rotation by \(90^{\circ}\) of the graph in Figure 18(a). The importance of this cross-over signature is conveyed in the following lemma.
We say a signature \(f\) can be planarly constructed or interpolated if there is a polynomial-time construction of a planar gadget, or a sequence of planar gadgets, with external dangling edges conforming to those of \(f\) with respect to its bipartite specification, such that the construction implements or interpolates \(f\).

**Lemma 4.3**.: _For any signature sets \(\mathcal{F},\mathcal{G}\), if the cross-over signature \(\mathcal{C}\) can be planarly constructed or interpolated, then \(\operatorname{Holant}\left(\mathcal{F}\mid\mathcal{G}\right)\leq_{T}\operatorname{Pl-Holant}\left(\mathcal{F}\mid\mathcal{G}\right)\)._

Proof.: Given any input signature grid of the problem \(\operatorname{Holant}\left(\mathcal{F}\mid\mathcal{G}\right)\), we place it in the plane, with edges possibly intersecting each other at non-vertices. We may assume at most two edges intersect at any point, and that the number of such intersections is polynomially bounded. We replace each such intersection by a copy of \(\mathcal{C}\) as follows. Note that every edge connects an LHS vertex with an RHS vertex. Suppose an edge \(e=\{u,v\}\) intersects \(k\geq 1\) edges consecutively at non-vertices \(x_{1},\ldots,x_{k}\), where \(u\) and \(v\) are from the LHS and RHS, respectively. As we traverse from \(u\) to \(v\), for each \(1\leq i\leq k\) we write the labels R and L just as we enter and leave \(x_{i}\), respectively. Labeling in this way for every edge having such intersections, we find that locally at each intersection point, cyclically two consecutive edge ends are labeled R and the other two consecutive edge ends are labeled L. This is because at each local intersection point, each pair of incident edge ends that are _not_ cyclically consecutive are always labeled with distinct labels R \(\neq\) L. A moment's reflection shows that the signature \(\mathcal{C}\) can always be used, with a suitable rotation, at each intersection, while respecting the bipartite structure. We thus obtain an input of the problem \(\operatorname{Pl-Holant}\left(\mathcal{F}\mid\mathcal{G}\right)\) with the Holant value unchanged.

We are now ready to prove Lemma 4.2.

Proof of Lemma 4.2.: When \(a=0\) or \(\pm 1\), the problem is in the affine class or degenerate, respectively, and thus in FP (see [10] for details of the algorithms). Assume \(a\neq 0\) and \(a\neq\pm 1\). The problem \(\operatorname{Holant}\left([1,a,1,a]\mid(=_{3})\right)\) without the planar restriction is shown to be #P-hard in [10]. By Lemma 4.3, it suffices to show that we can interpolate the cross-over signature \(\mathcal{C}\). Consider the gadget \(G_{4}\) in Figure 18(b), where we place the signature \([1,a,1,a]\) at the square vertices and \(=_{3}\) at the circle vertices.

Figure 18: Interpolating the cross-over signature \(\mathcal{C}\)

Note that \(G_{4}\) is a straddled signature with the two dangling edges at the top (resp. bottom) to be connected externally to the RHS (resp. LHS), just like the cross-over signature \(\mathcal{C}\). After normalization by a constant \(a+a^{2}\neq 0\), the signature matrix of \(G_{4}\) is \(\left(\begin{smallmatrix}z&1&1&1\\ 1&1&z&1\\ 1&z&1&1\\ 1&1&1&z\end{smallmatrix}\right)\), where \(z=\frac{1+a^{3}}{a+a^{2}}=a+a^{-1}-1\). As \(a\neq\pm 1\), if \(a>0\) then \(z=(a^{1/2}-a^{-1/2})^{2}+1>1\), and if \(a<0\) then \(z=-(|a|+|a|^{-1})-1=-(|a|^{1/2}-|a|^{-1/2})^{2}-3<-3\). Here the rows (resp. columns) are indexed by \((b_{1},b_{2})\in\{0,1\}^{2}\) in lexicographic order, corresponding to the dangling edges on the left side (resp. right side, as it appears in Figure 18b), with \(b_{1}\) for the top edge.
This can be verified by first computing the signatures \(A\) and \(B\) for the gadgets in Figures 18c and 18d, \(A=\left(\begin{smallmatrix}1&0&a&0\\ 0&a&0&1\\ a&0&1&0\\ 0&1&0&a\end{smallmatrix}\right)\), and \(B=\left(\begin{smallmatrix}1&a&0&0\\ a&1&0&0\\ 0&0&a&1\\ 0&0&1&a\end{smallmatrix}\right)\), which is obtained from \(A\) by exchanging both the middle two rows and the middle two columns. Then we have \(G_{4}=A\cdot B\cdot A\), as a matrix product. Notice that the "shape" of \(G_{4}\) looks just like \(\mathcal{C}\) if we replace 1 by 0, and \(z\) by 1. We will exploit this remarkable coincidence in our proof below. Note also that the signature matrix of \(G_{4}\) is invariant under cyclic rotations of the gadget. Now we define a sequence of gadgets \(\Gamma_{2s+1}\) of linear size, each a sequential composition of \(2s+1\) sub-gadgets, where for odd index \(i=1,3,\ldots,2s+1\) we use \(G_{4}\), and for even index \(i=2,4,\ldots,2s\) we use a \(180^{\circ}\)-rotated copy of \(G_{4}\); we merge the right-side two edges of the \(i\)th sub-gadget with the left-side two edges of the \((i+1)\)th sub-gadget. This sequential composition satisfies the bipartite restriction. As the rotated copy of \(G_{4}\) has the same signature matrix as \(G_{4}\) itself, the signature matrix of \(\Gamma_{2s+1}\) is \(G_{4}^{2s+1}\), the \((2s+1)\)th power of \(G_{4}\). We can show that it has the form (after normalization) \(\Gamma_{2s+1}=\left(\begin{smallmatrix}x_{s}&1&1&1\\ 1&1&x_{s}&1\\ 1&x_{s}&1&1\\ 1&1&1&x_{s}\end{smallmatrix}\right)\), where the \(\{x_{s}\}_{s\geq 0}\) are defined by a recurrence, with \(x_{0}=z\) and

\[x_{s+1}=\frac{6+6z+3x_{s}+z^{2}\cdot x_{s}}{7+4z+z^{2}+2x_{s}+2z\cdot x_{s}}.\]

We are going to show that the \(x_{s}\) are pairwise distinct. First suppose \(a>0\). We have \(x_{s+1}-1=\frac{(z-1)^{2}(x_{s}-1)}{7+4z+z^{2}+2x_{s}+2z\cdot x_{s}}\), which shows inductively that \(x_{s}>1\) for all \(s\in\mathbb{N}\), as the denominator is clearly positive. Next, \(\frac{x_{s+1}-1}{x_{s}-1}=\frac{(z-1)^{2}}{(z+2)^{2}+3+2x_{s}(z+1)}<1\), as \(x_{s},z>1\). It follows that \(x_{s+1}<x_{s}\), and hence the \(x_{s}\) are pairwise distinct. Now suppose \(a<0\). Inductively assume \(x_{s}<-3\), which is true at \(x_{0}=z<-3\). We have \(x_{s+1}+3=\frac{(z+3)^{2}(x_{s}+3)}{7+4z+z^{2}+2x_{s}+2z\cdot x_{s}}\). The denominator \(7+4z+z^{2}+2x_{s}+2z\cdot x_{s}=(z+2)^{2}+3+2(z+1)x_{s}>0\), as \(z<-3\) and, inductively, also \(x_{s}<-3\). Then we have \(x_{s+1}+3<0\), since \(x_{s}+3<0\). Moreover, the denominator satisfies \((z+2)^{2}+3+2(z+1)x_{s}>|z+2|^{2}>|z+3|^{2}\), as \(z<-3\). Hence \(\frac{x_{s+1}+3}{x_{s}+3}=\frac{(z+3)^{2}}{(z+2)^{2}+3+2(z+1)x_{s}}<1\). And so \(x_{s+1}>x_{s}\), and in particular the \(x_{s}\) are pairwise distinct. Note also that the number of bits required to represent the \(x_{s}\) is polynomially bounded in the size of the input, because the \(x_{s}\) come from, by definition, sums of at most \(2^{n^{O(1)}}\) terms, each a product of \(n^{O(1)}\) factors. Both the identity \(G_{4}=A\cdot B\cdot A\) and the recurrence can be checked symbolically, as sketched below.
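A symbolic sanity check (ours) of both claims with sympy: that \(A\cdot B\cdot A\) equals \((a+a^{2})\) times the pattern matrix of \(G_{4}\), and that the diagonal value of odd powers of the pattern matrix follows the stated recurrence. Checking one step of the recurrence suffices here, since the odd powers stay in the span of the pattern matrix and the all-ones matrix.

```python
import sympy as sp

a, z = sp.symbols('a z')
A = sp.Matrix([[1, 0, a, 0], [0, a, 0, 1], [a, 0, 1, 0], [0, 1, 0, a]])
B = sp.Matrix([[1, a, 0, 0], [a, 1, 0, 0], [0, 0, a, 1], [0, 0, 1, a]])

def pattern(t):  # the common "shape" of C, G_4 and Gamma_{2s+1}
    return sp.Matrix([[t, 1, 1, 1], [1, 1, t, 1], [1, t, 1, 1], [1, 1, 1, t]])

# G_4 = A*B*A equals (a + a^2) times the pattern matrix with t = a + 1/a - 1
diff = sp.simplify(A * B * A - (a + a**2) * pattern(a + 1/a - 1))
assert diff == sp.zeros(4, 4)

# cubing the pattern matrix stays in the pattern; the new diagonal value
# x_1 = Q[0,0]/Q[0,1] matches the recurrence applied to x_0 = z
Q = pattern(z) ** 3
x1 = sp.cancel(Q[0, 0] / Q[0, 1])
rec = (6 + 6*z + 3*z + z**2 * z) / (7 + 4*z + z**2 + 2*z + 2*z*z)
assert sp.simplify(x1 - rec) == 0
print("G_4 = A*B*A and the recurrence for x_s both check out")
```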
Given any signature grid \(\Omega\) where the cross-over signature \(\mathcal{C}\) appears \(n\) times, we construct signature grids \(\Omega_{s}\), \(0\leq s\leq n\), by replacing each copy of \(\mathcal{C}\) by \(\Gamma_{2s+1}\), while respecting the bipartite restrictions. We now stratify the assignments in the Holant sum for \(\Omega\) according to the number \(i\), \(0\leq i\leq n\), of times in total that the input of a copy of \(\mathcal{C}\) is (0,0,0,0), (1,0,1,0), (0,1,0,1), or (1,1,1,1) in cyclic order (these are the only inputs to \(\mathcal{C}\) with nonzero evaluations). Let \(c_{i}\) be the sum, over all assignments for which this count is exactly \(i\), of the products of the values of the other signatures. Then we have \(\operatorname{Holant}(\Omega)=c_{n}\), and

\[\operatorname{Holant}(\Omega_{s})=\sum_{i=0}^{n}x_{s}^{i}\cdot c_{i}. \tag{4.1}\]

Since the \(x_{s}\) are pairwise distinct, (4.1) is a full-rank Vandermonde system, and we can solve for all the \(c_{i}\) in polynomial time, and in particular compute \(c_{n}\), from the values of \(\operatorname{Holant}(\Omega_{s})\), \(0\leq s\leq n\). A toy illustration of this interpolation step is sketched below.
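A toy illustration (ours; all names are our own) of the interpolation step: exact recovery of the coefficients \(c_{i}\) from evaluations at pairwise distinct points, via Gauss-Jordan elimination over the rationals. In the actual reduction the evaluations come from the oracle for \(\operatorname{Pl-Holant}(f\mid(=_{3}))\) on the grids \(\Omega_{s}\).

```python
from fractions import Fraction

def recover_coefficients(xs, holants):
    """Solve sum_i xs[s]**i * c_i = holants[s] (a Vandermonde system),
    exactly, over the rationals; returns [c_0, ..., c_{n-1}]."""
    n = len(xs)
    M = [[Fraction(x) ** i for i in range(n)] + [Fraction(h)]
         for x, h in zip(xs, holants)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # exists: distinct xs
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [u - M[r][col] * v for u, v in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# toy check: hidden coefficients (5, -2, 7), evaluated at points 2, 3, 5
c = [Fraction(v) for v in (5, -2, 7)]
xs = [2, 3, 5]
hs = [sum(ci * Fraction(x) ** i for i, ci in enumerate(c)) for x in xs]
print(recover_coefficients(xs, hs))  # [Fraction(5, 1), Fraction(-2, 1), Fraction(7, 1)]
```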
Hereafter, we say \([1,a,b,c]\) is #P-hard or in FP to mean that the problem \(\text{Pl-Holant}\left([1,a,b,c]\mid(=_{3})\right)\) is #P-hard or in FP. We shall invoke the following theorem of [10] when proving our results:

**Theorem 4.4** (Kowalczyk & Cai).: _Suppose \(a,b\in\mathbb{C}\), and let \(X=ab\), \(Z=\left(\frac{a^{3}+b^{3}}{2}\right)^{2}\). Then \(\operatorname{Pl-Holant}\left([a,1,b]\mid(=_{3})\right)\) is #P-hard except in the following cases, for which the problem is in FP._

1. \(X=1\)_;_
2. \(X=Z=0\)_;_
3. \(X=-1\) _and_ \(Z=0\)_;_
4. \(X=-1\) _and_ \(Z=-1\)_;_
5. \(X^{3}=Z\)_._

By restricting Theorem 4.4 to real numbers, we have the following corollary.

**Corollary 4.5**.: _Suppose \(a,b\in\mathbb{R}\); then \(\operatorname{Pl-Holant}\left([a,1,b]\mid(=_{3})\right)\) is #P-hard except in the following cases, for which the problem is in FP._

1. \(ab=1\)_;_
2. \(a=1\) _and_ \(b=-1\)_;_
3. \(a=-1\) _and_ \(b=1\)_;_
4. \(a=b\)_._

Consider the binary straddled gadget \(G_{1}\) in Figure 19; parallel edges are allowed. Its signature is \(G_{1}=\left[\begin{smallmatrix}1&b\\ a&c\end{smallmatrix}\right]\), where \(G_{1}(i,j)\) (at row \(i\), column \(j\)) is the value of this gadget when the left dangling edge (from the "square") and the right dangling edge (from the "circle" \((=_{3})\)) are assigned \(i\) and \(j\) respectively, for \(i,j\in\{0,1\}\). Iterating \(G_{1}\) sequentially \(k\) times is represented by the matrix power \(G_{1}^{k}\).

Figure 19: Gadget \(G_{1}\)

It turns out that it is very useful either to produce directly, or to obtain by interpolation, a rank-_deficient_ straddled signature, which would in most cases allow us to obtain unary signatures on either side. With unary signatures we can connect them to a ternary signature to produce binary signatures on one side, and then apply Corollary 4.5. The following lemma is proved in [10].

**Lemma 4.6**.: _Given the binary straddled signature \(G_{1}=\left[\begin{smallmatrix}1&b\\ a&c\end{smallmatrix}\right]\), we can interpolate the degenerate binary straddled signature \(\left[\begin{smallmatrix}y&xy\\ 1&x\end{smallmatrix}\right]\), provided that \(c\neq ab\), \(a\neq 0\), \(\Delta=\sqrt{(1-c)^{2}+4ab}\neq 0\) and \(\frac{\lambda}{\mu}\) is not a root of unity, where \(\lambda=\frac{-\Delta+(1+c)}{2}\), \(\mu=\frac{\Delta+(1+c)}{2}\) are the two eigenvalues, and \(x=\frac{\Delta-(1-c)}{2a}\) and \(y=\frac{\Delta+(1-c)}{2a}\)._

Given a degenerate binary straddled signature, we want to use it as unary signatures _in a planar way_. It is only in this step that we need our P3EM theorem. More concretely, in the next lemma we show how we can _essentially_ separate a binary straddled signature to get a unary signature.

**Lemma 4.7**.: _For \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,\mid\,=_{3})\), \(a,b,c\in\mathbb{Q}\), \(a\neq 0\), with the availability of the binary degenerate straddled signature \(\left[\begin{smallmatrix}y&xy\\ 1&x\end{smallmatrix}\right]\) where \(x=\frac{\Delta-(1-c)}{2a}\), \(y=\frac{\Delta+(1-c)}{2a}\) and \(\Delta=\sqrt{(1-c)^{2}+4ab}\), we have the following reductions:_

1. \(\operatorname{Pl-Holant}(\,[1+ax,a+bx,b+cx]\,\mid\,=_{3})\leq_{T}\operatorname{Pl-Holant}(\,[1,a,b,c]\,\mid\,=_{3})\) _except for 2 cases:_ \([1,a,a,1]\)_,_ \([1,a,-1-2a,2+3a]\)_;_
2. \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,\mid\,[y,0,1])\leq_{T}\operatorname{Pl-Holant}(\,[1,a,b,c]\,\mid\,=_{3})\)_._

Proof.: The signature \([1+ax,a+bx,b+cx]\) is the binary signature obtained by connecting \([1,a,b,c]\) on the LHS with \([1,x]\) on the RHS. For simplicity, we denote \(f:=[1,a,b,c]\) and \(f^{\flat}:=[1+ax,a+bx,b+cx]\). To prove the first reduction \(\operatorname{Pl-Holant}(\,f^{\flat}\,\mid\,=_{3})\leq_{T}\operatorname{Pl-Holant}(\,f\,\mid\,=_{3})\), consider any input instance of the LHS problem, and let \(G\) be its underlying 2-3 bipartite plane graph. We may assume \(G\) is connected, as the Holant value of \(G\) is the product of the Holant values of its connected components. We can view \(G\) as the edge-vertex incidence graph of a plane 3-regular graph \(G^{\prime}\), where every vertex of degree 2 in \(G\) on the LHS is viewed as an edge in \(G^{\prime}\). One can also obtain \(G^{\prime}\) by merging the two edges incident to every vertex of degree 2 in \(G\). If \(G^{\prime}\) is isomorphic to one of the two exceptions in Theorem 3.2, then the size of \(G\) is constant and we can compute the Holant value directly. Otherwise, we construct an input of the RHS problem as follows. We first obtain the degenerate binary straddled signature \(D=\left[\begin{smallmatrix}y&xy\\ 1&x\end{smallmatrix}\right]\) in \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,\mid\,(=_{3}))\). Then for every edge of \(G^{\prime}\), which is assigned the binary signature \(f^{\flat}\), we replace it by a copy of \([1,a,b,c]\) connected with the edge of \(D\) that corresponds to \([1,x]\). This leaves 1 dangling edge from each copy of \(D\), each functionally equivalent to a unary \([y,1]\) on the LHS. These need to be connected to other \((=_{3})\) signatures in a planar way.

Figure 20: Two gadgets where each triangle represents the unary gadget \([y,1]\)
Figure 21: Three gadgets where each triangle represents the unary gadget \([1,x]\)

Now we apply Theorem 3.2 to the 3-regular plane graph \(G^{\prime}\), which constructively assigns every edge of \(G^{\prime}\) to one of the two incident faces such that we have a P3EM. We then add a suitable number of \((=_{3})\) and \(f\) in each face and connect them to _exactly_ 3 copies of \([y,1]\), as shown in Figures 20(a) and 20(b). Theorem 3.2 guarantees that this can be done in a planar way. Each connection produces a multiplicative factor \(g_{1}=y^{3}+1\) in Figure 20(a), and a multiplicative factor \(g_{2}=y^{3}+by^{2}+ay+c\) in Figure 20(b).
It can be directly checked that, for \(y=\frac{\Delta+(1-c)}{2a}\), at least one of these factors is nonzero3 unless \(y=-1\), and in that case the signature has the form \([1,a,a,1]\) or \([1,a,-2a-1,3a+2]\). The proof of the reduction is complete.

Footnote 3: We use Mathematica to solve the system of equations
\[\begin{cases}g_{1}=y^{3}+1=0\\ g_{2}=y^{3}+by^{2}+ay+c=0\\ y=\frac{\Delta+(1-c)}{2a}\end{cases}\]
That the system has no solution is proved by cylindrical decomposition, an algorithm for Tarski's theorem on real-closed fields.

For the reduction \(\text{Pl-Holant}(\,[1,a,b,c]\,|\,[y,0,1])\leq_{T}\text{Pl-Holant}(\,[1,a,b,c]\,|=_{3})\), we use the same P3EM argument as above. Therefore it suffices to "absorb" those dangling unaries \([1,x]\) to produce some nonzero factor. We claim that at least one of the connection gadgets in Figures 21(a), 21(b), and 21(c) creates a nonzero global factor. The factors of these three gadgets are
\[\begin{cases}f_{1}=cx^{3}+3bx^{2}+3ax+1\\ f_{2}=(ab+c)x^{3}+(3bc+2a^{2}+b)x^{2}+(2b^{2}+ac+3a)x+ab+1\\ f_{3}=(ab+2abc+c^{3})x^{3}+(2a^{2}+b+2a^{2}c+3ab^{2}+bc+3b^{2}c)x^{2}\\ \qquad+(3a+3a^{2}b+ac+2b^{2}+2b^{2}c+ac^{2})x+1+2ab+abc\end{cases}\]
respectively. By setting the three formulae to be 0 simultaneously, together with the condition \(x=\frac{\Delta-(1-c)}{2a}\), with \(a\neq 0\), \(a,b,c\in\mathbb{Q}\), we found that there is no common solution. The proof is now complete.

We note that signatures of the form \([3x+y,-x-y,-x+y,3x-y]\) for some \(x,y\) are exactly either of the form \([1,a,-2a-1,3a+2]\) after normalization, or of the form \([0,a,-2a,3a]\), for some \(a\).

**Remark 1**.: Just before Lemma 4.7 we stated that we could _essentially_ separate a binary straddled signature to get a unary. This statement is delicate. Getting unrestricted use of the unary \([1,x]\) on RHS would be \(\text{Pl-Holant}\left(f\mid(=_{3}),[1,x]\right)\). The following two problems are equivalent.
\[\text{Pl-Holant}\left(f,f^{\flat}\mid(=_{3})\right)\equiv_{T}\text{Pl-Holant}\left(f\mid(=_{3}),[1,x]\right).\]
This is because for the second problem, every occurrence of \([1,x]\) is connected to \(f\) to produce \(f^{\flat}\), and conversely for the first problem, every occurrence of \(f^{\flat}\) can be replaced by connecting a copy of \(f\) with \([1,x]\). However, we do not claim that the problem \(\text{Pl-Holant}\left(f\mid(=_{3}),[1,x]\right)\) is reducible to \(\text{Pl-Holant}\left(f\mid(=_{3})\right)\), which would be the following reduction, stronger than what we showed:
\[\text{Pl-Holant}\left(f,f^{\flat}\mid(=_{3})\right)\leq_{T}\text{Pl-Holant}\left(f\mid(=_{3})\right).\]
The issue is that now the input graph for the LHS problem is not an edge-vertex incidence graph for a 3-regular plane graph, and so we cannot apply Theorem 3.2 as before. If we merge the two incident edges of all degree 2 vertices (assigned the binary signature \(f^{\flat}\)) we do get a planar 3-regular graph. But this graph may still have degree 3 vertices labeled \(f\), and not every edge comes from merging a degree 2 vertex that was labeled \(f^{\flat}\). Thus, not every edge participates in a 3-way perfect matching. In summary, a degenerate binary straddled signature is not completely equivalent to a unary \([1,x]\) on RHS. We further remark that, if this were true, we would have a much simpler proof of Theorem 5.2.

**Remark 2**.: The reader should think of Lemma 4.7 mainly as an illustration of what we will call the _P3EM argument_.
The main take-away is that we can separate a degenerate binary straddled signature to get unaries so long as we use one of them on _every_ ternary signature on one side and the remaining dangling unaries can be absorbed to create a nonzero global factor. For example, in the proof of Theorem 4.16, we are in fact using the gadget depicted in Figure 24. For simplicity of presentation, we will say "interpolate" \([1,x]\) on RHS or \([y,1]\) on LHS hereafter, while the reader is welcome to check that the delicate issue in Remark 1 is taken care of. The following proposition is proved in [12].

**Proposition 4.8**.: _For \(G_{1}=[\begin{smallmatrix}1&b\\ a&c\end{smallmatrix}]\), with \(a,b,c\in\mathbb{Q}\), if it is non-singular (i.e., \(c\neq ab\)), then it has two nonzero eigenvalues \(\lambda\) and \(\mu\). The ratio \(\lambda/\mu\) is not a root of unity unless at least one of the following conditions holds:_
\[\begin{cases}c+1=0\\ ab+c^{2}+c+1=0\\ 2ab+c^{2}+1=0\\ 3ab+c^{2}-c+1=0\\ 4ab+c^{2}-2c+1=0\end{cases} \tag{4.2}\]

Now we introduce a new binary straddled signature \(G_{2}\) as shown in Figure 22. The signature matrix of \(G_{2}\) is \([\begin{smallmatrix}w&b^{\prime}\\ a^{\prime}&c^{\prime}\end{smallmatrix}]\), where \(w=1+ab\), \(a^{\prime}=a+b^{2}\), \(b^{\prime}=a^{2}+bc\) and \(c^{\prime}=ab+c^{2}\). Similar to \(G_{1}\), we have \(\Delta^{\prime}=\sqrt{(w-c^{\prime})^{2}+4a^{\prime}b^{\prime}}\), and two eigenvalues \(\lambda^{\prime}=\frac{-\Delta^{\prime}+(w+c^{\prime})}{2}\) and \(\mu^{\prime}=\frac{\Delta^{\prime}+(w+c^{\prime})}{2}\). If \(a^{\prime}\neq 0\), we have \(x^{\prime}=\frac{\Delta^{\prime}-(w-c^{\prime})}{2a^{\prime}}\), \(y^{\prime}=\frac{\Delta^{\prime}+(w-c^{\prime})}{2a^{\prime}}\), and if further \(\Delta^{\prime}\neq 0\) we can write its Jordan Normal Form as
\[G_{2}=\left(\begin{array}{cc}w&b^{\prime}\\ a^{\prime}&c^{\prime}\end{array}\right)=\left(\begin{array}{cc}-x^{\prime}&y^{\prime}\\ 1&1\end{array}\right)\left(\begin{array}{cc}\lambda^{\prime}&0\\ 0&\mu^{\prime}\end{array}\right)\left(\begin{array}{cc}-x^{\prime}&y^{\prime}\\ 1&1\end{array}\right)^{-1}. \tag{4.3}\]
Similar to Proposition 4.8, we have the following claim on \(G_{2}\).

Figure 22: Binary straddled signature \(G_{2}\)

**Proposition 4.9**.: _If the signature matrix of \(G_{2}\) is non-degenerate, then the ratio \(\lambda^{\prime}/\mu^{\prime}\) of its eigenvalues is not a root of unity unless at least one of the following conditions holds, where \(A=w+c^{\prime}\), \(B=(c^{\prime}-w)^{2}+4a^{\prime}b^{\prime}\)._
\[\begin{cases}A=0\\ B=0\\ A^{2}+B=0\\ A^{2}+3B=0\\ 3A^{2}+B=0\end{cases} \tag{4.4}\]

**Lemma 4.10**.: _Suppose \(a,b,c\in\mathbb{Q}\), \(a\neq 0\), \(c\neq ab\), and \(a,b,c\) do not satisfy any condition in (4.2). Let \(x=\frac{\Delta-(1-c)}{2a}\), \(y=\frac{\Delta+(1-c)}{2a}\) and \(\Delta=\sqrt{(1-c)^{2}+4ab}\). Then for \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,|=_{3})\),_

1. _we can interpolate_ \([y,1]\) _on LHS;_
2. _we can interpolate_ \([1,x]\) _on RHS except for 2 cases:_ \([1,a,a,1]\)_,_ \([1,a,-1-2a,2+3a]\)_._

Proof.: This lemma follows from Lemma 4.6 and Lemma 4.7 using the binary straddled gadget \(G_{1}\) with signature matrix \([\begin{smallmatrix}1&b\\ a&c\end{smallmatrix}]\). Note that \(c\neq ab\) indicates that the matrix \(G_{1}\) is non-degenerate, and \(\lambda/\mu\) not being a root of unity is equivalent to none of the equations in (4.2) holding.

We have similar statements corresponding to \(G_{2}\).
When the signature matrix is non-degenerate and does not satisfy any condition in (4.4), we can interpolate the corresponding \([y^{\prime},1]\) on LHS, and we can also interpolate the corresponding \([1,x^{\prime}]\) on RHS except when \(y^{\prime}=-1\).

**Definition 4.11**.: _For \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,|=_{3})\), with \(a,b,c\in\mathbb{Q}\), \(a\neq 0\), we say a binary straddled gadget \(G\) works if the signature matrix of \(G\) is non-degenerate and the ratio of its two eigenvalues \(\lambda/\mu\) is not a root of unity._

**Remark 3**.: Explicitly, the condition that \(G_{1}\) _works_ is that \(c\neq ab\) and \(a,b,c\) do not satisfy any condition in (4.2), which is just the assumptions in Lemma 4.10. \(G_{1}\) _works_ implies that it can be used to interpolate \([y,1]\) on LHS, and to interpolate \([1,x]\) on RHS with two exceptions for which we already proved the dichotomy. The \(x,y\) are as stated in Lemma 4.10. Similarly, when the binary straddled gadget \(G_{2}\) _works_, for the corresponding values \(x^{\prime}\) and \(y^{\prime}\), we can interpolate \([y^{\prime},1]\) on LHS, and we can interpolate \([1,x^{\prime}]\) on RHS except when \(y^{\prime}=-1\).

The ternary gadget \(G_{3}\) in Figure 23 will be used in the proof here and later.

Figure 23: A ternary gadget \(G_{3}\)

The unary signatures \(\Delta_{0}=[1,0]\) and \(\Delta_{1}=[0,1]\) are called the pinning signatures because they "pin" a variable to \(0\) or \(1\). Another useful unary signature is \(\Delta_{2}=[1,1]\). One good use of having a unary signature is that we can then use Lemma 4.13 to get these three signatures. They are helpful, as the following lemma shows.

**Lemma 4.12**.: _If \(\Delta_{0}\), \(\Delta_{1}\) and \(\Delta_{2}\) can be interpolated on the RHS in \(\operatorname{Pl-Holant}(\,[1,a,b,c]\,|\,=_{3})\), where \(a,b,c\in\mathbb{Q}\), \(ab\neq 0\), then the problem is \(\#\mathrm{P}\)-hard unless \([1,a,b,c]\) is affine or degenerate, in which cases it is in FP._

Proof.: Connecting \([1,0]\), \([0,1]\) to \([1,a,b,c]\) on LHS respectively, we get binary signatures \([1,a,b]\) and \([a,b,c]\). Then we can apply Corollary 4.5, and the problem is \(\#\mathrm{P}\)-hard unless both \([1,a,b]\) and \([a,b,c]\) are in FP. When \(ab\neq 0\), both \([1,a,b]\) and \([a,b,c]\) are in FP only when \([1,a,b,c]\) is (1) degenerate, i.e. \(b=a^{2}\) and \(c=a^{3}\), in which case the problem is in FP, (2) of the form \([1,a,1,a]\) which is resolved by Lemma 4.2 (when \(a=\pm 1\), \([1,a,1,a]\) is degenerate), (3) of the form \([1,a,a^{2},a]\) which we will resolve later in this proof, (4) \([1,a,b,c]=[1,1,1,-1]\) or \([1,-1,1,1]\) which we will resolve later in this proof, or (5) \([1,a,b,c]=[1,1,-1,-1]\) or \([1,-1,-1,1]\) which are affine and hence in FP.

For case (3), if we connect \([1,1]\) to \([1,a,a^{2},a]\), we get a binary signature \([1+a,a+a^{2},a+a^{2}]\) which after normalization (when \(a\neq-1\)) is \([1,a,a]\). This problem is \(\#\mathrm{P}\)-hard by Corollary 4.5 unless \(a=\pm 1\), in which cases the problem \([1,a,b,c]\) is \([1,1,1,1]\) or \([1,-1,1,-1]\) which are degenerate and thus in FP.

For case (4), due to the symmetry by flipping \(0\) and \(1\) in the signature, it suffices to consider only \(f=[1,1,1,-1]\) and \(g=[1,-1,1,1]\); they are neither affine nor degenerate. For both \(f\) and \(g\) we use the gadget \(G_{3}\) to produce ternary signatures \(f^{\prime}=[1,1,3,3]\) and \(g^{\prime}=[1,1,-1,3]\) respectively. Neither is among the exceptional cases above.
So \(\operatorname{Pl-Holant}(\,f\,|\,=_{3})\) and \(\operatorname{Pl-Holant}(\,g\,|\,=_{3})\) are both \(\#\mathrm{P}\)-hard.

The following lemma lets us interpolate arbitrary unary signatures on RHS, in particular \(\Delta_{0}\), \(\Delta_{1}\) and \(\Delta_{2}\), from a binary gadget with a straddled signature and a suitable unary signature \(s\) on RHS.

**Lemma 4.13** (Vadhan, [20]).: _Let \(M\in\mathbb{R}^{2\times 2}\) be a non-singular signature matrix for a binary straddled gadget which is diagonalizable with distinct eigenvalues, and \(s=[a,b]\) be a unary signature on RHS that is not a row eigenvector of \(M\). Then \(\{s\cdot M^{j}\}_{j\geq 0}\) can be used to interpolate any unary signature on RHS._

### Dichotomy for \([1,a,b,c]\) when \(ab\neq 0\) and \(G_{1}\) works

Let us introduce a _non-linearity_ gadget in Figure 24. If we place in the non-linearity gadget the binary degenerate straddled signature \(D\) on triangles (in the way that respects the bipartite structure), \(f\) on squares and \((=_{3})\) on circles, we get its signature \([1,x]\otimes[1,x]\otimes[y^{2}+yb,ya+c]\). Note that it is a ternary planar gadget on RHS.

Figure 24: Non-linearity gadget, where a triangle represents the unary gadget \([y,1]\)

The following two lemmas will be used in the proof of Theorem 4.16.

**Lemma 4.14**.: _Let \(a,b,c\in\mathbb{Q}\), \(ab\neq 0\), and satisfy_ (con1) _\(a^{3}-b^{3}-ab(1-c)=0\) and_ (con2) _\(a^{3}+ab+2b^{3}=0\). Then \(\text{Pl-Holant}(\,[1,a,b,c]\,|=_{3})\) is \(\#\text{P}\)-hard unless it is \([1,-1,1,-1]=[1,-1]^{\otimes 3}\), which is degenerate, or a matchgate; in both cases the problem is in FP._

Proof.: If \(a+b^{2}=0\) in addition to (_con1_) and (_con2_) with \(ab\neq 0\), then \([1,a,b,c]=[1,-1,1,-1]\) which is degenerate. Now we assume \(a+b^{2}\neq 0\). Here we use Gadget \(G_{2}\). First assume \(G_{2}\) _works_. Using \(a+b^{2}\neq 0\) together with (_con1_) and (_con2_), we can verify that \(\Delta=\sqrt{4(a+b^{2})(a^{2}+bc)+(c^{2}-1)^{2}}\neq 0\), and we can write the Jordan Normal Form
\[G_{2}=\left(\begin{array}{cc}1+ab&a^{2}+bc\\ a+b^{2}&ab+c^{2}\end{array}\right)=\left(\begin{array}{cc}-x&y\\ 1&1\end{array}\right)\left(\begin{array}{cc}\lambda&0\\ 0&\mu\end{array}\right)\left(\begin{array}{cc}-x&y\\ 1&1\end{array}\right)^{-1},\]
where \(\lambda=\frac{1+2ab+c^{2}-\Delta}{2}\), \(\mu=\frac{1+2ab+c^{2}+\Delta}{2}\), \(x=\frac{\Delta+c^{2}-1}{2(a+b^{2})}\), \(y=\frac{\Delta-c^{2}+1}{2(a+b^{2})}\). Because \(G_{2}\) works, \([y,1]\) on LHS is available. Using this \([y,1]\) in the non-linearity gadget in Figure 24, we get the unary signature \(\left[y^{2}+yb,ya+c\right]\) on the RHS. By Lemma 4.13, we can interpolate any unary signature, in particular \(\Delta_{0}\), \(\Delta_{1}\) and \(\Delta_{2}\) on RHS and apply Lemma 4.12, unless \(\left[y^{2}+yb,ya+c\right]\) is proportional to a row eigenvector of \(G_{2}\), namely \([1,-y]\) or \([1,x]\). Thus the exceptions are \(ya+c=x(y^{2}+yb)\) and \(ya+c=-y(y^{2}+yb)\). Notice that now \(xy=\frac{a^{2}+bc}{a+b^{2}}\). The first equation implies \(c=ab\) or \(a+b^{2}=0\) or \(a^{3}-b^{3}c+ab(-1+c^{2})=0\). The second equation implies \(a+b^{2}=0\) or \(f_{1}=0\), where \(f_{1}=a^{3}+4a^{6}+3a^{5}b^{2}+a^{3}b^{3}-c-4a^{3}c+6a^{4}bc-6a^{2}b^{2}c-b^{3}c-3a^{2}b^{5}c-3a^{3}c^{2}-3abc^{2}-4b^{3}c^{2}-a^{3}b^{3}c^{2}-6ab^{4}c^{2}-4b^{6}c^{2}+3c^{3}+4a^{3}c^{3}+6a^{2}b^{2}c^{3}+3b^{3}c^{3}+a^{3}c^{4}+3abc^{4}+4b^{3}c^{4}-3c^{5}-b^{3}c^{5}+c^{7}\).
So there are four exceptional cases,
\[\begin{cases}c=ab\\ a+b^{2}=0\\ a^{3}-b^{3}c+ab(-1+c^{2})=0\\ f_{1}=0\end{cases} \tag{4.5}\]
For each of them, together with (_con1_) and (_con2_), we get 3 equations and can solve them using Mathematica. For rational \(a,b,c\), when \(ab\neq 0\), there are only two possible results: \([1,-1,1,-1]\) and \([1,-\frac{1}{3},-\frac{1}{3},1]\). The first one violates \(a+b^{2}\neq 0\), and the second is a matchgate and thus in FP. For all other cases when \(G_{2}\) works, we have the pinning signatures \(\Delta_{0}\), \(\Delta_{1}\) and \(\Delta_{2}\) on the RHS, and then the lemma is proved by Lemma 4.12.

Now suppose \(G_{2}\) does not work. Then by Proposition 4.9, we get at least one more condition, either one in (4.4) or \((a+b^{2})(a^{2}+bc)=(1+ab)(ab+c^{2})\), which indicates that \(G_{2}\) is degenerate. For each of the 6 conditions, together with (_con1_) and (_con2_), we can solve them using Mathematica for rational \(a,b,c\). The only solution is \([1,-1,1,-1]\), which violates \(a+b^{2}\neq 0\). The proof of the lemma is complete.

**Lemma 4.15**.: _Let \(a,b,c\in\mathbb{Q}\), \(ab\neq 0\), and satisfy_ (con1) _\(a^{3}-b^{3}-ab(1-c)=0\) and_ (con2) _\((a^{4}b+ab^{4})^{2}=(a^{5}+b^{4})(b^{5}+a^{4}c)\). Then \(\text{Pl-Holant}(\,[1,a,b,c]\,|=_{3})\) is \(\#\text{P}\)-hard unless it is \([1,a,a^{2},a^{3}]\), which is degenerate, or a matchgate; in both cases the problem is in FP._

Proof.: Eliminating \(c\) from (_con1_) and (_con2_) we get \(a^{11}-a^{9}b+a^{6}b^{4}+a^{5}b^{6}-a^{4}b^{5}-a^{3}b^{7}+a^{2}b^{9}-b^{10}=0\), which, quite miraculously, can be factored as \((a^{2}-b)(a^{9}+a^{4}b^{4}+a^{3}b^{6}+b^{9})=0\). If \(b=a^{2}\), then with (_con1_), we get \(c=a^{3}\); thus the signature becomes \([1,a,a^{2},a^{3}]\), which is degenerate. So we may assume \(a^{9}+a^{4}b^{4}+a^{3}b^{6}+b^{9}=0\); then the rest of the proof is essentially the same as Lemma 4.14.

**Theorem 4.16**.: _For \(a,b,c\in\mathbb{Q}\), \(ab\neq 0\), if \(G_{1}\) works, then \(\text{Pl-Holant}([1,a,b,c]\,|=_{3})\) is \(\#\text{P}\)-hard unless it is in the tractable cases of Theorem 2.1 and thus in FP._

Proof.: If \([1,a,b,c]\) has the form \([1,a,a,1]\) or \([1,a,-1-2a,2+3a]\) then the problem is in FP. We now assume the signature is not of these two forms. By Lemma 4.10, when \(G_{1}\) works, we can interpolate \([y,1]\) on LHS and also \([1,x]\) on RHS. Let us write down the Jordan Normal Form again:
\[G_{1}=\left(\begin{array}{cc}1&b\\ a&c\end{array}\right)=\left(\begin{array}{cc}-x&y\\ 1&1\end{array}\right)\left(\begin{array}{cc}\lambda&0\\ 0&\mu\end{array}\right)\left(\begin{array}{cc}-x&y\\ 1&1\end{array}\right)^{-1},\]
\(\lambda=\frac{-\Delta+(1+c)}{2}\), \(\mu=\frac{\Delta+(1+c)}{2}\), \(x=\frac{\Delta-(1-c)}{2a}\), \(y=\frac{\Delta+(1-c)}{2a}\), \(\Delta=\sqrt{(1-c)^{2}+4ab}\). Using \([y,1]\) and the gadget in Figure 24, we get \([y^{2}+yb,ya+c]\) on the RHS. We can interpolate \(\Delta_{0}\), \(\Delta_{1}\) and \(\Delta_{2}\) on RHS unless \(\left[y^{2}+yb,ya+c\right]\) is proportional to a row eigenvector of \(G_{1}\), namely \([1,-y]\) or \([1,x]\), according to Lemma 4.13. Thus the exceptions are \(ya+c=(y^{2}+yb)x\) or \(ya+c=-y(y^{2}+yb)\). The first equation implies \(a^{3}-b^{3}-ab(1-c)=0\) or \(c=ab\). The second equation implies \(c=ab\) or \(c=1+a-b\). By assumption \(G_{1}\) works, so \(c\neq ab\). Thus, we consider two exceptional cases.
**Case 1**: \(a^{3}-b^{3}-ab(1-c)=0\). In this case, we have \(1-c=\frac{a^{3}-b^{3}}{ab}\) and thus \(\Delta=\sqrt{(1-c)^{2}+4ab}=|\frac{a^{3}+b^{3}}{ab}|\). One condition \((4ab+c^{2}-2c+1=0)\) in (4.2) is the same as \(\Delta=0\). Since \(G_{1}\) works, we have \(\Delta\neq 0\) and thus \(a^{3}+b^{3}\neq 0\), which is equivalent to \(a+b\neq 0\) when \(a,b\in\mathbb{Q}\).

Subcase 1: \(\frac{a^{3}+b^{3}}{ab}>0\). We have \([1,x]=[1,\frac{\Delta-(1-c)}{2a}]=[1,\frac{b^{2}}{a^{2}}]\) on RHS. Connecting \([1,x]\) to \([1,a,b,c]\) on LHS, we get the binary signature \([1+\frac{b^{2}}{a},a+\frac{b^{3}}{a^{2}},b+\frac{b^{2}c}{a^{2}}]\) on LHS. Note that \(a+\frac{b^{3}}{a^{2}}\neq 0\) when \(a+b\neq 0\). It is \(\#\text{P}\)-hard (and thus the problem \([1,a,b,c]\) is \(\#\text{P}\)-hard) unless one of the tractable conditions in Corollary 4.5 holds. It turns out that the only possibilities are either (1) the first case in Corollary 4.5, i.e. \((1+\frac{b^{2}}{a})(b+\frac{b^{2}c}{a^{2}})=(a+\frac{b^{3}}{a^{2}})^{2}\), which becomes \(\left(a^{2}-b\right)\left(a^{3}+ab+2b^{3}\right)=0\) after substituting \(c=\frac{ab-a^{3}+b^{3}}{ab}\), or (2) the second and third cases in Corollary 4.5, which imply \((a=-b^{2})\wedge(c=-b^{3})\), so that the problem \([1,a,b,c]\) becomes the problem \([1,-b^{2},b,-b^{3}]\), or (3) the fourth case in Corollary 4.5, which implies \((a=b)\wedge(c=1)\) or \((a=-b^{2})\wedge(c=-b^{3})\), where in the former case \([1,a,b,c]\) is of the matchgate form \([1,a,a,1]\) and thus in FP.

We now deal with case (1). When \(a^{2}-b=0\), together with \(a^{3}-b^{3}=ab\,(1-c)\), we have \(c=a^{3}\), and thus \([1,a,b,c]\) is degenerate. When \(a^{3}+ab+2b^{3}=0\), together with \(a^{3}-b^{3}=ab\,(1-c)\), by Lemma 4.14, \([1,a,b,c]\) is \(\#\text{P}\)-hard (with \(a+b\neq 0\) ruling out the exception).

We now deal with the case \([1,-b^{2},b,-b^{3}]\). If \(b=0,\pm 1\), it is degenerate or affine. Now assume \(b\neq 0,\pm 1\). Then we can get \([1,-b^{2}]\) on the LHS. Note that connecting three copies of \([1,b]\) with \([1,-b^{2},b,-b^{3}]\) on LHS produces a global factor \(1-b^{6}\neq 0\). Connect \([1,-b^{2}]\) twice to \([1,0,0,1]\) on RHS, and we get \([1,b^{4}]\) on RHS. Connect \([1,b^{4}]\) back to \([1,-b^{2},b,-b^{3}]\) on LHS, and we get a binary signature \(g=[1-b^{6},-b^{2}+b^{5},b-b^{7}]\), which by Corollary 4.5 is #P-hard unless \(b=\pm 1\), which has been discussed; therefore \([1,-b^{2},b,-b^{3}]\) is also #P-hard.

Subcase 2: \(\frac{a^{3}+b^{3}}{ab}<0\). We have \([y,1]=[-\frac{b^{2}}{a^{2}},1]\) on LHS. Connecting two copies of \([y,1]\) to (\(=_{3}\)) we get \([y^{2},1]=[\frac{b^{4}}{a^{4}},1]\) on RHS. Connecting it back to LHS, we get a binary signature \([\frac{b^{4}}{a^{4}}+a,\frac{b^{4}}{a^{3}}+b,\frac{b^{5}}{a^{4}}+c]\) on LHS. It is #P-hard unless one of the tractable conditions in Corollary 4.5 holds. It turns out that the only possibilities are either \((a^{4}b+ab^{4})^{2}=(a^{5}+b^{4})(b^{5}+a^{4}c)\), or \((a=b)\wedge(c=1)\), which is the matchgate case and thus in FP. We now deal with the former case. Together with \(a^{3}-b^{3}=ab\,(1-c)\), by Lemma 4.15, \([1,a,b,c]\) is #P-hard unless it is degenerate.

**Case 2**: \(1+a-b-c=0\). In this case, \(\Delta=|a+b|\). Since \(G_{1}\) works, the condition \(4ab+c^{2}-2c+1=0\) in (4.2) fails, which says \(\Delta\neq 0\), and thus \(a+b\neq 0\). If \(a+b>0\), then \(\Delta=a+b\), and \(x=\frac{a+b-(1-c)}{2a}=1\). Then we can interpolate \([1,x]=[1,1]\) on RHS (as \(y=\frac{a+b+(1-c)}{2a}=\frac{b}{a}\neq-1\)).
Else, \(a+b<0\); then \(\Delta=-(a+b)\), and \(y=\frac{-a-b+(1-c)}{2a}=\frac{-a-b+(b-a)}{2a}=-1\). We can get \([1,1]\) on RHS by connecting two copies of \([y,1]=[-1,1]\) to \([1,0,0,1]\). Then connecting \([1,1]\) to \([1,a,b,c]\) on LHS we get a binary signature \([a+1,a+b,a+c]\) on LHS. Again we can apply Corollary 4.5 to it. It turns out that the only feasible cases of tractability lead to \([1,a,-1-2a,2+3a]\) and \((c=1)\wedge(a=b)\), in both of which the problem is in FP; in all other cases the binary signature is #P-hard. This proves the #P-hardness of \(\operatorname{Pl-Holant}(\,[1,a,b,c]\mid=_{3})\).

### Dichotomy for \([1,a,b,0]\)

**Theorem 4.17**.: _The problem \([1,a,b,0]\) for \(a,b\in\mathbb{Q}\) is #P-hard unless it is in the tractable cases of Theorem 2.1 and thus in FP._

Proof.: If \(ab\neq 0\) and \(G_{1}\) works, then this is proved in Theorem 4.16. If \(a=b=0\), it is degenerate and in FP. We divide the rest into three cases:

1. \(ab\neq 0\) and \(G_{1}\) does not work;
2. \(f=[1,a,0,0]\) with \(a\neq 0\);
3. \(f=[1,0,b,0]\) with \(b\neq 0\).

\(\bullet\) Case 1: \(ab\neq 0\) in \(f=[1,a,b,0]\) and \(G_{1}\) does not work. Since \(c=0\neq ab\), this implies that at least one equation in (4.2) holds. After a simple derivation, we have the following family of signatures to consider: \([1,a,-\frac{1}{ka},0]\), for \(k=1,2,3,4\). We use \(G_{3}\) to produce another symmetric ternary signature in each case. If the new signature is #P-hard, then so is the given signature. We will describe the case \([1,a,-\frac{1}{a},0]\) in more detail; the other three types (\(k=2,3,4\)) are similar. For \(k=1\), the gadget \(G_{3}\) produces \(g=[3a^{3}+4,a^{4}-a-\frac{2}{a^{2}},-a^{2}+\frac{1}{a}+\frac{1}{a^{4}},a^{3}+3]\). For \(a=-1\), this is \([1,0,-1,2]\), which has the form \([1,a^{\prime},-1-2a^{\prime},2+3a^{\prime}]\) and is in FP. Below we assume \(a\neq-1\). Then all entries of \(g\) are nonzero. We claim that the gadget \(G_{1}\) works using \(g\). Since \(a\in\mathbb{Q}\), it can be checked that \(g\) is non-degenerate, since \((a^{4}-a-\frac{2}{a^{2}})(-a^{2}+\frac{1}{a}+\frac{1}{a^{4}})=(3a^{3}+4)(a^{3}+3)\) has no solution, and that no equation in (4.2) has a solution applied to \(g\). Hence, \(G_{1}\) works using \(g\) and we may apply Theorem 4.16 to \(g\). Using the fact that \(a\in\mathbb{Q}\), one can show that \(g\) cannot be a Gen-Eq because it has no zero entry, nor can it be affine or degenerate. Also, it can be checked that there is no solution for \(a\) if we were to impose the condition that \(g\) is a matchgate, i.e. \((3a^{3}+4=a^{3}+3)\wedge(a^{4}-a-\frac{2}{a^{2}}=-a^{2}+\frac{1}{a}+\frac{1}{a^{4}})\), and that \(a=-1\) is the only solution for \(g\) being of the form \([1,a^{\prime},-1-2a^{\prime},2+3a^{\prime}]\). Thus \([1,a,-\frac{1}{a},0]\) is #P-hard.

\(\bullet\) Case 2: \(f=[1,a,0,0]\) with \(a\neq 0\). The gadget \(G_{3}\) produces \(g^{\prime}=[3a^{3}+1,a^{4}+a,a^{2},a^{3}]\). Since \(a\in\mathbb{Q}\), \(3a^{3}+1\neq 0\). If \(a=-1\), \(g^{\prime}=[-2,0,1,-1]\) and it suffices to consider \([1,-1,0,2]\) (which is \(-1\) times the reversal \([-1,1,0,-2]\) obtained by swapping the roles of \(0\) and \(1\)), in which case \(G_{2}\) works, where the matrix \(G_{2}=\left[\begin{smallmatrix}1&1\\ -1&4\end{smallmatrix}\right]\). We can interpolate \([1,x]=[1,-\frac{3+\sqrt{5}}{2}]\) on RHS. Connect it back to \([1,-1,0,2]\) and get a binary signature \([\frac{5+\sqrt{5}}{2},-1,-(3+\sqrt{5})]\) on LHS, which, by Corollary 4.5, is #P-hard.
Thus, \([1,-1,0,2]\) is #P-hard and so is \([1,a,0,0]\). Else, \(a\neq-1\). We claim that the gadget \(G_{1}\) works using \(g^{\prime}\). The signature \(g^{\prime}\) is non-degenerate since \(a\in\mathbb{Q}\) is nonzero and thus \((a^{4}+a)a^{2}\neq(3a^{3}+1)a^{3}\). Also no equation in (4.2) has a solution applied to \(g^{\prime}\). Hence, \(G_{1}\) works using \(g^{\prime}\) and we may apply Theorem 4.16 to \(g^{\prime}\). Using the fact that \(a\in\mathbb{Q}\), one can show that \(g^{\prime}\) cannot be a Gen-Eq because it has no zero entry, nor can it be affine or degenerate. Also, there is no solution for \(g^{\prime}\) being a matchgate or of the form \([1,a^{\prime},-1-2a^{\prime},2+3a^{\prime}]\). Thus \([1,a,0,0]\) is #P-hard.

\(\bullet\) Case 3: \(f=[1,0,b,0]\) with \(b\neq 0\). The gadget \(G_{1}\) produces a binary straddled signature \(G_{1}=\left[\begin{smallmatrix}1&b\\ 0&0\end{smallmatrix}\right]=\left[\begin{smallmatrix}1\\ 0\end{smallmatrix}\right]\cdot\left[\begin{smallmatrix}1&b\end{smallmatrix}\right]\), which decomposes into a unary signature \([1,b]\) on RHS and a unary signature \([1,0]\) on LHS. This gives us a reduction \(\text{Pl-Holant}([1,b^{2},b]\,|\,(=_{3}))\leq_{T}\text{Pl-Holant}(\,f\,|=_{3})\) by the P3EM argument. The problem \(\text{Pl-Holant}([1,b^{2},b]\,|=_{3})\) is #P-hard except for \(b=\pm 1\), by Corollary 4.5, which implies that \(\text{Pl-Holant}(\,f\,|=_{3})\) is also #P-hard when \(b\neq\pm 1\). If \(b=\pm 1\), then \(f\) is affine, and \(\text{Pl-Holant}(\,f\,|=_{3})\) is in FP.

### Dichotomy for \([1,a,0,c]\)

**Theorem 4.18**.: _The problem \([1,a,0,c]\) with \(a,c\in\mathbb{Q}\) is #P-hard unless \(a=0\), in which case it is Gen-Eq and thus in FP._

Proof.: When \(a=0\), it is Gen-Eq and so is in FP. When \(a\neq 0\), if \(c=0\), it is #P-hard by Theorem 4.17. In the following we discuss \([1,a,0,c]\) with \(ac\neq 0\). If \(c=\pm 1\), the signature is \([1,a,0,1]\) or \([1,a,0,-1]\). We use \(G_{3}\) to produce a ternary signature \(g=[3a^{3}+1,a^{4}+a,a^{2},a^{3}+1]\) (both are mapped to the same signature, surprisingly). If \(a=-1\), it is \([1,0,-\frac{1}{2},0]\) after normalization, which by Theorem 4.17 is #P-hard, and so is the given signature \([1,-1,0,1]\). If \(a\neq-1\), then \(g\) has no zero entry. We then claim that the gadget \(G_{1}\) works using \(g\). It can be checked that \(g\) is non-degenerate, since \((a^{4}+a)a^{2}=(3a^{3}+1)(a^{3}+1)\) has no solution, and that no equation in (4.2) has a solution applied to \(g\). Hence, \(G_{1}\) works using \(g\) and we may apply Theorem 4.16 to \(g\). Using the fact that \(a\in\mathbb{Q}\), one can show that \(g\) cannot be a Gen-Eq because it has no zero entry, nor can it be affine or degenerate. Also, it can be checked that the only solution for \(a\) if we were to impose the condition that \(g\) is a matchgate or of the form \([1,a^{\prime},-1-2a^{\prime},2+3a^{\prime}]\) for some \(a^{\prime}\) is \(a=0\). Thus \([1,a,0,\pm 1]\) are both #P-hard.

Now assume \(c\neq 0,\pm 1\). We claim that the gadget \(G_{1}\) works. It can be checked that for the non-degenerate matrix \(G_{1}=\left[\begin{smallmatrix}1&0\\ a&c\end{smallmatrix}\right]\), \(\Delta=|1-c|\), and \(\lambda/\mu\in\{c,\frac{1}{c}\}\) is not a root of unity. Next we claim that we can obtain \([1,0]\) on RHS. If \(c<1\), by Lemma 4.10 we can interpolate \([1,x]=[1,0]\) on RHS, with two exceptions to which we have already given a dichotomy (see Remark 3 after Definition 4.11).
If \(c>1\), we can interpolate \([y,1]=[0,1]\) on LHS, and so the gadget in Figure 24 produces \([0,c]\) on RHS, which is not proportional to the row eigenvectors \([1,-y]=[1,0]\) and \([1,x]=[1,\frac{c-1}{a}]\) of \(G_{1}\). By Lemma 4.13, we can interpolate any unary gadget on RHS, including \([1,0]\). Thus we can always get \([1,0]\) on RHS. Connect \([1,0]\) to \([1,a,0,c]\) and we will get a binary signature \([1,a,0]\) on LHS, which is #P-hard by Corollary 4.5. Therefore \([1,a,0,c]\) is #P-hard when \(c\neq 0,\pm 1\).

### Dichotomy for \([1,a,b,c]\) when \(abc\neq 0\)

We need three lemmas to handle some special cases. Lemma 4.19 is a part of Theorem 4.16 (one verifies that \(G_{1}\) works; in fact for \([1,-b^{2},b,-b^{3}]\) the condition (4.2) amounts to \(b=1\), and the \(b=0\) case is degenerate and thus trivially in FP). For convenience, we state it explicitly here.

**Lemma 4.19**.: _The problem \([1,-b^{2},b,-b^{3}]\) with \(b\in\mathbb{Q}\) is #P-hard unless \(b=0,\pm 1\), in which case it is in FP._

The next lemma is not part of Theorem 4.16, since \([1,a,-\frac{1}{a},-1]\) satisfies the condition \(c+1=0\) in (4.2), so \(G_{1}\) does not work.

**Lemma 4.20**.: _The problem \([1,a,-\frac{1}{a},-1]\) with \(a\in\mathbb{Q}\), \(a\neq 0\), is #P-hard unless \(a=\pm 1\), in which case it is in FP._

Proof.: If \(a=\pm 1\), \([1,1,-1,-1]\) is affine and \([1,-1,1,-1]\) is degenerate, both of which are in FP. Now we assume \(a\neq\pm 1\) (so the matrix \(\left[\begin{smallmatrix}a^{-2}&a\\ 1&1\end{smallmatrix}\right]\) is invertible). We use the ternary gadget \(G_{3}\) to get the signature \(g=[3a^{3}+\frac{1}{a^{3}}+4,a^{4}-\frac{1}{a^{2}},-a^{2}+\frac{1}{a^{4}},a^{3}+\frac{3}{a^{3}}+4]\) on LHS. A direct computation (using Mathematica) shows that \(G_{1}\) always works for \(g\) unless \(a=-1\). Therefore, by Theorem 4.16, \(g\) is #P-hard, and then \([1,a,-\frac{1}{a},-1]\) is #P-hard, unless \(g\) is in the tractable cases in Theorem 2.1. The only solution for \(g\) being in the tractable cases in Theorem 2.1 is \(a=\pm 1\). This completes our proof.

**Lemma 4.21**.: _The problem \([1,a,b,ab]\) with \(a,b\in\mathbb{Q}\) and \(a,b\neq 0\) is #P-hard unless it is degenerate or affine, in which case it is in FP._

Proof.: If \(a=-1\) and \(b=\pm 1\), then the problem is in FP. Indeed, \([1,-1,1,-1]\) is degenerate, and \([1,-1,-1,1]\) can be transformed to a matchgate; both problems are in FP. We therefore now assume it is not the case that both \(a=-1\) and \(b=\pm 1\). We first use \(G_{3}\) to construct a ternary signature \(h=[-b^{4}+3b^{2}-2,b^{4}-2b^{3}-b^{2}+2b,2b^{3}-b^{2}-2b+1,-2b^{4}+3b^{2}-1]\) on LHS, which can be normalized to \(h=[2-b^{2},b^{2}-2b,2b-1,1-2b^{2}]\) after dividing by \((b+1)(b-1)\). Using gadget \(G_{1}\), we have a degenerate matrix \(G_{1}=\left[\begin{smallmatrix}1&b\\ a&ab\end{smallmatrix}\right]=\left[\begin{smallmatrix}1\\ a\end{smallmatrix}\right]\cdot\left[\begin{smallmatrix}1&b\end{smallmatrix}\right]\). We get \([1,b]\) on RHS if \([1,a]\) can appropriately form some nonzero global factor. Figure 25 indicates two different ways of "absorbing" \([1,a]\) on LHS. Importantly, we place \(h\) instead of \([1,a,b,ab]\) on the square vertex. Figure 25(a) provides a factor \(1+a^{3}\), which is nonzero if \(a\neq-1\). When \(a=-1\), Figure 25(b) provides a factor \(2b^{2}-4b+2=2(b-1)^{2}\), which is nonzero unless \(b=1\). Therefore, we can interpolate \([1,b]\) on RHS.

Figure 25: Two gadgets where each triangle represents the unary gadget \([1,a]\)
Connecting \([1,b]\) back to \([1,a,b,ab]\) on LHS, we get the binary signature \(g=[1+ab,a+b^{2},b+ab^{2}]\). If \(a+b^{2}=0\), the given signature is \([1,-b^{2},b,-b^{3}]\) which, according to Lemma 4.19, is #P-hard (the exceptions in Lemma 4.19 do not apply, as \(b\neq 0\), and if \(b=\pm 1\) then \(a=-b^{2}=-1\), which is excluded). Now we assume \(a+b^{2}\neq 0\). Normalizing \(g\) by dividing by \(a+b^{2}\), we have the binary signature \([\frac{1+ab}{a+b^{2}},1,\frac{b+ab^{2}}{a+b^{2}}]\) on LHS. Applying Corollary 4.5 to \(g\), it is #P-hard (and so is the given signature \([1,a,b,ab]\)) unless

1. \((1+ab)(b+ab^{2})=(a+b^{2})^{2}\). This implies \(\left(a^{2}-b\right)\left(b^{3}-1\right)=0\). If \(a^{2}-b=0\), the given signature is \([1,a,a^{2},a^{3}]\) and is degenerate. If \(b^{3}-1=0\), since \(b\in\mathbb{Q}\), we have \(b=1\) and \(a\neq-b^{2}=-1\), and the given signature is \([1,a,1,a]\). This is resolved by Lemma 4.2.
2. \(\frac{1+ab}{a+b^{2}}=1\) and \(\frac{b+ab^{2}}{a+b^{2}}=-1\). Dividing the two expressions gives \(b=-1\). This implies \(a=0\) and therefore violates our assumption that \(a\neq 0\).
3. \(\frac{1+ab}{a+b^{2}}=-1\) and \(\frac{b+ab^{2}}{a+b^{2}}=1\). No \((a,b)\) pair solves these two equations under the assumption \(a+b^{2}\neq 0\).
4. \(\frac{1+ab}{a+b^{2}}=\frac{b+ab^{2}}{a+b^{2}}\). We have either \(b=1\) or \(ab=-1\). If \(b=1\), then \([1,a,b,ab]\) becomes \([1,a,1,a]\), which is #P-hard by Lemma 4.2 unless \(a=0\) (this violates our assumption) or \(a=\pm 1\), in which cases the signature is affine and therefore the problem is in FP. If \(ab=-1\), then \([1,a,b,ab]\) becomes \([1,a,-\frac{1}{a},-1]\), which is #P-hard by Lemma 4.20 unless \(a=\pm 1\), in which cases the signature is affine and therefore the problem is in FP.

Note that since \(a,b\neq 0\), \([1,a,b,ab]\) cannot be Gen-Eq. The lemma is proved.

Now we prove

**Theorem 4.22**.: _The problem \([1,a,b,c]\) with \(a,b,c\in\mathbb{Q}\), \(abc\neq 0\), is #P-hard unless it is in the tractable cases in Theorem 2.1._

Proof.: By Proposition 4.8, Theorem 4.16 and Lemma 4.21, it suffices to consider the case when the ratio of the two eigenvalues of \(G_{1}=\left[\begin{smallmatrix}1&b\\ a&c\end{smallmatrix}\right]\) is a root of unity and \(c\neq ab\). If the ratio of eigenvalues of \(G_{1}\) is a root of unity, we know at least one condition in (4.2) holds. For convenience, we list the conditions in (4.2) here and label them as \(R_{i}\), where \(i=1,2,3,4,5\):
\[R=\bigvee_{i=1}^{5}R_{i},\ \ \text{where}\begin{cases}R_{1}:c=-1\\ R_{2}:ab+c^{2}+c+1=0\\ R_{3}:2ab+c^{2}+1=0\\ R_{4}:3ab+c^{2}-c+1=0\\ R_{5}:4ab+c^{2}-2c+1=0\end{cases} \tag{4.6}\]
We apply \(G_{3}\) on \([1,a,b,c]\), i.e. placing squares to be \([1,a,b,c]\) and circles to be \(=_{3}\), to produce a ternary signature \([w,x,y,z]=[1+3a^{3}+3a^{2}b^{2}+b^{3}c,\,a+a^{4}+2a^{2}b+a^{2}bc+2ab^{3}+b^{2}c^{2},\,a^{2}+ab^{2}+2a^{3}b+b^{4}+2ab^{2}c+bc^{3},\,a^{3}+3a^{2}b^{2}+3b^{3}c+c^{4}]\).
If \(w\neq 0\) and \(G_{1}\) works on \([w,x,y,z]\), by Theorem 4.16 we have that \([w,x,y,z]\) is #P-hard, and thus \([1,a,b,c]\) is #P-hard, unless at least one condition \(S_{i}\) listed below holds, where \(i=1,2,3,4,5,6,7\):
\[S=\bigvee_{i=1}^{7}S_{i}\ \ \ \text{where}\begin{cases}S_{1}:x^{2}=wy\wedge y^{2}=xz\text{ (degenerate form)}\\ S_{2}:x=0\wedge y=0\text{ (Gen-Eq form)}\\ S_{3}:w=y\wedge x=0\wedge z=0\text{ (affine form }[1,0,1,0])\\ S_{4}:w+y=0\wedge x=0\wedge z=0\text{ (affine form }[1,0,-1,0])\\ S_{5}:w=z\wedge x=y\text{ (matchgate-realizable form }[a^{\prime},b^{\prime},b^{\prime},a^{\prime}])\\ S_{6}:w=-z\wedge x=-y\text{ (matchgate-realizable form }[a^{\prime},b^{\prime},-b^{\prime},-a^{\prime}])\\ S_{7}:-w-2x=y\wedge 2w+3x=z\text{ (form }[3a^{\prime}+b^{\prime},-a^{\prime}-b^{\prime},-a^{\prime}+b^{\prime},3a^{\prime}-b^{\prime}])\end{cases} \tag{4.7}\]
Note that the affine forms \([1,1,-1,-1]\) and \([1,-1,-1,1]\) are special forms of \([a,b,b,a]\) and \([a,b,-b,-a]\). Solving the equation system \(R\wedge S\) for variables \(a,b,c\in\mathbb{Q}\), we obtain the following solutions:

* \(a=c=-1,b=1\); the problem \([1,-1,1,-1]\) is in FP since it is degenerate;
* \(a=1,b=c=-1\); the problem \([1,1,-1,-1]\) is in FP since it is affine;
* \(a=c=1,b=-1\); the problem \([1,1,-1,1]\) is #P-hard (use the gadget \(G_{3}\) to produce \([1,1,-1,3]\) after flipping 0's and 1's, then use it again to produce \([1,1,-5,19]\), which is #P-hard by Theorem 4.16; note that we need to apply \(G_{3}\) twice in order that the condition that \(G_{1}\) works in Theorem 4.16 is satisfied for the newly created ternary signature);
* \(a=-b,c=-1\); the problem \([1,a,-a,-1]\) is matchgate-transformable and thus in FP.

Continuing the discussion for the ternary signature \([w,x,y,z]\), it remains to consider the case when \(w=0\) or \(G_{1}\) does not work on \([w,x,y,z]\). For \(w\neq 0\) we normalize \([w,x,y,z]\) to \([1,\frac{x}{w},\frac{y}{w},\frac{z}{w}]\); substituting \(\frac{x}{w},\frac{y}{w},\frac{z}{w}\) for \(a,b,c\) respectively in (4.2), we get at least one condition \(T_{i}\) listed below, where \(i=1,2,3,4,5,6\):
\[T=\bigvee_{i=1}^{6}T_{i},\ \ \text{where}\begin{cases}T_{1}:zw+w^{2}=0\\ T_{2}:xy+z^{2}+zw+w^{2}=0\\ T_{3}:2xy+z^{2}+w^{2}=0\\ T_{4}:3xy+z^{2}-zw+w^{2}=0\\ T_{5}:4xy+z^{2}-2zw+w^{2}=0\\ T_{6}:xy=wz\end{cases} \tag{4.8}\]
Note that \(T_{1}\) incorporates the case when \(w=0\). So we have the condition \(R\wedge T\). We now apply \(G_{3}\) once again, using \([w,x,y,z]\), to produce another ternary signature \([w_{2},x_{2},y_{2},z_{2}]\), where \(w_{2}=w^{4}+3wx^{3}+3x^{2}y^{2}+y^{3}z\), \(x_{2}=w^{3}x+2wx^{2}y+x^{4}+2xy^{3}+x^{2}yz+y^{2}z^{2}\), \(y_{2}=w^{2}x^{2}+wxy^{2}+2x^{3}y+y^{4}+2xy^{2}z+yz^{3}\), \(z_{2}=wx^{3}+3x^{2}y^{2}+3y^{3}z+z^{4}\). (A short computer-algebra sketch illustrating how such polynomial systems can be checked is given below.)
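The repeated "solve this polynomial system over \(\mathbb{Q}\)" steps in this proof were carried out with Mathematica. As a rough illustration of the pattern (a sketch of ours, not the original computation), the following Python/sympy snippet builds the \(G_{3}\)-image \([w,x,y,z]\) of \([1,a,b,c]\) and solves one representative combination: condition \(R_{1}\) from (4.6) together with the matchgate form \(S_{5}\) from (4.7).

```python
# Illustrative sketch (ours): one of the polynomial-system checks,
# done with sympy instead of Mathematica.
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)

# Ternary signature [w, x, y, z] produced by G_3 from [1, a, b, c].
w = 1 + 3*a**3 + 3*a**2*b**2 + b**3*c
x = a + a**4 + 2*a**2*b + a**2*b*c + 2*a*b**3 + b**2*c**2
y = a**2 + a*b**2 + 2*a**3*b + b**4 + 2*a*b**2*c + b*c**3
z = a**3 + 3*a**2*b**2 + 3*b**3*c + c**4

system = [
    c + 1,   # R_1 in (4.6): c = -1
    w - z,   # S_5 in (4.7): matchgate form [a', b', b', a'] ...
    x - y,   # ... requires w = z and x = y
]
# Real solutions such as the family a = -b, c = -1 (reported in the
# text as matchgate-transformable) should show up in the output.
print(sp.solve(system, [a, b, c], dict=True))
```

The full case analysis combines every \(R_{i}\) with every \(S_{i}\) (and later \(T_{i}\), \(U_{i}\), \(V_{i}\)) in the same manner, keeping only the rational solutions.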
Similarly to the previous argument, if \(w_{2}\neq 0\) and \(G_{1}\) works on \([w_{2},x_{2},y_{2},z_{2}]\), we know \([w_{2},x_{2},y_{2},z_{2}]\) is #P-hard, and thus \([1,a,b,c]\) is #P-hard, unless at least one condition \(U_{i}\) listed below holds, where \(i=1,2,3,4,5,6,7\):
\[U=\bigvee_{i=1}^{7}U_{i},\ \text{where}\begin{cases}U_{1}:x_{2}^{2}=w_{2}y_{2}\wedge y_{2}^{2}=x_{2}z_{2}\text{ (degenerate form)}\\ U_{2}:x_{2}=0\wedge y_{2}=0\text{ (Gen-Eq form)}\\ U_{3}:w_{2}=y_{2}\wedge x_{2}=0\wedge z_{2}=0\text{ (affine form }[1,0,1,0])\\ U_{4}:w_{2}+y_{2}=0\wedge x_{2}=0\wedge z_{2}=0\text{ (affine form }[1,0,-1,0])\\ U_{5}:w_{2}=z_{2}\wedge x_{2}=y_{2}\text{ (matchgate-realizable form }[a^{\prime},b^{\prime},b^{\prime},a^{\prime}])\\ U_{6}:w_{2}=-z_{2}\wedge x_{2}=-y_{2}\text{ (matchgate-realizable form }[a^{\prime},b^{\prime},-b^{\prime},-a^{\prime}])\\ U_{7}:-w_{2}-2x_{2}=y_{2}\wedge 2w_{2}+3x_{2}=z_{2}\text{ (form }[3a^{\prime}+b^{\prime},-a^{\prime}-b^{\prime},-a^{\prime}+b^{\prime},3a^{\prime}-b^{\prime}])\end{cases} \tag{4.9}\]
Solving the equation system \(R\wedge T\wedge U\) for rational-valued variables \(a,b,c\), we obtain the following solutions:

* \(a=c=-1,b=1\); the problem \([1,-1,1,-1]\) is in FP since it is degenerate;
* \(a=1,b=c=-1\); the problem \([1,1,-1,-1]\) is in FP since it is affine;
* \(a=-1,b=c=1\); the problem \([1,-1,1,1]\) is #P-hard (use the gadget \(G_{3}\) to produce \([1,1,-1,3]\), use it again to produce \([1,1,-5,19]\), which is #P-hard by Theorem 4.16);
* \(a=c=1,b=-1\); the problem \([1,1,-1,1]\) is #P-hard (this is the reversal of \([1,-1,1,1]\));

Otherwise, we know \(w_{2}=0\) or \(G_{1}\) does not work on \([w_{2},x_{2},y_{2},z_{2}]\). Similarly, we know at least one condition \(V_{i}\) listed below holds, where \(i=1,2,3,4,5,6\):
\[V=\bigvee_{i=1}^{6}V_{i},\ \text{ where}\begin{cases}V_{1}:z_{2}w_{2}+w_{2}^{2}=0\\ V_{2}:x_{2}y_{2}+z_{2}^{2}+z_{2}w_{2}+w_{2}^{2}=0\\ V_{3}:2x_{2}y_{2}+z_{2}^{2}+w_{2}^{2}=0\\ V_{4}:3x_{2}y_{2}+z_{2}^{2}-z_{2}w_{2}+w_{2}^{2}=0\\ V_{5}:4x_{2}y_{2}+z_{2}^{2}-2z_{2}w_{2}+w_{2}^{2}=0\\ V_{6}:x_{2}y_{2}=w_{2}z_{2}\end{cases} \tag{4.10}\]
Finally, solving the equation system \(R\wedge T\wedge V\) for variables \(a,b,c\in\mathbb{Q}\), we obtain the following solutions:

* \(a=-1,b=c=1\); the problem \([1,-1,1,1]\) is #P-hard (see the case above for \(R\wedge T\wedge U\));
* \(a=c=1,b=-1\); the problem \([1,1,-1,1]\) is #P-hard (this is the reversal of \([1,-1,1,1]\));
* \(a=-b,c=-1\); the problem \([1,a,-a,-1]\) is matchgate-transformable and thus in FP.

The proof of Theorem 4.22 is now complete.

## 5 Dichotomy for \([0,a,b,0]\)

We now finish the discussion for \([0,a,b,0]\) with the help of the previous theorems on \([1,a,b,c]\).

**Theorem 5.1**.: _The problem \([0,a,b,0]\) with \(a,b\in\mathbb{Q}\), \(ab\neq 0\), is #P-hard unless \(a=\pm b\)._

Proof.: We apply the gadget \(G_{3}\) on \([0,a,b,0]\) to produce the ternary signature \(g=[3a^{2}b^{2},a(a^{3}+2b^{3}),b(2a^{3}+b^{3}),3a^{2}b^{2}]\). We can normalize \(g\) to the form \([1,a^{\prime},b^{\prime},c^{\prime}]\). Since \(a,b\in\mathbb{Q}\) and \(ab\neq 0\), we have \(a^{\prime}b^{\prime}c^{\prime}\neq 0\). By Theorem 4.22, we know \([1,a^{\prime},b^{\prime},c^{\prime}]\) is #P-hard (and so is \([0,a,b,0]\)) unless it is in the tractable cases in Theorem 2.1. However, the problem \([1,a^{\prime},b^{\prime},c^{\prime}]\) being in FP implies \(a=\pm b\). This finishes the proof.

If now \(a=b=0\), then the Holant value is \(0\) and the problem is trivially in FP.
Suppose exactly one of \(a\) and \(b\) is \(0\). In this case, by normalizing and possibly flipping \(0\) and \(1\) in the input, it suffices to consider the ternary signature \([0,1,0,0]\).

**Theorem 5.2**.: _The problem \(\mathrm{Pl}\)-Holant \(([0,1,0,0]\mid(=_{3}))\) is \(\#\mathrm{P}\)-complete._

Proof.: In [10], Dyer and Frieze proved the problem Planar-X3C NP-complete: an input is a collection \(\mathcal{S}\) of \(3\)-element subsets of a set \(U\), where the bipartite incidence graph is planar, and we ask for an exact cover of \(U\) by some \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\). Their reduction in fact produces instances where every \(x\in U\) appears in exactly two or three sets of \(\mathcal{S}\). One can further verify that their reduction is parsimonious. Thus, their proof yields the \(\#\mathrm{P}\)-completeness of \(\mathrm{Pl}\)-Holant \(([0,1,0,0],[0,1,0]\mid(=_{3}))\). We prove our theorem by a reduction
\[\mathrm{Pl}\text{-Holant}\left([0,1,0,0],[0,1,0]\mid(=_{3})\right)\leq_{T}\mathrm{Pl}\text{-Holant}\left([0,1,0,0]\mid(=_{3})\right). \tag{5.11}\]
Note that a unary pin-\(0\) signature \(\Delta_{0}\) connected to \([0,1,0,0]\) produces \([0,1,0]\). If we replace each \([0,1,0]\) in \(\Omega\) by \([0,1,0,0]\) connected with \(\Delta_{0}\), the Holant value is unchanged. So if we can produce \(\Delta_{0}\) on the RHS in \(\mathrm{Pl}\)-Holant \(([0,1,0,0]\mid(=_{3}))\), then (5.11) follows. However, in any bipartite \(3\)-regular problem, provably no construction can produce individual unary signatures. Next, note that in any signature grid \(\Omega\) of \(\mathrm{Pl}\)-Holant \(([0,1,0,0],[0,1,0]\mid(=_{3}))\), the number of appearances of \([0,1,0]\) is congruent to \(0\bmod 3\), by counting the total degrees of LHS and RHS. Our idea, then, is to create triples of \(\Delta_{0}\) so that we can apply them, one triple at a time. There remains the difficulty of how to construct triples of \(\Delta_{0}\) on the RHS in the setting \(\mathrm{Pl}\)-Holant \(([0,1,0,0]\mid(=_{3}))\); more importantly, not only must the construction be planar, but we must also be able to apply the triples in \(\Omega\), three \(\Delta_{0}\) at a time, in a planar fashion. Notice that the appearances of \([0,1,0]\) in \(\Omega\) generally do not allow this planar grouping (and indeed the output instances in [10] do not have this property). The following construction accomplishes all these requirements in one fell swoop!

The planar cross-over-pinned-\(0\) gadget \(\mathcal{P}\) is illustrated in Figure 26, where we place \([0,1,0,0]\) at the squares and \(=_{3}\) at the circles. It has the following properties:

1. externally, the two left dangling edges are to be connected to LHS, and the two right dangling edges are to be connected to RHS;
2. the two blue dangling edges are pinned to \(0\);
3. the two red dangling edges can be assigned either \(0\) or \(1\), but must take the same value, and either choice induces a unique assignment for the internal edges.

Figure 26: The cross-over-pinned-\(0\) gadget \(\mathcal{P}\)

Note also that if we "flip" \(\mathcal{P}\) along the "axis" of the two blue edges, thereby exchanging the two red edges, we obtain a reflected copy of \(\mathcal{P}\), call it \(\mathcal{P}^{\prime}\), where the North-East red edge connects externally to LHS, and the South-West red edge connects externally to RHS, exactly the opposite of \(\mathcal{P}\).
Thus, the gadget \(\mathcal{P}\) allows "passing over" one crossing edge (the pair of red edges will take its place) while the two end blue edges are pinned to \(0\). We can link any \(k\geq 1\) copies of \(\mathcal{P}\) or \(\mathcal{P}^{\prime}\) by the blue edges to "pass over" \(k\) crossings. Note that the linking of the two end blue edges respects the bipartite structure, and \(\mathcal{P}\) or \(\mathcal{P}^{\prime}\) allows any individual bipartite orientation of the crossed edge. We call this a linked \(\mathcal{P}\) gadget.

Let \(n=3m\) be the number of \([0,1,0]\) in \(\Omega\) for some integer \(m\geq 0\). We now add \(m\) new vertices on RHS assigned the signature \(=_{3}\). We then use three copies of the linked \(\mathcal{P}\) gadgets to connect each such \(=_{3}\) to three occurrences of \([0,1,0]\) in \(\Omega\), while replacing the signatures there by \([0,1,0,0]\). (If some passage from \(=_{3}\) to \([0,1,0]\) does not encounter any crossing edge, we will artificially introduce two such crossings!) This defines a signature grid \(\Omega^{\prime}\) in \(\text{Pl-Holant}\left([0,1,0,0]\ |\ (=_{3})\right)\) with \(\text{Holant}(\Omega^{\prime})=\text{Holant}(\Omega)\).

**Remark 4**.: One can easily construct a degenerate ternary signature \([1,0,0,0]=[1,0]\otimes[1,0]\otimes[1,0]\) on RHS by placing circles to be \([0,1,0,0]\) and squares to be \((=_{3})\) on \(G_{3}\). However, we cannot apply Theorem 3.2 and conclude that \(\text{Pl-Holant}\left([0,1,0,0],[0,1,0]\ |\ (=_{3})\right)\leq_{T}\text{Pl-Holant}\left([0,1,0,0]\ |\ (=_{3})\right)\). See also Remark 1. **The use of the gadget \(\mathcal{P}\) is essential.**

## 6 Main Theorem

We are now ready to prove our main theorem. At the end of the proof there is a flowchart of the logical structure of this proof of Theorem 2.1.

Proof of Theorem 2.1.: First, if \(f_{0}=f_{3}=0\), we separate the discussion according to whether \(f_{1}f_{2}\neq 0\). If \(f_{1}f_{2}\neq 0\), by Theorem 5.1 we know that it is #P-hard unless \(f_{1}=\pm f_{2}\), in which case it is matchgate-transformable and thus in FP. If \(f_{1}f_{2}=0\), then by Theorem 5.2 we know that it is #P-hard unless \(f_{1}=f_{2}=0\), in which case the problem is trivially in FP. This finishes the case when \(f_{0}=f_{3}=0\).

Assume now at least one of \(f_{0}\) and \(f_{3}\) is not \(0\). By considering the reversal of the signature, we can assume \(f_{0}\neq 0\); then the signature becomes \([1,a,b,c]\) after normalization. If \(c=0\), the dichotomy for \([1,a,b,0]\) is proved in Theorem 4.17. If \(c\neq 0\) in \([1,a,b,c]\), then \(a\) and \(b\) play symmetric roles under flipping \(0\) and \(1\). Now if \(ab=0\), we can assume \(b=0\) by the aforementioned symmetry, i.e., the signature becomes \([1,a,0,c]\). By Theorem 4.18, it is #P-hard unless \(a=0\), in which case it is Gen-Eq. Finally, for the problem \([1,a,b,c]\) where \(abc\neq 0\), Theorem 4.22 proves the dichotomy: it is #P-hard unless the signature is in the tractable cases of Theorem 2.1.

**Flowchart of proof structure:**
2303.01279
Two-color soliton meta-atoms and molecules
We present a detailed overview of the physics of two-color soliton molecules in nonlinear waveguides, i.e. bound states of localized optical pulses which are held together due to an incoherent interaction mechanism. The mutual confinement, or trapping, of the subpulses, which leads to a stable propagation of the pulse compound, is enabled by the nonlinear Kerr effect. Special attention is paid to the description of the binding mechanism in terms of attractive potential wells, induced by the refractive index changes of the subpulses, exerted on one another through cross-phase modulation. Specifically, we discuss nonlinear-photonics meta-atoms, given by pulse compounds consisting of a strong trapping pulse and a weak trapped pulse, for which trapped states of low intensity are determined by a Schr\"odinger-type eigenproblem. We discuss the rich dynamical behavior of such meta-atoms, demonstrating that an increase of the group-velocity mismatch of both subpulses leads to an ionization-like trapping-to-escape transition. We further demonstrate that if both constituent pulses are of similar amplitude, molecule-like bound states are formed. We show that z-periodic amplitude variations permit a coupling of these pulse compounds to dispersive waves, resulting in the resonant emission of Kushi-comb-like multi-frequency radiation.
O. Melchert, S. Willms, I. Babushkin, U. Morgner, A. Demircan
2023-03-02T14:05:04Z
http://arxiv.org/abs/2303.01279v1
# (Invited) Two-color soliton meta-atoms and molecules

###### Abstract

We present a detailed overview of the physics of two-color soliton molecules in nonlinear waveguides, i.e. bound states of localized optical pulses which are held together due to an incoherent interaction mechanism. The mutual confinement, or trapping, of the subpulses, which leads to a stable propagation of the pulse compound, is enabled by the nonlinear Kerr effect. Special attention is paid to the description of the binding mechanism in terms of attractive potential wells, induced by the refractive index changes of the subpulses, exerted on one another through cross-phase modulation. Specifically, we discuss nonlinear-photonics meta-atoms, given by pulse compounds consisting of a strong trapping pulse and a weak trapped pulse, for which trapped states of low intensity are determined by a Schrödinger-type eigenproblem. We discuss the rich dynamical behavior of such meta-atoms, demonstrating that an increase of the group-velocity mismatch of both subpulses leads to an ionization-like trapping-to-escape transition. We further demonstrate that if both constituent pulses are of similar amplitude, molecule-like bound states are formed. We show that \(z\)-periodic amplitude variations permit a coupling of these pulse compounds to dispersive waves, resulting in the resonant emission of Kushi-comb-like multi-frequency radiation.

keywords: Nonlinear optics, Optical solitons, Two-color soliton molecules, Resonant radiation

###### Contents

* 1 Introduction
* 2 Model and methods
* 3 Nonlinear-photonics meta-atoms
  * 3.1 Stable propagation of trapped states
  * 3.2 Trapping-to-escape transition caused by a group-velocity mismatch
* 4 Two-color soliton molecules
  * 4.1 Two-color soliton pairs
  * 4.2 Kushi-comb-like multi-frequency radiation
* 5 Summary and conclusions

## 1 Introduction

The confinement of two - and possibly more - quasi co-propagating optical pulses has been discussed in terms of various propagation settings since the 80's of the last century, with early accounts discussing the self-confinement of multimode optical pulses in glass fibers [1], the nonlinear pairing of light and dark optical solitons [2; 3], and the stability of solitons with different polarization components in birefringent fibers [4]. A very paradigmatic instance of self-confinement is supported by the standard nonlinear Schrödinger equation (NSE) [5; 6]. In the integrable case, it features localized field pulses given by solitary waves [7]. When considering two or more quasi group-velocity matched pulses, their incoherent, cross-phase modulation (XPM) induced mutual interaction co-determines their dynamics [1; 2; 3; 4; 8; 9; 10; 11]. For instance, in nonlinear waveguides with a single zero-dispersion point, a soliton induces a strong refractive index barrier that cannot be surpassed by quasi group-velocity matched waves located in a domain of normal dispersion [12], resulting in their mutual repulsion. The underlying interaction process is enabled by a general wave reflection mechanism originally reported in fluid dynamics [13]. In optics this process is referred to as the push-broom effect [14], optical event horizon [15; 16], or temporal reflection [17]. This interaction mechanism allows for a strong and efficient control of light pulses [18; 19; 20], and has been shown to appear naturally during the supercontinuum generation process [21; 22; 23].
When considering waveguides that support group-velocity matched propagation of pulses in separate domains of anomalous dispersion, their mutual interaction is expressed in a different way: the aforementioned XPM induces attractive potentials that hold the pulses together, enabling two-color soliton molecules through an incoherent binding mechanism [24]; the resulting pulse compound consists of two subpulses at vastly different center frequencies. Putting emphasis on the frequency-domain representation of these pulse compounds led to the observation that a soliton can in fact act as a localized trapping potential with a discrete level spectrum [24]. Let us emphasize that in order to achieve a strong attractive interaction between the subpulses of such pulse compounds, group-velocity matching is crucial [25]. In terms of a modified NSE with added fourth-order dispersion, these objects were identified as parts of a large family of generalized dispersion Kerr solitons that can be characterized using the concept of a meta-envelope [26]. Such pulses were recently verified experimentally in mode-locked laser cavities [27; 28; 29]. In a complementary approach to the multi-scales analysis presented in Ref. [26], modeling both subpulses in terms of coupled NSEs made it possible to derive a special class of two-color soliton pairs and their meta-envelopes in closed form [30]. Let us note that the concept of soliton molecules has meanwhile been extended to pulse compounds with three frequency centers [31], and recently also to a number of \(J\) equally spaced frequency components [27; 32]. Further, two-color soliton microcomb states with similar structure were also observed in the framework of the Lugiato-Lefever equation [33; 34]. The underlying scheme is much more general and requires quasi group-velocity matching between different optical pulses. This can be achieved in different settings and can, e.g., already be found in an early work of Hasegawa [1], where a strong incoherent XPM interaction between different components of a multimode optical pulse has been considered. At this point, we would also like to emphasize that these pulse compounds are different from usual soliton molecules, which can be realized by dispersion engineering in the framework of a standard NSE [35], characterized by two pulses separated by a fixed temporal delay and stabilized by a phase relation between both pulses [36].

Here, we review the rich dynamical behavior of two-color pulse compounds, which consist of two group-velocity matched subpulses in distinct domains of anomalous dispersion, with frequency loci separated by a vast frequency gap. First, we will present paradigmatic propagation scenarios that demonstrate photonic meta-atoms, arising in the limiting case where the pulse compound consists of an intense trapping pulse, given by a soliton, and a weak trapped pulse. Then, we will address the case where both subpulses have similar amplitudes, so that their mutual XPM-induced confining action results in the formation of a narrow two-color soliton molecule. Finally, we show that non-stationary dynamics of the subpulses results in the emission of resonant radiation, and we show how the location of the newly generated frequencies depends on the \(z\)-periodic amplitude and width variations of the oscillating soliton molecule.

The article is organized as follows.
In Sec. 2 we discuss the propagation model used for our theoretical investigations of two-color meta-atoms and soliton molecules, and detail the numerical methods employed for their simulation and analysis. In Sec. 3 we demonstrate the ability of solitons to act as attractive potential wells that can host trapped states, and probe the stability of the resulting photonic meta-atoms with respect to a group-velocity mismatch between the trapping soliton and the trapped state. In Sec. 4 we derive a simplified model that yields simultaneous solutions for the subpulses that make up a two-color soliton molecule and show that these solutions entail the two-color soliton pairs derived in Ref. [30]. We perturb these pulse compounds by increasing their initial amplitude, which results in periodic amplitude and width oscillations, and triggers the generation of resonant multi-frequency radiation with a complex structure that can be precisely predicted theoretically. Section 5 concludes with a summary.

## 2 Model and methods

_Propagation model._ In order to study the propagation dynamics of nonlinear photonic meta-atoms and two-color soliton molecules, we consider a modified nonlinear Schrödinger equation (NSE) of the form
\[i\partial_{z}A=\left(\frac{\beta_{2}}{2}\partial_{t}^{2}-\frac{\beta_{4}}{24}\partial_{t}^{4}\right)A-\gamma|A|^{2}A, \tag{1}\]
describing the single-mode propagation of a complex-valued field \(A\equiv A(z,t)\) on a periodic temporal domain of extent \(T\) with the boundary condition \(A(z,-T/2)=A(z,T/2)\). The linear part of Eq. (1) includes higher orders of dispersion, with \(\beta_{2}>0\) (in units of fs\({}^{2}/\mu\)m) a positive-valued group-velocity dispersion coefficient, and \(\beta_{4}<0\) (fs\({}^{4}/\mu\)m) a negative-valued fourth-order dispersion coefficient. The nonlinear part of Eq. (1) includes a positive-valued scalar nonlinear coefficient \(\gamma\) (W\({}^{-1}/\mu\)m). Considering the discrete set of angular frequency detunings \(\Omega\in\frac{2\pi}{T}\mathbb{Z}\), the transform pair
\[A_{\Omega}(z)=\mathsf{F}[A(z,t)]\equiv\frac{1}{T}\int_{-T/2}^{T/2}A(z,t)\,e^{i\Omega t}\,\mathrm{d}t, \tag{2a}\]
\[A(z,t)=\mathsf{F}^{-1}[A_{\Omega}(z)]\equiv\sum_{\Omega}A_{\Omega}(z)\,e^{-i\Omega t}, \tag{2b}\]
specifies a Fourier transform [Eq. (2a)] and the corresponding inverse [Eq. (2b)], relating the field envelope \(A(z,t)\) to the spectral envelope \(A_{\Omega}(z)\).

_Propagation constant._ Using the identity \(\partial_{t}^{n}\,e^{-i\Omega t}=(-i\Omega)^{n}\,e^{-i\Omega t}\) of the spectral derivative,1 the frequency-domain representation of the propagation constant is given by the polynomial expression

Footnote 1: Let us note that the “\(-\)”-sign in the bracket on the right-hand side of the preceding identity reflects the sign choice of the plane-wave basis in Eqs. (2). This has to be taken into account when using scientific computing tools such as, e.g., Python’s scipy package [37; 38], where readily available routines for spectral derivatives exist that implement a different sign choice for the pair of Fourier transforms.
\[\beta(\Omega)=\frac{\beta_{2}}{2}\Omega^{2}+\frac{\beta_{4}}{24}\Omega^{4}.\] (3a) The frequency-dependent inverse group-velocity of a mode at detuning \(\Omega\) reads \[\beta_{1}(\Omega)\equiv\partial_{\Omega}\beta(\Omega)=\beta_{2}\Omega+\frac{ \beta_{4}}{6}\Omega^{3}, \tag{3b}\] with group-velocity (GV) \(v_{g}(\Omega)=1/\beta_{1}(\Omega)\), and the group-velocity dispersion (GVD) is given by \[\beta_{2}(\Omega)\equiv\partial_{\Omega}^{2}\beta(\Omega)=\beta_{2}+\frac{ \beta_{4}}{2}\Omega^{2}. \tag{3c}\] Subsequently, we use the parameter values \(\beta_{2}=1\) fs\({}^{2}/\mu\)m, and \(\beta_{4}=-1\) fs\({}^{4}/\mu\)m, resulting in the model dispersion characteristics shown in Fig. 1. For the nonlinear coefficient in Eq. (1) we use \(\gamma=1\) W\({}^{-1}/\mu\)m. As evident from Fig. 1(c), the GVD profile Eq. (3c) has a concave downward shape with two zero-dispersion points, defined by the condition \(\beta_{2}(\Omega)\stackrel{{!}}{{=}}0\), located at \(\Omega_{Z1,Z2}=\mp\sqrt{2\beta_{2}/|\beta_{4}|}=\mp\sqrt{2}\,\mathrm{rad/fs}\approx\mp 1.414\,\mathrm{rad/fs}\). It exhibits anomalous dispersion for \(\Omega<\Omega_{Z1}\) as well as for \(\Omega>\Omega_{Z2}\). The intermediate range \(\Omega_{Z1}<\Omega<\Omega_{Z2}\) exhibits normal dispersion. Inspecting the inverse group velocity shown in Fig. 1(b), it can be seen that two frequencies are GV matched to \(\Omega=0\). Due to the symmetry of the propagation constant, these are given by the pair \(\Omega_{1}=-\Omega_{2}=-\sqrt{6\beta_{2}/|\beta_{4}|}\approx-2.449\) rad\(/\)fs, uniquely characterized by \(\beta(\Omega_{1})=\beta(\Omega_{2})\) and indicated by the open and filled circles in Fig. 1. In fact, for the considered propagation constant, GV matching of three distinct modes can be realized as long as the frequency loci in AD1 and AD2 lie within the range of frequencies shaded in red in Fig. 1(b). Let us note that the type of GV matching for two optical pulses at vastly different center frequencies, supported by the propagation constant Eq. (3a), is methodologically different from the type of GV matching that supports quasi co-propagation of different modes with similar frequencies [1]. Nevertheless, both allow for quasi co-propagation of optical pulses under different circumstances, supporting similar XPM induced propagation effects. In our case, quasi group-velocity matched propagation of optical pulses across a vast frequency gap is possible, enabled by a tailored propagation constant with multiple zero-dispersion points. Further, the considered mechanism of GV matching differs from that in Ref. [39], wherein two pulses at the same central frequency but different polarization states were assumed to be launched in the anomalous dispersion regime of a hollow-core photonic crystal fiber filled with a noble gas. The mathematical structure of Eq. (1) and the above choice of parameters yields a very basic setting supporting the stable propagation of nonlinear photonic meta-atoms and two-color soliton molecules. In fact, the two-parameter GVD curve shown in Fig. 1(c) is a simplified model of the dispersion considered earlier in Ref. [24], wherein two-color soliton molecules were first demonstrated, and is similar to the setting considered in Ref. [26], wherein generalized dispersion Kerr solitons were described comprehensively.
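As an aside, the characteristic frequencies quoted above are easily verified numerically. The following short Python sketch (an illustration added for convenience; parameter values as stated in the text, function names our own) evaluates Eqs. (3) and locates the zero-dispersion points as well as group-velocity matched pairs of loci:

```python
import numpy as np
from scipy.optimize import brentq

# model parameters stated in the text
b2, b4 = 1.0, -1.0                                # beta_2 (fs^2/um), beta_4 (fs^4/um)

def beta(Om):  return b2/2*Om**2 + b4/24*Om**4    # propagation constant, Eq. (3a)
def beta1(Om): return b2*Om + b4/6*Om**3          # inverse group velocity, Eq. (3b)
def beta2(Om): return b2 + b4/2*Om**2             # group-velocity dispersion, Eq. (3c)

# zero-dispersion points, beta2(Om) = 0
Om_Z2 = np.sqrt(2*b2/abs(b4))                     # ~ 1.414 rad/fs (Om_Z1 = -Om_Z2)

# symmetric pair group-velocity matched to Om = 0, i.e. beta1(Om) = 0
Om2 = np.sqrt(6*b2/abs(b4))                       # ~ 2.449 rad/fs (Om_1 = -Om_2)
print(beta(Om2), beta1(Om2), beta2(Om2))          # 1.5 (1/um), 0.0, -2.0 (fs^2/um)

# GV-matched partner of an arbitrary locus in AD1 (used in Sect. 4)
Om1a = -2.674
Om2a = brentq(lambda Om: beta1(Om) - beta1(Om1a), Om_Z2, 3.0)
print(Om2a)                                       # ~ 2.134 rad/fs
```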
However, let us note that the phenomena reported below are not limited to the particular choice of the above parameters and persist even in the presence of perturbations such as pulse self-steepening [25; 30], which can be accounted for by replacing \(\gamma\to\gamma(\Omega)\) in the nonlinear part of Eq. (1), and - with some reservation - a self-frequency shift caused by the Raman effect [31]. _Propagation algorithm._ For our pulse propagation simulations in terms of Eq. (1), we employ the "Conservation quantity error" method (CQE) [40; 41]. It maintains an adaptive \(z\)-propagation stepsize \(h\), and uses a conservation law of the underlying propagation equation to guide stepsize selection. Specifically, we here use the relative error \[\delta_{E}(z)=\frac{|E(z+h)-E(z)|}{E(z)}, \tag{4}\] where \(E\) is the total energy, conserved by Eq. (1). Employing Parseval's identity for Eqs. (2) [42; 43], the total energy in the time and frequency domains is given by \[E(z)=\int_{-T/2}^{T/2}|A(z,t)|^{2}\ \mathrm{d}t=T\sum_{\Omega}|A_{\Omega}(z)|^{ 2}, \tag{5}\] with instantaneous power \(|A(z,t)|^{2}\) (W = J/s), and power spectrum \(|A_{\Omega}(z)|^{2}\) (W). The CQE method is designed to keep the relative error \(\delta_{E}\) within the goal error range \((0.1\,\delta_{\mathrm{G}},\delta_{\mathrm{G}})\), for a preset local goal error \(\delta_{\mathrm{G}}\) (throughout our numerical experiments we set \(\delta_{\mathrm{G}}=10^{-10}\)). This is accomplished by decreasing the stepsize \(h\) when necessary while increasing \(h\) when possible. To advance the field from position \(z\) to \(z+h\), the CQE uses the "Fourth-order Runge-Kutta in the interaction picture" (RK4IP) method [44]. The ability of the algorithm to increase or decrease the stepsize is most valuable when the propagation of an initial condition results in a rapid change of the pulse intensities over short propagation distances. Nevertheless, if one is willing to accept an increased running time resulting from an integration scheme with fixed stepsize, usual split-step Fourier methods [45; 43; 46] will work similarly well. Figure 1: Details of the frequency-dependent propagation constant supporting nonlinear-photonic meta-atoms and two-color soliton molecules. (a) Propagation constant, (b) inverse group velocity, and, (c) group-velocity dispersion. In (c), AD1 and AD2 label two distinct domains of anomalous dispersion, separated by an extended domain of normal dispersion (labeled ND). In (a-c), the domain of normal dispersion is shaded gray. Zero-dispersion points are labeled \(\Omega_{Z1}\) and \(\Omega_{Z2}\). In (b), the frequency range shaded in red allows for group-velocity matching of two modes with loci in AD1 and AD2. Open circle (labeled \(\Omega_{1}\)) and filled circle (labeled \(\Omega_{2}\)) indicate such a pair of group-velocity matched frequencies. _Spectrograms._ To assess the time-frequency interrelations within the field \(A(z,t)\) at a selected propagation distance \(z\), we use the spectrogram [47; 48; 49] \[P_{S}(t,\Omega;z)=\frac{1}{2\pi}\left|\int_{-T/2}^{T/2}A(z,t^{\prime})h(t^{ \prime}-t)e^{i\Omega t^{\prime}}\ \mathrm{d}t^{\prime}\right|^{2}. \tag{6}\] To localize the field in time, we use a hyperbolic-secant window function \(h(x)=\mathrm{sech}(x/\sigma)\) with width parameter \(\sigma\).
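For illustration, the stepping logic described above can be condensed into a few lines of Python. The sketch below pairs the RK4IP single step of Ref. [44] with the energy-based error of Eqs. (4) and (5); the simple halve/double heuristic and all names are our own shorthand, not the reference implementation of Refs. [40; 41]. A minimal realization of the spectrogram of Eq. (6) is appended.

```python
import numpy as np

# time grid and frequency-domain propagation constant for Eq. (1)
b2, b4, gam = 1.0, -1.0, 1.0
T, Nt = 1000.0, 2**13
t = np.linspace(-T/2, T/2, Nt, endpoint=False)
dt = t[1] - t[0]
Om = 2*np.pi*np.fft.fftfreq(Nt, d=dt)
bOm = b2/2*Om**2 + b4/24*Om**4

def energy(A):
    return dt*np.sum(np.abs(A)**2)        # total energy, Eq. (5)

def L_half(A, h):
    # linear substep over h/2; beta(Om) is even in Om, so numpy's FFT
    # sign convention does not affect this step
    return np.fft.ifft(np.exp(1j*bOm*h/2)*np.fft.fft(A))

def N_op(A):
    return 1j*gam*np.abs(A)**2*A          # nonlinear part of Eq. (1)

def rk4ip_step(A, h):
    # single step of the RK4IP method [44]
    AI = L_half(A, h)
    k1 = L_half(h*N_op(A), h)
    k2 = h*N_op(AI + k1/2)
    k3 = h*N_op(AI + k2/2)
    k4 = h*N_op(L_half(AI + k3, h))
    return L_half(AI + k1/6 + k2/3 + k3/3, h) + k4/6

def propagate(A, z_max, h=0.1, dG=1e-10):
    # energy-guided stepsize control in the spirit of the CQE method
    z = 0.0
    while z < z_max:
        h = min(h, z_max - z)
        A_try = rk4ip_step(A, h)
        dE = abs(energy(A_try) - energy(A))/energy(A)   # Eq. (4)
        if dE > dG:
            h /= 2                        # reject the step, retry with smaller h
            continue
        A, z = A_try, z + h
        if dE < 0.1*dG:
            h *= 2                        # error comfortably small: enlarge h
    return A

def spectrogram(A, sigma=30.0, stride=16):
    # spectrogram of Eq. (6) with a sech window h(x) = sech(x/sigma)
    td = t[::stride]
    P = np.empty((Nt, td.size))
    for j, tj in enumerate(td):
        w = A/np.cosh((t - tj)/sigma)
        P[:, j] = np.abs(dt*Nt*np.fft.ifft(w))**2/(2*np.pi)
    return td, np.fft.fftshift(Om), np.fft.fftshift(P, axes=0)
```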
_Incoherently coupled pulse pairs._ To facilitate a simplified description of two-color pulse compounds in the form \[A(z,t)=A_{1}(z,t)\,e^{-i\Omega_{1}t}+A_{2}(z,t)\,e^{-i\Omega_{2}t}, \tag{7}\] in which two quasi group-velocity-matched subpulses \(A_{1}\equiv A_{1}(z,t)\) and \(A_{2}\equiv A_{2}(z,t)\) exist, separated by the frequency gap \(\Omega_{\mathrm{gap}}=|\Omega_{2}-\Omega_{1}|\), it is convenient to consider the two coupled nonlinear Schrodinger equations (CNSEs) [6; 10; 11] \[i\partial_{z}\,A_{1}+\beta^{\prime}_{0}\,A_{1}-i\beta^{\prime }_{1}\partial_{t}\,A_{1}-\frac{\beta^{\prime}_{2}}{2}\partial^{2}_{t}\,A_{1} +\gamma^{\prime}\left(|A_{1}|^{2}+2|A_{2}|^{2}\right)A_{1}=0, \tag{8a}\] \[i\partial_{z}\,A_{2}+\beta^{\prime\prime}_{0}\,A_{2}-i\beta^{ \prime\prime}_{1}\partial_{t}\,A_{2}-\frac{\beta^{\prime\prime}_{2}}{2} \partial^{2}_{t}\,A_{2}+\gamma^{\prime\prime}\left(|A_{2}|^{2}+2|A_{1}|^{2} \right)A_{2}=0. \tag{8b}\] The parameters in Eqs. (8) are related to Eqs. (3) through \(\beta^{\prime}_{0}=\beta(\Omega_{1})\), \(\beta^{\prime\prime}_{0}=\beta(\Omega_{2})\), \(\beta^{\prime}_{1}=\beta_{1}(\Omega_{1})\), \(\beta^{\prime\prime}_{1}=\beta_{1}(\Omega_{2})\), \(\beta^{\prime}_{2}=\beta_{2}(\Omega_{1})\), \(\beta^{\prime\prime}_{2}=\beta_{2}(\Omega_{2})\), and, \(\gamma^{\prime}=\gamma^{\prime\prime}=\gamma\). The mismatch of inverse GV for both subpulses is given by \(\Delta\beta_{1}\equiv|\beta^{\prime\prime}_{1}-\beta^{\prime}_{1}|\). For specific choices of the detunings \(\Omega_{1}\) and \(\Omega_{2}\), exact GV matching, signaled by \(\Delta\beta_{1}=0\), can be achieved. In contrast to Eq. (1), the incoherently coupled Eqs. (8) neglect higher-orders of dispersion within their linear parts, as well as rapidly varying four-wave-mixing terms within their nonlinear parts. The mutual interaction of both subpulses is taken into account via XPM. As evident from Eq. (8a), pulse \(A_{1}\) can be viewed as being exposed to a total potential field of the form \(V_{1}\equiv\gamma^{\prime}(|A_{1}|^{2}+2|A_{2}|^{2})\), entailing the effects of SPM and XPM. Likewise, \(A_{2}\) is exposed to the potential field \(V_{2}\equiv\gamma^{\prime\prime}(|A_{2}|^{2}+2|A_{1}|^{2})\). As we will show in Sects. 3 and 4, the potential fields \(V_{1}\) and \(V_{2}\) yield attractive potentials that enable the mutual trapping of both subpulses. Subsequently we take \(\Omega_{1}\) and \(\Omega_{2}\) as indicated in Fig. 1, so that the above parameters are given by \(\beta^{\prime}_{0}=\beta^{\prime\prime}_{0}=1.5\ \mu\mathrm{m}^{-1}\), \(\beta^{\prime}_{1}=\beta^{\prime\prime}_{1}=0,\beta^{\prime}_{2}=\beta^{\prime \prime}_{2}=-2\ \mathrm{fs}^{2}/\mu\mathrm{m}\), and, \(\gamma^{\prime}=\gamma^{\prime\prime}=1\ \mathrm{W}^{-1}/\mu\mathrm{m}\). For a more general description of simultaneous solutions in the form of Eq. (7), we will continue to refer to the nonlinear coefficients in Eqs. (8) as \(\gamma^{\prime}\) [Eq. (8a)] and \(\gamma^{\prime\prime}\) [Eq. (8b)]. In addition, the scalar factors \(\beta^{\prime}_{0}=\beta^{\prime\prime}_{0}\equiv\beta_{0}\) can be removed by a common linear transformation \(A_{1,2}\to A_{1,2}e^{i\beta_{0}z}\), which does not affect the \(z\)-propagation dynamics of the interacting pulses. Let us note that, in general, higher-orders of dispersion within a modified NSE can cause a solitary wave to shed resonant radiation [50], and can result in a modification of its group-velocity [50; 51]. These types of perturbations are neglected by Eqs.
(8), which can be justified in the limit where the subpulse separation \(\Omega_{\mathrm{gap}}\) is large and their spectra are sufficiently narrow. Moreover, in the case of a frequency dependent coefficient function \(\gamma(\Omega)\), one sets \(\gamma^{\prime}=\gamma(\Omega_{1})\) and \(\gamma^{\prime\prime}=\gamma(\Omega_{2})\) in Eqs. (8). Let us point out that, in the presence of a linear variation of \(\gamma\), a solitary wave exhibits a further modification of its group-velocity [52], an effect neglected by Eqs. (8). It is important to bear these perturbation effects in mind when comparing results based on Eqs. (8) to numerical simulations in terms of the full model Eq. (1). We can relate the above trapping mechanism for two-color pulse compounds to the mechanism enabling the self-confinement of a multimode optical pulse in a multimode fiber, discussed by Hasegawa as early as 1980 [1]. Therein, Hasegawa considered a propagation equation of the nonlinear Schrodinger type for a multimodal pulse, where the nonlinear change of the refractive index, felt by an individual mode, depends on the total intensity of the multimodal pulse. This results in coupled equations for the different modes, wherein an individual mode perceives the intensity of the total pulse as a potential field. If the considered mode is subject to anomalous dispersion, the potential is attractive. Based on the expectation that if the velocity mismatch between a given mode and the potential is smaller than the escape velocity, the potential has the ability to trap the mode, he derived a condition for self-confinement of the multimode pulse. While the results in Ref. [1] are valid for multimodal optical pulses composed of possibly many modes, the simplified modeling approach given by Eqs. (8) considers only two subpulses. Meanwhile, an extension of the above approach to pulse compounds with three and more subpulses has been accomplished [31; 53]. Given the ansatz for two-color pulse compounds in the form of Eq. (7), initial conditions \(A_{0}(t)\equiv A(z=0,t)\) that specify nonlinear photonic meta-atoms and two-color soliton molecules in terms of the subpulses \(A_{1}\) and \(A_{2}\) are different in some respects and are discussed separately in Sects. 3 and 4. Subsequently, we demonstrate the self-consistent \(z\)-propagation dynamics of these pulse compounds, originally reported in Refs. [24; 26; 30; 54; 55], as well as their breakup in response to sufficiently large GV mismatches between both subpulses, originally reported in Ref. [25], in terms of numerical simulations governed by the full model Eq. (1). These numerical results demonstrate several theoretical findings reported by Hasegawa [1], applied to the concept of two-color pulse compounds. In passing, let us stress that coupled equations of the form of Eqs. (8) comprise a much-used theoretical instrument for studying mutually bound solitons [56; 57; 58; 59; 3; 60; 61; 62]. ## 3 Nonlinear-photonics meta-atoms _Description of stationary trapped states._ Subsequently we look for stationary solutions in the form of Eq. (7) under the additional constraint \(\max(|A_{2}|)\ll\max(|A_{1}|)\). This allows us to decouple Eqs. (8) and enables direct optical analogues of quantum mechanical bound-states [63; 24; 54]. Therefore, we assume the resulting two-color pulse compounds to consist of a strong trapping pulse, given by a solitary wave (S) at detuning \(\Omega_{\mathrm{S}}\equiv\Omega_{1}\), and a weak trapped pulse (TR) at detuning \(\Omega_{\mathrm{TR}}\equiv\Omega_{2}\).
For the solitary wave part of the total pulse we neglect the XPM contribution in the nonlinear part of Eq. (8a) and assume \[A_{1}(z,t)=U_{\mathrm{S}}(t)\,e^{i\kappa^{\prime}z},\quad\text{with}\quad U_{ \mathrm{S}}(t)=\sqrt{P_{0}}\,\text{sech}\left(\frac{t}{t_{0}}\right), \tag{9}\] wherein \(P_{0}=|\beta_{2}^{\prime}|/(\gamma^{\prime}t_{0}^{2})\), and \(\kappa^{\prime}=\beta_{0}+\gamma^{\prime}P_{0}/2\). Neglecting the SPM contribution in the nonlinear part of Eq. (8b) and making the ansatz \[A_{2}(z,t)=\phi(t)\,e^{i\kappa^{\prime\prime}z}, \tag{10}\] the envelope \(\phi(t)\) of a weak stationary trapped state is determined by the Schrodinger type eigenvalue problem \[\left(-\frac{|\beta_{2}^{\prime\prime}|}{2}\frac{\mathrm{d}^{2}}{\mathrm{d}t^{ 2}}+V_{\mathrm{S}}(t)\right)\,\phi_{n}(t)=\kappa_{n}\,\phi_{n}(t). \tag{11}\] Therein, the solitary wave enters as a stationary attractive potential well \(V_{\mathrm{S}}(t)=-2\gamma^{\prime\prime}P_{0}\,\text{sech}^{2}(t/t_{0})\). Hence, as pointed out above and discussed in the context of multimode optical pulses in glass fibers in Ref. [1], a weak pulse can be attracted by the intensity of the entire pulse if it exists in a domain of anomalous dispersion. Due to \(\beta_{2}^{\prime\prime}<0\), this condition is met in the considered case. In analogy to the \(\text{sech}^{2}\)-potential in one-dimensional quantum scattering theory we may equivalently write the solitary-wave induced potential as [63] \[V_{\mathrm{S}}(t)=-\nu\left(\nu+1\right)\frac{|\beta_{2}^{\prime\prime}|}{2t_{ 0}^{2}}\,\text{sech}^{2}\left(\frac{t}{t_{0}}\right),\quad\text{with}\quad\nu =-\frac{1}{2}+\left(\frac{1}{4}+4\left|\frac{\gamma^{\prime\prime}}{\gamma^{ \prime}}\frac{\beta_{2}^{\prime}}{\beta_{2}^{\prime\prime}}\right|\right)^{1/2}. \tag{12}\] Moreover, due to the particular shape of the trapping potential, the eigenvalue problem Eq. (11) can even be solved exactly [63; 64]. The number of trapped states of the potential in Eq. (12) is given by \(N_{\mathrm{TR}}=\lfloor\nu\rfloor+1\), where \(\lfloor\nu\rfloor\) is the integer part of the strength-parameter \(\nu\). From the analogy to the quantum mechanical scattering problem [64], the real-valued wavenumber eigenvalues can directly be stated as \[\kappa_{n}=-\frac{|\beta_{2}^{\prime\prime}|}{2t_{0}^{2}}\,(\nu-n)^{2},\quad \text{for}\quad n=0,\ldots,\lfloor\nu\rfloor. \tag{13}\] For a given value of \(n\), they are related to Eq. (10) through \(\kappa^{\prime\prime}=\beta_{0}-\kappa_{n}\). To each eigenvalue corresponds an eigenfunction \(\phi_{n}\) with \(n\) zeros, specifying the \((n+1)\)-th fundamental solution of the eigenvalue problem Eq. (11). These solutions constitute the weak trapped states of the potential \(V_{\rm S}\). Referring to the Gaussian hypergeometric function as \({}_{2}F_{1}\)[65], and abbreviating \(a_{n}=\frac{1}{2}(1+n)\) and \(b_{n}=\frac{1}{2}(2\nu+1-n)\), they can be stated in closed form as [64] \[\phi_{n}(t)=\begin{cases}\cosh^{\nu+1}\left(\frac{t}{t_{0}}\right)\ _{2}F_{1}\left[a_{n},b_{n};\frac{1}{2};-\sinh^{2}\left(\frac{t}{t_{0}}\right) \right],&\text{for even $n$,}\\ \cosh^{\nu+1}\left(\frac{t}{t_{0}}\right)\ \sinh\left(\frac{t}{t_{0}}\right)\ _{2}F_{1}\left[a_{n}+\frac{1}{2},b_{n}+\frac{1}{2}; \frac{3}{2};-\sinh^{2}\left(\frac{t}{t_{0}}\right)\right],&\text{for odd $n$.}\end{cases} \tag{14}\] Let us note that, as evident from the potential strength parameter \(\nu\) in Eq.
(12), the number \(N_{\rm TR}\) of trapped states is uniquely defined by the four parameters \(\beta_{2}^{\prime},\beta_{2}^{\prime\prime}\), \(\gamma^{\prime}\), and \(\gamma^{\prime\prime}\). It is not affected by the duration \(t_{0}\) of the trapping potential, which, according to Eq. (13), codetermines the value of the wavenumber eigenvalue of a fundamental solution. _Analogy to quantum mechanics._ The eigenvalue problem Eq. (11) suggests an analogy to quantum mechanics, wherein a fundamental solution \(\phi_{n}\) represents the wavefunction of a fictitious particle of mass \(m=|\beta_{2}^{\prime\prime}|^{-1}\), confined to a localized, \(\text{sech}^{2}\)-shaped trapping potential \(V_{\rm S}\). The discrete variable \(n=0,\dots,\lfloor\nu\rfloor\) resembles a principal quantum number that labels solutions with distinct wavenumbers, and the number of trapped states \(N_{\rm TR}\) is similar to an atomic number. Consequently, a bare soliton, with none of its trapped states occupied, resembles the nucleus of a one-dimensional atom. By this analogy, a soliton along with its trapped states represents a nonlinear-photonics meta-atom. Figure 2: Solitary-wave induced potential well exhibiting two trapped states. (a) Trapping potential \(V_{\rm S}\), wavenumber eigenvalues \(\kappa_{n}\), and squared magnitude \(|\phi_{n}|^{2}\) of trapped state eigenfunctions for \(n=0,1\). (b) Dispersion profile \(D_{\rm TR}(\Omega)\) in the vicinity of the trapped state center frequency \(\Omega_{\rm TR}\). (c) Time-domain propagation dynamics of the soliton and its lowest lying trapped state for \(n=0\). The propagation distance is scaled by the soliton period \(z_{0}=(\pi/2)(t_{0}^{2}/|\beta_{2}^{\prime}|)\approx 50\ \mu\)m. (d) Corresponding spectrum. The inverse Fourier transform of the part of the spectrum enclosed by the box (labeled A) in (d) is shown in the box (labeled A) in (c), providing a filtered view of the trapped state while leaving out the soliton part of the total pulse. (e) Spectrogram of the total pulse at \(z/z_{0}=45\) for \(\sigma=8\) fs. (f,g,h) Same as (c,d,e) for the trapped state with \(n=1\). (i,j,k) Same as (c,d,e) for a superposition of both trapped states. Movies of the propagation dynamics are provided as supplementary material under Ref. [66]. ### Stable propagation of trapped states Subsequently, we discuss the propagation dynamics of a nonlinear-photonics meta-atom with the ability to host two trapped states. More precisely, we consider an example for \(\Omega_{\mathrm{S}}=-2.449\,\mathrm{rad}/\mathrm{fs}\) and \(t_{0}=8\,\mathrm{fs}\), with \(\Omega_{\mathrm{TR}}=2.449\,\mathrm{rad}/\mathrm{fs}\) and \(\nu\approx 1.562\). The resulting trapping potential and both its trapped states are shown in Fig. 2(a). In this case, the wavenumber eigenvalues are \((\kappa_{0},\,\kappa_{1})=(-0.0382,-0.0050)\,\mu\mathrm{m}^{-1}\), and the corresponding fundamental solutions take the simple form \[\phi_{0}(t) =\mathrm{sech}^{\nu}\left(\frac{t}{t_{0}}\right),\quad\mathrm{and}, \tag{15a}\] \[\phi_{1}(t) =\mathrm{sech}^{\nu-1}\left(\frac{t}{t_{0}}\right)\tanh\left( \frac{t}{t_{0}}\right). \tag{15b}\] As evident in Fig. 2(b), in the vicinity of \(\Omega_{\mathrm{TR}}\) and due to \(\kappa^{\prime\prime}>0\) [Eq. (10)], a finite wavenumber-gap separates each trapped state from linear waves bound to the dispersion curve \(D_{\mathrm{TR}}(\Omega)\equiv\beta(\Omega)-\beta(\Omega_{\mathrm{TR}})-\beta _{1}(\Omega_{\mathrm{TR}})(\Omega-\Omega_{\mathrm{TR}})<0\).
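The numbers quoted above follow directly from Eqs. (12)-(15) and can be reproduced with a few lines of Python (an illustrative sketch; parameter values as stated in the text, variable names our own):

```python
import numpy as np

# parameters of the symmetric GV-matched pair considered in the text
b2p, b2pp = -2.0, -2.0        # beta_2', beta_2'' (fs^2/um)
gp, gpp = 1.0, 1.0            # gamma', gamma'' (1/(W um))
t0 = 8.0                      # duration of the trapping soliton (fs)

# potential strength parameter, Eq. (12), and number of trapped states
nu = -0.5 + np.sqrt(0.25 + 4*abs((gpp*b2p)/(gp*b2pp)))
N_TR = int(np.floor(nu)) + 1
# wavenumber eigenvalues, Eq. (13)
kappa = [-abs(b2pp)/(2*t0**2)*(nu - n)**2 for n in range(N_TR)]
print(nu, N_TR, kappa)        # 1.5616..., 2, [-0.0381, -0.0049] (1/um)

# closed-form eigenfunctions of both trapped states, Eqs. (15)
tt = np.linspace(-10*t0, 10*t0, 2001)
phi0 = 1/np.cosh(tt/t0)**nu
phi1 = np.tanh(tt/t0)/np.cosh(tt/t0)**(nu - 1)
```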
Owing to this wavenumber gap, we expect that trapped states given by Eqs. (15) propagate in a stable manner. For the lowest lying trapped state, having order \(n=0\), this is demonstrated in Figs. 2(c,d). These figures summarize pulse propagation simulations in terms of the modified NSE (1), using an initial condition of the form of Eq. (7) with \(A_{1}\) as in Eq. (9), and \(A_{2}\) as in Eq. (10) with \(\phi(t)=\sqrt{10^{-7}P_{0}}\,\phi_{0}(t)\). In the time-domain propagation dynamics, shown in Fig. 2(c), a small drift of the soliton, caused by higher orders of dispersion at \(\Omega_{\mathrm{S}}\) [see Fig. 1], is accounted for by shifting to a moving frame of reference with time coordinate \(\tau=t-\tilde{\beta}_{1}z\) and \(\tilde{\beta}_{1}=0.00637\,\mathrm{fs}/\mu\mathrm{m}\). In Fig. 2(d), the vast frequency gap between the soliton and the trapped state is clearly visible. By means of an inverse Fourier transform of the frequency components belonging to the trapped state [box labeled A in Fig. 2(d)], an unhindered "filtered view" of the time-domain propagation dynamics of the trapped state is possible [box labeled A in Fig. 2(c)]. A spectrogram, providing a time-frequency view of the field at \(z/z_{0}=45\), is shown in Fig. 2(e). The stable propagation of a trapped state with \(n=1\) for \(\phi(t)=\sqrt{10^{-7}P_{0}}\,\phi_{1}(t)\), is detailed in Figs. 2(f-h). Finally, the simultaneous propagation of a superposition of both trapped states in the form \(\phi(t)=\sqrt{10^{-7}P_{0}}\,[\phi_{0}(t)+5\phi_{1}(t)]\) is shown in Figs. 2(i-k). The \(z\)-periodicity of the beating pattern visible in the time-domain propagation dynamics in Fig. 2(i), is a result of the different wavenumber eigenvalues of the trapped states, and is determined by \(z_{\mathrm{p}}=2\pi/|\kappa_{1}-\kappa_{0}|\approx 189\)\(\mu\)m (\(z_{\mathrm{p}}/z_{0}\approx 3.8\)). Thus, the coherent superposition of trapped states exhibits Rabi-type oscillations, similar to bound state dependent revival times in the quantum recurrence of wave packets [67; 68]. Let us note that, bearing in mind that the number of bound states \(N_{\mathrm{TR}}\) is determined by the potential strength parameter \(\nu\) in Eq. (12), a setup with a different number of bound states can be obtained as well. This is possible by fixing \(\Omega_{\mathrm{S}}\) at some other feasible value, resulting in a different group-velocity matched detuning \(\Omega_{\mathrm{TR}}\), implying different values of the parameters \(\beta_{2}^{\prime},\beta_{2}^{\prime\prime}\), \(\gamma^{\prime}\), and \(\gamma^{\prime\prime}\). For example, keeping \(t_{0}=8\,\mathrm{fs}\) but choosing \(\Omega_{\mathrm{S}}=-2.75\,\mathrm{rad}/\mathrm{fs}\) yields \(\nu\approx 3.1\), resulting in a potential well with the ability to host \(N_{\mathrm{TR}}=4\) trapped states. In such a case, however, phase-matched transfer of energy from the trapped states to dispersive waves within the domain of normal dispersion can be efficient [69]. ### Trapping-to-escape transition caused by a group-velocity mismatch In the context of multimodal pulses in glass fibers in Ref. [1], the attraction of a wave packet by a potential well, created by the total pulse, was illustrated in terms of the kinetic equations of a fictitious particle associated with the wave packet. From a classical mechanics point of view, in order to ensure trapping of the wave packet by the total pulse, the velocity mismatch between the particle and the potential needs to be smaller than the escape velocity of the potential.
Based on this view, and for a given velocity mismatch, the critical value of the total pulse intensity, required to achieve self-confinement, was determined [1]. In the presented work, pulse propagation simulations, such as those reported in Fig. 2, comprise a complementary approach to study the considered XPM induced attraction effect. Specifically, by keeping the detuning of the soliton fixed at \(\Omega_{\mathrm{S}}=\Omega_{1}\), but shifting the detuning of the trapped pulse to \(\Omega_{\mathrm{TR}}=\Omega_{2}+\Delta\Omega\), we can enforce a group-velocity mismatch between both pulses and probe the stability of the meta-atom. For \(\Delta\Omega>0\) we have \(\beta_{1}(\Omega_{\mathrm{S}})>\beta_{1}(\Omega_{\mathrm{TR}})\), see Fig. 1(b). Thus, in a reference frame in which the soliton is stationary, the trapped state will initially have the propensity to move towards smaller times. This is demonstrated in Figs. 3(a,b) for the center frequency shift \(\Delta\Omega=0.05\) rad\(/\)fs. To assess the fraction of energy of the trapped state that is retained within the soliton induced potential well, we consider the quantity \[e_{\rm TR}(z)\equiv\frac{E_{\rm TR}(z)}{E_{\rm TR}(0)},\quad\text{with}\quad E_{ \rm TR}(z)=\int_{-10\,t_{0}}^{10\,t_{0}}\left|\phi(z,\tau)\right|^{2}\,\mathrm{ d}\tau. \tag{16}\] As evident from Fig. 3(e), at \(\Delta\Omega=0.05\) rad\(/\)fs, the trapped state is kept almost entirely within the well, i.e. \(e_{\rm TR}\approx 1\). In contrast, at \(\Delta\Omega=0.25\) rad\(/\)fs, a major share of the trapped pulse escapes the well during the initial propagation stage [Figs. 3(c,d)], indicated by the small value \(e_{\rm TR}\approx 0.3\) [Fig. 3(e)]. Let us note that, when viewing the considered pulse compounds as meta-atoms, the quantity \(1-e_{\rm TR}(z)\) specifies the fraction of trapped energy that is radiated away, resembling an ionization probability for quantum mechanical atoms. A parameter study, detailing the dependence of \(e_{\rm TR}\) as function of the center frequency shift \(\Delta\Omega\), is summarized in Fig. 3(f). The transition from trapping to escape can be supplemented by an entirely classical picture similar as in Ref. [1]: from a classical point of view we might expect that a particle, initially located at the center of the well, remains confined to the well if its "classical" kinetic energy \(T_{\rm kin}=\frac{1}{2}m\Delta\beta_{1}^{2}=\frac{1}{2}|\beta_{2}^{\prime\prime }|^{-1}\left[\beta_{1}(\Omega_{\rm S})-\beta_{1}(\Omega_{\rm TR})\right]^{2}\) does not exceed the well depth \(V_{0}=2\gamma^{\prime\prime}P_{0}\). As evident from Fig. 3, the findings based on this classical picture complement the results obtained in terms of direct simulations of the modified NSE (1) very well. The above results clearly demonstrate the limits of stability of nonlinear photonics meta-atoms with respect to a group-velocity mismatch between the trapping soliton and the trapped state. These findings are consistent with our previous results on the break-up dynamics of two-color pulse compounds [25]. ## 4 Two-color soliton molecules _Seeding of tightly bound two-color pulse compounds._ When considering initial conditions of the form of Eq. (7), with \(A_{1}\) a fundamental nonlinear Schrodinger soliton as in Eq. (9), and \(A_{2}\) a trapped state as in Eq. (10) with \(\phi(t)=r\sqrt{P_{0}}\,\phi_{0}(t)\), the XPM contribution of the weak trapped pulse onto the trapping soliton can be heightened by increasing the parameter \(r\). Figure 3: Characterization of the transition from trapping to escape.
(a) Time-domain propagation dynamics of the soliton and its lowest lying trapped state (\(n=0\)) shifted from \(\Omega_{\rm TR}\) to \(\Omega_{\rm TR}+\Delta\Omega\) for \(\Delta\Omega=0.05\) (rad\(/\)fs). (b) Corresponding spectrum. The inverse Fourier transform of the part of the spectrum enclosed by the box (labeled A) in (b) is shown in the box (labeled A) in (a). This provides a filtered view of the trapped state with the benefit of leaving out the soliton part of the total pulse. (c,d) Same as (a,b) for \(\Delta\Omega=0.25\) (rad\(/\)fs). (e) Fraction of trapped energy as function of the propagation distance. (f) Fraction of trapped energy as function of the trapped state center frequency shift. Secondary ordinate shows the potential depth (\(V_{0}\)) as well as the kinetic energy \(T_{\rm kin}\) of the fictitious classical particle. Parameter range in which the particle cannot escape the well is shaded gray. Movies of the propagation dynamics shown in (a-d) are provided as supplementary material under Ref. [66]. This is demonstrated in Figs. 4(a-c), where pulse propagation simulations in terms of the modified NSE (1) are shown for different values of \(r\), significantly larger than those considered in the preceding section. Especially for larger values of \(r\) [Figs. 4(b,c)], the intensity exhibits the following dynamics: the mutual confining action of XPM results in a contraction of both subpulses, prompting the formation of a narrow localized pulse compound. A similar effect has previously been suggested by Hasegawa for multimode optical pulses in glass fibers in Ref. [1], where he writes "[...] as many modes are trapped, the peak intensity of the packet increases quite analogously to a gravitational instability, resulting in a further contraction of the packet." (Ref. [1], p. 417). The results shown in Figs. 4(a-c) demonstrate this effect in the context of two-color pulse compounds in nonlinear fibers or waveguides with two zero-dispersion points. Let us note that, for \(r\approx 1\), initial conditions as pointed out above directly generate tightly bound, mutually confined two-color pulse compounds. They are accompanied by radiation, emanating from the localized state upon propagation, and can exhibit internal dynamics reminiscent of molecular vibrations [24; 25; 70; 31]. However, such a seeding procedure generates two-color pulse compounds in a largely uncontrolled manner. For completeness, we have observed the formation of similar localized pulse compounds when taking trapped state initial conditions of the form \(\phi(t)=r\,\sqrt{P_{0}}\,\phi_{1}(t)\) for large enough \(r\). _Simultaneous solutions of the coupled equations._ We can go beyond the above seeding approach by directly searching for simultaneous solitary-wave solutions of the coupled nonlinear Eqs. (8) beyond the linear limit discussed in Sect. 3. Substituting an ansatz for two subpulses, labeled \(m=1,2\), in the form of \[A_{m}(z,t)=U_{m}(t)\,e^{i(\beta_{0}+\kappa_{m})z},\quad\text{with}\quad m =1,2 \tag{17}\] into Eqs.
(8), yields two coupled ordinary differential equations (ODEs) of second order \[\ddot{U}_{1}-\frac{2}{\beta_{2}^{\prime}}\left[\gamma^{\prime} \left(|U_{1}|^{2}+2|U_{2}|^{2}\right)-\kappa_{1}\right]U_{1}=0, \tag{18a}\] \[\ddot{U}_{2}-\frac{2}{\beta_{2}^{\prime\prime}}\left[\gamma^{ \prime\prime}\left(|U_{2}|^{2}+2|U_{1}|^{2}\right)-\kappa_{2}\right]U_{2}=0, \tag{18b}\] for two real-valued envelopes \(U_{m}\equiv U_{m}(t)\), \(m=1,2\), with double dots denoting second derivatives with respect to time. Under suitable conditions, solitary-wave solutions for the coupled nonlinear Eqs. (18) can be specified analytically [71; 72; 60; 73; 30]. Figure 4: Transition from trapping to tightly bound, molecule-like two-color pulse compounds. (a-c) Time-domain propagation dynamics arising from an initial condition of the form \(\phi(t)=r\sqrt{P_{0}}\,\phi_{0}(t)\) (see text). (a) Trapped state for amplitude parameter \(r=0.3\), (b) \(r=0.7\), and, (c) \(r=1\). (d-f) Solutions of the coupled ODEs (18), fitted to functions of the form \(U_{m}=U_{0,m}\,\text{sech}^{\nu_{m}}(t/t_{m})\), for \(m=1,2\). (d) Scaled pulse amplitudes \(u_{m}=U_{0,m}/\sqrt{P_{0}}\), (e) pulse durations \(t_{m}\), and, (f) pulse shape exponents \(\nu_{m}\), \(m=1,2\). In (d), \(\tilde{u}_{2}\) indicates the peak amplitude of a fundamental nonlinear Schrödinger soliton with wavenumber \(\kappa_{2}\). Approximate solutions based on parameterized trial functions can be found, e.g., in terms of a variational approach [74]. In order to obtain simultaneous solutions \(U_{1}(t)\) and \(U_{2}(t)\) under more general conditions, Eqs. (18) need to be solved numerically. This can be achieved, e.g., by spectral renormalization methods [75; 76; 77; 78], shooting methods [8; 9], squared operator methods [79], conjugate gradient methods [80; 81], \(z\)-propagation adapted imaginary-time evolution methods [82; 83], or Newton-type methods [84]. Here, in order to solve for simultaneous solutions of the ODEs (18), we employ a Newton method that is based on a boundary value Runge-Kutta algorithm [85]. So as to systematically obtain solutions \(U_{1}(t)\) and \(U_{2}(t)\), we keep five of the six parameters that enter Eqs. (18) fixed. To this end, we set \(\beta_{2}^{\prime}\), \(\beta_{2}^{\prime\prime}\), \(\gamma^{\prime}\), and \(\gamma^{\prime\prime}\) to the values considered throughout the preceding section, and preset the wavenumber \(\kappa_{1}=|\beta_{2}^{\prime}|(2t_{0}^{2})^{-1}\approx 0.0156\ \mu\)m\({}^{-1}\) of a fundamental nonlinear Schrodinger soliton with \(t_{0}=8\) fs in Eq. (18a). We then sweep the remaining parameter \(\kappa_{2}\) over the wavenumber range \((0.002,0.05)\ \mu\)m\({}^{-1}\), enclosing the value of \(\kappa_{1}\). We start the parameter sweep at \(\kappa_{2}=0.05\ \mu\)m\({}^{-1}\), which vastly exceeds the wavenumber eigenvalue of the lowest lying trapped state solution at \(0.0382\ \mu\)m\({}^{-1}\). Above this value, we expect \(U_{2}\) to vanish, and \(U_{1}\) to yield a fundamental soliton \(U_{1}(t)=\sqrt{P_{0}}\ \text{sech}(t/t_{0})\) with \(P_{0}=|\beta_{2}^{\prime}|(\gamma^{\prime}\ t_{0}^{2})^{-1}\). We set initial trial functions for \(U_{1}\) and \(U_{2}\) with parity similar to the soliton and the lowest lying trapped state, and continue the obtained solutions to smaller values of \(\kappa_{2}\). The results of this parameter sweep are summarized in Figs. 4(d-f).
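We do not reproduce the boundary-value Runge-Kutta based Newton scheme of Ref. [85] here; for readers who wish to experiment, the collocation-based routine solve_bvp of Python's scipy package provides a convenient substitute. The sketch below (grid, initial guess and names are our own choices, exploiting the even parity of the sought solutions) computes a single point of the sweep; convergence to a nontrivial solution depends on the quality of the guess, which is why the continuation strategy described above is useful:

```python
import numpy as np
from scipy.integrate import solve_bvp

b2p = b2pp = -2.0; gp = gpp = 1.0; t0 = 8.0
kap1 = abs(b2p)/(2*t0**2)            # preset soliton wavenumber, ~0.0156 1/um

def rhs(t, y, kap2):
    U1, dU1, U2, dU2 = y
    ddU1 = (2/b2p)*(gp*(U1**2 + 2*U2**2) - kap1)*U1      # Eq. (18a)
    ddU2 = (2/b2pp)*(gpp*(U2**2 + 2*U1**2) - kap2)*U2    # Eq. (18b)
    return np.vstack([dU1, ddU1, dU2, ddU2])

def bc(ya, yb):
    # even solutions: zero slope at t = 0, decay at the far boundary
    return np.array([ya[1], ya[3], yb[0], yb[2]])

kap2 = 0.0156                        # one value of the swept wavenumber (1/um)
tg = np.linspace(0.0, 15*t0, 500)
P0 = abs(b2p)/(gp*t0**2)
U1g = np.sqrt(P0)/np.cosh(tg/t0)     # trial functions with soliton-like parity
U2g = 0.5*np.sqrt(P0)/np.cosh(tg/t0)
y0 = np.vstack([U1g, np.gradient(U1g, tg), U2g, np.gradient(U2g, tg)])

sol = solve_bvp(lambda x, y: rhs(x, y, kap2), bc, tg, y0, tol=1e-8)
U1, U2 = sol.y[0], sol.y[2]          # simultaneous envelopes on [0, 15 t0]
```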
We find that all solutions can be parameterized in the form \(U_{m}(t)=U_{0,m}\ \text{sech}^{\nu_{m}}(t/t_{m})\), with pulse peak amplitudes \(U_{0,m}\) [Fig. 4(d)], pulse durations \(t_{m}\) [Fig. 4(e)], and pulse shape exponents \(\nu_{m}\) [Fig. 4(f)], for \(m=1,2\). In agreement with the results reported in Sect. 3.1, we find that a weak nonzero solution \(U_{2}\) with \(t_{2}=8\) fs and \(\nu_{2}\approx 1.55\) originates at \(\kappa_{2}\approx 0.038\ \mu\)m\({}^{-1}\). For \(\kappa_{2}<0.038\ \mu\)m\({}^{-1}\), the peak amplitude of the subpulse \(m=1\) continuously decreases while that for \(m=2\) increases. Below \(\kappa_{2}\approx 0.007\mu\)m\({}^{-1}\), subpulse \(U_{1}\) vanishes and \(U_{2}\) describes a fundamental soliton with pulse shape parameter \(\nu_{2}=1\) and wavenumber \(\kappa_{2}\). To facilitate intuition, we included the amplitude of a free soliton with wavenumber \(\kappa_{2}\), i.e. peak amplitude \(\tilde{U}_{0,2}=\sqrt{2\kappa_{2}/\gamma^{\prime\prime}}\), in Fig. 4(d). Let us note that the intermediate parameter range \(0.007\ \mu\)m\({}^{-1}<\kappa_{2}<0.038\ \mu\)m\({}^{-1}\) bears tightly coupled pulse compounds, characterized by subpulse amplitudes with similar peak heights, see Fig. 4(d). ### Two-color soliton pairs Upon closely assessing the results shown in Figs. 4(d-f), we find that at \(\kappa_{2}=0.0156\ \mu\)m\({}^{-1}\), a pair of matching solutions with plain hyperbolic-secant shape \(U_{m}(t)=U_{0,m}\ \text{sech}(t/t_{0})\), \(m=1,2\), is attained. This can be traced back to the uniformity of Eqs. (18a) and (18b) for the considered set of parameters. Formally, by assuming \(\kappa\equiv\kappa_{1}=\kappa_{2}\) and \(U\equiv U_{1}=U_{2}\), both equations take the form of a standard NSE with modified parameters \[-\frac{\beta_{2}^{\prime}}{2}\frac{\text{d}^{2}}{\text{d}t^{2}}U(t)+3\gamma^{ \prime}|U(t)|^{2}U(t)=\kappa U(t), \tag{19}\] where, for convenience only, we used the parameters of Eq. (18a). The real-valued pulse envelope \(U\) is therefore characterized by the peak intensity \(\tilde{P}_{0}=|\beta_{2}^{\prime}|(3\gamma^{\prime}t_{0}^{2})^{-1}\), and thus \(u_{1}=u_{2}=\sqrt{1/3}\approx 0.58\) in Fig. 4(d). Hence, at \(\kappa_{2}=0.0156\ \mu\)m\({}^{-1}\), both subpulses resemble true two-color _soliton_ pairs: the pulse envelopes \(U_{1}\) and \(U_{2}\) both specify a fundamental NSE soliton; for each pulse, its binding partner modifies the nonlinear coefficient of the underlying NSE through XPM, helping the pulse sustain its shape. Consequently, both pulses can only persist conjointly as a bonding unit. This special case is consistent with a description of two-color pulse compounds in terms of incoherently coupled pulses [30]. By considering the ansatz Eq. (7), we can plug in the obtained pulse envelopes for \(U_{1}\) and \(U_{2}\) and resubstitute the parameters that define the propagation constant in Sect. 2 to obtain \[A(z,t)=F(z,t)\,\cos\left(\sqrt{\frac{6\beta_{2}}{|\beta_{4}|}}\,t\right)e^{i\beta_ {0}z},\quad\text{with}\quad F(z,t)=\sqrt{\frac{8\beta_{2}}{3\gamma t_{0}^{2}}} \ \text{sech}\left(\frac{t}{t_{0}}\right)e^{i\kappa z},\quad\text{and}\quad \kappa=\frac{\beta_{2}}{t_{0}^{2}}. \tag{20}\] Let us note that \(F\) is equivalent to the fundamental meta-soliton obtained in Ref. [26], which becomes evident when substituting \(\epsilon=t_{0}^{-1}[3\beta_{2}/(2|\beta_{4}|)]^{-1/2}\) and \(\mu_{0}\epsilon^{2}=\beta_{2}/t_{0}^{2}\).
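In practice, Eq. (20) translates into a compact initial condition for simulations in terms of Eq. (1). A brief sketch (function name our own):

```python
import numpy as np

b2, b4, gam, t0 = 1.0, -1.0, 1.0, 8.0
Om2 = np.sqrt(6*b2/abs(b4))          # subpulse locus, ~2.449 rad/fs

def soliton_pair(t, N=1.0):
    """Two-color soliton pair of Eq. (20) at z = 0; choosing N > 1 yields the
    higher-order soliton molecules considered below."""
    F0 = np.sqrt(8*b2/(3*gam*t0**2))/np.cosh(t/t0)
    return N*F0*np.cos(Om2*t)

kappa = b2/t0**2                     # common subpulse wavenumber, ~0.0156 1/um
```

Feeding soliton_pair(t) into a propagation scheme for Eq. (1), such as the sketch in Sect. 2, should reproduce the stationary evolution of Figs. 5(a,b), up to the weak residual radiation discussed below.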
This fundamental meta-soliton was first formulated by Tam _et al._, when studying stationary solutions for the modified NSE (1) by putting emphasis on the time-domain representation of the field in terms of a multi-scales analysis [26]. This unveiled a large superfamily of solitons, now referred to as generalized dispersion Kerr solitons. We would like to point out that within the presented approach, i.e. by putting emphasis on the frequency-domain representation of two-color pulse compounds, the fundamental meta-soliton is derived with great ease. Furthermore, both approaches complement each other very well. We should note that the above two-color soliton pairs resemble vector solitons studied in the context of birefringent optical fibers [86; 87; 88; 61; 89; 90]. The stationary propagation of the two-color soliton pair defined by Eq. (20) in terms of the modified NSE (1) is demonstrated in Figs. 5(a,b). The inset in Fig. 5(a) provides a close-up view onto the localized pulse, indicating interference fringes with period \(\Delta t\approx\pi\sqrt{|\beta_{4}|/(6\beta_{2})}\approx 1.3\) fs that are due to the cosine in Eq. (20). These interference fringes appear stationary since the propagation scenario exhibits the symmetry \(\beta(\Omega_{1})=\beta(\Omega_{2})\) and \(\kappa_{1}=\kappa_{2}\). A spectrogram of the propagation scenario at \(z/z_{0}=29.17\) is shown in Fig. 6(a). A small amount of residual radiation can be seen to lie right on the curve \(\beta_{1}(\Omega)z\), given by the short-dashed line in Fig. 6(a). It was emitted by the pulse compound during the initial propagation stage and is caused by the presence of higher orders of dispersion at the individual subpulse loci, which were neglected in the simplified description leading to Eq. (20). Figure 5: Resonant radiation of two-color soliton molecules. (a,b) Stationary propagation of a soliton molecule with subpulse loci at \(\Omega_{1}=-\Omega_{2}=-2.449\) rad/fs. (a) Time-domain propagation dynamics. The inset shows a close-up view of \(|A(z,\tau)|^{2}/\max(|A(0,\tau)|^{2})\) in the range \(\tau=-20\dots 20\) fs and \(z/z_{0}=2\dots 10\). (b) Corresponding spectrum. (c,d) Same as (a,b) for soliton molecule order \(N=1.8\). Horizontal dashed line in (d) indicates \(z/z_{0}=29.2\). (e) Dispersion profile and graphical solution of the resonance conditions Eqs. (22) for \(z\)-oscillation periods of order \(\ell=-10\dots 1\). (f) Spectrum at \(z/z_{0}=29.2\). (g-l) Same as (a-f) for a soliton molecule with subpulse loci at \(\Omega_{1}=-2.674\) rad/fs and \(\Omega_{2}=2.134\) rad/fs. In (i,j) the soliton molecule order is \(N=1.6\). Horizontal dashed line in (j) indicates \(z/z_{0}=28.6\). The time-domain propagation dynamics is shown in a moving frame of reference where \(\tau=t-\beta_{1}z\). In (a,c) \(\beta_{1}=0\) fs/\(\mu\)m. In (g,i) \(\beta_{1}=0.509\) fs/\(\mu\)m. Propagation distance is scaled by \(z_{0}=32\)\(\mu\)m. Movies of the propagation dynamics shown in (a-d) and (g-j) are provided as supplementary material under Ref. [66]. ### Kushi-comb-like multi-frequency radiation Previously, it was shown that \(z\)-periodic amplitude and width oscillations of two-color soliton molecules can be excited in a systematic manner by increasing their initial peak amplitude by some factor \(N\) according to \(F(z,t)\gets NF(z,t)\)[26; 55]. In analogy to usual nonlinear Schrodinger solitons, values \(N>1\) define higher order metasolitons.
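The \(z\)-oscillation period \(\Lambda\), which enters the resonance analysis below, can be estimated from the peak-intensity trace of such a higher-order soliton molecule. A sketch, reusing the grid t, the stepper rk4ip_step, and the function soliton_pair from the sketches above (all names our own):

```python
# estimate the z-oscillation period of a higher-order soliton molecule
# from its peak-intensity trace (fixed stepsize for simplicity)
dz, z_max = 0.25, 1600.0
A = soliton_pair(t, N=1.8)
pk = []
for _ in range(int(z_max/dz)):
    A = rk4ip_step(A, dz)
    pk.append(np.max(np.abs(A)**2))
pk = np.asarray(pk) - np.mean(pk)
f = np.fft.rfftfreq(pk.size, d=dz)
Lam = 1.0/f[1 + np.argmax(np.abs(np.fft.rfft(pk))[1:])]
print(Lam)   # dominant period; for N = 1.8 this should lie near 2 z_0
```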
Recently, we have performed a comprehensive analysis of the amplitude oscillations of such higher order metasolitons, indicating that with increasing \(N\), the number of spatial Fourier-modes needed to characterize their periodic peak-intensity variation increases [55]. In other words, with increasing strength of perturbation of a soliton molecule, its dynamics changes from harmonic to nonlinear oscillations. _Degenerate multi-frequency radiation._ To demonstrate amplitude and width oscillations, we show the propagation dynamics of a symmetric soliton molecule of order \(N=1.8\), based on the two-color soliton pair (20), in Figs. 5(c,d). As can be seen from the time-domain dynamics in Fig. 5(c), the localized pulse exhibits periodic amplitude and width variations [close-up view in Fig. 5(c)], and emits radiation in either direction along the coordinate \(t\) in a symmetric fashion. Quite similar dynamics were obtained using the seeding approach in Figs. 4(b,c). The oscillation of the soliton molecule is also clearly visible in the spectrum shown in Fig. 5(d). As evident from Fig. 5(f), at \(z/z_{0}\approx 29.17\) it exhibits comb-like bands of frequencies in the vicinity of the subpulse loci \(\Omega_{1}\) and \(\Omega_{2}\). The location of these newly generated frequencies can be understood by extending existing approaches for the derivation of resonance conditions [91; 92; 93; 94; 95] to two-color pulse compounds [70; 55]. Below, we summarize these resonance conditions, which were obtained by assuming a dynamically evolving pulse compound of the form [70] \[U_{m}(z,t)=\sum_{\ell}C_{m\ell}(t)\,\exp\left[i\left(\kappa_{m}+K_{\ell} \right)z\right],\quad\text{with}\quad m\in(1,2),\ \ell\in\mathbb{Z}. \tag{21}\] In Eq. (21), \(C_{m\ell}\) are expansion coefficients, and \(\kappa_{m}\) indicate wavenumbers that govern the \(z\)-propagation of each subpulse. The wavenumbers of the higher harmonics of the \(z\)-oscillation period are \(K_{\ell}=2\pi\ell/\Lambda\), with \(\Lambda\) referring to the \(z\)-oscillation wavelength of the pulse compound and \(\ell\) labeling the corresponding order. Based on this ansatz, the resonance conditions \[D_{m}(\Omega_{RR})-\kappa_{m}=K_{\ell},\quad\text{with}\quad m \in(1,2),\ \ell\in\mathbb{Z},\quad\text{and} \tag{22a}\] \[D_{m}(\Omega_{RR})-2\kappa_{m}+\kappa_{m^{\prime}}=K_{\ell}, \quad\text{with}\quad m,m^{\prime}\in(1,2),\ m\neq m^{\prime},\ \ell\in\mathbb{Z}, \tag{22b}\] with dispersion profiles \(D_{m}(\Omega)\equiv\beta(\Omega)-\beta(\Omega_{m})-\beta_{1}(\Omega_{m})(\Omega- \Omega_{m})\) for \(m=1,2\), can be derived [70]. In Eqs. (22), \(\Omega_{RR}\) specifies those frequencies at which resonant radiation (RR) is excited. While Eq. (22a) defines resonance conditions for the generation of Cherenkov radiation by each subpulse, Eq. (22b) defines additional resonance conditions indicative of four-wave mixing (FWM) processes involving both subpulses. For the considered soliton molecule of order \(N=1.8\), we find \(\Lambda\approx 63\ \mu\text{m}\approx 2z_{0}\) [with \(z_{0}=32\ \mu\text{m}\), see Fig. 5(d)]. In this case, the aforementioned symmetry \(\kappa_{1}=\kappa_{2}\) renders Eqs. (22a) and (22b) degenerate. As evident from the graphical solution of Eqs. (22) in Fig. 5(e), the resonance conditions predict the newly generated frequencies in Fig. 5(f) very well. A spectrogram of the propagation scenario at \(z/z_{0}=29.17\) is shown in Fig. 6(b).
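The graphical solution shown in Fig. 5(e) amounts to simple root-finding for Eqs. (22). A short sketch for the degenerate case (names our own; the period \(\Lambda\) is taken from the simulation):

```python
import numpy as np
from scipy.optimize import brentq

b2, b4, t0 = 1.0, -1.0, 8.0
beta  = lambda Om: b2/2*Om**2 + b4/24*Om**4
beta1 = lambda Om: b2*Om + b4/6*Om**3

Om1, Om2 = -np.sqrt(6.0), np.sqrt(6.0)   # loci of the symmetric soliton molecule
kap1 = kap2 = b2/t0**2                   # subpulse wavenumbers, cf. Eq. (20)
Lam = 63.0                               # z-oscillation period (um), from the simulation

def D(Om, Om_m):
    # co-moving dispersion profile D_m of subpulse m
    return beta(Om) - beta(Om_m) - beta1(Om_m)*(Om - Om_m)

def rr_roots(Om_m, rhs, grid=np.linspace(-6.0, 6.0, 6001)):
    """Frequencies Om_RR solving D_m(Om_RR) = rhs, via a sign-change scan."""
    f = D(grid, Om_m) - rhs
    return [brentq(lambda x: D(x, Om_m) - rhs, a, b)
            for a, b, fa, fb in zip(grid[:-1], grid[1:], f[:-1], f[1:])
            if fa*fb < 0]

# Eq. (22a) for m = 1; kap1 = kap2 renders Eq. (22b) degenerate with it
for ell in range(-10, 2):
    print(ell, rr_roots(Om1, kap1 + 2*np.pi*ell/Lam))
```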
In this spectrogram, the multi-peaked spectral bands, at which the oscillating soliton molecule sheds radiation, are reminiscent of the shape of traditional Japanese Kushi combs. _Non-degenerate multi-frequency radiation._ Let us note that, due to the wide variety of two-color pulse compounds with different substructure, their emission spectra manifest in various forms. For example, considering a pair of group-velocity matched detunings different from the one considered above, the degeneracy among Eqs. (22) can be lifted. Subsequently we take \(\Omega_{1}=-2.674\ \text{rad}/\text{fs}\) and \(\Omega_{2}=2.134\ \text{rad}/\text{fs}\), for which \(\beta_{1}^{\prime}=0.514\ \text{fs}/\mu\text{m}\), \(\beta_{1}^{\prime\prime}=0.514\ \text{fs}/\mu\text{m}\), \(\beta_{2}^{\prime}=-2.576\ \text{fs}^{2}/\mu\text{m}\), and, \(\beta_{2}^{\prime\prime}=-1.278\ \text{fs}^{2}/\mu\text{m}\). In terms of the coupled ODEs (18) we then determine a pair of simultaneous solutions which specify the initial condition \[A_{0}(t)=U_{0,1}\ \text{sech}^{\nu_{1}}\left(\frac{t}{t_{1}}\right)e^{-i \Omega_{1}t}+U_{0,2}\ \text{sech}^{\nu_{2}}\left(\frac{t}{t_{2}}\right)e^{-i\Omega_{2}t}, \tag{23}\] with parameters \(U_{0,1}=0.050\ \sqrt{W}\), \(U_{0,2}=0.141\ \sqrt{W}\), \(t_{1}=7.207\ \text{fs}\), \(t_{2}=7.271\ \text{fs}\), \(\nu_{1}=0.901\), and \(\nu_{2}=1.022\). The stationary propagation of this soliton molecule with non-identical subpulses is shown in Figs. 5(g,h). As a consequence of the broken subpulse-symmetry, the interference fringes that characterize the pulse compound are not stationary any more [close-up view in Fig. 5(g)]. The fact that the pulse compound remains localized, despite its envelope exhibiting a non-stationary profile, might be the reason why no such objects could be found using a time-domain based Newton conjugate-gradient method [26]. Next, we increase the order of this soliton molecule to \(N=1.6\), resulting in the propagation dynamics with \(z\)-oscillation period \(\Lambda\approx 106\ \mu\mathrm{m}\approx 3.3z_{0}\) shown in Figs. 5(i,j). In this case, a pronounced multi-peaked spectral band of frequencies within the domain of normal dispersion is excited [see Figs. 5(j,l)]. These newly generated frequencies can be linked to multi-frequency Cherenkov radiation emitted by the subpulse at \(\Omega_{2}\), as can be seen from the graphical solution of the resonance conditions (22a), shown in Fig. 5(k). Let us note that similar coupling phenomena of localized states to the continuum have earlier been observed for solitons in periodic dispersion profiles [93], oscillating bound solitons in twin-core fibers [94], and dissipative solitons in nonlinear microring resonators [95]. A further band of frequencies, excited in the vicinity of \(\Omega\approx 3.5\ \mathrm{rad/fs}\), can be attributed to FWM-resonances described by Eq. (22b). A spectrogram of the propagation scenario at \(z/z_{0}=28.6\) is shown in Fig. 6(c), unveiling that the resonant radiation emanates from the oscillating soliton molecule in a pulse-wise fashion. ## 5 Summary and conclusions In summary, we have discussed several aspects of the \(z\)-propagation of two-color pulse compounds in a modified NSE with positive group-velocity dispersion coefficient and negative fourth-order dispersion coefficient. To this end, we considered the interaction dynamics of two pulses in distinct domains of anomalous dispersion, group-velocity matched despite a large frequency gap.
We have demonstrated that their mutual confining action can manifest itself in different forms, depending on the relative strength of SPM and XPM felt by each pulse. In the limiting case where the resulting bound states consist of a strong trapping pulse, given by a soliton, and a weak trapped pulse, we have shown that optical analogues of quantum mechanical bound states can be realized that are determined by a Schrodinger-type eigenvalue problem [24]. The resulting photonic meta-atoms even support Rabi-type oscillations of their trapped states, similar to the recurrence dynamics of wave packets in quantum wells [67]. We further probed the limits of stability of these meta-atoms by imposing a group-velocity mismatch between the trapping soliton and the trapped pulse. With increasing strength of perturbation, parts of the trapped state escape the soliton, similar in effect to the ionization of quantum mechanical atoms. These findings complement our earlier results on the break-up dynamics of two-color pulse compounds [25]. Figure 6: Spectrograms of selected soliton molecules. (a) Two-color soliton pair of Figs. 5(a,b) at \(z/z_{0}=29.17\), computed using \(\sigma=30\ \mathrm{fs}\) in Eq. (6). (b) Oscillating, symmetric soliton molecule of Figs. 5(c,d) at \(z/z_{0}=29.17\) for \(\sigma=30\ \mathrm{fs}\), showing many narrowly spaced resonances reminiscent of the shape of traditional Japanese Kushi combs. (c) Oscillating, non-symmetric soliton molecule of Figs. 5(i,j) at \(z/z_{0}=28.6\) for \(\sigma=20\ \mathrm{fs}\). The pulse-wise emission of radiation, synchronized with the periodic amplitude and width variations of the pulse compound, is clearly visible. In (a-c), the short-dashed line shows \(\beta_{1}(\Omega)z\), indicating the delimiting temporal position of a mode at detuning \(\Omega\), emitted at \(z=0\). Movies of the propagation dynamics are provided as supplementary material under Ref. [66]. For the more general case where the mutual confining action between the pulses is dominated by XPM, we have discussed a simplified modeling approach, which allows us to determine simultaneous solutions for the bound pair of pulses. The resulting solutions feature the above meta-atoms as limiting cases when the disparity of the subpulse amplitudes is large. Further, by exploiting symmetries of the underlying propagation model, a special class of solutions, forming true two-color soliton pairs [30], was characterized in closed form. This special class of solutions, referred to as generalized dispersion Kerr solitons, has also been derived in Ref. [26]. We have presented numerical results demonstrating the complex propagation dynamics of such pulse compounds, which we here referred to as two-color soliton molecules. Specifically, we have shown that soliton molecules exhibit highly robust vibrational characteristics, a behavior that is difficult to achieve in a conservative NSE system. These non-stationary, \(z\)-periodic dynamics of the subpulses trigger the emission of resonant radiation. The location of the resulting multi-peaked spectral bands can be precisely predicted by means of phase-matching conditions [70; 55]. Due to the large variety of soliton molecules with different substructure, their emission spectra manifest in various complex forms. Most notably, if the oscillating soliton molecule consists of a pair of identical subpulses, inherent symmetries lead to degeneracies in the resonance spectrum, causing its spectrogram trace to resemble the shape of Japanese Kushi combs.
Additional perturbations lift existing degeneracies and result in more complex emission spectra which are characterized by distinct spectral bands that can be separately linked to resonant Cherenkov radiation and additional four-wave mixing processes. The occurrence of such multi-frequency radiation, especially in the degenerate form, comprises a fundamental phenomenon in nonlinear waveguides with multiple zero-dispersion points and sheds light on the puzzling propagation dynamics of two-frequency pulse compounds, resembling the generation of radiation by vibrating molecules. Finally, let us note that we recently extended the range of systems in which such two-color pulse compounds are expected to exist. To this end, we considered waveguides with a single zero-dispersion point and frequency dependent nonlinearity with a zero-nonlinearity point [96; 97]. In such waveguides, soliton dynamics in a domain of normal dispersion can be achieved by a negative nonlinearity [98; 99]. In the corresponding description of pulse compounds in terms of the simplified model (8), having \(\beta_{2}^{\prime}<0\) and \(\beta_{2}^{\prime\prime}>0\) then requires \(\gamma^{\prime}>0\) and \(\gamma^{\prime\prime}<0\), and the potential well in the eigenproblem corresponding to Eq. (11) is ensured by \(\gamma^{\prime\prime}<0\)[54]. We studied the above binding mechanism for incoherently coupled two-color pulse compounds in such waveguides, demonstrating meta-atoms and molecule-like bound states of pulses that persist in the presence of the Raman effect [31; 54], which allows us to understand the complex propagation dynamics observed in a recent study on higher-order soliton evolution in a photonic crystal fiber with one zero-dispersion point and frequency dependent nonlinearity [100]. ## Acknowledgements We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (Photonics, Optics, and Engineering - Innovation Across Disciplines) (EXC 2122, projectID 390833453).
2310.01341
Two-dimensional ideal magnetohydrodynamic waves on a rotating sphere under a non-Malkus field: I. Continuous spectrum and its ray-theoretical interpretation
Two-dimensional ideal incompressible magnetohydrodynamic (MHD) linear waves at the surface of a rotating sphere are studied as a model to imitate the outermost layer of the Earth's core or the solar tachocline. This thin conducting layer is permeated by a toroidal magnetic field the magnitude of which depends only on the latitude. The Malkus background field, which is proportional to the sine of the colatitude, provides two well-known groups of branches; on one branch, retrograde Alfv\'en waves gradually become fast magnetic Rossby (MR) waves as the field amplitude decreases, and on the other, prograde Alfv\'en waves undergo a gradual transition into slow MR waves. In the case of non-Malkus fields, we demonstrate that the associated eigenvalue problems can yield a continuous spectrum instead of Alfv\'en and slow MR discrete modes. The critical latitudes attributed to the Alfv\'en resonance eliminate these discrete eigenvalues and produce an infinite number of singular eigenmodes. The theory of slowly varying wave trains in an inhomogeneous magnetic field shows that a wave packet related to this continuous spectrum propagates toward a critical latitude corresponding to the wave and is eventually absorbed there. The expected behaviour whereby the retrograde propagating packets pertaining to the continuous spectrum approach the latitudes from the equatorial side and the prograde ones approach from the polar side is consistent with the profiles of their eigenfunctions derived using our numerical calculations. Further in-depth discussions of the Alfv\'en continuum would develop the theory of the ``wave-mean field interaction'' in the MHD system and the understanding of the dynamics in such thin layers.
Ryosuke Nakashima, Shigeo Yoshida
2023-10-02T17:04:54Z
http://arxiv.org/abs/2310.01341v2
# Two-dimensional ideal magnetohydrodynamic waves on a rotating sphere under a non-Malkus field: I. Continuous spectrum and its ray-theoretical interpretation ###### Abstract Two-dimensional (2D) ideal incompressible magnetohydrodynamic (MHD) linear waves at the surface of a rotating sphere are studied as a model imitating the outermost layer of the Earth's core or the solar tachocline. This thin conducting layer is permeated by a toroidal magnetic field whose magnitude depends only on the latitude. The Malkus background field, which is proportional to the sine of the colatitude, gives two well-known groups of branches on which Alfven waves gradually become fast or slow magnetic Rossby (MR) waves as the field amplitude decreases. For non-Malkus fields, we show that the associated eigenvalue problems can yield a continuous spectrum instead of Alfven and slow MR discrete modes. Critical latitudes attributed to the Alfven resonance wipe out these discrete eigenvalues and produce an infinite number of singular eigenmodes. The theory of slowly varying wave trains in an inhomogeneous magnetic field shows that a wave packet related to this continuous spectrum propagates toward a critical latitude corresponding to the wave and is eventually absorbed there. The expected behaviour that the retrograde propagating packets which pertain to the continuous spectrum approach the latitudes from the equatorial side and that the prograde ones approach them from the polar side is consistent with the profiles of their eigenfunctions shown by our numerical calculations. Further in-depth discussions of the Alfven continuum would advance the theory of "wave-mean field interaction" in the MHD system and the understanding of the dynamics in such thin layers. **Keywords:** Magnetic Rossby waves; Continuous modes; Stably stratified layer; Geomagnetic variations ## 1 Introduction Geomagnetic and geodetic variations may partly be accounted for by MHD waves within the Earth's outer core (e.g. Gillet _et al._, 2021; Triana _et al._, 2021; Hori _et al._, 2023). For example, Hide (1966) suggested that the westward drift of the geomagnetic field may originate from slow MR waves in the liquid core. Braginsky (1970) ascribed both the \(60\)-year length-of-day and geomagnetic variations to the torsional oscillations. If this type of attribution is substantiated, the comparison between observations and the theory leads to some inferences on the relevant physical quantities. Zatman and Bloxham (1997) accepted Braginsky (1970)'s explanation to infer that the magnitude of the cylindrical radial field is about \(0.2\,\mathrm{mT}\). This suggestion was questioned by Gillet _et al._ (2010), who associated the torsional oscillations (or torsional waves) with six-year length-of-day signals to conclude that its magnitude is about \(2\,\mathrm{mT}\). A stably stratified layer at the top of the outer core has been proposed on various grounds. Simple thermal (Gubbins _et al._, 1982) and compositional (Braginsky, 1984) stratification were considered first, and the proposed physical mechanisms of stratification have been more sophisticated lately.
For the thermal stratification, a subadiabatic temperature gradient due to the core's high thermal conductivity (Pozzo _et al._, 2012; Zhang _et al._, 2022) has been invoked, and for the compositional stratification, barodiffusion and chemical interactions between the core and the mantle (Buffett and Seagle, 2010; Gubbins and Davies, 2013; Brodholt and Badro, 2017; Davies _et al._, 2018), and a remnant of the Moon-creating impact (Landeau _et al._, 2016) were proposed. Seismological evidence for such a layer has been controversial (e.g. Helffrich and Kaneshima, 2010; Kaneshima, 2018; Irving _et al._, 2018; van Tent _et al._, 2020). Regional, instead of global, stratification arising from the core-mantle boundary (CMB) heterogeneity was also proposed (Mound _et al._, 2019). The obscure properties of the layer might be inferred by identifying the sources of geomagnetic and geodetic signatures from the core. Magnetic-Archimedes-Coriolis (MAC) and MR waves in such a stably stratified layer at the top of the core have often been invoked as possible causes of geomagnetic fluctuations (Braginsky, 1993, 1998, 1999; Buffett, 2014; Chulliat _et al._, 2015; Buffett _et al._, 2016; Knezek and Buffett, 2018; Chi-Duran _et al._, 2021). If the stratified layer exists atop the core, vigorous convection prevailing in the bulk of the core overshoots the interface between the bulk and the layer (Takehiro and Lister, 2001; Gastine _et al._, 2020) and can excite MHD waves that travel in the layer (Jaupart and Buffett, 2017; Buffett and Knezek, 2018; Couston _et al._, 2017; Bouffard _et al._, 2022). The present paper addresses two-dimensional ideal incompressible MHD linear waves within a thin conducting fluid layer over a rotating sphere. A non-uniform toroidal (\(\phi\)-directional, \(\phi\) being the longitude) magnetic field is imposed on the fluid film as a background field, and the layer rotates almost rigidly with the sphere. To examine Hide (1966)'s suggestion, a similar problem was first considered by Stewartson (1967), in which the azimuthal main field is spatially uniform. Such a toroidal field does not vanish even at the north and south poles (\(\theta=0\) and \(\pi\), where \(\theta\) is the colatitude). One should therefore express the basic field as \(B_{0\phi}(\theta)=B_{0}\mathcal{B}\sin\theta\) with a constant \(B_{0}\) and a bounded continuous function \(\mathcal{B}(\cos\theta)\). Although a smooth background field at the poles also requires the function \(\mathcal{B}\) to satisfy \(\mathcal{B}\sin\theta=-\sum_{n=1}^{\infty}t_{n}(\mathrm{dP}_{n}/\mathrm{d}\theta)\), where \(t_{n}\) (\(n=1,2,\ldots\)) are constants and \(\mathrm{P}_{n}\) is the Legendre polynomial of degree \(n\), we now disregard this condition. The simplest profile \(B_{0\phi}=B_{0}\sin\theta\) in this expression, or \(\mathcal{B}=1\), is equivalent to the well-known Malkus field (e.g. Malkus, 1967; Finlay, 2008). This elementary 2D model is useful for qualitatively understanding the wave dynamics in the thin layer at the top of the core. The dispersion relation of 2D waves under the Malkus background field should be reviewed before moving on to our main topic. Under the Malkus field, the local Alfven wave velocities are \(V_{\mathrm{A}\phi}(\theta)=(B_{0}/\sqrt{\rho_{0}\mu_{\mathrm{m}}})\sin\theta\), where \(\rho_{0}\) is a uniform density and \(\mu_{\mathrm{m}}\) is a constant magnetic permeability.
If other effects are absent, local Alfven waves with speeds proportional to \(\sin\theta\) can collectively travel in concert in the azimuthal direction like a rigid-body rotation. This reduces our perturbation equations to a regular Sturm-Liouville problem because interior poles vanish (Boyd, 1981), as will be shown in Section 2. If the Malkus field permeates a rigidly rotating thin layer on top of a sphere with a radius \(R_{0}\), the dispersion relation for the 2D ideal incompressible MHD wave (Zaqarashvili _et al._, 2007; Marquez-Artavia _et al._, 2017) is given by \[\lambda\,=\,\frac{-m\pm m\sqrt{1+4\alpha^{2}n(n+1)[n(n+1)-2]}}{2n(n+1)}\,, \tag{1}\] where \(\lambda\equiv\omega/2\Omega_{0}\) is the angular frequency nondimensionalised by twice the constant rotation rate \(\Omega_{0}\). In the above expression, \(m\) denotes the (positive) zonal wavenumber, \(\alpha\equiv B_{0}/2\Omega_{0}R_{0}\sqrt{\rho_{0}\mu_{\mathrm{m}}}\) is the (signed) Lehnert number, and \(n\) is the degree of the associated Legendre polynomial \(\mathrm{P}_{n}^{m}(\cos\theta)\). Although the Lehnert number is often defined as \(Le\equiv 2|\alpha|\) in geophysics literature, we follow the precedents referred to above. The double sign in (1) corresponds to prograde and retrograde propagating Alfven waves \(\lambda\simeq\pm m|\alpha|\sqrt{1-2/n(n+1)}\) for large \(|\alpha|\), and the two classes of MR waves as \(|\alpha|\to 0\): the slow mode \(\lambda\simeq m\alpha^{2}[n(n+1)-2]\) peculiar to the MHD system and the fast mode \(\lambda\simeq-m/n(n+1)\). The latter is closely related to (hydrodynamic) Rossby waves celebrated in meteorology and oceanography (e.g. Longuet-Higgins, 1968). Figure 1 shows that the exponent of \(|\alpha|\) in a branch of \(\lambda\), or \((\partial\log|\lambda|/\partial\log|\alpha|)\), can be used to distinguish among the three types of waves derived from (1). The simplest equatorially-antisymmetric non-Malkus field \(B_{0\phi}=B_{0}\sin\theta\cos\theta\), or \(\mathcal{B}=\cos\theta\), should be more appropriate for the Earth's core than the pure Malkus field. In the current study, we devote our attention to the case of this basic field profile because MHD waves under a non-Malkus field are poorly understood. Note that the field is eastward in the northern hemisphere when \(B_{0}>0\), and westward when \(B_{0}<0\). It is important that \(|\alpha|\) is much smaller than unity in the Earth's core. If the field strength \(|B_{0}|\) inside the core is several millitesla (Gillet _et al._, 2010), one can estimate that \(|\alpha|\approx 10^{-4}\) with \(\varOmega_{0}\approx+0.729\times 10^{-4}\,\mathrm{s}^{-1}\), \(R_{0}\approx 3480\,\mathrm{km}\), \(\rho_{0}\approx 10^{4}\,\mathrm{kg}\,\mathrm{m}^{-3}\) and \(\mu_{\mathrm{m}}\approx 1.26\times 10^{-6}\,\mathrm{H}\,\mathrm{m}^{-1}\). Non-Malkus fields complicate our linear wave problem because regular singular points appear in their equations. At the singular colatitudes \(\theta=\theta_{\mathrm{c}}\) resulting from the Alfven resonance, the azimuthal phase velocity \(\omega/(m/R_{0}\sin\theta_{\mathrm{c}})\) of a wave is equal to the local Alfven velocity \(V_{\mathrm{A}\phi}(\theta_{\mathrm{c}})\) at the latitude (e.g. Uberoi, 1972; Goedbloed and Poedts, 2004). For example, such latitudes ordinarily exist for low-frequency neutral modes under a non-Malkus configuration whose function \(\mathcal{B}\) has at least one zero in \(0<\theta<\pi\).
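As a quick numerical illustration of this three-way classification (a minimal sketch we add here, not part of the original analysis; the values \(m=1\), \(n=2\) are arbitrary choices), one can evaluate (1) over a range of \(|\alpha|\) and read off the exponent of \(|\alpha|\) along each branch:

```python
import numpy as np

def lam_malkus(m, n, alpha, sign=+1):
    """Nondimensional angular frequency lambda = omega / (2 Omega_0) from (1)."""
    nn = n * (n + 1)
    return (-m + sign * m * np.sqrt(1.0 + 4.0 * alpha**2 * nn * (nn - 2.0))) / (2.0 * nn)

m, n = 1, 2
alphas = np.logspace(-3, 1, 201)
for sign, label in [(+1, "prograde  "), (-1, "retrograde")]:
    lam = lam_malkus(m, n, alphas, sign)
    # branch exponent d log|lambda| / d log|alpha|: ~2 (slow MR) or ~0 (fast MR)
    # at small |alpha|, tending toward ~1 (Alfven wave) at large |alpha|
    expo = np.gradient(np.log(np.abs(lam)), np.log(alphas))
    print(f"{label}: exponent {expo[0]:.2f} (small |alpha|) -> {expo[-1]:.2f} (large |alpha|)")
```

Running this reproduces the behaviour of Figure 1: the prograde branch rises with exponent close to 2 (slow MR), the retrograde branch is flat (fast MR), and both approach exponent 1 (Alfven) as \(|\alpha|\) grows.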
Even if such points exist for a given angular frequency \(\lambda\), one can, however, construct its eigenmode when its eigenfunction is permitted to have singular profiles only at the points, or rather such singular modes are required for completeness (Van Kampen, 1955; Case, 1959; Barston, 1964). This means that its spectrum should include a continuous range, \(\min(\mathcal{B}^{2})\leq\lambda^{2}/m^{2}\alpha^{2}\leq\max(\mathcal{B}^{2})\). Even though it makes no sense to extract an eigenmode (hereinafter referred to as a "continuous eigenmode" or "continuous mode") from this continuous spectrum as its eigenfunction is singular, the integration (or superposition) with respect to \(\lambda\), in which the integrand is the eigenfunctions weighted by a coefficient depending only on \(\lambda\), can constitute a physically relevant solution. Recall that the Fourier transform of a well-behaved function can nonetheless be pathological. Solutions composed only of the continuous eigenmodes do not show the behaviour typical of collective oscillations associated with normal discrete eigenvalues but instead represent transient growth of initial disturbances (Farrell, 1982) and non-diffusive attenuation (e.g. Adam, 1986). The latter involves the phase mixing and the Landau damping, which cause algebraic decay (Case, 1960; Balmforth and Morrison, 1995b) and exponential decay (Landau, 1946; Briggs _et al._, 1970; Sedlacek, 1971; Tataronis and Grossmann, 1973), respectively. These unintuitive aspects due to the advent of continuous spectra have long been discussed in various research areas: inviscid shear flows, collisionless plasmas, ideal MHD systems, 2D vortices and differentially rotating disks, self-gravitating systems, and the Kuramoto model (e.g. Adam, 1986; Balmforth and Morrison, 1995a; Strogatz, 2000; Barre _et al._, 2015).
Figure 1: Nondimensional angular frequency \(\lambda\) calculated from (1) as a function of the absolute value \(|\alpha|\) of the Lehnert number when the zonal wavenumber \(m=1\). The left panel shows retrograde propagating waves (\(\lambda<0\)) and the right corresponds to prograde ones (\(\lambda>0\)). Fast MR waves for \(n=1,2,\ldots,10\) are represented as the warm colour curves, while slow MR waves for \(n=2,\ldots,10\) are depicted as the cold colour curves. As \(|\alpha|\to\infty\), these modes approach the lines \(|\lambda|\propto|\alpha|^{1}\) of Alfvén waves except for \(m=n=1\).
The reasons that we chose the azimuthal field \(B_{0\phi}=B_{0}\mathcal{B}\sin\theta\) are as follows, although the dominant field may be radial in the Earth's outermost core due to the small conductivity in the mantle (e.g. Knezek and Buffett, 2018). First of all, this model is simpler and can be straightforwardly extended into the "MHD shallow water" system (Zaqarashvili _et al._, 2007, 2009; Heng and Spitkovsky, 2009; Zaqarashvili _et al._, 2011; Marquez-Artavia _et al._, 2017), which was first introduced by Gilman (2000). In contrast with the toroidal field, radial ones normally bring vertical derivatives into their governing equations, and the operator is obviously incompatible with a 2D model, which is computationally undemanding. Secondly, critical latitudes often reside in equations of similar eigenvalue problems even when a radial background field depending on \(\theta\) passes through a thin layer (e.g. see the equations in Buffett and Matsui, 2019).
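The continuum range quoted above is trivial to tabulate numerically. The following sketch (an illustration we add for concreteness, not the authors' code; the parameter values are arbitrary) evaluates \(\min(\mathcal{B}^{2})\) and \(\max(\mathcal{B}^{2})\) for the Malkus profile and the simplest non-Malkus profile:

```python
import numpy as np

m, alpha = 1, 0.1                       # sample values, not tied to the Earth's core
mu = np.linspace(-1.0, 1.0, 10001)      # mu = cos(theta)
profiles = {"Malkus (B = 1)     ": np.ones_like(mu), "non-Malkus (B = mu)": mu}
for name, B in profiles.items():
    lo = m * abs(alpha) * np.sqrt((B**2).min())
    hi = m * abs(alpha) * np.sqrt((B**2).max())
    print(f"{name}: {lo:.3f} <= |lambda| <= {hi:.3f}")
# The Malkus profile collapses the range to the single value m|alpha| (no interior
# critical latitudes), whereas B = mu fills the whole band 0 <= |lambda| <= m|alpha|.
```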
Although approximations that are apparently appropriate for each target of study sometimes obliterate the latitudes (Zaqarashvili, 2018; Buffett and Matsui, 2019), it is not clear whether the simplifications that drop the Alfven resonance are always valid. Thus, we need to scrutinise the influences of the critical latitudes on eigenmodes. Lastly, Hardy _et al._ (2020) reported that slight vertical motions under strong stratification can enhance toroidal fields because of the "Malkus constraint." This suggestion supports our choice of main field as a reasonable one. Related instability problems may pertain to the dynamics of the solar tachocline underlying the convection zone. Because unstable modes can easily be picked out even in the presence of continuous spectra, that situation is significantly different from the study of linear waves considered here. Gilman and Fox (1997, 1999a,b) and subsequent work (Dikpati and Gilman, 1999; Gilman and Dikpati, 2000; Zaqarashvili _et al._, 2010a) examined the "joint instability" or "magnetic Rossby wave instability" arising in the tachocline, which can occur in the MHD system accompanied by latitudinal differential rotation. According to them, some non-axisymmetric infinitesimal perturbations are likely to destabilise the coexistence of the solar-like angular velocity profile deduced from helioseismic observations and a variety of plausible toroidal field configurations such as broad profiles written as \(B_{0\phi}=(B_{0}+B_{1}\cos^{2}\theta)\sin\theta\cos\theta\) (in which \(B_{1}\) is also constant) or latitudinally localised field bands expressed by Gaussian functions. These global unstable modes may play an important role in the persistence of such a thin shear layer via latitudinal transport of angular momentum (Spiegel and Zahn, 1992) and put an upper limit on the strength of a toroidal field stored within the layer through the \(\omega\)-effect (Arlt _et al._, 2007a,b). The nonlinear evolution of the modes has been investigated by Cally (2001) and Cally _et al._ (2003, 2004), who found the novel "clamshell" and "tipping" patterns. Additionally, viscosity and magnetic diffusion were also introduced in the radial (Dikpati _et al._, 2004) and horizontal (Sharif and Jones, 2005) directions in an anisotropic manner. Even beyond the strict 2D model, stability analyses of the tachocline have been conducted. The MHD shallow water equations (Gilman, 2000) and the 3D thin shell model or "MHD hydrostatic primitive" equations (Miesch and Gilman, 2004) were newly proposed for evaluating the impacts of subadiabatic stratification and weak vertical displacement of fluid particles. Gilman and Dikpati (2002) and Dikpati _et al._ (2003) demonstrated that the combinations of the differential rotation and the toroidal fields in the shallow model again become easily unstable and that the growing perturbations have non-zero kinetic helicities, which are related to the \(\alpha\)-effect (Moffatt, 1978). Furthermore, unstable MR waves in the layer may cause some of the periodicities detected in solar activity (Zaqarashvili _et al._, 2010a,b, 2015; Dikpati _et al._, 2017; Gachechiladze _et al._, 2019; see Zaqarashvili _et al._, 2021, for a review). For instance, nonlinear development in the MHD shallow water system (Dikpati _et al._, 2017, 2018a,b) indicated that MR waves exchange angular momentum with mean fields and that their wave patterns deformed by consequent reconstructions of the mean profiles can trigger nonlinear quasi-periodic oscillations.
In addition, searches for not only non-axisymmetric unstable modes (Cally, 2003; Gilman _et al._, 2007; Kitchatinov and Rudiger, 2008) but also axisymmetric ones (Cally _et al._, 2008; Dikpati _et al._, 2009) were performed in extensive studies of the 3D thin shell model. The studies currently include nonlinear simulations of these growing modes (Miesch _et al._, 2007; Hollerbach and Cally, 2009) and linear stability analyses taking vertical profiles of the differential rotation and the background toroidal field into account (Arlt _et al._, 2007a,b). Critical lines (or levels, latitudes, and so on) and their concomitant continuous modes have an important bearing even on unstable modes, which are outside the scope of this paper. Despite the fact that the eigenfrequencies of unstable modes are complex numbers, their eigenfunctions are affected by the positions of critical points, as demonstrated by, e.g. Gilman and Fox (1999a) and Wang _et al._ (2022a). Moreover, a neutral mode sometimes interacts with continuous modes when the branch of the neutral one overlaps the continuous spectrum. This leads to the appearance of a pair of unstable and decaying modes (Iga, 1999; Taniguchi and Ishiwatari, 2006) or non-modal growth (Heifetz _et al._, 2020). On the other hand, continuous spectra sometimes cover and hide the branches of such interacting neutral modes (Iga, 2013). Therefore, we also want to examine the critical latitudes and the neutral continuous eigenmodes in detail beforehand, in order to understand linear stability in similar problems (e.g. Wang _et al._, 2022b), though our current problem does not have such unstable modes. Non-ideality and non-linearity are also intimately related to critical points. Small viscosity and magnetic diffusion transform a singular point on the real axis on the "fictional" complex plane of the involved spatial coordinate into a complex turning point, because the diffusion terms include higher-order derivatives (e.g. Drazin and Reid, 1981; Shivamoggi, 1992). Since the eigenfunctions that we are going to seek are functions on the real axis on the complex plane, they then become non-singular. At first glance, this seems to mean that we have to recover these neglected damping terms. However, very weak diffusions only give rise to thin boundary layers around the turning points (and near any walls), and the eigenfunctions would, sufficiently outside the layers, still be similar to the profiles of the continuous eigenmodes obtained in the non-diffusive limit. Although decaying normal modes stemming from measurable dissipations (Steinolfson, 1985; Gizon _et al._, 2020) and nonlinear boundary layers (e.g. Tung, 1979; Maslowe, 1986) may also be somewhat important to our problem, they are beyond the scope of this article. This paper is organised as follows. In Section 2, we shall derive the governing equations for our problem, and then present a method to seek eigenmodes numerically by the associated Legendre polynomial expansion in Section 2.1. Section 3 gives numerical solutions for the case when the background field is expressed as the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\cos\theta\). In Sections 3.1 and 3.2, the structures of the obtained eigenfunctions are examined outside and near critical latitudes, respectively. In Section 4, we also conduct the numerical integration of ray-tracing equations at large wavenumbers (e.g. Bardsley and Davidson, 2017; Teruya _et al._, 2022) in order to interpret our eigenvalue problem from a different angle.
This approach tracks paths of wave packets migrating with their group velocities. Note that this is different from Dikpati _et al._ (2020)'s calculations of the Stokes drift, which are trajectories of fluid particles advected by oscillatory flow induced by MR waves. Finally, we conclude in Section 5. ## 2 Mathematical formulation We now begin with a description of the governing equations for the 2D ideal incompressible MHD on a rotating sphere. In the spherical coordinate system \((r,\theta,\phi)\), the 2D vorticity and 2D uncurled induction equations on the spherical surface \(r=R_{0}\) (e.g. Raphaldini and Raupp, 2020) are \[\frac{\mathrm{D}\zeta}{\mathrm{D}t}\,-\,\frac{2\varOmega_{0}\sin\theta}{R_{0}}u_{\theta} = \frac{(\mathbf{B}\mathbf{\cdot}\mathbf{\nabla}_{\mathrm{H}})(\mu_{\mathrm{m}}J)}{\rho_{0}\mu_{\mathrm{m}}}\,, \tag{2a}\] \[\frac{\mathrm{D}A}{\mathrm{D}t} = 0\,, \tag{2b}\] where \(\mathbf{\nabla}_{\mathrm{H}}\equiv(\hat{\mathbf{e}}_{\theta}/R_{0})(\partial/\partial\theta)+(\hat{\mathbf{e}}_{\phi}/R_{0}\sin\theta)(\partial/\partial\phi)\) is the horizontal nabla operator, and the material derivative is expressed as \((\mathrm{D}/\mathrm{D}t)\equiv(\partial/\partial t)+\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}_{\mathrm{H}}\). The velocity field \(\mathbf{u}=(0,u_{\theta},u_{\phi})\) (relative to the rotating frame of reference) and the magnetic field \(\mathbf{B}=(0,B_{\theta},B_{\phi})\) are assumed to have only the horizontal components within the thin layer, for the sake of simplicity. The radial components of the vorticity and electrical current in (2) are defined as \[\zeta \equiv \frac{1}{R_{0}\sin\theta}\left[\frac{\partial(u_{\phi}\sin\theta)}{\partial\theta}-\frac{\partial u_{\theta}}{\partial\phi}\right]\,=\,-\nabla_{\mathrm{H}}^{2}\psi\,, \tag{3a}\] \[J \equiv \frac{1}{\mu_{\mathrm{m}}}\frac{1}{R_{0}\sin\theta}\left[\frac{\partial(B_{\phi}\sin\theta)}{\partial\theta}-\frac{\partial B_{\theta}}{\partial\phi}\right]\,=\,-\mu_{\mathrm{m}}^{-1}\nabla_{\mathrm{H}}^{2}A\,, \tag{3b}\] in which we introduce the stream function \(\psi\) and the magnetic vector potential \(\mathbf{A}=(A,0,0)\). Owing to the solenoidal conditions of the fields \(\mathbf{u}\) and \(\mathbf{B}\), the scalars \(\psi\) and \(A\) are related to the two vectors as \[u_{\theta} = \frac{1}{R_{0}\sin\theta}\frac{\partial\psi}{\partial\phi}\,, \qquad u_{\phi}\,=\,-\frac{1}{R_{0}}\frac{\partial\psi}{\partial\theta}\,, \tag{4a}\] \[B_{\theta} = \frac{1}{R_{0}\sin\theta}\frac{\partial A}{\partial\phi}\,, \qquad B_{\phi}\,=\,-\frac{1}{R_{0}}\frac{\partial A}{\partial\theta}\,. \tag{4b}\] Using these expressions, the governing equations (2) become \[\frac{\partial(\nabla_{\rm H}^{2}\psi)}{\partial t}\,+\,{\cal J}(\nabla_{\rm H}^{2}\psi,\psi)\,+\,\frac{2\varOmega_{0}}{R_{0}^{2}}\frac{\partial\psi}{\partial\phi} = \frac{1}{\rho_{0}\mu_{\rm m}}{\cal J}(\nabla_{\rm H}^{2}A,A)\,, \tag{5a}\] \[\frac{\partial A}{\partial t}\,+\,{\cal J}(A,\psi) = 0\,, \tag{5b}\] where the operator \({\cal J}(f,g)\) for any two scalar functions \(f\) and \(g\) is defined as \[{\cal J}(f,g)\,\equiv\,\frac{1}{R_{0}^{2}\sin\theta}\left(\frac{\partial f}{\partial\theta}\frac{\partial g}{\partial\phi}-\frac{\partial f}{\partial\phi}\frac{\partial g}{\partial\theta}\right)\,. \tag{5c}\] In what follows, waves of small amplitude are considered in this system.
We introduce a small positive parameter \(\varepsilon\) (\(\ll 1\)) which represents the amplitude of waves to rewrite \(\psi\) and \(A\) as \[\psi\,=\,\varepsilon\psi_{1}\,+\,{\rm O}(\varepsilon^{2})\,,\qquad A\,=\,A_{0 }(\theta)\,+\,\varepsilon a_{1}\,+\,{\rm O}(\varepsilon^{2})\,. \tag{6}\] The basic state is assumed to be the rigid body rotation \(\psi_{0}\equiv 0\) with a latitude-dependent toroidal field \(B_{0\phi}(\theta)\). With the expression of \(B_{0\phi}\) given in the previous section, one obtains \(({\rm d}A_{0}/{\rm d}\theta)=-R_{0}B_{0}{\cal B}\sin\theta\). Then, the equations (5) become \[\left(\frac{\partial}{\partial t}\nabla_{\rm H}^{2}+\frac{2 \varOmega_{0}}{R_{0}^{2}}\frac{\partial}{\partial\phi}\right)\psi_{1} = \frac{1}{R_{0}\rho_{0}\mu_{\rm m}\sin\theta}\left[B_{0\phi} \nabla_{\rm H}^{2}-\frac{1}{R_{0}}\frac{{\rm d}(\mu_{\rm m}J_{0})}{{\rm d} \theta}\right]\frac{\partial a_{1}}{\partial\phi}\,+\,{\rm O}(\varepsilon)\,, \tag{7a}\] \[\frac{\partial a_{1}}{\partial t} = \frac{B_{0\phi}}{R_{0}\sin\theta}\frac{\partial\psi_{1}}{ \partial\phi}\,+\,{\rm O}(\varepsilon)\,, \tag{7b}\] where \(J_{0}(\theta)=\mu_{\rm m}^{-1}(1/R_{0}\sin\theta)[{\rm d}(B_{0\phi}\sin\theta )/{\rm d}\theta]\) is the background electrical current. The normal mode approach is valid for our purposes. For a given azimuthal wavenumber \(m\) and an angular frequency \(\omega\) determined later by a dispersion relation, which becomes (1) for the Malkus field \({\cal B}=1\) and is going to be numerically sought in Section 3 when \({\cal B}=\cos\theta\), we postulate that \(\psi_{1}(\theta,\phi,t)\equiv{\rm Re}[\tilde{\psi}(\mu;m,\omega){\rm e}^{{ \rm i}\varphi(\phi,t)}]=\tilde{\psi}{\rm e}^{{\rm i}\varphi}/2+{\rm c.c.}\) and \(a_{1}\equiv{\rm Re}[\tilde{a}{\rm e}^{{\rm i}\varphi}]\), where \(\mu=\cos\theta\) and \(\varphi\equiv m\phi-\omega t=m\phi-\lambda\tau\) is the phase of waves with the nondimensional time \(\tau\equiv 2\varOmega_{0}t\). We use \({\rm c.c.}\) for the complex conjugate of the preceding terms. Upon substituting this ansatz into (7), we immediately get \[(-\lambda\nabla_{\rm h}^{2}+m)\tilde{\psi} = m|\alpha|\left\{{\cal B}\nabla_{\rm h}^{2}-\frac{{\rm d}^{2}[{ \cal B}(1-\mu^{2})]}{{\rm d}\mu^{2}}\right\}\left[\frac{{\rm sgn}(\alpha) \tilde{a}}{\sqrt{\rho_{0}\mu_{\rm m}}}\right]\,, \tag{8a}\] \[-\lambda\left[\frac{{\rm sgn}(\alpha)\tilde{a}}{\sqrt{\rho_{0}\mu _{\rm m}}}\right] = m|\alpha|{\cal B}\tilde{\psi}\,, \tag{8b}\] in which \(\nabla_{\rm h}^{2}\equiv R_{0}^{2}\nabla_{\rm H}^{2}\) is the dimensionless horizontal Laplacian. From now on, our interest is limited to the case where \(m\neq 0\). The above equations are easily transformed into a single ordinary differential equation in the form \[\frac{{\rm d}}{{\rm d}\mu}\left[\varLambda(1-\mu^{2})\frac{{\rm d}\tilde{ \psi}}{{\rm d}\mu}\right]\,-\,\left\{\frac{m^{2}\varLambda}{1-\mu^{2}}+m \left[\lambda+2m\alpha^{2}{\cal B}\frac{{\rm d}({\cal B}\mu)}{{\rm d}\mu} \right]\right\}\tilde{\psi}\,=\,0\,, \tag{9}\] where the factor \(\varLambda(\mu)\equiv\lambda^{2}-m^{2}\alpha^{2}{\cal B}^{2}\) is crucial to our problem. If there exist real values \(\mu\) which satisfy \(\varLambda(\mu)=0\) within the interval \(-1<\mu<1\), which are hereinafter denoted by \(\mu_{\rm c}\), those points are interior poles depending on \(\lambda\). The poles yield continuous spectra, as we stated in the introduction section. On the other hand, the Malkus field \({\cal B}=1\) obviously produces no singular points except for the endpoints \(\mu=\pm 1\). 
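For concreteness, the interior poles are straightforward to locate numerically. The short sketch below (our illustration; the frequency value anticipates the mode shown later in Figure 6) finds the zeros of \(\varLambda\) for \(\mathcal{B}=\mu\):

```python
import numpy as np
from scipy.optimize import brentq

m, alpha, lam = 1, 0.1, 0.05006                        # the mode of Figure 6
Lambda = lambda mu: lam**2 - m**2 * alpha**2 * mu**2   # Lambda(mu) for B = mu
mu_c = brentq(Lambda, 0.0, 1.0)        # northern-hemisphere root; the southern is -mu_c
print(np.degrees(np.arccos([mu_c, -mu_c])))            # ~ [59.96, 120.04] degrees
```

The printed critical colatitudes agree with the values \(\theta_{\rm c}\approx 59.96^{\circ}\) and \(120.04^{\circ}\) quoted in the caption of Figure 6.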
Dividing (9) by the factor \(\varLambda(\neq 0)\), one can obtain its dispersion relation (1) without any hurdles because (9) is then reduced to the associated Legendre differential equation. Note that if \(\lambda\) and \(\tilde{\psi}\) are an eigenvalue and its corresponding eigenfunction of (9), respectively, the same holds true for their complex conjugates. The existence of continuous spectra can be justified when the function \(\varLambda\) has zeros in \(-1<\mu<1\). Since (9) is a second-order differential equation, it should have two linearly independent solutions. Let \(\tilde{\psi}_{\rm I}\) be the nonsingular one of the solutions, and let the other be expressed in the form \(\tilde{\Psi}_{\rm I\!I}=\tilde{\psi}_{\rm I}\int^{\mu}f(\mu_{*}){\rm d}\mu_{*}\) with a function \(f(\mu)\). Substituting the form of \(\tilde{\Psi}_{\rm I\!I}\) into (9), we find \[\tilde{\psi}\,=\,C_{\rm I}\tilde{\psi}_{\rm I}\,+\,C_{\rm I\!I}\tilde{\Psi}_{\rm I\!I}\,,\qquad\tilde{\Psi}_{\rm I\!I}(\mu)\,=\,\tilde{\psi}_{\rm I}(\mu)\int^{\mu}\frac{{\rm d}\mu_{*}}{(1-\mu_{*}^{2})\varLambda(\mu_{*})\tilde{\psi}_{\rm I}^{2}(\mu_{*})}\,, \tag{10}\] with both \(C_{\rm I}\) and \(C_{\rm I\!I}\) being constants. Let us examine what the integral becomes if the interval of integration in (10) passes over zeros of \(\varLambda\) (we assume that \(\tilde{\psi}_{\rm I}^{2}\neq 0\) within the interval). With an arbitrary starting point \(\mu_{0}\) of the integration interval (in the following equation, we suppose that \(\mu_{0}<\min(\mu,\mu_{\rm c})\) and that at most one singular point exists between \(\mu_{0}\) and \(\mu\)), we can express the second solution of the linearly independent set as the improper integral \[\tilde{\Psi}_{\rm I\!I}(\mu)\,=\,\begin{cases}\tilde{\psi}_{\rm I}(\mu)\int_{\mu_{0}}^{\mu}\frac{{\rm d}\mu_{\star}}{(1-\mu_{\star}^{2})\varLambda(\mu_{\star})\tilde{\psi}_{\rm I}^{2}(\mu_{\star})}&(\mu<\mu_{\rm c})\\ \tilde{\psi}_{\rm I}(\mu)\left[\lim_{\varDelta_{1}\to+0}\int_{\mu_{0}}^{\mu_{\rm c}-\varDelta_{1}}\frac{{\rm d}\mu_{\star}}{(1-\mu_{\star}^{2})\varLambda(\mu_{\star})\tilde{\psi}_{\rm I}^{2}(\mu_{\star})}\right.&\\ \left.\hskip 14.226378pt+\,\,\lim_{\varDelta_{2}\to+0}\int_{\mu_{\rm c}+\varDelta_{2}}^{\mu}\frac{{\rm d}\mu_{\star}}{(1-\mu_{\star}^{2})\varLambda(\mu_{\star})\tilde{\psi}_{\rm I}^{2}(\mu_{\star})}\right]&(\mu_{\rm c}<\mu)\end{cases}\,. \tag{11}\] We may then deduce from (11) that (10) is also written as \[\tilde{\psi}\,=\,C_{\rm I}\tilde{\psi}_{\rm I}\,+\,C_{\rm I\!I}\tilde{\psi}_{\rm I\!I}\,+\,\tilde{\psi}_{\rm I}\sum_{i}C_{\rm I\!I\!I,i}{\rm H}(\mu-\mu_{\rm c,i})\,,\] (12a) where the integer \(i\) is the index of the singular latitudes and \({\rm H}(\mu)\) is the step function. In addition, we have introduced the new second solution \[\tilde{\psi}_{\rm I\!I}(\mu)\,\equiv\,\tilde{\psi}_{\rm I}(\mu)\,{\cal P}\!\int^{\mu}\frac{{\rm d}\mu_{\star}}{(1-\mu_{\star}^{2})\varLambda(\mu_{\star})\tilde{\psi}_{\rm I}^{2}(\mu_{\star})}\,,\] (12b) in which \({\cal P}\) denotes the Cauchy principal value.
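The principal-value integral in (12b) can be evaluated with standard quadrature tools. The sketch below is an added illustration only: for simplicity it freezes \(\tilde{\psi}_{\rm I}\approx 1\) near the pole (consistent with the leading order of the regular Frobenius solution introduced later) and uses SciPy's weight='cauchy' option, which computes \({\cal P}\!\int g(\mu)/(\mu-\mu_{\rm c})\,{\rm d}\mu\):

```python
import numpy as np
from scipy.integrate import quad

m, alpha, lam = 1, 0.1, 0.05006
mu_c = lam / (m * abs(alpha))          # simple zero of Lambda for B = mu

def g(mu):
    # integrand of (12b) with psi_I ~ 1 and the simple pole (mu - mu_c) factored out:
    # 1/[(1 - mu^2) Lambda] = g(mu)/(mu - mu_c),
    # where g = -1/[m^2 alpha^2 (1 - mu^2)(mu + mu_c)]
    return -1.0 / (m**2 * alpha**2 * (1.0 - mu**2) * (mu + mu_c))

# weight='cauchy' makes quad return the Cauchy principal value across mu = wvar
val, err = quad(g, 0.0, 0.9, weight='cauchy', wvar=mu_c)
print("principal value over [0, 0.9]:", val)
```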
In the vicinity of \(\mu=\mu_{\rm c}\), we know that \({\cal B}^{2}={\cal B}_{\rm c}^{2}+({\cal B}_{\rm c}^{2})^{\prime}(\mu-\mu_{\rm c})+({\cal B}_{\rm c}^{2})^{\prime\prime}(\mu-\mu_{\rm c})^{2}/2+({\cal B}_{\rm c}^{2})^{\prime\prime\prime}(\mu-\mu_{\rm c})^{3}/6+{\rm O}(|\mu-\mu_{\rm c}|^{4})\), where \({\cal B}_{\rm c}^{2}\equiv{\cal B}^{2}(\mu_{\rm c})=\lambda^{2}/m^{2}\alpha^{2}\), \(({\cal B}_{\rm c}^{2})^{\prime}\equiv({\rm d}{\cal B}^{2}/{\rm d}\mu)|_{\mu=\mu_{\rm c}}\) and the like. In this paper, we restrict our attention to the case when \(({\cal B}_{\rm c}^{2})^{\prime}\) does not vanish. Using this series expansion, one obtains the magnitude \(C_{\rm I\!I\!I,i}\) of the discontinuities at the latitudes in the form \[C_{\rm I\!I\!I,i}\,\equiv\,-\frac{C_{\rm I\!I}I_{i}}{(1-\mu_{\rm c,i}^{2})m^{2}\alpha^{2}({\cal B}_{\rm c,i}^{2})^{\prime}\tilde{\psi}_{\rm I}^{2}(\mu_{\rm c,i})}\] (12c) with the integral \(I_{i}\) given by \[I_{i}\,\equiv\,\lim_{\varDelta_{1,i}\to+0}\int_{\mu_{\rm c,i}-\varDelta}^{\mu_{\rm c,i}-\varDelta_{1,i}}\frac{{\rm d}\mu_{\star}}{\mu_{\star}-\mu_{\rm c,i}}\,+\,\lim_{\varDelta_{2,i}\to+0}\int_{\mu_{\rm c,i}+\varDelta_{2,i}}^{\mu_{\rm c,i}+\varDelta}\frac{{\rm d}\mu_{\star}}{\mu_{\star}-\mu_{\rm c,i}}\,=\,\lim_{\varDelta_{1,i},\varDelta_{2,i}\to+0}\ln\frac{\varDelta_{1,i}}{\varDelta_{2,i}}\,, \tag{12d}\] where \(\varDelta\) is a fixed small half-width whose contribution cancels between the two integrals. This integral can be an arbitrary number, depending on how we take the limit \(\varDelta_{1,i},\varDelta_{2,i}\to+0\). Indeed, Van Kampen (1955) suggested that, in the problem of plasma oscillations, his counterpart of \(I\) may be considered as an arbitrary parameter, which can be determined by a normalization condition of his counterpart of \(\tilde{\psi}\), or the distribution function of plasma. As seen in (12a), the existence of these more than two linearly independent solutions despite (9) being a second-order differential equation can lead to an excess of arbitrary coefficients which should be adjusted to satisfy boundary conditions. This excessive freedom results in continuous spectra. Another explanation for the appearance of continuous spectra is derived from the condition that should be satisfied by the solution of (9) in the vicinity of the singular point. On integrating (9) with respect to \(\mu\) over the narrow range sandwiched between \(\mu_{\rm c}-\varDelta_{1}\) and \(\mu_{\rm c}+\varDelta_{2}\) with \(\varDelta_{1}\), \(\varDelta_{2}\to+0\), one gets \[\lim_{\varDelta_{1}\to+0}\varDelta_{1}\left.\frac{{\rm d}\tilde{\psi}}{{\rm d}\mu}\right|_{\mu=\mu_{\rm c}-\varDelta_{1}}\,+\,\lim_{\varDelta_{2}\to+0}\varDelta_{2}\left.\frac{{\rm d}\tilde{\psi}}{{\rm d}\mu}\right|_{\mu=\mu_{\rm c}+\varDelta_{2}}\,\rightarrow\,0\,,\] (13a) provided that the condition \[\lim_{\varDelta_{1},\varDelta_{2}\to+0}\int_{\mu_{\rm c}-\varDelta_{1}}^{\mu_{\rm c}+\varDelta_{2}}|\tilde{\psi}|{\rm d}\mu\,\rightarrow\,0\] (13b) is fulfilled. The fact that \(\tilde{\psi}_{\rm I}\) is surely a solution for (9) and that \(\tilde{\psi}_{\rm I}{\rm H}(\mu-\mu_{\rm c})\) always fulfills (13a) shows that the third term in (12a) is a weak solution for (9) and \(C_{\rm I\!I\!I,i}\) should then be an undetermined parameter. An alternative survey of their structures at the critical latitudes is conducted with the Frobenius method (e.g. Braun, 1975).
Suppose that a power series of the form \(\tilde{\psi}^{(\rm c)}\equiv\sum_{k=0}^{\infty}a_{k}(\mu-\mu_{\rm c})^{k+\varrho}\) (\(a_{0}\neq 0\)) is a solution for (9) around a critical latitude \(\mu_{\rm c}\), in which \(\varrho\) is a root of the indicial equation for (9). On substituting this assumption for \(\tilde{\psi}\) in (9), one obtains the equation \(\varrho^{2}=0\) from its leading order term. As a result, the two linearly independent solutions near the latitude are given on the basis of the way to deal with the repeated root \(\varrho=0\) by \[\tilde{\psi}^{(\rm c)}_{\rm I} \equiv 1\,+\,\sum_{k=1}^{\infty}a_{k}(\mu-\mu_{\rm c})^{k}\,, \tag{14a}\] \[\tilde{\psi}^{(\rm c)}_{\rm I\!I} \equiv \tilde{\psi}^{(\rm c)}_{\rm I}\ln|\mu-\mu_{\rm c}|\,+\,\sum_{k=1}^{\infty}b_{k}(\mu-\mu_{\rm c})^{k}\,. \tag{14b}\] Some of the expansion coefficients are found with slightly tedious but standard manipulations as \(a_{1}=D_{1}\), \(a_{2}=(D_{1}^{2}+2D_{1}D_{2}+D_{3})/4\), \(b_{1}=-2D_{1}+D_{2}\), \(b_{2}=(-3D_{1}^{2}-2D_{1}D_{2}+2D_{2}^{2}-D_{3}+2D_{4})/4\), where \[D_{1}\,\equiv\,-\frac{\lambda/m\alpha^{2}+2{\cal B}_{\rm c}^{2}+({\cal B}_{\rm c}^{2})^{\prime}\mu_{\rm c}}{({\cal B}_{\rm c}^{2})^{\prime}(1-\mu_{\rm c}^{2})}\,, D_{2}\,\equiv\,\frac{2({\cal B}_{\rm c}^{2})^{\prime}\mu_{\rm c}-({\cal B}_{\rm c}^{2})^{\prime\prime}(1-\mu_{\rm c}^{2})/2}{({\cal B}_{\rm c}^{2})^{\prime}(1-\mu_{\rm c}^{2})}\,,\] \[D_{3}\,\equiv\,\frac{m^{2}({\cal B}_{\rm c}^{2})^{\prime}/(1-\mu_{\rm c}^{2})-3({\cal B}_{\rm c}^{2})^{\prime}-({\cal B}_{\rm c}^{2})^{\prime\prime}\mu_{\rm c}}{({\cal B}_{\rm c}^{2})^{\prime}(1-\mu_{\rm c}^{2})}\,, D_{4}\,\equiv\,\frac{({\cal B}_{\rm c}^{2})^{\prime}+({\cal B}_{\rm c}^{2})^{\prime\prime}\mu_{\rm c}-({\cal B}_{\rm c}^{2})^{\prime\prime\prime}(1-\mu_{\rm c}^{2})/6}{({\cal B}_{\rm c}^{2})^{\prime}(1-\mu_{\rm c}^{2})}\,. \tag{15}\] If one substitutes the first Frobenius solution \(\tilde{\psi}^{(\rm c)}_{\rm I}\) for \(\tilde{\psi}_{\rm I}\) into the integral expression (12b) to calculate the second solution up to its second-order terms, the resulting expression certainly agrees with the second Frobenius one \(\tilde{\psi}^{(\rm c)}_{\rm I\!I}\) up to the same order (strictly speaking, \(\tilde{\psi}_{\rm I\!I}\simeq-[\tilde{\psi}^{(\rm c)}_{\rm I\!I}+{\rm const.}\times\tilde{\psi}^{(\rm c)}_{\rm I}]/[(1-\mu_{\rm c}^{2})m^{2}\alpha^{2}({\cal B}_{\rm c}^{2})^{\prime}]\) around the latitude). Accordingly, we can associate \(\tilde{\psi}_{\rm I}\) with \(\tilde{\psi}^{(\rm c)}_{\rm I}\), and \(\tilde{\psi}_{\rm I\!I}\) with \(\tilde{\psi}^{(\rm c)}_{\rm I\!I}\), in the vicinity of \(\mu=\mu_{\rm c}\). The series solutions (14) also justify (13b), since the term which becomes the largest contributor to the integral is \(\int_{\mu_{\rm c}-\varDelta_{1}}^{\mu_{\rm c}+\varDelta_{2}}\ln|\mu-\mu_{\rm c}|{\rm d}\mu\). The comparison between linear combinations of these linearly independent solutions and our numerical solutions will be shown in Section 3.2. Necessary conditions for instability have long attracted the interest of hydrodynamicists.
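For \(\mathcal{B}=\mu\) one has \(({\cal B}_{\rm c}^{2})^{\prime}=2\mu_{\rm c}\), \(({\cal B}_{\rm c}^{2})^{\prime\prime}=2\) and \(({\cal B}_{\rm c}^{2})^{\prime\prime\prime}=0\), so the low-order Frobenius coefficients can be evaluated directly. The following is a minimal sketch (our illustration; it reads the last numerator term of \(D_{4}\) as involving the third derivative, consistent with the third-order Taylor term, and uses the mode of Figure 6 as an example):

```python
import numpy as np

m, alpha, lam = 1, 0.1, 0.05006
mu_c = lam / (m * abs(alpha))
s = 1.0 - mu_c**2
Bp, Bpp, Bppp = 2.0 * mu_c, 2.0, 0.0      # (B_c^2)', (B_c^2)'', (B_c^2)''' for B = mu
D1 = -(lam / (m * alpha**2) + 2.0 * mu_c**2 + Bp * mu_c) / (Bp * s)
D2 = (2.0 * Bp * mu_c - Bpp * s / 2.0) / (Bp * s)
D3 = (m**2 * Bp / s - 3.0 * Bp - Bpp * mu_c) / (Bp * s)
D4 = (Bp + Bpp * mu_c - Bppp * s / 6.0) / (Bp * s)
a1, a2 = D1, (D1**2 + 2*D1*D2 + D3) / 4.0
b1, b2 = -2*D1 + D2, (-3*D1**2 - 2*D1*D2 + 2*D2**2 - D3 + 2*D4) / 4.0

def psi_I(mu):   # regular local solution (14a), truncated at second order
    return 1.0 + a1*(mu - mu_c) + a2*(mu - mu_c)**2

def psi_II(mu):  # logarithmically singular local solution (14b), truncated likewise
    return psi_I(mu) * np.log(np.abs(mu - mu_c)) + b1*(mu - mu_c) + b2*(mu - mu_c)**2

print(psi_I(mu_c + 0.01), psi_II(mu_c + 0.01))
```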
As will be proved in Appendix A, one of the so-called semicircle theorems giving an eigenvalue bound in the current problem is written as \[\left[\frac{{\rm Re}(\lambda)}{m}+\alpha^{2}\max\left(2{\cal B}\frac{{\rm d}({\cal B}\mu)}{{\rm d}\mu}\right)\right]^{2}+\left(\frac{{\rm Im}(\lambda)}{m}\right)^{2}\,\leq\,\alpha^{4}\left[\max\left(2{\cal B}\frac{{\rm d}({\cal B}\mu)}{{\rm d}\mu}\right)\right]^{2}\,-\,\alpha^{2}\min({\cal B}^{2})\,,\] (16a) which holds only if \({\rm Im}(\lambda)\neq 0\). Additionally, we also find another bound for the case when \({\rm Im}(\lambda)\neq 0\) in the form \[-\frac{1}{2m(m+1)}\leq\frac{{\rm Re}(\lambda)}{m}\leq 0\,. \tag{16b}\] These relations mean that if unstable modes exist, they must propagate in the retrograde direction, and the value \(\max\{2{\cal B}[{\rm d}({\cal B}\mu)/{\rm d}\mu]\}\) concerning the gradient of an imposed magnetic field must be positive. Hughes and Tobias (2001), Mak _et al._ (2016), and Wang _et al._ (2022a,b) derived similar theorems for the MHD or MHD shallow water systems with a background shear flow. The theorem (16a) indicates that magnetic shear ascribed to the spherical geometry may have a destabilising effect. We were not, however, able to find unstable modes that are likely to be physically meaningful when \({\cal B}=\mu\), as described in Section 3. ### Numerical method We now describe a numerical method to solve our eigenvalue problem. In this article, we shall focus on the simplest equatorially-antisymmetric non-Malkus field \({\cal B}=\mu\). This choice prompts us to utilise the associated Legendre polynomial expansion, since (9) exactly becomes the associated Legendre differential equation if \({\cal B}=1\) and recurrence formulae of these polynomials are useful in the present situation. For a fixed \(m\), the polynomials \({\rm P}^{m}_{n}\) (\(n\geq m\)) constitute a basis for function expansion in the Galerkin discretization on a spherical surface. Thus, we have \[\tilde{\psi}(\mu)\,\equiv\,\sum_{n=m}^{N_{\rm t}}\tilde{\psi}^{[n]}{\cal N}^{m}_{n}{\rm P}^{m}_{n}(\mu)\,,\qquad\frac{{\rm sgn}(\alpha)\tilde{a}}{\sqrt{\rho_{0}\mu_{\rm m}}}\,\equiv\,\sum_{n=m}^{N_{\rm t}}\tilde{a}^{[n]}{\cal N}^{m}_{n}{\rm P}^{m}_{n}\,, \tag{17a}\] where \(N_{\rm t}\) denotes the truncation degree and the normalising factor is written as \[\mathcal{N}_{n}^{m}\,\equiv\,(-1)^{m}\sqrt{\frac{2n+1}{2}\frac{(n-m)!}{(n+m)!}}\,. \tag{17b}\] Assuming (17a) to be an approximate solution for (8) and using useful relations in the forms \[\nabla_{\rm h}^{2}{\rm P}_{n}^{m} \,=\,-n(n+1){\rm P}_{n}^{m}\,, \tag{18a}\] \[\mu{\rm P}_{n}^{m} \,=\,\frac{n+m}{2n+1}{\rm P}_{n-1}^{m}\,+\,\frac{n-m+1}{2n+1}{\rm P}_{n+1}^{m}\,, \tag{18b}\] and the orthogonality relation \(\int_{-1}^{1}{\rm P}_{n}^{m}{\rm P}_{\nu}^{m}\mathrm{d}\mu=\delta_{n,\nu}/\mathcal{N}_{n}^{m}\mathcal{N}_{\nu}^{m}\) with the Kronecker delta \(\delta_{n,\nu}\), we get the following simultaneous equations for all integers \(n\) which satisfy \(m\leq n\leq N_{\rm t}\): \[\left[\lambda+\frac{m}{n(n+1)}\right]\tilde{\psi}^{[n]} \,=\,-m|\alpha|\left[\frac{(n-3)(n+2)}{n(n+1)}k_{n}^{m}\tilde{a}^{[n-1]}\,+\,\frac{(n-1)(n+4)}{n(n+1)}k_{n+1}^{m}\tilde{a}^{[n+1]}\right]\,, \tag{19a}\] \[\lambda\tilde{a}^{[n]} \,=\,-m|\alpha|\left(k_{n}^{m}\tilde{\psi}^{[n-1]}\,+\,k_{n+1}^{m}\tilde{\psi}^{[n+1]}\right)\,, \tag{19b}\] in which the number sequence \[k_{n}^{m}\,=\,\sqrt{\frac{(n-m)(n+m)}{(2n-1)(2n+1)}}\qquad(n=m,m+1,\ldots,N_{\rm t})\,, \tag{19c}\] is introduced for convenience.
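A compact re-implementation sketch of this Galerkin eigenproblem is given below (our illustration under the stated equations, not the authors' code; \(N_{\rm t}\) is kept small here for speed, whereas the paper uses \(N_{\rm t}=2000\)). It assembles the matrix implied by (19) for \(\mathcal{B}=\mu\) and diagonalises it:

```python
import numpy as np

def build_matrix(m=1, alpha=0.1, Nt=200):
    """Assemble the real 2(Nt-m+1) x 2(Nt-m+1) matrix of the system (19) for B = mu."""
    ns = np.arange(m, Nt + 1)                    # Legendre degrees n = m, ..., Nt
    N = ns.size
    def k(n):                                    # the sequence k_n^m of (19c)
        return np.sqrt((n - m) * (n + m) / ((2.0*n - 1.0) * (2.0*n + 1.0)))
    A = np.zeros((2 * N, 2 * N))                 # unknowns: [psi^[n] ..., a^[n] ...]
    for i, n in enumerate(ns):
        A[i, i] = -m / (n * (n + 1.0))                               # fast MR term, (19a)
        if i > 0:
            A[i, N + i - 1] = -m*abs(alpha)*(n - 3.0)*(n + 2.0)/(n*(n + 1.0))*k(n)
            A[N + i, i - 1] = -m*abs(alpha)*k(n)                     # psi^[n-1] in (19b)
        if i < N - 1:
            A[i, N + i + 1] = -m*abs(alpha)*(n - 1.0)*(n + 4.0)/(n*(n + 1.0))*k(n + 1)
            A[N + i, i + 1] = -m*abs(alpha)*k(n + 1)                 # psi^[n+1] in (19b)
    return A

m, alpha, Nt = 1, 0.1, 200
lam = np.linalg.eigvals(build_matrix(m, alpha, Nt))
frac = np.mean(np.abs(lam.real) <= m * abs(alpha))
print(f"{frac:.0%} of the eigenvalues lie in the band |Re(lambda)| <= m|alpha|")
```

Most eigenvalues crowd into the band \(|\lambda|\leq m|\alpha|\) (the Alfven continuum discussed in Section 3), with the fast MR branches remaining outside it.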
The system of linear equations (19) is equivalent to the eigenvalue problem for the corresponding \(2(N_{\rm t}-m+1)\times 2(N_{\rm t}-m+1)\) matrix, and arrays of the expansion coefficients \(\tilde{\psi}^{[n]}\) and \(\tilde{a}^{[n]}\) are its eigenvectors. We performed numerical calculations solving this eigenvalue problem with our Python code, which is based on the numpy.linalg.eig function of the NumPy library. In these calculations, the truncation number \(N_{\rm t}\) was set to \(2000\). As can be seen from (19), the subset of \(\tilde{\psi}^{[n]}\) with \(n\) being odd numbers pertains only to the subset of \(\tilde{a}^{[n]}\) with \(n\) being even, and the same is true of the relationship between \(\tilde{\psi}^{[n]}\) with \(n\) even and \(\tilde{a}^{[n]}\) with \(n\) odd. On the basis of this dichotomy, we refer to eigenmodes for which \(\tilde{\psi}^{[n]}\)'s are non-zero only when \(n-m\) is even (in other words, \(\tilde{a}^{[n]}\)'s do not vanish only when \(n-m\) is odd, and \(u_{\theta}\), \(b_{\phi}\) are equatorially-symmetric and \(u_{\phi}\), \(b_{\theta}\) are equatorially-antisymmetric) as the sinuous modes (cf. Marquez-Artavia _et al._, 2017). Conversely, eigenmodes for which \(\tilde{\psi}^{[n]}\)'s become non-zero only when \(n-m\) is odd (or, \(u_{\theta}\) is antisymmetric and \(u_{\phi}\) is symmetric about the equator) are hereinafter referred to as the varicose modes. The validity of eigenmodes obtained numerically is diagnosed in terms of their spectral convergence. This is done by evaluating \[\sum_{n=m}^{\lfloor N_{\rm t}/2\rfloor}\left|\tilde{\psi}^{[n]}\right|^{2}\,>\,10^{2}\sum_{n=\lfloor N_{\rm t}/2\rfloor+1}^{N_{\rm t}}\left|\tilde{\psi}^{[n]}\right|^{2}\quad\text{and}\quad\sum_{n=m}^{\lfloor N_{\rm t}/2\rfloor}\left|\tilde{a}^{[n]}\right|^{2}\,>\,10^{2}\sum_{n=\lfloor N_{\rm t}/2\rfloor+1}^{N_{\rm t}}\left|\tilde{a}^{[n]}\right|^{2}, \tag{20}\] where \(\lfloor x\rfloor\) is the integer part of \(x\). Only eigenmodes that pass this screening will be studied and illustrated in the results section. Normalising the amplitudes of eigenmodes is valuable when one wants to understand their characteristics by comparing physical quantities such as energies. We employed the normalization in which the mean total energy \({\rm MKE}+{\rm MME}\) of perturbations is set to \((\rho_{0}/8R_{0}^{2}){\rm e}^{2{\rm Im}(\omega)t}\), where the mean kinetic and mean magnetic energies of an eigenmode are expressed as \[{\rm MKE} \,\equiv\,\frac{1}{4\pi}\int_{0}^{\pi}\mathrm{d}\theta\int_{0}^{2\pi}\sin\theta\mathrm{d}\phi\frac{\rho_{0}|\mathbf{u}_{1}|^{2}}{2}\,=\,\frac{\rho_{0}}{8R_{0}^{2}}{\rm e}^{2{\rm Im}(\omega)t}\sum_{n=m}^{N_{\rm t}}n(n+1)\left|\tilde{\psi}^{[n]}\right|^{2}\,, \tag{21a}\] \[{\rm MME} \,\equiv\,\frac{1}{4\pi}\int_{0}^{\pi}\mathrm{d}\theta\int_{0}^{2\pi}\sin\theta\mathrm{d}\phi\frac{|\mathbf{b}_{1}|^{2}}{2\mu_{\rm m}}\,=\,\frac{\rho_{0}}{8R_{0}^{2}}{\rm e}^{2{\rm Im}(\omega)t}\sum_{n=m}^{N_{\rm t}}n(n+1)\left|\tilde{a}^{[n]}\right|^{2}\,, \tag{21b}\] respectively, with \(\mathbf{u}=\varepsilon\mathbf{u}_{1}+\mathrm{O}(\varepsilon^{2})\) and \(\mathbf{B}=\mathbf{B}_{0}+\varepsilon\mathbf{b}_{1}+\mathrm{O}(\varepsilon^{2})\). The energy partitioning between MKE and MME allows us to examine the force balance of eigenmodes and to classify the types of waves.
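Continuing the sketch above (and reusing its build_matrix, so this fragment is not self-contained), the screening (20) and the partition (21) can be applied to each eigenvector as follows; the common factor \((\rho_{0}/8R_{0}^{2}){\rm e}^{2{\rm Im}(\omega)t}\) cancels in the MKE fraction:

```python
import numpy as np

m, alpha, Nt = 1, 0.1, 200
A = build_matrix(m, alpha, Nt)     # from the previous sketch
lam, V = np.linalg.eig(A)
N = A.shape[0] // 2
ns = np.arange(m, Nt + 1)
half = N // 2                      # coefficients up to degree ~ Nt/2 versus the tail

for j in range(2 * N):
    psi, a = V[:N, j], V[N:, j]
    # screening (20): the high-degree tail must hold less than 1% of each squared norm
    ok = (np.sum(np.abs(psi[:half])**2) > 1e2 * np.sum(np.abs(psi[half:])**2) and
          np.sum(np.abs(a[:half])**2) > 1e2 * np.sum(np.abs(a[half:])**2))
    if ok:
        mke = np.sum(ns * (ns + 1) * np.abs(psi)**2)   # (21a), up to the common factor
        mme = np.sum(ns * (ns + 1) * np.abs(a)**2)     # (21b), up to the common factor
        print(f"lambda = {lam[j].real:+.4f}, MKE fraction = {mke / (mke + mme):.2f}")
```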
For instance, Figure 2 represents energy partitions for various eigenmodes under the Malkus field and shows that MKEs dominate the mean total energies for fast MR waves, MMEs are predominant over MKEs for slow MR waves, and Alfven waves show almost equipartition between the two for large \(|\alpha|\). This normalization of eigenvectors is applied to all the figures of profiles of eigenmodes displayed in the following sections. The associated Legendre polynomials \(\mathrm{P}_{n}^{m}\) employed to construct the eigenfunctions from their corresponding eigenvectors are given by the scipy.special.lpmv function of the SciPy library. ## 3 Numerical results The results section starts by presenting the dispersion relation for our current problem. Figure 3 shows the real parts of eigenfrequencies obtained numerically when \(m=1\) as functions of \(|\alpha|\) (we find that their imaginary parts vanish, that is \(\mathrm{Re}(\lambda)=\lambda\), for all the eigenmodes except for unreliable eigenmodes which will be discussed briefly later). The figure has four panels. The left and right columns show the retrograde and prograde modes, respectively, and the upper and lower rows show the sinuous and varicose modes, respectively. Each colour in the scatter plots represents the fraction of the mean kinetic energy within the mean total energy of an eigenmode corresponding to a point on the diagrams. Note that we used a nonlinear colour scale, made by utilising the arctangent function, so as to highlight whether an eigenmode is similar to the Alfven wave (MKE \(\approx\) MME; the colour of its marker is greenish) or not. To make it easier to find markers of modes dissimilar to the Alfven wave (we chose the range \(\text{MKE}<0.49\) or \(0.51<\text{MKE}\)), we furthermore set their size to be larger than that of the markers representing the Alfven wave. On the basis of the knowledge learned from Figure 2, even in the present situation, we would be justified in regarding an eigenmode whose marker in the figure is coloured reddish as a mode similar to the fast MR wave and a bluish one as a mode similar to the slow MR wave. In all the panels, we can observe bands crowded with eigenmodes lying just below the lines \(|\lambda|=m|\alpha|\). The kinetic and magnetic energies of most eigenmodes in the bands are partitioned almost equally. We conjecture that these bands should be identified with the continuous spectrum due to the Alfven resonance, which is hereinafter referred to as the Alfven continuous spectrum or Alfven continuum, although our numerical method yields only approximate discrete modes even when the system has a continuous spectrum.
Figure 2: Ratio of the mean kinetic energy MKE to the mean total energy MKE \(+\) MME against the absolute value \(|\alpha|\) of the Lehnert number when the zonal wavenumber \(m=1\) and the Malkus field is imposed. This plot is obtained from the relation \(\text{MKE}/(\text{MKE}+\text{MME})=\lambda^{2}/(\lambda^{2}+m^{2}\alpha^{2})\) in which the nondimensional angular frequency \(\lambda\) is calculated by (1). As \(|\alpha|\to 0\), either MKE or MME approaches zero, depending on whether the eigenmode is a slow or fast MR wave. The colours of the curves correspond to those in Figure 1.
Figure 3: Dispersion diagrams when the zonal wavenumber \(m=1\) and the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\mu\) pervades the system. The ordinates of each panel represent the real part \(\mathrm{Re}(\lambda)\) of the dimensionless angular frequency and the abscissas are the absolute value \(|\alpha|\) of the Lehnert number. Retrograde modes (\(\mathrm{Re}(\lambda)<0\)) are shown in the left column, and the right displays prograde modes (\(\mathrm{Re}(\lambda)>0\)). The upper and lower rows illustrate sinuous and varicose modes, respectively. The colours of the markers represent the ratio of the mean kinetic energy MKE to the mean total energy MKE + MME.
Even though the spectrum ought to cover the expected range \(m|\alpha|\min(\mu)=-m|\alpha|\leq\lambda\leq m|\alpha|=m|\alpha|\max(\mu)\) without any gaps as explained in the previous sections, our eigenmodes satisfying (20) do not have eigenvalues smaller than some levels in terms of absolute values. This is because, in addition to the vertical axes of the panels being logarithmic scale, the critical latitudes \(\mu_{\rm c}=\pm\lambda/m|\alpha|\) get close to the equator as \(\lambda\to 0\) for a given \(m\) and a given \(|\alpha|\), so that a fine structure appears in the eigenfunction around the equator. It is necessary to calculate with a higher truncation degree in order for (17a) to express such a fine structure. Besides, small values of \(|\alpha|\) (\(\ll 1\)) reduce typical meridional wavelengths of perturbations, as will be shown in Figure 9. It follows for the same reason as above that the bands are cut off below certain values of \(|\alpha|\). If numerical calculations with an infinite degree were performed, the obtained eigenvalues would cover the entire range below the line \(|\lambda|=m|\alpha|\) completely. We also conducted calculations with truncation degrees somewhat lower than that of Figure 3, for example, \(N_{\rm t}=1000\) (not shown). The outlines of these dispersion diagrams look almost unchanged aside from the difference in the widths of the bands; the higher the truncation degree, the wider the band. In the lower panels of Figure 3, some branches of the retrograde modes look like discrete eigenvalues that lie below the bands. For these eigenmodes, MMEs are dominant. In particular, the lowermost two eigenvalues, whose branches overlap (since they are a complex conjugate pair, as mentioned in Section 2), in the lower left panel have non-zero imaginary parts (not shown), and their values are consistent with (16a) and (16b). However, we consider the branches including the unstable modes as unreliable eigenvalues, or a part of the Alfven continuous modes, because the calculations (see also Figure 4) reveal that these eigenvalues depend strongly upon \(N_{\rm t}\) as opposed to normal discrete modes (cf. Carpenter and Guha, 2019). Discrete branches equivalent to slow MR waves found in Figure 1 disappear from Figure 3 as a result of the modification of the main field. Instead, the markers dyed blue, for which the fractions of MMEs of their eigenmodes are close to unity like slow MR waves, are distributed within the Alfven continuum. Therefore, we suggest that discrete modes of slow MR waves turn into continuous ones under a non-Malkus field. This situation is similar to that of equatorial Rossby waves in Taniguchi and Ishiwatari (2006), who studied eigenmodes in a linear shear flow on an equatorial \(\beta\)-plane. An alternative explanation may be their transformation into quasi-modes (Spencer and Rasband, 1997; Schecter _et al._, 2000; Balmforth _et al._, 2001; Turner and Gilbert, 2007; Wang _et al._, 2022a), or the Landau damping in a broad sense.
When embedded in a continuous spectrum due to the replacement of basic fields, a discrete real eigenvalue (on the principal Riemann sheet) may change into a complex pole on the next Riemann sheet (e.g. Crawford and Hislop, 1989). Although this novel pole does not produce a true eigenmode, it plays a crucial role in the time evolution of the system as a non-diffusive decaying oscillation whose frequency is close to the original real eigenvalue.
Figure 4: Dependence of the real parts \({\rm Re}(\lambda)\) of eigenvalues on the truncation degree \(N_{\rm t}\) when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=1\), and the background field is the simplest equatorially-antisymmetric non-Malkus one (\(\mathcal{B}=\mu\)). The horizontal axis is the eigenvalue number when eigenvalues are arranged in ascending order. The red open squares and the blue circles represent the case when \(N_{\rm t}=2000\) and \(1999\), respectively.
We suspect that the blue upward wedges with the approximate slope \(\lambda\propto|\alpha|^{2}\) at \(|\alpha|\approx 10^{-2}\) and \(\lambda\approx 10^{-4}\) in the right panels of Figure 3 are connected with quasi-modes originating from slow MR waves. To confirm this, we are preparing a paper that applies a treatment for finding quasi-modes described in the foregoing literature (e.g. Spencer and Rasband, 1997), which differs from our present approach. Outside the continuous spectrum, that is, above the lines \(|\lambda|=m|\alpha|\) in the diagrams, fast MR waves remain discrete eigenmodes even in the non-Malkus field. Their semi-analytical solutions can be obtained from eigenvalues of the spheroidal differential equation to which (9) is reduced when \(\mathcal{B}=\mu\) and \(m^{2}\alpha^{2}/\lambda^{2}\) is small (see Appendix B). Additionally, we find that the lowest branch of the \(m=1\) sinuous modes of fast MR waves can penetrate the band of the continuous spectrum without interaction (see the upper left panel of Figure 3). This is because the Lorentz force does not act on this eigenmode. This mode is explained in detail in Appendix C. Figure 5 depicts the dispersion diagrams for \(m=2\). They roughly epitomise the diagrams when \(m\geq 3\) (not shown). Their outlines do not change much from those of Figure 3 with the exception of the absence of the branch of the fast MR waves that penetrates the continuous spectrum. Again, more conspicuous blue upward wedges exist around \(10^{-2}\leq|\alpha|\leq 10^{-1}\) and \(10^{-4}\leq\lambda\leq 10^{-2}\) in the right panels of Figure 5.
Figure 5: Same as Figure 3, but for \(m=2\). The two white asterisks of the upper right panel correspond to the two eigenmodes depicted in Figure 7.
### Eigenfunctions of the Alfven continuous modes We shall investigate the eigenfunctions of the Alfven continuous modes in this and the next subsections. Figure 6 shows typical structures of the perturbations in the stream function \(\psi_{1}\) and the magnetic vector potential \(a_{1}\) for the continuous modes. The dependences of their amplitudes \(\tilde{\psi}\) and \(\tilde{a}\) on the colatitude are illustrated in Figure 6(a), and are used for making the contour maps of Figure 6(b). From these figures, we notice that spiky singular structures appear at the critical latitudes of the eigenmode. As shown in Section 2, the eigenfunction has a logarithmic singularity or a step function singularity or both.
In Section 3.2, we will provide the ratios of their contributions to the eigenfunctions around their corresponding critical latitudes. The eigenfunctions of eigenmodes which are extracted from each of the two noticeable blue upward wedges in the upper right panel of Figure 5, and which may have something to do with quasi-modes stemming from slow MR waves, are also plotted in Figure 7.
Figure 6: Eigenfunction of the sinuous mode with the nondimensional angular frequency \(\lambda\approx 0.05006\) when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=0.1\), and the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\mu\) is imposed. The critical colatitudes \(\theta_{\rm c}\approx 59.96^{\circ}\) and \(120.04^{\circ}\). (a) Amplitudes of the stream function \(\tilde{\psi}\) (red line) and the scaled magnetic vector potential \(\mathrm{sgn}(\alpha)\tilde{a}/\sqrt{\rho_{0}\mu_{\rm m}}\) (blue line) as functions of the colatitude. (b) Contour maps of the stream function \(\psi_{1}\) (left panel) and the scaled magnetic vector potential \(\mathrm{sgn}(\alpha)a_{1}/\sqrt{\rho_{0}\mu_{\rm m}}\) (right panel) in the Mollweide projection.
Figure 7: Same as Figure 6(a), but for \(m=2\), \(|\alpha|=0.013\).
To get the whole picture of the eigenfunctions of the continuous modes, we exhibit those of all the obtained continuous modes. Figures 8 and 9 are the heatmaps of the absolute values, or \(|\tilde{\psi}|\) and \(|\tilde{a}|\), of their amplitudes as functions of \(\lambda\) and the colatitude when \(|\alpha|=0.1\) and \(0.01\), respectively, and \(m=1\). The left columns in these figures correspond to \(|\tilde{\psi}|\), and \(|\tilde{a}|\) is depicted in their right ones. The sinuous and varicose modes are shown on the upper and lower rows, respectively. The colour gets darker as the absolute value increases. The maps indicate that the retrograde continuous modes are evanescent on the polar side of the critical latitudes, while the prograde ones are evanescent on the equatorial side. However, only the case in which \(|\alpha|\) is sufficiently smaller than unity displays this behaviour (see Figure 10). Furthermore, the comparison between eigenmodes having the same value of \(\lambda/m|\alpha|\) in Figures 8 and 9 shows that the smaller the value of \(|\alpha|\) is, the smaller the typical north-south wavelengths of their amplitudes become. In Section 4, we will therefore examine the behaviour of wave packets possessing large wavenumbers at a small \(|\alpha|\), which is similar to the Earth's core conditions, on the basis of the ray theory and attempt to get a better grasp of our numerical results. For moderate or large values of \(|\alpha|\), a less striking difference exists in the evanescent property between the retrograde and prograde modes which possess the same absolute value \(|\lambda|\) of their angular frequency. This statement is based on Figure 10, which shows the heatmaps for \(m=1\) and \(|\alpha|=1\), and other experiments with several values of \(m\) and \(|\alpha|\) (not shown). Therefore, the contrast in the property between the retrograde and prograde modes as demonstrated in Figures 8 and 9 would be attributed to the planetary \(\beta\) effect, that is, the effect of rotation. The ray-tracing analysis and the local dispersion relation which we are going to discuss in Section 4 offer a similar explanation.
In addition, the fast MR mode buried in the continuous modes at \(\lambda=-1/2\) is discernible in the upper panels of Figure 10 (see also Appendix C). In fact, we tried to utilise these plots as a means to discover buried discrete eigenmodes other than that, though no such eigenmodes have been found. The evanescent property can be judged by the function \[{\cal L}^{2}(\mu;m,\lambda)\,\equiv\,-m^{2}\,-\,\frac{m(1-\mu^{2})}{\varLambda}\left[\lambda+2m\alpha^{2}{\cal B}\frac{{\rm d}({\cal B}\mu)}{{\rm d}\mu}\right]\,-\,\frac{1-\mu^{2}}{2\sqrt{\varLambda}}\frac{{\rm d}}{{\rm d}\mu}\left(\frac{1-\mu^{2}}{\sqrt{\varLambda}}\frac{{\rm d}\varLambda}{{\rm d}\mu}\right)\,,\] (22a) which appears in an alternative form of the differential equation (9), \[\frac{{\rm d}}{{\rm d}\mu}\left[(1-\mu^{2})\frac{{\rm d}(\tilde{\psi}\sqrt{\varLambda})}{{\rm d}\mu}\right]\,+\,\frac{{\cal L}^{2}}{1-\mu^{2}}(\tilde{\psi}\sqrt{\varLambda})\,=\,0\,. \tag{22b}\] The Mercator projection transformation \(y=(1/2)\ln[(1+\mu)/(1-\mu)]\) yields a differential equation of the harmonic oscillation, \[\frac{{\rm d}^{2}(\tilde{\psi}\sqrt{\varLambda})}{{\rm d}y^{2}}\,+\,{\cal L}^{2}(\tilde{\psi}\sqrt{\varLambda})\,=\,0\,,\] (23a) which shows that the sign of \({\cal L}^{2}\) determines the evanescent property of \(\tilde{\psi}\sqrt{\varLambda}\) at the latitude. One can also rewrite (22b) into the form \[\frac{{\rm d}^{2}[\tilde{\psi}\sqrt{(1-\mu^{2})\varLambda}]}{{\rm d}\mu^{2}}\,+\,\frac{{\cal L}^{2}+1}{(1-\mu^{2})^{2}}[\tilde{\psi}\sqrt{(1-\mu^{2})\varLambda}]\,=\,0\,,\] (23b) which likewise shows that the value of \({\cal L}^{2}\) informs us of the evanescent property. The equivalent of this differential equation was derived by Gilman and Fox (1999a), though the form of \({\cal L}^{2}\) differs from theirs because their equation is based on a non-rotating frame; the partial derivatives with respect to \(t\) in our equations have to be replaced by \((\partial/\partial t)+\varOmega_{0}(\partial/\partial\phi)\) in the non-rotating frame. The left panel in Figure 11 shows contour plots of \({\cal L}^{2}\) as a function of \(\lambda\) and the colatitude when \(|\alpha|=0.01\) and \({\cal B}=\mu\). We observe that the area where \({\cal L}^{2}>0\) (or \({\cal L}^{2}>-1\)) in the panel certainly agrees with the wavy regions in Figure 9. Now, we shall consider the case when \(|\alpha|\) is small. Noting that \(\lambda={\rm O}(|\alpha|)\) and \(|\varLambda|={\rm O}(|\alpha|^{2})\) for continuous modes, we have \[{\cal L}^{2}\,=\,-\frac{m(1-\mu^{2})}{\varLambda}\lambda\,+\,{\rm O}(|\varLambda|^{-2}|\alpha|^{4}) \tag{24}\] unless the latitudinal position \(\mu\) is very close to a critical latitude (\(|\varLambda|\gg{\rm O}(|\alpha|^{3})\)), since \(({\rm d}\varLambda/{\rm d}\mu)={\rm O}(|\alpha|^{2})\) and \(({\rm d}^{2}\varLambda/{\rm d}\mu^{2})={\rm O}(|\alpha|^{2})\) as can be seen from the definition of the function \(\varLambda\). When \({\cal B}=\mu\) (and \(|\mu^{2}-\lambda^{2}/m^{2}\alpha^{2}|\gg{\rm O}(|\alpha|)\)), the oscillatory condition \({\cal L}^{2}>0\) on the equatorial side (\(\mu^{2}<\lambda^{2}/m^{2}\alpha^{2}\)) of the critical latitudes therefore requires that \(\lambda<0\) (the retrograde continuous modes), and \({\cal L}^{2}>0\) on the polar side (\(\mu^{2}>\lambda^{2}/m^{2}\alpha^{2}\)) for the prograde modes (\(\lambda>0\)). This explains the contrasting evanescent behaviour between the retrograde and prograde continuous modes for a small value of \(|\alpha|\).
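The leading-order criterion (24) is easy to check pointwise. In the sketch below (an added illustration with arbitrary sample latitudes), the sign of \({\cal L}^{2}\approx-m(1-\mu^{2})\lambda/\varLambda\) is wavy on the equatorial side only for \(\lambda<0\) and on the polar side only for \(\lambda>0\):

```python
import numpy as np

m, alpha = 1, 0.01
mu = np.array([-0.9, -0.6, -0.3, 0.0, 0.3, 0.6, 0.9])   # sample latitudes, mu = cos(theta)
for lam in (-0.005, +0.005):       # retrograde / prograde; critical latitudes at mu = +-0.5
    Lam = lam**2 - (m * alpha)**2 * mu**2
    L2 = -m * (1.0 - mu**2) * lam / Lam                  # leading-order L^2 from (24)
    print(f"lambda = {lam:+.3f}: wavy (L^2 > 0) at mu =", mu[L2 > 0])
```

The retrograde case returns the equatorial points \(\mu=-0.3,0,0.3\) and the prograde case the polar points \(\mu=\pm 0.6,\pm 0.9\), mirroring the heatmaps of Figure 9.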
### Comparison of the numerical results with the Frobenius series solutions

Here, we confirm that the eigenfunctions obtained numerically can be approximated by linear combinations of the linearly independent Frobenius series solutions (14).

Figure 8: Amplitudes of the eigenfunctions of all the obtained continuous modes when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=0.1\), and the basic field is the simplest equatorially-antisymmetric non-Malkus one (\(\mathcal{B}=\mu\)). The four panels divide into the stream function and the magnetic vector potential on the left and the right columns, respectively, and sinuous and varicose modes on the upper and lower rows, respectively. The vertical axes of each panel correspond to the colatitude and the horizontal axes represent the nondimensional angular frequency \(\lambda\). The darker the shades of the colour, the higher the absolute values \(|\tilde{\psi}|\) and \(|\tilde{a}|\) of the amplitudes of the stream function and the magnetic vector potential.

Figure 9: Same as Figure 8, but for \(|\alpha|=0.01\).

Figure 10: Same as Figure 8, but for \(m=1\) and \(|\alpha|=1\).

Let \(\tilde{\psi}_{\rm num}\) be one of the numerical stream functions, and we fit it into the form \(C_{\rm I}\tilde{\psi}_{\rm I}^{\rm(c)}+C_{\rm II}\tilde{\psi}_{\rm II}^{\rm(c)}\) by adjusting the coefficients \(C_{\rm I}\) and \(C_{\rm II}\) on each side of the critical latitudes. The fitting procedure is as follows. The colatitude (\(0\leq\theta\leq\pi\)) is divided into \(N_{\theta}\) points at even intervals. On each side of the nearest point of a singular latitude, the \(N_{\rm data}\) points closest to that point are chosen among the \(N_{\theta}\) points for the fitting. For these points \(\theta_{i}\) (\(1\leq i\leq N_{\rm data}\)) either on the equatorial or polar sides, we may write \[\frac{\tilde{\psi}_{\rm num}(\cos\theta_{i})}{\tilde{\psi}_{\rm I}^{\rm(c)}(\cos\theta_{i})}\,\approx\,C_{\rm II}\,\frac{\tilde{\psi}_{\rm II}^{\rm(c)}(\cos\theta_{i})}{\tilde{\psi}_{\rm I}^{\rm(c)}(\cos\theta_{i})}\,+\,C_{\rm I}\quad\mbox{and}\quad\frac{\tilde{\psi}_{\rm num}(\cos\theta_{i})}{\tilde{\psi}_{\rm II}^{\rm(c)}(\cos\theta_{i})}\,\approx\,C_{\rm I}\,\frac{\tilde{\psi}_{\rm I}^{\rm(c)}(\cos\theta_{i})}{\tilde{\psi}_{\rm II}^{\rm(c)}(\cos\theta_{i})}\,+\,C_{\rm II}\,. \tag{25}\] Now, \(\tilde{\psi}_{\rm I}^{\rm(c)}\) and \(\tilde{\psi}_{\rm II}^{\rm(c)}\) are approximated by the second-order Frobenius solutions with (15). We obtain a candidate value for each of \(C_{\rm I}\) and \(C_{\rm II}\) from each equation, through least squares fittings of (25) with the numpy.polyfit function of the NumPy library. Thereby we have four candidate values for each of \(C_{\rm I}\) and \(C_{\rm II}\) for one critical latitude, two from the equatorial side and two from the polar side. The upper panels of Figure 12 show a result when we perform this procedure with \(N_{\theta}=7201\) and \(N_{\rm data}=200\) individually for the equatorial (red circles and solid lines) and the polar (blue circles and dashed lines) sides. These fittings demonstrate that \(C_{\rm I}\) typically has different values (\(C_{\rm I}^{\rm(e)}\) and \(C_{\rm I}^{\rm(p)}\), say) between the two sides of a critical latitude, whilst \(C_{\rm II}\) has the same value on both sides.
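Since both relations in (25) are straight lines in the respective ratio variables, each fit is a single degree-1 numpy.polyfit call. A minimal sketch of the procedure, with synthetic stand-ins for the three functions (in practice \(\tilde{\psi}_{\rm num}\) comes from the eigensolver and \(\tilde{\psi}_{\rm I,II}^{\rm(c)}\) from the Frobenius expansions), could read:

```python
import numpy as np

def fit_frobenius_coeffs(psi_num, psi_I, psi_II):
    """Estimate (C_I, C_II) from the two degree-1 fits of (25).

    psi_num, psi_I, psi_II: samples of the numerical eigenfunction and of
    the two Frobenius solutions at N_data points on ONE side of the
    critical latitude. Returns the two candidate pairs, one per fit.
    """
    # psi_num/psi_I = C_II (psi_II/psi_I) + C_I  ->  slope C_II, intercept C_I
    c2_a, c1_a = np.polyfit(psi_II / psi_I, psi_num / psi_I, 1)
    # psi_num/psi_II = C_I (psi_I/psi_II) + C_II ->  slope C_I, intercept C_II
    c1_b, c2_b = np.polyfit(psi_I / psi_II, psi_num / psi_II, 1)
    return (c1_a, c2_a), (c1_b, c2_b)

# synthetic check: build psi_num from known coefficients
x = np.linspace(0.01, 0.2, 200)          # distance from the critical latitude
psi_I, psi_II = 1.0 + x, x * np.log(x)   # toy regular / logarithmic behaviours
psi_num = 0.7 * psi_I - 1.3 * psi_II
print(fit_frobenius_coeffs(psi_num, psi_I, psi_II))  # both pairs ~ (0.7, -1.3)
```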
Since \(C_{\rm II}\) in (12a) can also be written as \({\rm sgn}(\mu_{\rm c})[C_{\rm I}^{\rm(p)}-C_{\rm I}^{\rm(e)}]\), our numerical eigenmodes are consistent with the general results as described in Section 2. We accordingly adopt the mean values of the two candidate values for each of \(C_{\rm I}^{\rm(e)}\) and \(C_{\rm I}^{\rm(p)}\) and of the four candidate ones of \(C_{\rm II}\) (\(C_{\rm II}^{\rm(e)}=C_{\rm II}^{\rm(p)}\)) as their definite values, which are used in the graph comparing \(\tilde{\psi}_{\rm num}\) with \(C_{\rm I}\tilde{\psi}_{\rm I}^{\rm(c)}+C_{\rm II}\tilde{\psi}_{\rm II}^{\rm(c)}\) (the lower panel of Figure 12). The above procedure is also applied to all the continuous modes obtained from our numerical calculations. Figure 13 depicts their values of \({\rm sgn}(\mu_{\rm c})[C_{\rm I}^{\rm(p)}-C_{\rm I}^{\rm(e)}]\) (\(=C_{\rm II}\), red circles) and \(C_{\rm II}\) (blue circles) for \(m=1\) and \(|\alpha|=0.1\) in the left ordinates as functions of \(\lambda\). Their results for the sinuous and varicose modes are shown in the left and right panels, respectively. Meanwhile, the right vertical axes of these panels represent the numerical counterpart of (12d), which is written in the present instance as \[I_{\rm num}\,\equiv\,-\,{\rm sgn}(\mu_{\rm c})\frac{C_{\rm I}^{\rm(p)}-C_{\rm I}^{\rm(e)}}{C_{\rm II}}(1-\mu_{\rm c}^{2})m^{2}\alpha^{2}(\mathcal{B}_{\rm c}^{2})^{\prime}\left[\tilde{\psi}_{\rm I}^{\rm(c)}(\mu_{\rm c})\right]^{2}\,. \tag{26}\] This outcome demonstrates that the values of \(I_{\rm num}\) appear to be compatible with our expectation stated in Section 2; the values are arbitrary numbers and adjusted to satisfy boundary conditions.

Figure 11: Dependence of the function \(\mathcal{L}^{2}\) given by (22a) on the nondimensional angular frequency \(\lambda\) and the colatitude when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=0.01\), and the basic fields are (a) \(\mathcal{B}=\mu\) and (b) \(\mathcal{B}=\mu\sqrt{1-\mu^{2}}\). The solid and dashed curves illustrate the contour lines which correspond to \(\mathcal{L}^{2}=0\) and \(-1\), respectively.

Figure 12: Comparison between the numerical eigenfunction \(\tilde{\psi}_{\rm num}\) of the sinuous mode with the dimensionless angular frequency \(\lambda\approx 0.05006\) (the critical latitude \(\theta_{\rm c}=59.96^{\circ}\) in the north hemisphere) and linear combinations \(C_{\rm I}\tilde{\psi}_{\rm I}^{\rm(c)}+C_{\rm II}\tilde{\psi}_{\rm II}^{\rm(c)}\) of the Frobenius series solutions (14) with (15) when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=0.1\), and the simplest equatorially-antisymmetric non-Malkus field \({\cal B}=\mu\) permeates the system. The undetermined coefficients \(C_{\rm I}\) and \(C_{\rm II}\) are estimated from least squares fittings of \(\tilde{\psi}_{\rm num}/\tilde{\psi}_{\rm I}^{\rm(c)}=C_{\rm II}(\tilde{\psi}_{\rm II}^{\rm(c)}/\tilde{\psi}_{\rm I}^{\rm(c)})+C_{\rm I}\) (upper left panel) and \(\tilde{\psi}_{\rm num}/\tilde{\psi}_{\rm II}^{\rm(c)}=C_{\rm I}(\tilde{\psi}_{\rm I}^{\rm(c)}/\tilde{\psi}_{\rm II}^{\rm(c)})+C_{\rm II}\) (upper right panel) with \(N_{\rm data}=200\) points on the equatorial (red circles and solid lines) and the polar (blue circles and dashed lines) sides of the critical latitude for \(N_{\theta}=7201\). The lower panel shows \(\tilde{\psi}_{\rm num}\) (black curve) and \(C_{\rm I}\tilde{\psi}_{\rm I}^{\rm(c)}+C_{\rm II}\tilde{\psi}_{\rm II}^{\rm(c)}\) with \(C_{\rm I}\) and \(C_{\rm II}\) determined by the fittings (red and blue curves) as functions of the colatitude. The vertical grey shaded area of this panel contains the \(2N_{\rm data}=400\) points used in the fittings.
In addition, we observe that the value of \(\mathrm{sgn}(\mu_{\mathrm{c}})[C_{\mathrm{I}}^{(\mathrm{p})}-C_{\mathrm{I}}^{(\mathrm{e})}]\) vanishes at the extremum points of \(C_{\mathrm{I}}\), and that \(\mathrm{sgn}(\mu_{\mathrm{c}})[C_{\mathrm{I}}^{(\mathrm{p})}-C_{\mathrm{I}}^{(\mathrm{e})}]\) has extremums at the zeros of \(C_{\mathrm{I}}\). If discrete eigenmodes without logarithmic and step function singularities are buried in the continuum, the two values must simultaneously vanish.

Figure 13: Values of \(\mathrm{sgn}(\mu_{\mathrm{c}})[C_{\mathrm{I}}^{(\mathrm{p})}-C_{\mathrm{I}}^{(\mathrm{e})}]\) (red circles and the left vertical axes), \(C_{\mathrm{I}}\) (blue circles and the left vertical axes), and \(I_{\mathrm{num}}\) defined as (26) (black circles and the right vertical axes), relevant to the coefficients of the Frobenius series solutions, against the nondimensional angular frequency \(\lambda\). The case when the zonal wavenumber \(m=1\), the absolute value of the Lehnert number \(|\alpha|=0.1\), and the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\mu\) is imposed is shown in the left and the right panels for sinuous and varicose modes, respectively. \(N_{\theta}\) and \(N_{\mathrm{data}}\) are set to \(7201\) and \(200\), respectively, in the same way as Figure 12.

## 4 Interpretation in terms of the ray theory and discussion

To further comprehend the continuous modes and their eigenfunctions, we here reduce the system investigated so far to a more restricted situation when \(|\alpha|\ll 1\). We apply the ray theory, in which an inhomogeneous background field varies with a much larger spatial scale than typical wavelengths. We introduce local coordinates \((\varTheta,\varPhi)\) which suitably measure the spatial scale of the typical wavelength of a wave train. Introducing small parameters helps incorporate such a setting into the governing equations. In Section 3.1, we found that typical meridional wavelengths decrease as the value of \(|\alpha|\) decreases. Thus, it would be reasonable to select \(|\alpha|\) as the parameter, if the value of \(|\alpha|\) is sufficiently smaller than unity. The local coordinates are then stretched in the forms \[\varTheta\,\equiv\,|\alpha|^{-1/2}\theta\,,\qquad\varPhi\,\equiv\,|\alpha|^{-1/2}\phi\,. \tag{27a}\] The temporal scale of the wave period (\(\lambda^{-1}=\mathrm{O}(|\alpha|^{-1/2})\)) is similarly far from that of the migration of the wave train. The new shrunk time \(T\) useful in measuring the latter is given by \[T\,\equiv\,|\alpha|\tau\,. \tag{27b}\] The values of the exponents of \(|\alpha|\) of the variables above are determined in Appendix D. We then introduce a locally defined wavenumber and angular frequency which depend on the global coordinates \((\theta,\phi)\) and \(T\), and subsequently derive a local dispersion relation and ray-tracing equations, which predict the movement of a wave packet. Their derivations are based on explanations in standard textbooks on wave dynamics (e.g. Lighthill, 1978), and we explain their details in Appendix D. Here we summarise the results. The expression of perturbations of the stream function postulated in Section 2 is here rewritten as \(\psi_{1}\equiv\mathrm{Re}[M(\phi,\theta,T)\mathrm{e}^{\mathrm{i}\varphi_{\mathrm{L}}(\varPhi,\varTheta,\tau)}]\), where \(M\) is the wave amplitude and \(\varphi_{\mathrm{L}}\) is the phase of the wave packet.
With this ansatz, the local wavenumber and the local nondimensional angular frequency are expressed as \[k(\phi,\theta,T)\,\equiv\,\frac{1}{\sin\theta}\frac{\partial\varphi_{\rm L}}{\partial\varPhi}\,,\qquad l(\phi,\theta,T)\,\equiv\,\frac{\partial\varphi_{\rm L}}{\partial(-\varTheta)}\,,\qquad\lambda(\phi,\theta,T)\,\equiv\,-\frac{\partial\varphi_{\rm L}}{\partial\tau}\,. \tag{28}\] Note that \(\varphi_{\rm L}\) depends on the local coordinates \((\varPhi,\varTheta,\tau)\), while \(M\), \(k\), \(l\) and \(\lambda\) depend on the global ones \((\phi,\theta,T)\). The local dispersion relation for our present problem is obtained from the leading order terms in the governing equations (7) in the form \[{\cal D}(\phi,\theta,T,k,l,H)\,\equiv\,H^{2}(k^{2}+l^{2})\,+\,Hk\sin\theta\,-\,k^{2}{\cal B}^{2}\sin^{2}\theta(k^{2}+l^{2})\,=\,0\,, \tag{29}\] in which \(H\equiv|\alpha|^{-1/2}\lambda\) is the scaled nondimensional angular frequency. Replacing \(\sin\theta\) and \({\cal B}\) in (29) with constants, one would have an equivalent to the nondimensional dispersion relation on a middle latitude \(\beta\) plane (Zaqarashvili _et al._, 2007). Note that we denote \(H={\cal H}(\phi,\theta,k,l,T)\) as the solution of (29) for \(H\). From (29), the components of the nondimensional local group velocity \(\mathbf{c}_{\rm g}\) are given by \[\frac{c_{\rm g,\phi}}{|\alpha|}\,\equiv\,\frac{\partial{\cal H}}{\partial k}\,=\,-\frac{(\partial{\cal D}/\partial k)|_{H={\cal H}}}{(\partial{\cal D}/\partial H)|_{H={\cal H}}}\,=\,\frac{2k{\cal B}^{2}\sin^{2}\theta(2k^{2}+l^{2})-{\cal H}(2k{\cal H}+\sin\theta)}{2{\cal H}(k^{2}+l^{2})+k\sin\theta}\,, \tag{30a}\] \[\frac{c_{\rm g,-\theta}}{|\alpha|}\,\equiv\,\frac{\partial{\cal H}}{\partial l}\,=\,-\frac{(\partial{\cal D}/\partial l)|_{H={\cal H}}}{(\partial{\cal D}/\partial H)|_{H={\cal H}}}\,=\,-\frac{2l({\cal H}^{2}-k^{2}{\cal B}^{2}\sin^{2}\theta)}{2{\cal H}(k^{2}+l^{2})+k\sin\theta}\,.
\tag{30b}\] We eventually find the ray-tracing equations \[\sin\theta\frac{{\rm d}_{\rm g}\phi}{{\rm d}T}\,=\,\frac{c_{\rm g,\phi}}{| \alpha|}\,,\qquad\frac{{\rm d}_{\rm g}(-\theta)}{{\rm d}T}\,=\,\frac{c_{\rm g,-\theta}}{|\alpha|}\,,\] (31a) and \[\frac{{\rm d}_{\rm g}(k\sin\theta)}{{\rm d}T}\,=\,-\left(\frac{ \partial{\cal H}}{\partial\phi}\right)_{k,l}\,=\,0\,, \tag{31b}\] \[\frac{{\rm d}_{\rm g}l}{{\rm d}T}\,+\,\frac{c_{\rm g,\phi}}{| \alpha|}k\cot\theta\,=\,-\left[\frac{\partial{\cal H}}{\partial(-\theta)} \right]_{k,l}\,=\,\frac{k^{2}[{\rm d}({\cal B}^{2}\sin^{2}\theta)/{\rm d} \theta](k^{2}+l^{2})-{\cal H}k\cos\theta}{2{\cal H}(k^{2}+l^{2})+k\sin\theta}\,,\] (31c) \[\frac{{\rm d}_{\rm g}H}{{\rm d}T}\,=\,\left(\frac{\partial{\cal H }}{\partial T}\right)_{k,l}\,=\,0\,, \tag{31d}\] where the material time derivative moving with the local group velocity is \[\frac{{\rm d}_{\rm g}}{{\rm d}T}\,\equiv\,\frac{\partial}{\partial T}\,+\, \frac{c_{\rm g,\phi}}{|\alpha|\sin\theta}\frac{\partial}{\partial\phi}\,+\, \frac{c_{\rm g,-\theta}}{|\alpha|}\frac{\partial}{\partial(-\theta)}\,=\, \frac{\partial}{\partial T}\,+\,\frac{\mathbf{c}_{\rm g}}{|\alpha|}\,\mathbf{\cdot}\, \mathbf{\nabla}_{\rm G}\,, \tag{31e}\] with \(\mathbf{\nabla}_{\rm G}\equiv(\hat{\mathbf{e}}_{\phi}/\sin\theta)(\partial/\partial \phi)+\hat{\mathbf{e}}_{-\theta}[\partial/\partial(-\theta)]\). According to these equations, a wave train migrates with its group velocity depending on its latitudinal position and its dominant local wavenumber, which also varies with the colatitude \(\theta\). Furthermore, (31) show that \(k\sin\theta\) and \(H\) (or \(\lambda\)) are invariant along a ray trajectory, but \(l\) is not. We conduct the numerical time integration of the ray-tracing equations (31) with (30) for the movement of a wave packet originating at a given initial position \((\theta,\phi)\) with a given initial local wavenumber \((k,l)\) and a local dimensionless angular frequency \(\lambda\) determined by the local dispersion relation (29) (e.g. Teruya _et al._, 2022). In our code, the initial local longitudinal wavenumber \(k_{\rm init}\) is calculated from the relation (29) with the scipy.optimize.fsolve function of the SciPy library after one specifies the initial values of \(l\), \(H\), \(\phi\) and \(\theta\). The succeeding time integration of (31) is based on an explicit Runge-Kutta method of order 8 (the DOP853 algorithm in the scipy.integrate.solve_ivp function of the SciPy library). This integration is conducted without explicitly using (29) on the way, and the numerical errors in our calculations are monitored by the value of the function \({\cal D}\) of (29). Their results are also compared to those of Section 3.1 from the aspect of the evanescent property. Before demonstrating their trajectories obtained numerically, we examine some properties of the local dispersion relation. In the following preliminary considerations, we assume that the physical variables satisfy the relation \(H={\cal H}(\phi,\theta,k,l,T)\) at any time. Figures 14 and 15 show contour plots of the scaled dimensionless angular frequency \(H\) as a function of \(k\) and \(l\) for three different latitudes, when \(\mathcal{B}=\cos\theta\) and \(\mathcal{B}=1\), respectively. For ease of understanding Figure 14, we first explain Figure 15, which corresponds to Figure 1 for global modes. 
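A compact version of this integration scheme, for \(\mathcal{B}=\cos\theta\), can be sketched as follows. The right-hand sides implement (29)-(31); the initial values are arbitrary illustrative choices of ours, not the paper's settings:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

def disp(k, l, H, theta):
    """Local dispersion relation (29) with B = cos(theta); zero on a ray."""
    B2s2 = (np.cos(theta) * np.sin(theta))**2
    return H**2 * (k**2 + l**2) + H * k * np.sin(theta) \
           - k**2 * B2s2 * (k**2 + l**2)

def rhs(T, y, ksin, H):
    """Ray-tracing equations (31); k*sin(theta) and H are invariants."""
    phi, theta, l = y
    k = ksin / np.sin(theta)
    B2s2 = (np.cos(theta) * np.sin(theta))**2
    dB2s2 = np.sin(2.0 * theta) * np.cos(2.0 * theta)  # d(B^2 sin^2 th)/d th
    denom = 2.0 * H * (k**2 + l**2) + k * np.sin(theta)
    cg_phi = (2.0 * k * B2s2 * (2.0 * k**2 + l**2)
              - H * (2.0 * k * H + np.sin(theta))) / denom     # (30a)
    cg_mth = -2.0 * l * (H**2 - k**2 * B2s2) / denom           # (30b)
    dl = -cg_phi * k / np.tan(theta) \
         + (k**2 * dB2s2 * (k**2 + l**2) - H * k * np.cos(theta)) / denom  # (31c)
    return [cg_phi / np.sin(theta), -cg_mth, dl]   # d(phi, theta, l)/dT

# illustrative initial condition: choose (H, l, theta), solve (29) for k
H0, l0, th0 = 0.5, 1.0, np.deg2rad(45.0)
k0 = fsolve(lambda k: disp(k, l0, H0, th0), x0=1.0)[0]
sol = solve_ivp(rhs, (0.0, 100.0), [0.0, th0, l0],
                args=(k0 * np.sin(th0), H0), method="DOP853",
                rtol=1e-10, atol=1e-12, max_step=0.5)
phi, th, l = sol.y
# monitor the numerical error: disp should remain ~0 along the trajectory
err = np.abs(disp(k0 * np.sin(th0) / np.sin(th), l, H0, th))
```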
The diagrams in Figures 14 and 15 are calculated from the equation transformed from (29) in the form \[H\,=\,\frac{-k\sin\theta\pm|k|\sin\theta\sqrt{1+4\mathcal{B}^{2}(k^{2}+l^{2})^{2}}}{2(k^{2}+l^{2})}\,. \tag{32a}\] In Figures 14 and 15, we take the plus of the plus-minus sign so that \(H\) should be positive. This means that the sign of \(k\) signifies the longitudinal direction of the phase velocity. The nearly vertical contour lines for large absolute values of \(l\) in Figure 15 correspond to the relations describing the propagation properties of wave packets that belong to prograde (\(H/k>0\)) and retrograde (\(H/k<0\)) Alfven waves. Additionally, it can be concluded that, in Figure 15, the circular contour lines which are tangent to the line \(k=0\) represent the dispersion relation for fast MR waves, and that the slightly curved part of the nearly vertical contour lines near the line \(l=0\) on the half plane \(H/k>0\) explains how slow MR waves propagate. The similarities between Figure 15 and the left and middle panels of Figure 14 suggest that the same is true for the case where \(\mathcal{B}=\cos\theta\). However, for \(\mathcal{B}=\cos\theta\), no branches of Alfven and slow MR waves exist at the equator (\(\theta=90^{\circ}\)) as shown in the right panel of Figure 14, since the main field vanishes there. Note that the direction of the gradient \((\partial\mathcal{H}/\partial k,\partial\mathcal{H}/\partial l)\) at a point \((k,l)\) in these plots is identical with that of the group velocity of a wave packet whose dominant local wavenumber is \((k,l)\) at the colatitude \(\theta\). For a wave packet belonging to either Alfven or slow MR waves, the sign of the azimuthal component \(c_{\mathrm{g},\phi}\) of its group velocity is the same as that of the azimuthal component \(|\alpha|(H/k)\) of its nondimensional local phase velocity, whilst those of the meridional components (\(c_{\mathrm{g},-\theta}\) and \(|\alpha|(H/l)\)) are opposite for the retrograde Alfven packet. Figure 14 illustrates the remarkable feature that the north-south component \(c_{\mathrm{g},-\theta}\) of the group velocity vanishes at \(l=0\) and \(l=\pm\infty\), as can also be seen from (30b). Then, the wave train can be refracted at or absorbed into the latitude, heading only in the \(\phi\) direction there (e.g. Acheson, 1972; McKenzie, 1973; Eltayeb, 1977; Eltayeb and McKenzie, 1977; Grimshaw, 1979). In particular, from (29) the latter situation \(l^{2}\to\infty\) with a reasonable condition \(Hk\sin\theta\neq 0\) leads to the limit \(H^{2}-k^{2}\mathcal{B}^{2}\sin^{2}\theta\to 0\), which signifies that the latitude is a critical one. It follows that the nearly vertical lines for Alfven waves in Figure 14 should be linked to the Alfven continuous modes observed in the results in Section 3. Note that, though the nearly vertical lines in Figure 15 are similar to those in Figure 14, the Malkus field \(\mathcal{B}=1\) does not yield any continuous modes since \(H^{2}-k^{2}\mathcal{B}^{2}\sin^{2}\theta\) is constant. The local dispersion relation (29) also allows one to understand the evanescent property of waves from the sign of the squared local meridional wavenumber \(l^{2}\). This value can be calculated from \[l^{2}\,=\,-k^{2}\,-\,\frac{Hk\sin\theta}{H^{2}-k^{2}\mathcal{B}^{2}\sin^{2}\theta}\,. \tag{32b}\] Figure 16 contains contour plots of \(l^{2}\) as a function of \(k\) and \(H\) for three different latitudes in the case where \(\mathcal{B}=\cos\theta\).
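For reference, producing such an \(l^{2}\) map is a one-line evaluation of (32b); a short sketch for \(\mathcal{B}=\cos\theta\) (the grid ranges are our own choices):

```python
import numpy as np

def l_squared(k, H, theta):
    """Squared local meridional wavenumber, Eq. (32b), for B = cos(theta)."""
    B2s2 = (np.cos(theta) * np.sin(theta))**2
    return -k**2 - H * k * np.sin(theta) / (H**2 - k**2 * B2s2)

k = np.linspace(-5.0, 5.0, 801)
H = np.linspace(-2.0, 2.0, 801)
K, HH = np.meshgrid(k, H)
L2 = l_squared(K, HH, np.deg2rad(60.0))   # cf. the middle panel of Figure 16
propagating = L2 > 0.0                    # waves are oscillatory only here
```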
Figure 14: Local dispersion relation given by (32a) when the background field is the simplest equatorially-antisymmetric non-Malkus one (\(\mathcal{B}=\cos\theta\)). In these panels, the scaled nondimensional angular frequency \(H=|\alpha|^{-1/2}\lambda\) is shown as a function of the local wavenumber \((k,l)\) for the colatitudes \(\theta=30^{\circ}\) (left panel), \(60^{\circ}\) (middle), and \(90^{\circ}\) (right).

Figure 15: Same as Figure 14, but for \(\mathcal{B}=1\).

Figure 16: Local dispersion relation written as (32b) when the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\cos\theta\) is imposed. In these panels, the square \(l^{2}\) of the local meridional wavenumber is displayed as a function of the local longitudinal wavenumber \(k\) and the scaled dimensionless angular frequency \(H=|\alpha|^{-1/2}\lambda\) for the colatitudes \(\theta=30^{\circ}\) (left panel), \(60^{\circ}\) (middle), and \(90^{\circ}\) (right).

Waves can propagate only when their wavenumbers fall within the parameter domains where \(l^{2}>0\) in these panels, and these regions are classified into three groups. The two thin regions near the lines \(H=\pm k|\mathcal{B}|\sin\theta\) in the left and middle panels correspond to the relations for prograde Alfven and slow MR waves (\(H/k>0\)) and retrograde Alfven waves (\(H/k<0\)). Again, as shown in the right panel, no areas for Alfven and slow MR waves are found at the equator (\(\theta=90^{\circ}\)), since the background field vanishes there. The propagation properties of fast MR waves are represented as the domain near the line \(k=0\) on the half plane \(H/k<0\). Note that again the curved lines \(l=0\) and the lines \(H=\pm k|\mathcal{B}|\sin\theta\) show that wave packets can be refracted at or absorbed into the latitude since the latter lines are equivalent to the situation \(l^{2}\to\infty\). Figure 16 is also helpful in the short-term prediction of the migration of a wave packet. We will now consider the two cases: (i) when the wave packet heads toward its corresponding critical latitude, and (ii) when the packet proceeds in the direction away from the critical latitude. Since \(k\sin\theta\) and \(H\) remain constant during the movement of a wave packet, its migration in the north-south direction can be converted into the movement of the point \((k,H)\) in the horizontal direction of the panels in Figure 16 (unless the outlines of their contour plots change significantly depending on the latitude).

1. When a wave train moves equatorward with \(H/k\) and \(l^{2}\) positive (then \(H/l<0\) in the north hemisphere from (30b) or Figure 14), \(k\) decreases and the point \((k,H)\) approaches the line \(H=k|\mathcal{B}|\sin\theta\) from the right on the plots of Figure 16. This means that the train belonging to either prograde Alfven or slow MR waves approaches its corresponding critical latitude from the polar side and is refracted or absorbed there. If the train went beyond the latitude, \(l^{2}\) would become negative, hence evanescent waves. Note that, although the plots for the Malkus field \(\mathcal{B}=1\), which are illustrated in Figure 17, are similar to Figure 16 except for the right panel (\(\theta=90^{\circ}\)), the point \((k,H)\) never arrives near the line \(H=\pm k|\mathcal{B}|\sin\theta\) due to the constancy of \(k\sin\theta\) and \(H\) (unless the initial condition has already approximately satisfied this equality).
When \(H/k\) is negative (under \(\mathcal{B}=\cos\theta\)), a wave train that travels poleward (\(H/l<0\) in the northern hemisphere) and that does not belong to fast MR waves approaches its corresponding critical latitude, because \(k\) increases while it migrates and the point \((k,H)\) approaches the line \(H=-k|\mathcal{B}|\sin\theta\) from the left. It follows that the train belonging to retrograde Alfven waves approaches the critical latitude from the equatorial side.

2. A wave packet moving in the opposite direction from its corresponding critical latitude is realised by a local meridional phase velocity \(|\alpha|(H/l)\) which is opposite in sign to the case (i). In other words, we here focus on a packet that travels poleward (\(H/l>0\) in the northern hemisphere) with \(H/k\) positive and one that moves equatorward (\(H/l>0\) in the north hemisphere) with \(H/k\) negative. Then, the point \((k,H)\) approaches the curved line \(l=0\), resulting in its refraction or absorption. It can be concluded that the packet that belongs to either prograde Alfven or slow MR waves approaches the latitude where \(l=0\) from the equatorial side and that the packet belonging to retrograde Alfven waves approaches there from the polar side.

Figure 17: Same as Figure 16, but for \(\mathcal{B}=1\).

Whether a wave packet is to be refracted at or absorbed into a latitude where \(l^{2}\to\infty\), or \(H^{2}=k^{2}\mathcal{B}^{2}\sin^{2}\theta\), is considered now. For a general profile of \(\mathcal{B}\), near such a colatitude \(\theta=\theta_{\rm c}\), (32b) becomes \[l^{2}\,\simeq\,\frac{H/(k\sin\theta)}{(\mathrm{d}\mathcal{B}^{2}/\mathrm{d}\theta)|_{\theta=\theta_{\rm c}}(\theta-\theta_{\rm c})}\,, \tag{33}\] if \((\mathrm{d}\mathcal{B}^{2}/\mathrm{d}\theta)|_{\theta=\theta_{\rm c}}\neq 0\). Specifically, when \(\mathcal{B}=\cos\theta\), the oscillatory condition \(l^{2}>0\) requires that \((\theta-\theta_{\rm c})\cos\theta_{\rm c}\) has the sign opposite to \(H/k\); the packet which belongs to either prograde Alfven or slow MR waves (\(H/k>0\)) approaches the critical colatitude \(\theta_{\rm c}\) from the polar side (\((\theta-\theta_{\rm c})\cos\theta_{\rm c}<0\)), while the retrograde Alfven one (\(H/k<0\)) does from the equatorial side (\((\theta-\theta_{\rm c})\cos\theta_{\rm c}>0\)). This is just a mathematical paraphrase of the consideration of the ways that the packets approach the critical latitudes in the previous paragraph. Since the term \(Hk\sin\theta\) in (29), which has the same sign as the numerator of (33), represents the planetary \(\beta\) effect, the aforementioned distinction between the prograde and retrograde waves is caused by the \(\beta\) effect. Additionally, on using (30) and (33), one can obtain an asymptotic expression of the group velocity \[\frac{c_{\mathrm{g},\phi}}{|\alpha|}\,\simeq\,\frac{H}{k}\;=\;\mathrm{O}(|\theta-\theta_{\mathrm{c}}|^{0})\,, \tag{34a}\] \[\frac{c_{\mathrm{g},-\theta}}{|\alpha|}\,\simeq\,\frac{(\mathrm{d}\mathcal{B}^{2}/\mathrm{d}\theta)|_{\theta=\theta_{\mathrm{c}}}(k\sin\theta)^{2}(\theta-\theta_{\mathrm{c}})}{Hl}\;=\;\mathrm{O}(|\theta-\theta_{\mathrm{c}}|^{3/2})\,.
\tag{34b}\] This expression gives the travel time of the packet from a given latitude \(\theta\) in the vicinity of the critical one to the latter in the form \[\int_{\theta}^{\theta_{\mathrm{c}}}\frac{|\alpha|}{-c_{\mathrm{g},-\theta}(\theta_{*})}\mathrm{d}\theta_{*}\;\simeq\;-\frac{H}{(\mathrm{d}\mathcal{B}^{2}/\mathrm{d}\theta)|_{\theta=\theta_{\mathrm{c}}}(k\sin\theta)^{2}}\int_{\theta}^{\theta_{\mathrm{c}}}\frac{l(\theta_{*})}{\theta_{*}-\theta_{\mathrm{c}}}\mathrm{d}\theta_{*}\;=\;\mathrm{O}(|\theta-\theta_{\mathrm{c}}|^{-1/2})\,. \tag{35}\] Any packets therefore never reach their corresponding critical latitudes in a finite time and are absorbed there. In contrast, wave packets are refracted at the latitudes where \(l\) vanishes, as will be explained in a similar fashion in Appendix E. The latitudes are often referred to as "turning latitudes." Finally, we demonstrate the ray trajectories obtained by the numerical time integration of the ray-tracing equations, though rough ones can be sketched even from the above examination. Figures 18 and 19 show two of the trajectories for wave trains belonging to prograde and retrograde Alfven waves, respectively. Each of the trains is injected at the black asterisk in each of their upper panels, its position evolving in accordance with (31a). The colours in the trajectories represent their local wavenumbers, or the directions of their local phase velocities, by hue (see their lower left panels for the colour scale). In their lower right panels, the longitudes \(\phi\) of their positions at time \(T\) are recorded in the same colouring scheme as their upper panels. The numerical errors for the results shown in Figures 18 and 19 are \(\mathcal{D}\lesssim 10^{-3}\) throughout their numerical integrations. The trajectories agree with the above predictions in terms of the refraction at turning latitudes, the absorption into the critical ones, and the incident directions to those. From the asymptotic expression (34a) of the group velocity near a critical latitude, one obtains the period for a wave packet to circle along the latitude around a sphere as \(\mathrm{sgn}(\Omega_{0})T=|2\pi k\sin\theta/H|\), which approximates the periods read from the lower right panels of Figures 18 and 19. The facts that wave packets cannot cross their corresponding critical latitudes and that whether the packets approach there from the polar or equatorial sides depends on the sign of \(H/k\) are consistent with the features observed from the numerical results for the continuous modes (Figures 8 and 9) in the eigenvalue problem for global modes when \(|\alpha|\) is small. Although the spatial scale of waves focused on in this section is smaller than that of global modes which were thematised in Section 3, this implies that, in the case when \(|\alpha|\ll 1\), ray trajectories for Alfven waves enable one to roughly predict the behaviours of the continuous modes without actually solving the eigenvalue problem. For instance, four of the trajectories for another equatorially-antisymmetric field \(\mathcal{B}=\sin\theta\cos\theta\) are displayed in Figures 20 and 21. These figures indicate that, if \(\mathcal{B}=\sin\theta\cos\theta\), the prograde continuous modes should be evanescent in the polar and equatorial regions, while the retrograde ones should become evanescent in the mid-latitudes.
The value of the function \(\mathcal{L}^{2}\) of (22a) also provides information about the evanescent property for global modes, as with the sign of \(l^{2}\). This function explicitly includes the effects of the gradient of the background field, parts of which have been indirectly ignored in the derivation of the ray-tracing equations (see Appendix D). However, its approximate formula (24) for a small \(|\alpha|\) agrees with (33). Figure 11(b) depicts the values of \(\mathcal{L}^{2}\) when \(\mathcal{B}=\mu\sqrt{1-\mu^{2}}\), and our prediction using the ray theory is consistent with this plot. Based on these facts, we can consider that wave packets belonging to Alfven waves pertain to the continuous modes in Section 3 as expected. Moreover, we here learn why discrete branches of slow MR waves disappear under non-Malkus fields, as shown in the dispersion diagrams in Section 3, from the ray-tracing approach. Figure 22 depicts one of the trajectories for a wave packet that (at least initially) belongs to the slow MR wave, because the condition \[H^{2}(k^{2}+l^{2})\;\ll\;Hk\sin\theta \tag{36}\] has been satisfied by its initial condition. This inequality is obtained from the comparison between the first two terms in the local dispersion relation (29) by analogy with slow MR waves in the case of the Malkus field. The illustrated trajectory is similar to those of prograde Alfven waves (see Figure 18). Since the propagation properties for slow MR waves and those of prograde Alfven waves are continuous, as seen in Figures 14 and 16, wave packets for slow MR waves transform into prograde Alfven waves as they migrate in an inhomogeneous background field, changing their dominant local meridional wavenumber. The time evolution of \(l\) can reverse the inequality sign of the condition (36) as \(l^{2}\to\infty\), when the Alfven balance \(H^{2}\approx k^{2}\mathcal{B}^{2}\sin^{2}\theta\) is reached. From (32b), the latitudinal variation of \(l^{2}\) during the migration of a wave packet is written as \[\left(\frac{\partial l^{2}}{\partial\theta}\right)_{k\sin\theta,H}=\,2k^{2}\cot\theta\,-\,\frac{Hk^{3}\sin^{3}\theta}{(H^{2}-k^{2}\mathcal{B}^{2}\sin^{2}\theta)^{2}}\frac{\mathrm{d}\mathcal{B}^{2}}{\mathrm{d}\theta}\,, \tag{37}\] and one finds that \(\left(\partial l^{2}/\partial\theta\right)_{k\sin\theta,H}\) is always positive (negative) in the northern (southern) hemisphere when \(\mathcal{B}=\cos\theta\) and \(H/k>0\).

Figure 19: Same as Figure 18, but for retrograde Alfvén waves (the scaled zonal wavenumber \(k\sin\theta\approx-1.77036\), and the critical colatitude \(\theta_{\rm c}\approx 55.61^{\circ}\) in the north hemisphere). The initial colatitude \(\theta_{\rm init}=60^{\circ}\).

Figure 20: Same as Figure 18, but for \(\mathcal{B}=\sin\theta\cos\theta\). Ray trajectories for wave packets that belong to prograde Alfvén waves (the scaled zonal wavenumber \(k\sin\theta\approx 2.15606\), and the critical colatitudes \(\theta_{\rm c}\approx 34.03^{\circ},55.97^{\circ}\) in the north hemisphere) are shown. The initial colatitude \(\theta_{\rm init}=45^{\circ}\).

Figure 21: Same as Figure 20, but for retrograde Alfvén waves. The initial local meridional wavenumber \(l_{\rm init}=2\).
Therefore, mode conversion from slow MR into prograde Alfven waves should have occurred between the initial colatitude \(\theta_{\rm init}\) and the critical colatitude \(\theta_{\rm c}\) in the example of Figure 22, since \(0\leq l^{2}\leq l_{\rm init}^{2}\) within the interval between \(\theta_{\rm init}\) and the turning latitude. Although mode conversions can cause the valve effect (e.g. Acheson, 1972; McKenzie, 1973; Eltayeb, 1977; Grimshaw, 1979), the effect does not occur in our system because there exists only one kind of critical latitude: the Alfven resonance \(H^{2}=k^{2}\mathcal{B}^{2}\sin^{2}\theta\). On the other hand, wave trains belonging to fast MR waves, which have discrete branches when \(\lambda<-m|\alpha|\) for global modes, move back and forth between two turning latitudes without transforming into Alfven waves even though their dominant local wavenumbers evolve (see Appendix B). The invariants including the products of perturbations are useful for understanding waves and their associated phenomena. As will be derived in Appendix D, our approximation leads to a conservation law in the form \[\frac{\partial}{\partial T}\left(\frac{\partial\mathcal{D}}{\partial H}|M|^{2}\right)\,+\,\mathbf{\nabla}_{\rm G}\cdot\left(\frac{\mathbf{c}_{\rm g}}{|\alpha|}\frac{\partial\mathcal{D}}{\partial H}|M|^{2}\right)\,=\,0\,. \tag{38}\] It follows from the constancy of \(k\sin\theta\) and \(H\) that the equation in which \((\partial\mathcal{D}/\partial H)|M|^{2}\) is replaced by \((k\sin\theta/H^{2})(\partial\mathcal{D}/\partial H)|M|^{2}\) is also correct. Our subsequent paper will prove that the latter quantity is equivalent to the pseudomomentum density (up to a constant factor) for the 2D ideal incompressible MHD system. If weak dissipations are introduced, a packet that is related to the Alfven continuous modes would attenuate near its corresponding critical latitude, or within its corresponding thin inner boundary layer (see Section 1), due to its long travel time (35). The fact that the packet carries the pseudomomentum in line with (38) implies that the mean flow would be accelerated there, because the damping of waves can cause angular momentum exchange between waves and a mean flow in accordance with the wave-mean flow interaction theory (e.g. Buhler, 2009). This may possibly induce nonlinear oscillations such as the quasi-biennial oscillation (QBO) in the Earth's equatorial stratosphere (e.g. Baldwin _et al._, 2001).

Figure 22: Same as Figure 18, but for slow MR waves (the scaled zonal wavenumber \(k\sin\theta\approx 0.14633\), and the critical colatitude \(\theta_{\rm c}\approx 86.08^{\circ}\) in the north hemisphere). The scaled nondimensional angular frequency \(H=|\alpha|^{-1/2}\lambda=0.01\), the initial local meridional wavenumber \(l_{\rm init}=0.5\), and the initial colatitude \(\theta_{\rm init}=60^{\circ}\).

At the end of this section, we shall confirm the validity of the ray theory when \(l^{2}\to\infty\). The approximation requires that the spatial scales at which the dominant local wavenumber \((k,l)\) and the amplitude \(M\) of a wave packet vary are sufficiently larger than its wavelength. This condition can be written as \[\min\left(\left|\frac{1}{l}\frac{\partial l}{\partial\theta}\right|^{-1},\left|\frac{1}{M}\frac{\partial M}{\partial\theta}\right|^{-1}\right)\,\gg\,\frac{2\pi}{|\alpha|^{-1/2}|l|}\,.
\tag{39}\] To get an asymptotic expression of \(|M|\) near its corresponding critical latitude \(\theta_{\rm c}\), we take advantage of the conservation law (38) in the form \[\iint_{S_{\rm g}}\left(\frac{\partial\mathcal{D}}{\partial H}|M|^{2}\right)\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi\,=\,\text{const.}\,, \tag{40}\] where \(S_{\rm g}\) is a region moving and deforming with the local group velocity. Let \(\theta_{1}(T)\) and \(\theta_{2}(T)\) be the latitudinal positions, at an arbitrary time \(T\), of the rear and front of an isolated wave packet, respectively. Since the local group velocity \(\mathbf{c}_{\rm g}\) of the packet depends only on the latitudinal position \(\theta\), we have \[T\,=\,\int_{\theta_{1}(0)}^{\theta_{1}(T)}\frac{|\alpha|}{-c_{\mathrm{g},-\theta}}\mathrm{d}\theta\,=\,\int_{\theta_{2}(0)}^{\theta_{2}(T)}\frac{|\alpha|}{-c_{\mathrm{g},-\theta}}\mathrm{d}\theta\,. \tag{41a}\] Thus, the latitudinal length \(|\theta_{2}-\theta_{1}|\) of the packet satisfies \[\int_{\theta_{1}(T)}^{\theta_{2}(T)}\frac{|\alpha|}{-c_{\mathrm{g},-\theta}}\mathrm{d}\theta\,=\,\text{const.}\,. \tag{41b}\] Using these relations, we can estimate that \((\partial\mathcal{D}/\partial H)|M|^{2}(\Delta\phi\sin\theta)\propto c_{\mathrm{g},-\theta}^{-1}\), with the longitudinal length \(\Delta\phi\) of the packet, and we obtain \(M=\mathrm{O}(|\theta-\theta_{\rm c}|^{-1/4})\). Note that the same asymptotic expression as this is also obtained from the steady problem in the last two paragraphs of Appendix D. Since \(l=\mathrm{O}(|\theta-\theta_{\rm c}|^{-1/2})\) (from (33)) and \(M=\mathrm{O}(|\theta-\theta_{\rm c}|^{-1/4})\), we cannot properly discuss the behaviour of a wave packet if it approaches within the distance \(|\theta-\theta_{\rm c}|=\mathrm{O}(|\alpha|)\) from its corresponding critical latitude. The case where \(l=0\) is addressed in Appendix E.

## 5 Conclusion

In the present paper, we numerically scrutinised 2D ideal incompressible MHD linear waves within a thin layer on a rotating sphere with latitudinally varying toroidal magnetic fields \(B_{0\phi}=B_{0}\mathcal{B}(\cos\theta)\sin\theta\). From the eigenvalue problem for the simplest equatorially-antisymmetric non-Malkus field \(\mathcal{B}=\cos\theta\), we did not find Alfven and slow MR discrete branches but a continuous spectrum. In particular, the slow MR waves turn into parts of the Alfven continuous modes owing to the imposition of the non-Malkus field. The eigenfunctions of the continuous modes and their concomitant critical latitudes were investigated in unprecedented detail and compared with the Frobenius series solutions. The observed difference in the evanescent property between the prograde and retrograde continuous modes results from the planetary \(\beta\) effect. The theory of slowly varying wave packets, or the ray theory, in an inhomogeneous magnetic field for a small absolute value of the Lehnert number \(\alpha\) implies that a wave packet that is related to the continuous spectrum moves toward its corresponding critical latitude and is ultimately absorbed there. The fact that whether the packet approaches the latitude from the polar or equatorial sides depends on the sign of the azimuthal component \(|\alpha|(H/k)\) of its nondimensional local phase velocity is consistent with the evanescent property of the global modes obtained from the eigenvalue problem, and the planetary \(\beta\) effect still causes this distinction of the evanescent property.
Additionally, this theory strongly corroborates the idea that slow MR waves transform into Alfven continuous modes under a non-Malkus field. The novel conservation law (38) derived from this approximation will provide insights into the interaction between waves and the mean flow and magnetic fields, and thereby into the weakly nonlinear evolution of the background fields. Accordingly, our results could act as a stepping stone to a deeper understanding of the dynamics of the outermost Earth's core and the solar tachocline. Other considerations for our problem are deferred to future work. Since slow MR waves occupy an important position as possible causes of geomagnetic fluctuations, it is a pivotal issue whether such waves can remain discrete modes under a non-Malkus field once additional effects, which we have omitted here, are recovered.
2302.11343
Advancing Stuttering Detection via Data Augmentation, Class-Balanced Loss and Multi-Contextual Deep Learning
Stuttering is a neuro-developmental speech impairment characterized by uncontrolled utterances (interjections) and core behaviors (blocks, repetitions, and prolongations), and is caused by the failure of speech sensorimotors. Due to its complex nature, stuttering detection (SD) is a difficult task. If detected at an early stage, it could facilitate speech therapists to observe and rectify the speech patterns of persons who stutter (PWS). The stuttered speech of PWS is usually available in limited amounts and is highly imbalanced. To this end, we address the class imbalance problem in the SD domain via a multi-branching (MB) scheme and by weighting the contribution of classes in the overall loss function, resulting in a huge improvement in stuttering classes on the SEP-28k dataset over the baseline (StutterNet). To tackle data scarcity, we investigate the effectiveness of data augmentation on top of a multi-branched training scheme. The augmented training outperforms the MB StutterNet (clean) by a relative margin of 4.18% in macro F1-score (F1). In addition, we propose a multi-contextual (MC) StutterNet, which exploits different contexts of the stuttered speech, resulting in an overall improvement of 4.48% in F1 over the single context based MB StutterNet. Finally, we have shown that applying data augmentation in the cross-corpora scenario can improve the overall SD performance by a relative margin of 13.23% in F1 over the clean training.
Shakeel A. Sheikh, Md Sahidullah, Fabrice Hirsch, Slim Ouni
2023-02-21T14:03:47Z
http://arxiv.org/abs/2302.11343v1
Advancing Stuttering Detection via Data Augmentation, Class-Balanced Loss and Multi-Contextual Deep Learning

###### Abstract

Stuttering is a neuro-developmental speech impairment characterized by uncontrolled utterances (interjections) and core behaviors (blocks, repetitions, and prolongations), and is caused by the failure of speech sensorimotors. Due to its complex nature, stuttering detection (SD) is a difficult task. If detected at an early stage, it could facilitate speech therapists to observe and rectify the speech patterns of persons who stutter (PWS). The stuttered speech of PWS is usually available in limited amounts and is highly imbalanced. To this end, we address the class imbalance problem in the SD domain via a multi-branching (MB) scheme and by weighting the contribution of classes in the overall loss function, resulting in a huge improvement in stuttering classes on the SEP-28k dataset over the baseline (_StutterNet_). To tackle data scarcity, we investigate the effectiveness of data augmentation on top of a multi-branched training scheme. The augmented training outperforms the MB _StutterNet_ (clean) by a relative margin of 4.18% in macro F1-score (\(\mathcal{F}_{1}\)). In addition, we propose a multi-contextual (MC) _StutterNet_, which exploits different contexts of the stuttered speech, resulting in an overall improvement of 4.48% in \(\mathcal{F}_{1}\) over the single context based MB _StutterNet_. Finally, we have shown that applying data augmentation in the cross-corpora scenario can improve the overall SD performance by a relative margin of 13.23% in \(\mathcal{F}_{1}\) over the clean training. Stuttering detection, speech disorder, data augmentation, class balanced learning.

## 1 Introduction

Speech impairments, often known as speech disorders, are difficulties in producing speech sounds. These speech difficulties usually take the form of dysarthria, cluttering (poorly intelligible speech), lisping, apraxia, and stuttering [1, 2, 3, 4, 5]. Only a small percentage (5-10%) of the world population can produce accurate speech units; the rest encounter some type of speech disorder in their life span [6]. Of these speech impairments, stuttering is the most predominant one [1]. Stuttering1 is a neuro-developmental speech disorder in which the flow of speech is disturbed by abnormally persistent and involuntary speech sounds, which usually take the shape of _core behaviors_, including blocks, prolongations, and repetitions [1]. Stuttering is complex, and the several factors that lead to it are delayed childhood development, stress, and speech motor abnormalities [1]. In [7], Smith and Weber put forward the multifactorial dynamic pathway theory, where they argued that stuttering occurs due to the failure of the nervous system. Persons who stutter (PWS) exhibit impairment in the sensorimotor processes which are responsible for the production of speech, and its direction is influenced by emotional and linguistic aspects.

Footnote 1: Stuttering is also called stammering. In this paper, we will use the terms disfluency, stuttering, and stammering interchangeably.

In conventional stuttering detection (SD) and therapy sessions, speech therapists or speech-language pathologists manually analyze the speech of PWS [8]. The speech therapists observe and monitor the speech patterns of PWS to rectify them [1]. This convention of SD is very laborious and time-consuming and is also inclined toward the idiosyncratic beliefs of speech therapists.
In addition, automatic speech recognition (ASR) systems work well for normal fluent speech; however, they are unsuccessful in recognizing stuttered speech [9], which makes it impractical for PWS to easily access virtual assistants like Apple Siri, Alexa, etc. As a result, interactive automatic SD systems that provide an impartial, objective, and consistent evaluation of stuttered speech are strongly encouraged. The SD can also be used to adapt and improve ASR virtual assistant tools for stuttered speech. Despite having numerous potential applications, very little research attention has been given to the domain of SD and measurement. The detection and identification of stuttering events can be quite a difficult and complex problem due to several variable factors including language, gender, age, accent, speech rate, etc. The main goal of this work is to build robust automatic stuttering detection systems capable of detecting multiple stuttering types based on the speech modality, which can later be deployed as a tool in real-world settings by providing a means to both PWS and speech therapists to keep track of the stuttered speech. These systems can later be further improved by providing a feedback mechanism to PWS to help them rectify their stuttering. A significant amount of work has been done in the detection of other speech disorders [10] like dysarthria [11] and Parkinson's disease [12], but stuttering has not been addressed widely even though it is the most common one. In this paper, we propose a deep learning framework for robust SD. The automatic detection of stuttering can help in the treatment of stuttering if detected at an early age [1]. Most of the computer-based SD methods are based either on ASR systems [13], [14] or language models [15], [16]. These methods are two-stage approaches that first convert the acoustic speech signals into their corresponding spoken textual modality, and then detect stuttering by the application of language models. Even though this ASR-based two-stage approach for identifying stuttering has shown promising results, the dependence on the ASR unit makes it computationally costly and prone to error. Moreover, the adaptation towards the ASR task results in the possible loss of stuttering-relevant information such as prosodic and emotional content. In recent decades, the applications of deep learning have grown tremendously in speech recognition [17], speaker recognition [18], speech synthesis [19], emotion detection [20], voice conversion [21], and voice disorder detection [10], including Parkinson's disease detection [22] and dysarthric speech detection [11], [23]. Inspired by the human auditory temporal mechanism, Kodrasi _et al._ [23] recently proposed a convolutional neural network based dysarthric speech detection method by factoring the speech signal into two discriminative representations including temporal envelope (stress, voicing, and phonetic information) and fine structure (breathiness, pitch, and vowel quality), and reported state-of-the-art results in dysarthria detection. However, the application of deep learning in SD is limited. The acoustic properties of speech disfluencies are different for different disfluencies, which can help to discriminate them from fluent voice. Due to the presence of these acoustic cues in the stutter-embedded speech, deep learning models can be used to exploit these acoustic cues in the detection and identification of stuttering events.
Most existing SD methods employ spectral features including _mel-frequency cepstral coefficients_ (MFCCs) and spectrograms or their variants that capture the stuttering-related information. The earlier studies in this domain applied shallow deep learning approaches to SD. In 1995, Howell _et al._ [24] employed two fully connected artificial neural networks for the identification of two types of disfluencies, namely, repetition and prolongation. They extracted autocorrelation features, envelope parameters, and spectral information, and used these features as an input to the artificial neural networks for SD. The network was trained on 12 speakers with 20 autocorrelation features and 19 vocoder coefficients. In 2009, Ravikumar _et al._ [25] proposed a multi-layer perceptron for the repetition type of stuttering. The network was trained using MFCC input features on 12 different disfluent speakers. In 2019, B. Villegas _et al._ [26] trained a multi-layer perceptron on 10-dimensional respiratory features for block SD. They used a dataset of 68 Latin American Spanish speakers in their case study. In a recent study, Kourkounakis _et al._ [27] proposed residual network and bi-directional long short-term memory (ResNet+BiLSTM) based binary deep learning classifiers for the detection of six different types of disfluencies including prolongation, word repetition, sound repetition, phrase repetition, and false starts. They used spectrograms as input features and reported promising results on a small subset (25 speakers) from the UCLASS dataset [27]. In another study, Lee _et al._ [28] curated a large stuttering dataset (SEP-28k) and utilized the convolutional long short-term memory (ConvLSTM) model for the detection and identification of six types of stuttering, namely, blocks, prolongations, sound repetitions, word repetitions, and interjections. The model takes 40-dimensional input MFCCs, eight-dimensional articulatory features, a 41-dimensional phoneme feature vector, and three-dimensional pitch features. Sheikh _et al._ [29] recently proposed a single multi-class time delay neural network (TDNN) based _StutterNet_ classifier which is capable of detecting core behaviors and fluent speech segments, and gives promising detection performance on a large subset of UCLASS (100+ speakers) compared to the state-of-the-art classifiers. The model solely takes 20-dimensional MFCC features as an input. In another recent study, M. Jouaiti _et al._ [30] introduced a phoneme-based bi-directional long short-term memory (BiLSTM) model for SD by mixing the SEP-28k and UCLASS datasets. The model is trained on 20-dimensional MFCC and 19-dimensional phoneme input features. The disfluencies considered in this work are prolongations, repetitions, and interjections. A detailed summary and comparison of various feature extraction methods and classifiers can be found in [31]. This work provides a complete deep learning framework for robust SD following our preliminary investigations [29], in which _StutterNet_ yields state-of-the-art SD results. In this study, we identify the limitations of _StutterNet_ and propose further advancements to address those drawbacks. Our main contributions are summarized below:

* _Solution for class imbalance problem_: The standard stuttering datasets suffer from class imbalance problems. We introduce two strategies based on a weighted loss and a multi-branch architecture to tackle the class imbalance problem.
* _Introducing data augmentation_: The stuttering datasets have a limited amount of training data, and this makes it difficult to apply advanced deep learning models with a large number of parameters. To address this limitation, this work introduces audio data augmentation. To the best of our knowledge, this is the first work to apply audio data augmentation to the SD problem.
* _Introducing multi-contextual architecture_: Stuttering detection is a special type of speech characterization problem where each class (i.e., stuttering type) has a varying duration. For example, a block lasts for a shorter duration than prolongations and repetitions. Therefore, a fixed length of context in the basic _StutterNet_ framework might not be the optimized one for detecting all types of stuttering. We introduce a multi-contextual architecture that uses different context lengths in a parallel fashion.

The remainder of the paper is organized as follows. Section II describes _StutterNet_ and analyzes its deficiencies. Section III discusses the proposed methodology for addressing the deficiencies. Section IV details the experimental design, the metrics used, and the datasets. Section V discusses the experimental results on class-balanced training, data augmentation, MC _StutterNet_, and the cross-corpora scenario. Finally, in Section VI, we conclude with possible future directions.

## 2 StutterNet: overview & limitations

### StutterNet

Most of the earlier work employed only a small set of disfluent speakers in their experimental studies, and approached the SD problem as a binary classification problem: disfluent vs. fluent identification or one type vs. other types [31]. The _StutterNet_ we proposed in our earlier work is a time delay neural network based architecture that has been used to tackle SD as a multi-class classification problem. The _StutterNet_ takes 20 MFCCs as input features with a frame length of 20 ms, mean-normalized, with a hop length of 10 ms. Usually, the initial layers in a standard deep neural network learn wider contexts when processing a temporal input signal. However, in the _StutterNet_ network, as shown in Table 1, the initial layers learn and capture only smaller contexts and the deeper ones compute the activations from a wider context; thus, the deeper layers are able to capture and learn longer temporal contexts. The network consists of five time delay layers, with the first three focusing on \([t-2,\ t+2]\), \(\{t-2,\ t,\ t+2\}\), \(\{t-3,\ t,\ t+3\}\) and the other two on \(\{t\}\) contextual frames, respectively. The TDNN layers have dilations of (1, 2, 3, 1, 1), respectively. This is followed by a two-layered BiLSTM unit, a mean and standard deviation pooling layer, three fully connected (FC) layers, and a softmax layer on top of the network that reveals the classification scores of stuttering disfluencies. A ReLU nonlinearity activation function and a 1D batch normalization are applied after each layer except the statistical pooling layer. We apply a dropout of 0.3 in the FC layers. Consider an input speech sample with \(T\) frames. The first five layers of _StutterNet_ focus on the small context of speech frames. For example, layer 2 takes as input the sliced output of layer 1 at time frames \(\{t-2,\ t,\ t+2\}\), which results in capturing a total temporal context of 9 with the help of the previous layer's context of \([t-2,\ t+2]\). Similarly, layer 3 sees a total context of 15 time frames.
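To make the context arithmetic concrete, here is a minimal PyTorch sketch of such a TDNN front-end, including the statistical pooling described next. It is a simplification under our own assumptions: all hidden sizes are set to 64 (the paper's Table 1 lists 354 units for TDNN 5), and the class count follows the five classes stated below; it is not the authors' released code.

```python
import torch
import torch.nn as nn

class TDNNFrontEnd(nn.Module):
    """Five TDNN layers with contexts [t-2,t+2], {t-2,t,t+2}, {t-3,t,t+3},
    {t}, {t}, i.e. Conv1d kernels (5,3,3,1,1) with dilations (1,2,3,1,1)
    (total context 5 -> 9 -> 15), followed by a 2-layer BiLSTM and
    mean+std statistical pooling over time."""

    def __init__(self, n_mfcc=20, hidden=64, n_classes=5):
        super().__init__()
        specs = [(5, 1), (3, 2), (3, 3), (1, 1), (1, 1)]  # (kernel, dilation)
        layers, in_ch = [], n_mfcc
        for k, d in specs:
            layers += [nn.Conv1d(in_ch, hidden, k, dilation=d),
                       nn.ReLU(), nn.BatchNorm1d(hidden)]
            in_ch = hidden
        self.tdnn = nn.Sequential(*layers)
        self.bilstm = nn.LSTM(hidden, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU(),
                                nn.Dropout(0.3), nn.Linear(hidden, n_classes))

    def forward(self, x):                        # x: (batch, n_mfcc, T)
        h = self.tdnn(x).transpose(1, 2)         # (batch, T', hidden)
        h, _ = self.bilstm(h)                    # (batch, T', 2*hidden)
        stats = torch.cat([h.mean(dim=1), h.std(dim=1)], dim=1)  # mean+std pool
        return self.fc(stats)                    # class logits

logits = TDNNFrontEnd()(torch.randn(8, 20, 300))  # 8 clips, 20 MFCCs, 300 frames
```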
The statistical pooling layer computes and concatenates the mean and standard deviation by aggregating all \(T\) output frames at the end of the BiLSTM output. The statistical pooling layer accumulates the information across the temporal dimension, which makes it suitable for subsequent layers to operate on the entire temporal speech segment. The _StutterNet_ is trained to classify 5 different types of stuttering including _core behaviors_ and the fluent part of the speech of PWS. For the detailed _StutterNet_ architecture, please refer to [29].

### Limitations

Although this network has shown promising results on the SEP-28k dataset in SD, it has several deficiencies. First, it does not generalize quite well on unseen data and leads to overfitting due to the limited amount of data available for training. In addition to data scarcity, the SEP-28k dataset was collected from podcasts in a clean environment, which makes it difficult for the trained _StutterNet_ to generalize to other environmental conditions. Second, obtaining class-balanced datasets is extremely difficult and very expensive in the speech domain, and the SEP-28k dataset is no exception. Deep neural network (DNN) classifiers trained on highly class-imbalanced datasets are usually biased towards the majority class, which results in poor modeling of minority classes [32]. In addition to the above deficiencies, we found in our previous work on _StutterNet_ [29] that the context is very important in SD. Stuttering detection is a special type of speech characterization problem where each class (i.e., stuttering type) has a varying duration. For example, a block lasts for a shorter duration than prolongations and repetitions. Therefore, a fixed length of context in the basic _StutterNet_ framework might not be the optimized one for detecting all types of stuttering. A larger context increases the performance on the prolongation and repetition types of disfluencies but decreases the recognition performance on fluent speech segments on the UCLASS dataset [29].

## 3 Addressing Deficiencies

In this section, we address the three above-mentioned issues.

### Class Imbalance

In class imbalance learning, the distribution of samples across the classes is not uniform. DNN classifiers trained on highly imbalanced datasets generally perform poorly for the minority class and favor the majority class [33]. In the worst case, where there is an extreme imbalance in the training set, the majority class dominates the learning, and samples among the minority class may go undetected, thus affecting performance [34]. Class imbalance is one of the major problems in real-world applications, including multi-lingual speech recognition [35], and stuttering is no different, as shown in Fig. 1. In fact, stuttering is extremely imbalanced across fluent and other speech disfluencies. Fig. 1 shows that interjections are the most common disfluency present in the SEP-28k dataset, followed by repetitions, blocks, and prolongations, and that the overall disfluent distribution is approximately equivalent to the fluent distribution. Collecting a balanced dataset is difficult and expensive for the stuttering detection task. Other datasets such as Kassel State of Fluency (KSoF) (not publicly accessible) [36] and FluencyBank [28] also suffer from this issue.
\begin{table}
\begin{tabular}{c|c|c|c}
\hline
Layer & Output Layer Size & Layer Context & TC \\
\hline
TDNN 1 & 64 & \([t-2,\ t+2]\) & 5 \\
TDNN 2 & 64 & \(\{t-2,\ t,\ t+2\}\) & 9 \\
TDNN 3 & 64 & \(\{t-3,\ t,\ t+3\}\) & 15 \\
TDNN 4 & 64 & \(\{t\}\) & 15 \\
TDNN 5 & 354 & \(\{t\}\) & 15 \\
BiLSTM & 64 + 2 & \(\{t\}\) & \(T\) \\
Statistical Pooling & \(3*64*2\times 1\) & \([0,T)\) & \(T\) \\
FC1 & 64 & - & \(T\) \\
FC2 & 64 & - & \(T\) \\
FC3 & NumClasses & - & \(T\) \\
\hline
\end{tabular}
\end{table}
Table 1: StutterNet architecture. TC: total context; TDNN: time delay neural network layer; FC: fully connected layer; BiLSTM: bidirectional long short-term memory (2 layers). A layer context of [t-2, t+2] means 5 frames are taken into consideration: two before the current time step and two after the current time step. Over the years, the class imbalance problem has been one of the main concerns due to its prevalence, especially in the biomedical domain. Several methods have been proposed to tackle the class imbalance problem, which are mainly categorized into three groups: data-level, cost-level, and architecture-level [32]. Data-level approaches attempt to re-balance the class distribution by means of some re-sampling methods, which include under-sampling, over-sampling, or combined over- and under-sampling [32, 33]. Architecture-level approaches attempt to modify an existing algorithm or develop a new one to tune and adapt it to imbalanced datasets [32, 33]. Cost-level approaches attempt to influence the loss/objective function by assigning comparatively higher misclassification cost penalties to the minority classes in order to force the model to learn about them [32, 33]. In DNNs, addressing the class-imbalance problem by re-sampling may either discard sensitive speech samples that are extremely important for training (when under-sampling) or add large quantities of duplicated speech samples (under the over-sampling strategy), which eventually makes the training expensive and makes the DNN model likely to overfit [32]. Because of these limitations, we investigate cost-level and architecture-level approaches in this work. For the cost-based approach, we modify the standard cross entropy loss by assigning weights to different classes [37]. We set the class weights inversely proportional to the number of samples. We define the weight for class \(i\) as \(w_{i}=\frac{N}{C*N_{i}}\), where \(N\) is the number of training samples, \(C\) is the number of classes, and \(N_{i}\) is the number of training samples of class \(i\). Therefore, the weighted cross-entropy (WCE) over the train set can be defined as, \[\mathcal{L}_{\mathrm{WCE}}=\frac{1}{\mathcal{B}}\sum_{b=1}^{\mathcal{B}}\frac{\sum\limits_{i}^{M}w_{i}*\log(p_{i})}{\sum\limits_{i,i\in\mathcal{B}}^{M}w_{i}} \tag{1}\] where \(\mathcal{B}\) is the number of batches, \(M\) is the number of stuttered speech samples in a batch \(b\), and \(p_{i}=\left(\frac{e^{c_{i}}}{\sum_{j=1}^{C}e^{c_{j}}}\right)\) is the predicted probability of class \(c_{i}\) of sample \(i\). For the architecture-level approach, we propose a multi-branched approach, similar to the works by C. Lea _et al._ [28] and M. Bader-El-Den _et al._ [38], to address the class imbalance issue in the SD task. Inspired by the fact that the number of fluent-class samples is almost equal to the total number of samples in the disfluent classes, we simultaneously classify fluent vs disfluent classes in one output branch and the subcategories of the disfluent classes in another output branch.
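A minimal sketch of this weighting scheme in PyTorch follows. Feeding \(w_{i}=\frac{N}{C\,N_{i}}\) into `CrossEntropyLoss` is one standard way to realize a weighted cross entropy like Eq. (1) (its default mean reduction normalizes by the sum of the applied weights); the per-class counts below are taken from the SEP-28k statistics reported in Section IV, and the class ordering is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Per-class sample counts (R, P, B, In, F ordering is illustrative).
counts = torch.tensor([3286., 1770., 2103., 3995., 12419.])
N, C = counts.sum(), len(counts)

# w_i = N / (C * N_i): classes with fewer samples get larger weights.
weights = N / (C * counts)

criterion = nn.CrossEntropyLoss(weight=weights)  # weighted CE in the spirit of Eq. (1)

logits = torch.randn(128, C)              # a batch of model outputs
labels = torch.randint(0, C, (128,))      # ground-truth class indices
loss = criterion(logits, labels)          # weight-normalized mean over the batch
```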
The multi-branched architecture with two output branches is shown in Fig. 3 (the figure depicts the overall architecture of the multi-contextual _StutterNet_ with two contexts; for the single-contextual MB _StutterNet_, only the context of 5 is taken into consideration). It has one common encoder \(\mathcal{E}\) (\(\theta_{\mathrm{e}}\)) section followed by two parallel branches referred to as _FluentBranch_ \(\mathcal{F}\) (\(\theta_{\mathrm{f}}\)) and _DisfluentBranch_ \(\mathcal{D}\) (\(\theta_{\mathrm{d}}\)). The embeddings from the encoder are processed by both branches in parallel, where \(\mathcal{F}\) is trained to distinguish between fluent and disfluent samples, and \(\mathcal{D}\) is trained to differentiate within the disfluent sub-categories. The objective is to optimize the sum of the _FluentBranch_ loss \(\mathcal{L}_{\mathrm{f}}\) and the _DisfluentBranch_ loss \(\mathcal{L}_{\mathrm{d}}\). For simplicity, a simple sum of the two losses has been taken into consideration, and it works well. Thus, the overall objective function is defined as: \[\mathcal{L}(\theta_{\mathrm{e}},\theta_{\mathrm{f}},\theta_{\mathrm{d}})=\mathcal{L}_{\mathrm{f}}(\theta_{\mathrm{e}},\theta_{\mathrm{f}})+\mathcal{L}_{\mathrm{d}}(\theta_{\mathrm{e}},\theta_{\mathrm{d}})\,. \tag{2}\] During the evaluation step, if the _FluentBranch_ predicts the sample as fluent, then the _FluentBranch_ predictions are considered; otherwise, the _DisfluentBranch_ predictions are taken into consideration to reveal the stuttering category. ### Data Augmentation Deep learning has achieved rapid progress in the domain of speech processing tasks, including speech recognition [17], speaker recognition [18], emotion recognition [20], and speech disorder detection [29]. However, as a drawback, deep learning based models are data-hungry and require a substantial amount of annotated data for training, and _StutterNet_ is no exception. The model may require more stuttering data than other speech processing domains, as the presence of disfluencies in a stuttering speech corpus is not frequent. Data augmentation is a popular technique that increases the quantity and diversity of the existing annotated training data, improves robustness, and avoids the overfitting of DNNs. For normal speech recognition, data augmentation has been demonstrated to be an effective approach for dealing with data scarcity and enhancing the performance of various DNN acoustic methods [39]. Several data augmentation techniques have been investigated, including pitch adjustment [40], spectral distortion [41], tempo perturbation [42], speed perturbation, cross-domain adaptation [43], adding noise to clean speech [42], spectrogram deformation with frequency and time masking [39], mixspeech [40], etc. On the contrary, so far, very limited attention has been given to data augmentation targeting the speech disorder domain. Figure 1: Stuttering data distribution in the SEP-28k dataset, showing the five classes of single-labeled stuttering samples. In [44], speed and tempo perturbation based data augmentation were used to convert normal speech to dysarthric impaired speech. In [45], a voice conversion adversarial training based framework was used to simulate dysarthric speech from healthy speech. In [46], normal speech samples (also called out-of-domain) were used as data augmentation for dysarthric speech in the bottleneck feature extraction stage. In [47], a speaker-dependent parameter was computed, which was then used for the augmentation of scarce dysarthric speech via tempo adjustment.
In [48], several data augmentation techniques, such as noise, time stretching, pitch shifting, time shift, masking, etc., were analysed for dementia detection. There are some studies on data augmentation targeting text based stuttering detection [13]; however, in the case of audio based stuttering/disfluency detection, this has not been studied and analysed deeply [49]. In the stuttering domain, the employment of data augmentation is not straightforward, because most data augmentations, like time stretching, speed perturbation, etc., completely alter the underlying structure of the stuttering speech sample. Our approach employs reverberation and additive noises because it reflects real-world scenarios and does not significantly change the underlying stuttering in the speech sample, as shown in Fig. 2. Reverberation consists of convolving speech samples with room impulse responses. We utilize the simulated room impulse responses described in [50]. For additive noises, we utilize the MUSAN dataset, comprised of 60 hours of speech from 12 languages, 42 hours of music from various genres, and 900 hours of noises [51]. To augment the original speech samples, we combine the training "clean" set with the below-mentioned augmented copies, which results in a 5-fold increase in training samples (a simplified sketch of the additive mixing is given below). 1. _music:_ A single music file randomly chosen from MUSAN is added to the original clean stuttering speech sample (SNR: 5-15 dB) (the music file is trimmed or repeated as required to match the duration of the clean stuttered speech sample). 2. _noise:_ Throughout the stuttered speech, samples from the MUSAN noises are added at 1-second intervals (SNR: 0-15 dB). 3. _babble:_ Speech samples from three to seven randomly chosen speakers are summed together, then added to the original clean stuttered speech sample (SNR: 13-20 dB). 4. _reverb:_ The "clean" train set is convolved with simulated room impulse responses. All the data augmentation types shown in Fig. 2 were performed using the Kaldi toolkit [52]. ### Multi-contextual StutterNet The multi-contextual framework is based on the way humans perceive speech. In the cochlea, the input acoustic speech signal is partitioned into several frequency bands so that the information in each band can be filtered independently and thus processed in parallel in the human brain [53]. Multi-contextual processing has been studied for action recognition in videos [54], robust ASR [55, 56, 57, 58, 59], and speech separation [60], where the input speech signal is processed in multiple Figure 2: Repetition stuttering with the utterance _"said that, that"_ and the effect of various data augmentations (from 0 to 0.25 seconds, the speaker is saying _"said"_, followed by two repetitions of _"that"_ from 0.25 to 0.6 and 1.4 to 1.75 seconds). streams/contexts (multiple time or frequency resolutions), etc. K. J. Han _et al._ [55] recently proposed a multi-stream2 convolutional neural network for robust acoustic modeling. Chiba _et al._ [61] recently proposed a multi-stream attention-based BiLSTM network for speech emotion recognition. Li _et al._ [62] extracted deep features by training a multi-stream hierarchical DNN for acoustic event detection. Moreover, Sheikh _et al._ [29] found that settings like the context frame size optimized for one stuttering class are not good for other stuttering types. Footnote 2: multi-stream, multi-scale, and multi-resolution are different names for multi-context.
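The additive-noise augmentation described above can be sketched conceptually as follows; the actual pipeline used the Kaldi toolkit, so this NumPy mixing routine, its function name, and the random stand-in signals are purely illustrative assumptions.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add `noise` to `speech` at a target signal-to-noise ratio (in dB)."""
    # Trim or tile the noise so it covers the whole utterance.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(p_speech / p_noise_scaled) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
clean = rng.standard_normal(48000)     # stand-in for a 3 s clip at 16 kHz
music = rng.standard_normal(32000)     # stand-in for a MUSAN music file
augmented = mix_at_snr(clean, music, snr_db=rng.uniform(5, 15))  # music: 5-15 dB
```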
Exploiting this fact — that the optimal context frame size differs across stuttering classes — we investigate how multi-contextual neural networks impact classification performance in the speech disorder domain, and in particular stuttering identification. In our preliminary study, we found that the context window improves the identification performance of two types of disfluencies on the UCLASS dataset [29]. As the context frame size increases in the _StutterNet_, the detection performance on prolongations and repetitions also increases, but it decreases for fluent speech segments and remains almost unaffected for the block type of stuttering. Prolongations and repetitions last longer than other types of disfluencies. To address this issue, we exploit the variable contexts of the MB _StutterNet_ by training the model jointly on different contexts, as shown in Fig. 3. The pseudo-code of the multi-contextual (MC) _StutterNet_ is provided in \(Algorithm\) 1. ## 4 Experimental setup We evaluate our proposed architecture thoroughly on the newly released SEP-28k stuttering dataset [28], and, for the cross-corpora study, we use the FluencyBank and LibriStutter datasets. ### _Datasets_ _SEP-28k_: The SEP-28k stuttering dataset was curated from a set of 385 podcasts. The original podcast recordings have varying lengths. From each podcast, 40 to 250 segments (each of length 3 seconds) were extracted, which resulted in a total of 28,177 segments. The original SEP-28k dataset contains two types of labels: stuttering and non-stuttering. The stuttering labels include blocks, prolongations, repetitions, and fluent segments, whereas the non-stuttering labels include unintelligible, unsure, no speech, poor audio quality, and music, which are not relevant to our study. In our case study, we use only the single-labeled stuttering samples. Out of 28,177 clips, we use only 23,573 segments, among which 3,286 are repetitions, 1,770 are prolongations, 2,103 are blocks, 12,419 are fluent segments, and 3,995 are interjections. This results in a total of 19.65 hours of data, which includes 2.74 hours of repetition, 1.48 hours of prolongation, 1.75 hours of block, 10.35 hours of fluent speech, and 3.34 hours of interjections. After labeling, each 3-second sliced speech segment is downsampled to 16 kHz. We randomly select 80% of the podcasts (without mixing podcasts) for training, 10% of the podcasts for validation, and the remaining 10% for evaluation in a 10-fold cross-validation scheme. The speaker information is missing from the SEP-28k dataset, so we divide the dataset based on podcast identities (assuming each podcast has a unique speaker)3. Footnote 3: The details about the train, validation, and test set splits of the SEP-28k dataset are not publicly available. We create a protocol by ensuring no overlap of podcasts between the train, validation, and test sets. The details are available at [https://shakeeel608.github.io/protocol.pdf](https://shakeeel608.github.io/protocol.pdf) _FluencyBank_: The original FluencyBank AudioVisual dataset was created by Nan Bernstein Ratner (University of Maryland) and Brian MacWhinney (Carnegie Mellon University) for the study of fluency development. In our case study, we use the annotations provided by Apple, similar to SEP-28k [1]. This stuttering dataset was curated from 33 podcasts with 23 males and 10 females, which resulted in a total of 4,144 segmented clips.
Out of these, we use only 3,355 samples (ignoring the non-stuttering and multiply-labeled samples), among which 542 are repetitions, 222 are prolongations, 254 are blocks, 1,584 are fluent segments, and 753 are interjections. This results in a total of 2.80 hours of data, which includes 0.45 hours of repetition, 0.19 hours of prolongation, 0.21 hours of block, 0.63 hours of interjection samples, and 1.32 hours of fluent samples. We have considered those samples where at least two annotators agree on the same labeling of the segment. _Simulated LibriStutter_: The LibriStutter (English) dataset consists of 50 speakers (23 males and 27 females), is approximately 20 hours long, and includes synthetic stutters for repetitions, prolongations, and interjections [63]. Random stuttering was inserted within a four-second window of each speech signal. The original LibriStutter has six classes, including fluent, interjection4, prolongation, and sound, word, and phrase repetitions. To make our experiments consistent with the SEP-28k dataset, we treat all repetitions as one class. After extracting samples based on the class label, we did not find any interjection samples in the dataset, so we only train with three classes: fluent, prolongation, and repetition. For the splitting of the dataset, please refer to Table 3 in the response sheets. Footnote 4: [https://borealisdata.ca/dataset.xhtml?persistentId=doi:10.5683/SP3/NKVOQ](https://borealisdata.ca/dataset.xhtml?persistentId=doi:10.5683/SP3/NKVOQ) ### Training Setup We implement the models using the PyTorch library. The acoustic input features used in this case study are 20-dimensional MFCC features, which are generated every 10 ms using a 20 ms window and extracted using the Librosa library [64]. For the 3-dimensional pitch features and the phoneme features, we use the PyKaldi [65] and Phonexia [66] tools, respectively. For training, we use the Adam optimizer and the cross-entropy loss function with a learning rate of \(10^{-2}\) and a batch size of 128. All the results reported in this paper are averages over the 10-fold validation technique, and all the training runs were stopped by an early stopping criterion with a patience of 7 on the validation loss. ### Evaluation metrics To evaluate the model performance, we use the following metrics: macro F1-score and accuracy, which are standard and widely used in the stuttered speech domain [27, 28, 31, 36, 67]. The macro F1-score (\(\mathcal{F}_{1}\)) (which combines the advantages of both precision and recall in a single metric, unlike the unweighted average recall, which only takes recall into account) from equation (3) is often used in class imbalance scenarios with the intention of giving equal importance to frequent and infrequent classes, and it is also more robust towards the error type distribution [68]. \[\mathcal{F}_{1}=\frac{1}{C}\sum_{k}F1_{k}=\frac{1}{C}\sum_{k}\frac{2\,P_{k}R_{k}}{P_{k}+R_{k}} \tag{3}\] where \(C\) is the number of classes and \(P_{k}\), \(R_{k}\), and \(F1_{k}\) denote the precision, recall, and F1-score with respect to class \(k\). Figure 3: A schematic diagram of the multi-contextual _StutterNet_, which is a multi-class classifier that exploits the different variable contexts (5, 9) in SD. The _FluentBranch_ and _DisfluentBranch_ are composed of 3 fully connected layers followed by a softmax layer for the prediction of the different stuttering classes. CB: Context Block, SPL: Statistical Pooling Layer.
The context C (5, 9) here does not refer to the TDNN layers; rather, it is the kernel size, i.e., the number of frames taken into account when processing speech frames (C=5 means a context of 5 frames is taken at a time, and C=9 means a context of 9 frames is taken at a time). These two contexts are jointly exploited in the MC _StutterNet_, as shown on the left-hand side of the figure. ### Experiments This section briefly describes the experiments carried out in this paper. * We carry out experiments using _StutterNet_ and the two state-of-the-art SD baselines, ResNet+BiLSTM and ConvLSTM, in the same settings to have a fair comparison. * We perform experiments using weighted-loss and multi-branched training schemes for addressing the class-imbalance problem in the stuttering domain, and we also exploit the advantages of both schemes by freezing parts of the network. * We experiment with data-augmented training to evaluate its performance in stuttering detection. * We experiment with the MC _StutterNet_ on top of data augmentation, and also analyse its performance in cross-corpora settings. ## 5 Results In this section, we discuss the results with class-balanced training, data augmentation, and multi-contextual learning. We propose two modifications to the vanilla _StutterNet_ to address the class imbalance issue on the SEP-28k dataset, one of which involves the loss function. We further present the results of cross-corpora experiments on the FluencyBank and LibriStutter datasets. Table II shows the results of the state-of-the-art baselines (the first part lists the baselines) and the impact of class-balanced training. Table III depicts the results using data augmentation on top of class-balanced training. Table IV shows the results using the MC _StutterNet_. _Baselines:_ In addition to our single-branch _StutterNet_ baseline (BL5), we first implement the state-of-the-art ConvLSTM and ResNet+BiLSTM as our baseline models for comparison purposes in the same settings. Our baseline model _StutterNet_ performs well in almost all the disfluent classes except blocks, as compared to the baseline models used in the SEP-28k paper [28]: ConvLSTM (MFCC features) (referred to as BL1), ConvLSTM (phoneme features) (referred to as BL2), and ConvLSTM (pitch+MFCC features) (referred to as BL3). In addition, we also use one more model, i.e., the ResNet+BiLSTM classifier (referred to as BL4) from Kourkounakis _et al._ [27]. Comparing our single-branch baseline _StutterNet_ to BL4, the model performs better only in the interjection and prolongation classes, as shown in Table II. For subsequent comparison, we select BL1 (the best among BL1, BL2, and BL3) and BL4. ### Class Imbalance #### 5.1.1 Weighted cross entropy This section discusses the impact of applying weighted cross entropy to _StutterNet_ (which we call _StutterNet_\({}_{\rm WCE}\)) in order to improve the detection performance of the minority classes, including repetitions, blocks, and prolongations. Figure 4 illustrates the class-wise training loss curves for the normal and weighted cross-entropy loss functions. We can observe that, for the standard cross entropy loss function, the majority class (i.e., fluent) exhibits higher loss values and dominates the overall loss. Therefore, during training with backpropagation, the number of updates for the fluent class dominates the gradient values, which in turn forces the model to focus mainly on correctly classifying/predicting the majority class.
Thus, the minority classes, including the blocks, prolongations, and repetitions, are given less importance during training, which leads to their poor detection performance. Table II confirms that the detection performance on blocks, prolongations, and repetitions is very poor, as they are mostly predicted as fluent samples due to the class-imbalanced nature of the problem. The loss functions for the weighted cross entropy are shown by dashed curves in Fig. 4. The figure indicates that applying weights to the standard cross entropy loss function via equation (1) forces the model to give balanced importance to each disfluency class while optimizing the parameters of the network during backpropagation. This, in turn, helps in boosting the gradient updates for the minority classes during training, and thus increases their detection performance in the baselines BL4 and BL5, as shown in Table II. The _StutterNet_\({}_{\rm WCE}\) gives a relative improvement of 33.49%, 83.50%, 1,185%, and 5.98% over BL5 and 58.69%, 462%, 500%, and 7.84% over BL1 for detecting repetitions, prolongations, blocks, and interjections, respectively. Table II also demonstrates the suitability of WCE with the competitive ResNet+BiLSTM method, resulting in 54%, 57.35%, 517%, and 10.23% relative improvements in repetitions, prolongations, blocks, and interjections over BL4. #### 5.1.2 Multi-branch training Moreover, we address the class imbalance problem via a multi-branch network (referred to as _StutterNet_\({}_{\rm MB}\)). This has two output branches, _FluentBranch_ and _DisfluentBranch_, as shown in Fig. 3 (for _StutterNet_\({}_{\rm MB}\), only a single context of five is taken into consideration). This method improves the detection performance on repetitions, prolongations, and blocks by relative margins of 29.92%, 0.65%, and 144%, respectively, over BL5; by 54.45%, 208%, 13.72%, and 6.80% in repetitions, prolongations, blocks, and fluents, respectively, over BL1; and by 88% and 31.81% in repetitions and blocks, respectively, over BL4; however, BL4 performs better in the prolongation and interjection classes. We also applied multi-branch training to ResNet+BiLSTM, which results in a relative improvement of 85.45% in the repetition class only, as compared to the baseline BL4. Figure 4: Class-wise loss values for different probabilities of the ground truth classes. Here the block, fluent, repetition, prolongation, and interjection classes are correspondingly denoted by B, F, R, P, and I. The standard cross entropy (CE) is shown by solid curves and its weighted version by dashed ones (WX represents the weighted loss of class X). In applying class-balanced training, we found that there is a drop in the macro \(\mathcal{F}_{1}\) score. Using _StutterNet_\({}_{\rm WCE}\) and the WCE-based ResNet+BiLSTM, the macro \(\mathcal{F}_{1}\) score drops from 42.84% and 43.12% to 41.02% in _StutterNet_ and 41.00% in ResNet+BiLSTM, respectively.
By employing multi-branch training, the macro \(\mathcal{F}_{1}\) score drops only very slightly, from 42.84% to 42.26%, in _StutterNet_, but it drops remarkably from 43.12% to 39.20% in ResNet+BiLSTM.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline\hline
 & \multicolumn{6}{c}{Accuracy} & \\
\hline
Method & R & P & B & In & F & TA & \(\mathcal{F}_{1}\)(\%) \\
\hline
\multicolumn{8}{c}{Baselines} \\
\hline
ConvLSTM + F\({}_{\text{MFCC}}\) (BL1) [28] & 22.83 & 10.61 & 06.34 & 56.74 & 72.35 & 52.68 & 34.00 \\
ConvLSTM + F\({}_{\text{phone}}\) (BL2) [28] & 10.18 & 01.06 & 00.35 & 43.88 & 74.48 & 48.43 & 24.00 \\
ConvLSTM + F\({}_{\text{F0+MFCC}}\) (BL3) [28] & 19.28 & 09.55 & 08.51 & 51.78 & 66.60 & 48.47 & 30.80 \\
ResNet+BiLSTM (BL4) [27] & 18.76 & 41.24 & 5.47 & 57.18 & 88.19 & 62.36 & 43.12 \\
\hline
\multicolumn{8}{c}{Class Imbalance} \\
\hline
ResNet+BiLSTM + WCE [27] & 28.90 & 64.89 & 33.79 & 63.03 & 46.90 & 47.42 & 41.00 \\
MB ResNet+BiLSTM [27] & 34.79 & 30.19 & 5.92 & 49.26 & 75.47 & 55.62 & 39.20 \\
StutterNet + WCE (_StutterNet_\({}_{\rm WCE}\)) & 36.23 & 59.73 & 38.05 & 61.19 & 41.59 & 45.26 & 41.02 \\
MB StutterNet (_StutterNet_\({}_{\rm MB}\)) & 35.26 & 32.76 & 7.21 & 56.04 & 77.27 & 58.56 & 42.26 \\
\hline
\(\mathcal{M}_{\rm enc}^{\rm frz}\) & 39.82 & 37.91 & 10.45 & 60.57 & 73.49 & 58.58 & 44.42 \\
\(\mathcal{M}_{\rm enc,dist}^{\rm frz}\) & 29.25 & 45.85 & 18.11 & 56.88 & 74.49 & 58.18 & 44.80 \\
\(\mathcal{M}_{\rm enc,fluent}^{\rm frz}\) & 31.15 & 27.62 & 05.01 & 57.64 & 73.64 & 55.83 & 38.60 \\
\hline\hline
\end{tabular}
\end{table}
TABLE II: Results with the baselines (BL) and using class imbalance learning (clean training). B: Block, F: Fluent, R: Repetition, P: Prolongation, In: Interjection, TA: Total Accuracy, \(\mathcal{F}_{1}\): Macro F1-score. F\({}_{\text{MFCC}}\): MFCC input features, F\({}_{\text{F0}}\): 3-dim (pitch, pitch-delta, voicing) features, F\({}_{\text{phone}}\): phoneme features, MB: multi-branch, WCE: weighted cross entropy, \(\mathcal{M}_{\rm enc}^{\rm frz}\): freezing the encoder, \(\mathcal{M}_{\rm enc,dist}^{\rm frz}\): freezing the encoder and DisfluentBranch, \(\mathcal{M}_{\rm enc,fluent}^{\rm frz}\): freezing the encoder and FluentBranch. Fig. 5: Accuracy confusion matrices showing the confusion of fluent speech with repetitions and blocks. F: Fluent, R: Repetition, B: Block, P: Prolongation, In: Interjection, BL: Baseline, WCE: Weighted cross-entropy, MB: Multi-branch. #### 5.1.3 Analysis of confusion matrices Using the WCE training scheme, the detection performance of the minority classes improved remarkably at the cost of fluent accuracy. From Fig. 5 (a), we observe that most of the repetitions and blocks are classified as fluent speech. Initially, we hypothesized that this is most likely because of the imbalanced nature of the problem. Despite addressing the class imbalance problem in the stuttering domain using WCE and multi-branched training, we found that the block and repetition types of stuttering disfluencies are the ones that still get confused with the fluent class, as depicted in Fig. 5 (b) and Fig. 5 (c). This makes intuitive sense, because blocks are closely related to fluent segments, differing only by an initial silence or gasp followed by fluent-like utterances.
The repetitions, on the other hand, contain some word or phrasal repetitions, which are actually fluent utterances if we carefully analyze their individual parts. Consider the utterance _he he is a boy_: the word _he_ is repeated twice, but each of the two _he_'s is a fluent part when analyzed on an individual basis. #### 5.1.4 Exploiting the advantages of WCE and MB StutterNet Since _StutterNet_\({}_{\rm WCE}\) and _StutterNet_\({}_{\rm MB}\) address the class-imbalance issue differently, we combine them to exploit both of their advantages. We first pre-train the _StutterNet_\({}_{\rm WCE}\) and use it as the _DisfluentBranch_ in our multi-branched _StutterNet_. After the pre-training step, we freeze parameters in two ways. First, we freeze the parameters of the contextual encoder only and fine-tune only the two output branches. We label this training scheme as \(\mathcal{M}_{\rm enc}^{\rm frz}\). By exploiting this method, we achieve an overall detection improvement of 5.11% in \(\mathcal{F}_{1}\) over _StutterNet_\({}_{\rm MB}\). Second, we freeze the base encoder and the _StutterNet_\({}_{\rm WCE}\) (_DisfluentBranch_) and append one more _FluentBranch_ (to distinguish between fluent and stuttered samples). We refer to this as \(\mathcal{M}_{\rm enc,dist}^{\rm frz}\), and it results in an overall improvement of 6.01% in \(\mathcal{F}_{1}\) over _StutterNet_\({}_{\rm MB}\). We also experiment by first training the model using weighted cross entropy in the _FluentBranch_, and then fine-tuning the _DisfluentBranch_ while freezing the parameters of the encoder and _FluentBranch_. We refer to this as \(\mathcal{M}_{\rm enc,fluent}^{\rm frz}\), and the results for this configuration are shown in Tables II and III. However, this training scheme degrades the performance in almost all the disfluent classes in comparison to \(\mathcal{M}_{\rm enc}^{\rm frz}\) (freezing the encoder only) and \(\mathcal{M}_{\rm enc,dist}^{\rm frz}\) (freezing the encoder and _DisfluentBranch_). This is possibly because the _base encoder_ is trained only to distinguish between fluent and disfluent classes via the _FluentBranch_; freezing its parameters in the fine-tuning step then further inhibits it from learning the sub-classes (repetitions, prolongations, interjections, and blocks) of the disfluent category, which makes their overall detection performance lower. ### Data augmentation The main experimental results obtained with various data augmentation techniques are shown in Table III, where we compare the detection performance obtained with data-augmented training to that on the baseline clean dataset. We first separately train the _StutterNet_\({}_{\rm MB}\) with the different data augmentation techniques described in Section III. We found that training MB ResNet+BiLSTM and _StutterNet_\({}_{\rm MB}\) with the Kaldi augmentation increases the overall \(\mathcal{F}_{1}\) performance. From Table III, it can be seen that data augmentation does help in improving the \(\mathcal{F}_{1}\) in almost all the cases. Applying data augmentation with the single-branched _StutterNet_ and _StutterNet_\({}_{\rm WCE}\), there are relative improvements of 5.74% and 3.50% in \(\mathcal{F}_{1}\), respectively, over the clean versions of the training. When data augmentation is applied to MB training, there are relative improvements of 4.17% and 0.61% in \(\mathcal{F}_{1}\) using _StutterNet_\({}_{\rm MB}\) and ResNet+BiLSTM over clean training, respectively.
Moreover, applying the Kaldi data augmentation on top of the \(\mathcal{M}_{\rm enc}^{\rm frz}\) and \(\mathcal{M}_{\rm enc,dist}^{\rm frz}\) training schemes yields relative improvements of 7.28% and 6.81% in overall accuracy, respectively. In addition to the Kaldi augmentation, we also applied pitch scaling and a bandpass filter as data augmentation; however, we did not achieve much improvement in SD. ### Multi-contextual StutterNet Different contexts show different optimized class accuracies. In order to exploit these different variable contexts and to improve the detection performance further, we propose a multi-contextual (MC) _StutterNet_ for SD, as shown in Fig. 3. We jointly train and optimize the MC _StutterNet_ on the clean and augmented data using the variable contexts of 5 and 9. The embeddings extracted from each context, as depicted by the \(\mathcal{CB}\) block, are passed to a two-layered BiLSTM unit and then concatenated after applying the statistical pooling layer (SPL), resulting in a \(1\times 2\times(2\times N)\)-dimensional feature vector (where \(N\) is the layer size), which is then fed in parallel to the two branches, _FluentBranch_ and _DisfluentBranch_, for class predictions. Figure 6: Impact of data augmentation (A4) on the cross-corpora FluencyBank dataset with MC _StutterNet_ (R: Repetition, P: Prolongation, B: Block, In: Interjection, F: Fluent, XA: X: disfluency class and A: accuracy, MC: multi-contextual _StutterNet_, SC: same corpora, CC: cross corpora, A4: augmentation). The bar plot clearly shows that the MC _StutterNet_ model trained on the clean SEP-28k dataset fails to generalize on the FluencyBank cross-corpora data. Applying data augmentation improves the stuttering detection on the cross-domain corpora, as shown by the orange bars (4\({}^{th}\) column in each disfluency). This results in relative improvements of 13.40%, 15.67%, 6.41%, and 1.38% in the prolongation, block, interjection, and fluent classes over _StutterNet_\({}_{\rm MB}\) (clean), and thus an improvement of 3.79% in the macro \(\mathcal{F}_{1}\) score; however, employing multi-contextual training, we see a drop from 35.26% to 33.36% in repetition accuracy over the _StutterNet_\({}_{\rm MB}\). In comparison to the baseline BL5 (vanilla single-branch _StutterNet_), there are relative improvements of 22.92%, 14.13%, 181.81%, and 3.27% in the repetition, prolongation, block, and interjection classes, respectively, and thus a relative improvement of 2.38% in the macro \(\mathcal{F}_{1}\) score. The MC _StutterNet_ also performs better in macro \(\mathcal{F}_{1}\) score in comparison to the state-of-the-art baselines BL1 and BL4. Applying data augmentation on top of the MC _StutterNet_, we found that, except for noise augmentation, all the other data augmentation types help in improving the macro \(\mathcal{F}_{1}\) score in comparison to the MC _StutterNet_ (clean). The noise-augmented samples help in improving the detection accuracy of the fluent class. For interjections, we found that all the data augmentations help, and for blocks, only babble augmentation helps in improving the detection accuracy. We also found that applying all four data augmentation techniques in the MC _StutterNet_ results in accuracy improvements in the prolongation and repetition classes; however, with individual data augmentations, a drop in their accuracies can be observed in Table IV.
By applying all four data augmentations in the MC _StutterNet_ training, there are overall improvements of 1.76%, 17.15%, and 134% in repetitions, prolongations, and blocks, respectively, as shown in Table IV, which is a 4.48% relative gain in macro \(\mathcal{F}_{1}\) score over the augmented _StutterNet_\({}_{\rm MB}\) training. ### Summary of proposed methods This work advances the basic _StutterNet_ by addressing its limitations with three modifications. In Table V, we present a summary of the results demonstrating systematic improvements. We observe that all the proposed modifications help to gradually improve the performance, and we achieve a 7.37% overall relative improvement in terms of macro \(\mathcal{F}_{1}\) score. ### Cross corpora evaluation Table VI shows the results on the cross-corpora datasets, FluencyBank and simulated LibriStutter. Having optimized on the SEP-28k dataset in terms of data augmentation and multi-contextual training, we aim to evaluate our proposed MC _StutterNet_ in a cross-corpora scenario. We train _StutterNet_ on the SEP-28k dataset and evaluate it on the _FluencyBank_ dataset, which comprises samples from 33 podcasts. We found that the model trained on one corpus (SEP-28k) fails to generalize and performs poorly on cross-domain corpora. As can be seen from Table VI and Fig. 6, the \(\mathcal{F}_{1}\) detection performance decreases remarkably from 43.86% to 38.92% when employing clean training. We hypothesize that the performance drop is due to domain mismatch arising from the differences in speaker accent and recording environment between the SEP-28k and FluencyBank datasets. The repetition and block classes show more degradation of their performance in the cross-corpora scenario. Furthermore, we apply data augmentation in the cross-corpora evaluation, and we found that it boosts the detection performance of all the classes, which results in an improvement of 13.23% in \(\mathcal{F}_{1}\). Experimental evaluation of the MC _StutterNet_ on the simulated LibriStutter dataset [63] results in a macro \(\mathcal{F}_{1}\) score of \(\approx\) 91% (shown in Table VI). However, the performance drops considerably when evaluated in the cross-corpora setting in comparison to the real stuttering dataset FluencyBank. The MC _StutterNet_ trained on the SEP-28k dataset shows extremely poor performance when tested on the simulated LibriStutter dataset. Applying data-augmented training, we see only minimal improvements of 1% and 2% in repetition and prolongation, respectively. The LibriStutter simulated dataset does not reflect the actual nature and characteristics of stuttered speech. The results also confirm that a model trained on simulated stuttered datasets should not be used in a real clinical setting.
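For concreteness, the two-branch objective of Eq. (2) and the inference rule of Section III (trust the _FluentBranch_ when it predicts fluent, otherwise take the _DisfluentBranch_ sub-category) can be sketched as below. The shared encoder is abstracted away, and all layer sizes, names, and the class ordering are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    """Sketch of the FluentBranch/DisfluentBranch heads on a shared embedding."""
    def __init__(self, emb_dim=256, n_disfluent=4):
        super().__init__()
        # Each branch: 3 FC layers; softmax is implicit in the CE losses below.
        def branch(out_dim):
            return nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))
        self.fluent = branch(2)                # fluent vs disfluent
        self.disfluent = branch(n_disfluent)   # e.g. R, P, B, In

    def forward(self, emb):
        return self.fluent(emb), self.disfluent(emb)

head = MultiBranchHead()
emb = torch.randn(128, 256)                    # shared-encoder embeddings
y_fluent = torch.randint(0, 2, (128,))         # 1 = fluent, 0 = disfluent
y_disfl = torch.randint(0, 4, (128,))          # disfluent sub-category
f_logits, d_logits = head(emb)
ce = nn.CrossEntropyLoss()
loss = ce(f_logits, y_fluent) + ce(d_logits, y_disfl)   # Eq. (2): L_f + L_d

# Inference rule: keep the FluentBranch decision when it says "fluent",
# otherwise reveal the DisfluentBranch sub-category prediction.
is_fluent = f_logits.argmax(dim=1) == 1
sub_class = d_logits.argmax(dim=1)
```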
\begin{table}
\begin{tabular}{c c}
\hline
Method & \(\mathcal{F}_{1}\) (\%) \\
\hline
StutterNet (BL5) (Clean) [29] & 42.84 \\
\hline
\multicolumn{2}{c}{Class Imbalance} \\
\hline
_StutterNet_\({}_{\rm WCE}\) (Clean) & 41.02 \\
_StutterNet_\({}_{\rm MB}\) (Clean) & 42.26 \\
\(\mathcal{M}_{\rm enc}^{\rm frz}\) (Clean) & 44.42 \\
\(\mathcal{M}_{\rm enc,dist}^{\rm frz}\) (Clean) & 44.80 \\
\hline
\multicolumn{2}{c}{Data Augmentation} \\
\hline
StutterNet + A4 & 45.30 \\
_StutterNet_\({}_{\rm WCE}\) + A4 & 44.34 \\
_StutterNet_\({}_{\rm MB}\) + A4 & 44.03 \\
\(\mathcal{M}_{\rm enc}^{\rm frz}\) + A4 & 44.06 \\
\(\mathcal{M}_{\rm enc,dist}^{\rm frz}\) + A4 & 45.76 \\
\hline
\multicolumn{2}{c}{Multi-Contextual} \\
\hline
MC StutterNet (Clean) & 43.86 \\
MC StutterNet + A4 & **46.00** \\
\hline
\end{tabular}
\end{table}
TABLE V: Summary of the results of the proposed methods (A4: all four augmentations (Bb + Mu + No + Rv)). ## VI Conclusion This paper addresses the problem of class imbalance in the stuttering domain. We address the class imbalance problem via two strategies: weighted cross entropy and a multi-branch training scheme. The weighted cross entropy loss function forces the _StutterNet_ classifier to give more attention to the minority classes. We also investigate the effectiveness of data augmentation in the SD domain. For data augmentation, we employ reverberations and additive noises from the MUSAN dataset [51]. Additionally, we propose the MC _StutterNet_, a time delay based neural network for SD. The proposed MC _StutterNet_ is a multi-class classifier that exploits different variable contexts, trained jointly on the contexts (5, 9) using CE. More importantly, the experiments using data augmentation on the FluencyBank dataset revealed that our methodology generalizes better in the cross-corpora domain. For class imbalance, we have used only a simple weighting scheme in the cross-entropy loss function, which results in an accuracy trade-off between the majority and minority classes. In general, data augmentation helps in the stuttering domain; however, its use there is not straightforward, and is thus limited, because most data augmentations, such as time stretching, speed perturbation, and so on, completely alter the underlying structure of the stuttering speech sample. For data augmentation to be more effective, a stuttering-domain-specific data augmentation needs to be developed. In addition, the stuttering detection domain has not matured enough, so a single metric that reflects the overall performance of a model, as in other speech domains, is yet to be developed. In addition to the accuracy metric, we have also used the macro F1-score (\(\mathcal{F}_{1}\)), which gives a good indication for the better evaluation of the proposed methods. Moreover, we use joint training over multiple contexts in this work, and it is possible that one context can dominate the training. A visual summary of the macro \(\mathcal{F}_{1}\) scores of the models in stuttering detection is shown in Fig. 7. The proposed methodology shows promising results, and it can detect whether stuttering is present in a speech sample or not; however, it cannot predict where exactly the stuttering occurs in the speech frames. For future study, we would like to explore combining other different types of neural networks in SD to predict the frames where exactly the stuttering occurs. In addition to varying the context, the investigation of varying depth and different numbers of convolutional kernels is also an interesting topic to study in SD.
Moreover, the temporal information captured by recurrent neural networks can also be investigated in a multi-stream fashion for the identification of disfluent speech frames. The performance comparison of the proposed systems with two state-of-the-art systems demonstrates that, even though we achieve noticeable advancements, automated stuttering detection requires further research before a clinically usable system can be developed. Stuttering detection is fundamentally a challenging task due to inter-person variations, language/accent/dialect variability, and other speaking variations.
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline\hline
 & & & \multicolumn{5}{c}{Accuracy} & & \\
\hline
Method & TrainSet & TestSet & R & P & B & In & F & TA & \(\mathcal{F}_{1}\)(\%) \\
\hline
MC _StutterNet_ & SEP-28k & SEP-28k & 33.36 & 37.15 & 08.34 & 59.63 & 78.34 & 59.77 & 43.86 \\
MC _StutterNet_ (Clean) & SEP-28k & FluencyBank & 19.48 & 35.80 & 01.83 & 56.36 & 80.85 & 56.48 & 38.92 \\
MC _StutterNet_ + A4 & SEP-28k & FluencyBank & 22.54 & 42.22 & 4.36 & 64.56 & 84.04 & 60.92 & 44.07 \\
\hline
MC _StutterNet_ & LibriStutter & LibriStutter & 93.36 & 76.26 & NA & NA & 98.19 & 96.11 & 91.00 \\
MC _StutterNet_ (Clean) & SEP-28k & LibriStutter & 06.05 & 00.48 & NA & NA & 99.95 & 77.25 & 30.00 \\
MC _StutterNet_ + A4 & SEP-28k & LibriStutter & 01.11 & 02.39 & NA & NA & 97.35 & 75.43 & 31.00 \\
MC _StutterNet_ & LibriStutter & SEP-28k & 24.24 & 55.11 & NA & NA & 60.60 & 53.21 & 41.00 \\
\hline\hline
\end{tabular}
\end{table}
TABLE VI: Results on the cross-corpora FluencyBank and simulated LibriStutter datasets. The first row is from Table IV and shows the results on the same-corpora SEP-28k dataset. The last two rows of the first half of the table show the results where the model is trained on SEP-28k and tested on FluencyBank in the cross-corpora setting. B: Block, F: Fluent, R: Repetition, P: Prolongation, In: Interjection; Bb: Babble, Rv: Reverberation, Mu: Music, No: Noise, A4: all four (Bb + Rv + Mu + No) augmentations. Multi-branched training is also used in the MC _StutterNet_. Fig. 7: Macro \(\mathcal{F}_{1}\) score summary of the proposed and baseline models (BL1 and BL4). The red, light blue, and purple bars indicate the baseline, same-corpora, and cross-corpora settings. (CC: cross corpora, with training on SEP-28k and evaluation on the FluencyBank dataset; A4: all four data augmentations. The second-to-last bar shows the cross-corpora performance on FluencyBank when trained on the clean SEP-28k dataset, and the last bar shows the same with data augmentation.) The scope of the current work is limited to addressing basic problems related to stuttering detection. This work can be extended with speaker-adaptive training and domain adaptation to further improve stuttering detection. Whether the cognitive aspects of stuttering are well modeled or not remains to be explored through explainability analysis of the used deep models along with auxiliary data (e.g., functional magnetic resonance imaging or fMRI data). However, we think the base architecture (i.e., MC _StutterNet_) developed in this work could still be useful for such explainability analysis. Stuttering detection can be improved with multimodality by integrating visual cues related to head nodding, lip tremors, unusual lip shapes, quick eye blinks, facial expressions, etc. We think that the same base acoustic model can be used in a fusion framework. We have found that the blocks are the most difficult to detect.
It would be interesting to analyze the disfluencies of the speakers that are hardest to identify by carrying out more ablation analyses. Future work also includes exploring self-supervised models that exploit unlabelled audio data. ## Acknowledgment This work was made with the support of the French National Research Agency, in the framework of the project ANR BENEPHIDIRE (18-CE36-0008-03). Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER, and several universities as well as other organizations (see [https://www.grid5000.fr](https://www.grid5000.fr)), and using the EXPLOR centre, hosted by the University of Lorraine.
2307.03199
The key role of Lagrangian multiplier in mimetic gravitational theory in the frame of isotropic compact star
Recently, the mimetic gravitational theory has gained much attention in the frame of cosmology as well as in the domain of astrophysics. In this study, we show that in the frame of mimetic gravitation theory we are not able to derive an isotropic model. As a result, our focus shifts towards combining mimetic gravitational theory with the Lagrangian multiplier. The field equations of a static isotropic gravitational system that controls the geometry and dynamics of star structure are studied in the frame of mimetic theory coupled with a Lagrangian multiplier using a non-linear equation of state. An energy density is assumed from where all the other unknowns are fixed and a new isotropic model is derived. The physical analysis of this model is studied from different viewpoints and consistent results compatible with a realistic isotropic star are investigated analytically and graphically. Ultimately, we demonstrate the stability of the model in question by employing the adiabatic index technique.
G. G. L. Nashed
2023-07-04T09:04:18Z
http://arxiv.org/abs/2307.03199v1
# The key role of Lagrangian multiplier in mimetic gravitational theory in the frame of isotropic compact star ###### Abstract Recently, the mimetic gravitational theory has gained much attention in the frame of cosmology as well as in the domain of astrophysics. In this study, we show that in the frame of the mimetic gravitational theory we are not able to derive an isotropic model. As a result, our focus shifts towards combining the mimetic gravitational theory with a Lagrangian multiplier. The field equations of a static isotropic gravitational system that controls the geometry and dynamics of the star structure are studied in the frame of the mimetic theory coupled with a Lagrangian multiplier, using a nonlinear equation of state. An energy density is assumed, from which all the other unknowns are fixed and a new isotropic model is derived. The physical analysis of this model is studied from different viewpoints, and consistent results compatible with a realistic isotropic star are investigated analytically and graphically. Ultimately, we demonstrate the stability of the model in question by employing the adiabatic index technique. pacs: 04.50.Kd, 04.25.Nx, 04.40.Nr ## I Introduction Recently, conclusive evidence has developed suggesting that Einstein's theory of general relativity (GR) needs to be modified. The justifications for this stem from the challenges in renormalizing general relativity and from its uncertain behavior in high-gravity regions, such as the exteriors of black holes and neutron stars. Additionally, the confirmed accelerated expansion of our universe cannot be explained by GR alone. To use GR to explain the phenomenon of the accelerated expansion of our universe, the presence of exotic matter fields like dark matter and dark energy must be assumed. To date, no experimental support for these has been forthcoming. Another method is to amend Einstein's GR in a way that retains its basic achievements. Despite these issues, GR is still successful in the solar system. The success of the Event Horizon Telescope in capturing the image of a black hole's shadow in M87 [1] and the advancements made in detecting gravitational waves [2] have significantly bolstered the position of general relativity as the preeminent gravitational theory compared to other theories of gravity. However, the aspects that GR has not clarified must also be faced. We advocate the perspective that a modification of the governing field equations within the geometric sector holds the key to addressing these unresolved concerns. Modified gravitational theories have made significant progress in explaining some of the unsolved issues of GR. There are many modified theories of gravity that can overcome the shortcomings of GR. One modification of GR is to add a scalar field. In the frame of a scalar field coupled with the Ricci scalar, static neutron stars have been investigated using two types of cosmological inflationary attractor theories, i.e., the induced inflationary attractors and the quadratic inflationary attractors [3]. Among these modifications is the \(f(R)\) gravitational theory [see for example 4; 5]. In the frame of \(f(R)\) gravity, a study of the neutron star phenomenology of \(R^{p}\) attractor theories in the Einstein frame has been carried out [6]. In this study we are interested in another modification of GR; that is, we focus on the mimetic gravitational theory, which has been suggested as a fresh approach to studying the problem of dark matter [7].
This concerns the introduction of a mimetic scalar field denoted as \(\eta\), which, despite lacking dynamics in its construction, plays a crucial role in imparting dynamism to the longitudinal degree of freedom within the gravitational system. The gravitational system's dynamic longitudinal degree serves as an analogue of pressureless dark matter, mimicking its properties [7]. The problem of cosmological singularities [8] and the singularity at the core of a black hole [9] can be effectively tackled through the altered variant of the mimetic gravitational theory. Furthermore, the mimetic theory of gravity has substantiated that gravitational waves propagate at the speed of light, in perfect accordance with the discoveries made from the event GW170817 and the corresponding optical observations [10; 11; 12]. Furthermore, it has been demonstrated that the mimetic theory can explain the coherent rotational patterns observed in spiral galaxies without relying on the existence of dark matter particles [13; 14]. Lately, there has been a significant surge of enthusiasm surrounding the cosmological framework, due to the emergence of the mimetic theory [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32], and black hole physics [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. The theory has further extended its scope to include \(f(R)\) mimetic gravity, incorporating additional insights and explanations [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65], and mimetic Gauss-Bonnet gravity [66; 67; 68; 69; 70]. Specifically, a comprehensive framework combining early inflation and late-time acceleration within the context of mimetic \(f(R)\) gravity was formulated [71]. It has been stressed that within the context of mimetic \(f(R)\) gravity, the period of inflation can be identified [71]. Nowadays, the mimetic theory is one of the most compelling theories of gravity: without introducing any additional matter field, it represents the dark side of the universe as a geometric effect. Observations ensure that approximately 26% of the energy content of the universe is related to the dark matter sector, while approximately 69% constitutes dark energy [72]. Numerous observations support the presence of dark matter and dark energy [73]. Dark energy, which has gained significance in recent times, is believed to be a smooth component characterized by negative pressure. It possesses an anti-gravity characteristic and is actively propelling the universe, playing a key role in the accelerated expansion of the cosmos [74]. Dark matter performs two crucial functions in the development of the universe: firstly, it provides the necessary gravitational force for the rotation of spiral galaxies and galaxy clusters; secondly, it plays a significant role as an essential component in the amplification of disturbances and the formation of structures during the early stages of the universe. As a result, dark matter begins to condense into an intricate system of dark matter halos, whereas regular matter undergoes collapse due to photon radiation, eventually settling into the well-formed potential wells of dark matter. In the absence of dark matter, the process of galaxy formation in the universe would have taken significantly longer than what has been observed. The structure of the current investigation is outlined as follows: In Sec. II, we introduce the fundamental principles of the mimetic theory combined with the Lagrangian multiplier. In Sec.
III, we list the necessary conditions that must be obeyed by any realistic isotropic model. Also in Sec. III, we show, analytically and graphically, that the model under consideration satisfies all the necessary conditions that must be possessed by any realistic isotropic star. In Section IV we study the stability of the model presented in this study using the adiabatic index. The final section is devoted to discussing the main results of the present study. ## II Isotropic solution in the mimetic theory combined with the Lagrange multiplier What is called "mimetic dark matter" was introduced to the scientific community in [15], although mimetic theories had already been discussed in [21; 42; 46; 47; 51; 75; 76; 77; 78; 79; 80]. According to the mimetic theory, the physical metric \(g_{\alpha\beta}\) is linked to the auxiliary metric \(\bar{g}_{\alpha\beta}\) and to the mimetic scalar field \(\eta\) by the conformal transformation: \[g_{\alpha\beta}=-\left(\bar{g}^{\mu\nu}\partial_{\mu}\eta\partial_{\nu}\eta\right)\bar{g}_{\alpha\beta}\,, \tag{1}\] where, when the auxiliary metric undergoes a conformal transformation, \(\bar{g}^{\alpha\beta}\rightarrow\Omega\bar{g}^{\alpha\beta}\), the physical metric \(g^{\alpha\beta}\) remains invariant, i.e., it remains unchanged. In the present study, we employ mimetic-like gravity coupled with a Lagrange multiplier. The action of the gravity model resembling the mimetic theory, in conjunction with the Lagrange multiplier \(\lambda\) and the function \(\omega\), has the form1 [52]: Footnote 1: It was shown in [81] that the function \(\omega\) is necessary in the construction of the mimetic theory, since it allows us to achieve the geometric characteristics of a black hole, including the presence of one or more horizons. \[S=\int\mathrm{d}x^{4}\sqrt{-g}\left\{R+\lambda\left(g^{\mu\nu}\omega\partial_{\mu}\eta\partial_{\nu}\eta+1\right)\right\}+L_{\mathrm{matt}}\,, \tag{2}\] where \(L_{\mathrm{matt}}\) is the Lagrangian of the matter field and \(\eta\) is the mimetic scalar field. The field equations can be obtained by taking the variation of the action (2) with respect to the metric tensor \(g_{\mu\nu}\), resulting in the following expressions: \[0=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\frac{1}{2}g_{\mu\nu}\left\{\lambda\left(g^{\rho\sigma}\omega\partial_{\rho}\eta\partial_{\sigma}\eta+1\right)\right\}-\lambda\partial_{\mu}\eta\partial_{\nu}\eta+\frac{1}{2}T_{\mu\nu}\,, \tag{3}\] where \(T_{\mu\nu}\) is the energy-momentum tensor corresponding to the matter field. Additionally, varying the action (2) with respect to the mimetic field \(\eta\) yields: \[2\nabla^{\mu}(\lambda\omega\,\partial_{\mu}\eta)=0\,. \tag{4}\] When the action (2) is varied with respect to the Lagrange multiplier \(\lambda\), the following outcome is obtained: \[g^{\rho\sigma}\omega\partial_{\rho}\eta\partial_{\sigma}\eta=-1\,. \tag{5}\] It is now time to apply the field equations (3) and (4), using the constraint of Eq. (5), to the following spherically symmetric spacetime \[ds^{2}=f(r)dt^{2}-\frac{dr^{2}}{f_{1}(r)}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\,, \tag{6}\] in order to derive an interior solution. The functions \(f(r)\) and \(f_{1}(r)\) mentioned here are unknown functions that will be determined by solving the set of field equations. Furthermore, we assume that \(\eta\) depends solely on \(r\). Applying Eqs.
(3) and (4) to the spacetime (6) yields the following set of differential equations. The \((t,t)\)-component of the field equation (3) is: \[\rho(r)=\frac{1-f_{1}-rf_{1}^{\prime}}{r^{2}}\,, \tag{7}\] the \((r,r)\)-component of the field equation (3) is: \[p(r)=\frac{f_{1}f^{\prime}r-f+ff_{1}-\lambda\omega(r)\eta^{\prime\,2}ff_{1}r^{2}}{r^{2}f}\,, \tag{8}\] the \((\theta,\theta)\) and \((\phi,\phi)\) components of the field equation (3) have the following structure: \[p(r)=\frac{2\,f_{1}f^{\prime\prime}fr-f^{\prime 2}f_{1}r+f\left(2\,f_{1}+f_{1}^{\prime}r\right)f^{\prime}+2\,f_{1}^{\prime}f^{2}}{4f^{2}r}\,, \tag{9}\] and the field equation (4) takes the form: \[0=2\lambda^{\prime}\omega fr+\left[\omega^{\prime}fr+\omega\left(f^{\prime}r+4\,f\right)\right]\lambda\,, \tag{10}\] where \(f\equiv f(r)\), \(f_{1}\equiv f_{1}(r)\), \(\omega\equiv\omega(r)\), \(\lambda\equiv\lambda(r)\), \(f^{\prime}=\frac{df}{dr}\), \(f_{1}^{\prime}=\frac{df_{1}}{dr}\), \(\eta^{\prime}=\frac{d\eta}{dr}\), \(\omega^{\prime}=\frac{d\omega}{dr}\), and \(\lambda^{\prime}=\frac{d\lambda}{dr}\). Furthermore, we make the assumption that the energy-momentum tensor of the isotropic fluid can be represented in the following manner: \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}\,, \tag{11}\] where \(\rho\) represents the energy density, \(p\) denotes the pressure, and \(u^{\mu}\) is a timelike vector defined as \(u^{\mu}=[1,0,0,0]\). In this particular investigation, we consider the matter content to be characterized by the energy density \(\rho\) and the pressure \(p\), respectively.2 Footnote 2: Throughout this research, we will utilize geometrized units where the constants \(G\) and \(c\) are set to \(1\). Assuming the fluid under consideration is a perfect fluid obeying the equation of state (EoS) \(p=p\left(\rho\right)\), the conservation law of matter gives: \[0=\nabla^{\mu}\,T_{\mu r}=2f\frac{dp}{dr}+f^{\prime}\left(\rho+p\right)\,. \tag{12}\] In this particular context, we assume that the energy density and the pressure of the system vary with the radial coordinate. If the form of the EoS \(\rho=\rho(p)\) is given, then Eq. (12) yields: \[\frac{1}{2}\ln f=-\int^{r}dr\frac{\frac{dp}{dr}}{\rho+p}=-\int^{p(r)}\frac{dp}{\rho(p)+p}\,. \tag{13}\] Eq. (13) can be used in the interior of the star, but it cannot be used in the exterior. Nevertheless, it is possible to assume that both \(f(r)\) and its derivative \(f^{\prime}(r)\) are continuous at the surface of the star. Considering the count of unknown functions and independent equations, we find a total of six unknown functions within the compact star. To address this, we will employ the constraint given by Eq. (5), which states that \(\eta^{\prime}=\frac{1}{\sqrt{-\omega f_{1}}}\). These unknown functions include the two metric potentials, namely \(f(r)\) and \(f_{1}(r)\) in (6), as well as the Lagrangian multiplier, the function \(\omega\), the energy density \(\rho\), and the pressure \(p\) present in the action (2). On the other hand, we have five independent equations, namely the three components, Eqs. (7), (8), and (9), of the mimetic field equation (3), an equation of state \(\rho=\rho(p)\), and the conservation law (12). As we mentioned above, the scalar field equation (4) can be obtained from the field equations (3) corresponding to the mimetic equation and the conservation law \(\nabla_{\mu}T^{\mu\nu}=0\) of the matter, and therefore the scalar field equation (4) is not independent.
If we consider a compact star like a neutron star, one usually considers the EoS as: 1. Energy-polytrope \[p=k\rho^{1+\frac{1}{s}}\,,\] (14) where \(k\) and \(s\) are constants. It is well known that for a neutron star, \(s\) lies in the interval \(s\in[0.5\,,\,1]\). 2. Mass-polytrope \[\rho=\rho_{m}+s_{1}p\,,\qquad\qquad p=m_{m}\rho_{m}^{1+\frac{1}{s_{m}}}\,,\] (15) with \(\rho_{m}\) being the rest mass energy density and \(m_{m}\), \(s_{1}\), and \(s_{m}\) being constants. Now, let us turn our attention to the energy-polytrope case. The EoS (14) can be rewritten as: \[\rho=\tilde{k}p^{1+\frac{1}{\tilde{s}}}\,,\quad\tilde{k}\equiv k^{-\frac{1}{1+\frac {1}{s}}}\,,\quad\tilde{s}\equiv\frac{1}{\frac{1}{1+\frac{1}{s}}-1}=-1-s\,. \tag{16}\] Eq. (13) then takes the form: \[\frac{1}{2}\ln f=-\int^{p(r)}\frac{dp}{\tilde{k}p^{1+\frac{1}{\tilde{s}}}+p}=\frac{c_ {1}}{2}+\tilde{s}\ln\left(1+\tilde{k}^{-1}p^{-\frac{1}{\tilde{s}}}\right)=\frac{c_{1} }{2}-(1+s)\ln\left(1+k\rho^{\frac{1}{s}}\right)\,, \tag{17}\] where \(c_{1}\) is a constant of integration. Using the same polytrope method, we get for the mass-polytrope (15) the function \(f\) as: \[\frac{1}{2}\ln f=\frac{\tilde{c}}{2}+\ln\left(1-k_{m}\rho_{m}^{\frac{1}{s_{m} }}\right)\,, \tag{18}\] where \(\tilde{c}\) is a constant of integration. To provide an illustrative example based on one of the mentioned equations of state, we assume the following profile of \(\rho=\rho(r)\): \[\rho=\left\{\begin{array}{cc}\rho_{0}\left(1-\frac{r}{R}\right)&\text{when }r<R\\ 0&\text{when }r\geq R\end{array}\right.\,. \tag{19}\] Here, \(\rho_{0}\) is a constant expressing the energy density at the core of the compact object, whereas \(R\) is another constant denoting the radius of the compact star's outer boundary. As is clear from Eq. (19), the energy density \(\rho\) vanishes at the surface \(r=R\). By using the energy-polytrope EoS (14) or the mass-polytrope EoS (15), we find that the pressure \(p\) also vanishes at the surface. We introduce the mass parameter \(M\), a constant associated with the mass of the compact star, defined for the polytropic EoS as: \[M=4\pi\int_{0}^{R}y^{2}\rho(y)dy=\frac{\pi\rho_{0}R^{3}}{3}\,. \tag{20}\] Then, for the energy-polytrope with \(s=1\) (so that \(p=k\rho^{2}\)), Eq. (17) gives \[f=\frac{\mathrm{e}^{c_{1}}}{\left(1+k\rho_{0}\left(1-\frac{r}{R}\right)\right)^{4 }}\,. \tag{21}\] Using Eq. (19) in (7) we get: \[f_{1}=1-\frac{8\pi r^{2}}{3}+\frac{2r^{3}}{R}\,. \tag{22}\] The Lagrange multiplier of the above model has the form \[\lambda(r)= \frac{c_{2}\,\left(R+kR-kr\right)^{2}}{r^{5/2}}\,. \tag{23}\] To complete the determination of the remaining unknowns we assume, for simplicity, the form \(\omega=c_{3}r\), and the mimetic scalar field becomes: \[\eta(r)= \sqrt{c_{3}r\left(\frac{8\pi r^{2}}{3}-\frac{2r^{3}}{R}-1\right)}\,. \tag{24}\]
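As a quick numerical illustration of the closed-form model, the snippet below evaluates the profiles (19)-(21) and the mass (20). The parameter values \(\rho_{0}=1\), \(k=0.4\), \(R=1\), \(c_{1}=0\) are assumptions matching the figure parameters, and \(s=1\) is taken as in Eq. (21).

```python
import numpy as np

# Profiles of the explicit solution, Eqs. (19)-(21); rho_0, k, R, c_1
# are the illustrative figure values (assumptions, not fitted numbers).
rho0, k, R, c1 = 1.0, 0.4, 1.0, 0.0
x = np.linspace(0.0, 1.0, 5)            # dimensionless x = r/R

rho = rho0 * (1.0 - x)                  # Eq. (19): linear, vanishes at x = 1
p = k * rho**2                          # Eq. (14) with s = 1
f = np.exp(c1) / (1.0 + k * rho)**4     # Eq. (21)
M = np.pi * rho0 * R**3 / 3.0           # Eq. (20)

print(rho)   # [1.0, 0.75, 0.5, 0.25, 0.0]
print(p)     # pressure also vanishes at the surface
print(f)     # f(0) = (1 + k*rho0)^(-4) ~ 0.26 for these values
print(M)     # pi/3 ~ 1.047
```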
To finalize this section, we stress that if we follow the same procedure but put \(\lambda(r)=0\) in Eqs. (7), (8), and (9), we obtain a system from which an isotropic model cannot be derived. ## III Necessary conditions for a real physical star Any physically reliable isotropic star model must obey the conditions below in its interior configuration: \(\bullet\) It is essential for the metric potentials (\(g_{tt}\) and \(g_{rr}\)) to be well defined and free of singularities, so that the density and pressure components are unambiguous and well behaved at the core of the star and throughout its interior. \(\bullet\) Within the interior of the star, the energy density \(\rho\) must maintain a non-negative value, that is, \(\rho\geq 0\). Additionally, the energy density must have a finite positive value at the central region of the star and show a decreasing pattern as it extends towards the surface, characterized by the condition \(\frac{d\rho}{dr}\leq 0\). \(\bullet\) Inside the fluid configuration, it is necessary for the pressure \(p\) to be positive or zero (\(p\geq 0\)). Furthermore, within the interior of the star, the pressure must decrease with respect to the radial coordinate, as indicated by the condition \(\frac{dp}{dr}<0\). On the outermost boundary of the star, specifically at the surface \(r=R\), the pressure \(p\) should be precisely zero, implying that no pressure is exerted at the star's outer edge. \(\bullet\) The energy conditions of an isotropic fluid sphere are given by: (i) the null energy condition (NEC) implies that the energy density \(\rho\) must be greater than zero; (ii) according to the weak energy condition (WEC), the sum of the pressure \(p\) and the energy density \(\rho\) must be greater than zero, i.e., \(p+\rho>0\); (iii) in accordance with the strong energy condition (SEC), the sum of the energy density \(\rho\) and three times the pressure \(p\) must be greater than zero, i.e., \(\rho+3p>0\). \(\bullet\) Furthermore, to ensure a realistic model, the causality condition must be satisfied within the interior of the star. This condition imposes a restriction on the speed of sound, requiring it to be less than 1. In this context, assuming the speed of light \(c\) is equal to 1, the condition can be expressed as \(\frac{dp}{d\rho}>0\) and \(\frac{dp}{d\rho}<1\). \(\bullet\) Finally, the adiabatic index must have a value greater than \(\frac{4}{3}\). It is our purpose to test the above conditions on the isotropic model and determine whether it is a realistic model or not. ## IV The physical characteristics of the model To determine whether the model described by Eqs. (17), (19), and (22) corresponds to a realistic stellar structure, we will examine the following aspects: ### Non-singular model i- The components of the metric potentials \(g_{tt}\) and \(g_{rr}\) fulfill the following conditions:3 Footnote 3: We rewrite all the physical quantities, such as the metric potentials, density and pressure, in terms of the dimensionless quantity \(x=\frac{r}{R}\). \[f(x\to 0)=\frac{\mathrm{e}^{c_{1}}}{(1+k\rho_{0})^{4}}\qquad\text{and}\qquad f_{1}(x\to 0)=1. \tag{25}\] As a consequence of this requirement, the metric potentials \(g_{tt}\) and \(g_{rr}\) possess finite values at the central point of the stellar configuration.
Furthermore, their derivatives also possess finite values at the center of the star, specifically \(f_{1}^{\prime}(r\to 0)=0\) and \(f^{\prime}(r\to 0)=\frac{4c_{1}k}{R(1+k)^{3}}\). The mentioned limitations guarantee that the metric remains regular at the core and is positive within the inner region of the star. ii- At the center of the star, the density (19) and pressure (14) take the following form: \[\rho(x\to 0)=\rho_{0}\qquad p(x\to 0)=k{\rho_{0}}^{2}. \tag{26}\] By examining Eq. (26), when \(\rho_{0}>0\) and \(k>0\), it becomes evident that the density and pressure in the central region of the star remain positive; conversely, if \(\rho_{0}\) or \(k\) is non-positive, the density and pressure can become negative. These observations are consistent with the behavior depicted in Figures 1 (a), 1 (b), and 1 (c). iii- The density and pressure gradients of our model are given by: \[\rho^{\prime}=-\rho_{0},\qquad\qquad p^{\prime}=-2k{\rho_{0}}^{2}\left(1-x \right). \tag{27}\] Here \(\rho^{\prime}=\frac{d\rho}{dx}\) and \(p^{\prime}=\frac{dp}{dx}\). Equation (27) shows that the gradients of the energy density and pressure are negative. iv- The speed of sound, in relativistic units where the values of \(c\) and \(G\) are equal to \(1\), is obtained as [82]: \[{v_{r}}^{2}=\frac{dp}{d\rho}=2\,k\,\rho_{0}\left(1-x\right). \tag{28}\] Figure 1: A visual representation illustrating (a) the metric potentials (21) and (22) versus the dimensionless \(x\); (b) the profile of the density; and (c) the profile of the pressure. We have put \(\rho_{0}=1\) and \(k=0.4\). At this point, we are prepared to represent the aforementioned conditions graphically in order to observe their behaviors. In Figure 1 (a), we illustrate the characteristics of the metric potentials. As depicted in Figure 1 (a), the metric potentials take on the values \(f(x\to 0)=0.4\) and \(f_{1}(x\to 0)=1\) as \(x\) approaches 0. This implies that within the central region of the star, both metric potentials exhibit finite positive values. We proceed to plot the density and pressure, as given in Equation (19), in Figures 1 (b) and 1 (c). As depicted there, the components of the energy-momentum tensor take positive values, consistent with predictions for a reasonable stellar arrangement. Moreover, as depicted in Figures 1 (b) and (c), the components of the energy-momentum tensor have their largest values at the core and gradually decrease towards the periphery. These observed patterns are characteristic of a plausible star. Figure 2 illustrates the negative values of the derivatives of the components of the energy-momentum tensor, indicating a uniform decrease in both density and pressure throughout the entirety of the star's structure. Figure 3 presents visual depictions of the speed of sound, the mass-radius relation, and the compactness parameter. As depicted in Fig. 3 (a), the speed of sound is found to be less than one, which verifies that the causality condition is not violated within the interior of the stellar configuration when the parameter of the equation of state (EoS) satisfies \(k<0.5\).
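The conditions of Sec. III can be verified directly from the closed forms (26)-(28). Below is a minimal sketch, again assuming the figure values \(\rho_{0}=1\) and \(k=0.4\):

```python
import numpy as np

# Checking positivity, energy conditions and causality for the profile
# (19) with the s = 1 polytrope; rho_0 = 1, k = 0.4 as in the figures.
rho0, k = 1.0, 0.4
x = np.linspace(0.0, 1.0, 201)

rho = rho0 * (1.0 - x)
p = k * rho**2
v2 = 2.0 * k * rho0 * (1.0 - x)          # Eq. (28), largest at the centre

assert (rho >= 0).all() and (p >= 0).all()               # positivity
assert (rho + p >= 0).all() and (rho + 3*p >= 0).all()   # WEC, SEC
assert (v2 < 1.0).all()                  # causality requires 2*k*rho0 < 1
print("max v_r^2 =", v2.max())           # 0.8, consistent with k < 0.5
```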
Furthermore, Fig. 3 (c) illustrates that the compactness of our model is restricted to the interval \(0<C<0.003\), where \(C\) is defined as the ratio of \(M\) to \(xR\) in the stellar arrangement. The energy conditions are depicted in Figure 4, showcasing the characteristics of each condition. More specifically, Figs. 4 (a), (b), and (c) display the presence of positive values for the NEC (Null Energy Condition), WEC (Weak Energy Condition), and SEC (Strong Energy Condition), respectively. This verification provides assurance that all energy conditions are met across the entire stellar configuration, in line with the criteria for a physically feasible stellar model. Figure 2: A graph displaying the variations in the gradient of density and pressure from Equation (19), plotted against the dimensionless value of \(x\). Figure 3: A diagram depicting the behavior of various quantities with respect to the dimensionless value of \(x\): the sound speed (a), the mass-radius relation (b), and the compactness of the celestial body (c). Figure 5 represents the plot of the Equation of State (EoS). In particular, Figure 5 (a) indicates that the EoS exhibits a linear behavior. ## V The model's stability At this point, we are prepared to examine the stability of our model by conducting tests involving the adiabatic index and the static state. Figure 4: A graph illustrating the behavior of the null, weak, and strong energy conditions, as determined by Equation (19), plotted against the dimensionless value of \(x\). Figure 5: A plot visualizing the equation of state (EoS) as a function of the radial coordinate \(r\), labeled as (a); the redshift is plotted and labeled as (b). ### The adiabatic index To investigate the stable equilibrium of a spacetime that possesses spherical symmetry, an analysis of the adiabatic index can be conducted. The adiabatic index plays a vital role in evaluating the stability requirement, serving as an essential instrument for this purpose. Specifically, the adiabatic perturbation parameter, denoted \(\Gamma\), is defined as [83, 84, 85, 86, 87, 88, 89]: \[\Gamma=\left(\frac{\rho+p(x)}{p(x)}\right)\left(\frac{dp(x)}{d\rho(x)}\right)\,. \tag{29}\] If the adiabatic index \(\Gamma\) is greater than \(\frac{4}{3}\), a Newtonian isotropic sphere will have a stable equilibrium [90]. Using Eq. (29), we get \[\Gamma=2+2k\rho_{0}(1-x). \tag{30}\] Figure 6 displays the adiabatic index \(\Gamma\). The graph clearly indicates that the value of \(\Gamma\) remains consistently above \(4/3\) throughout the star's interior. Therefore, we can infer that the stability requirement is met.
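Equation (30) makes the adiabatic-index test trivial to automate; the sketch below, with the same illustrative parameters as before, confirms \(\Gamma>4/3\) at every interior point.

```python
import numpy as np

# Eq. (30): Gamma = 2 + 2*k*rho0*(1 - x) >= 2 > 4/3 for any k, rho0 >= 0,
# so the Newtonian stability bound is satisfied throughout the star.
rho0, k = 1.0, 0.4
x = np.linspace(0.0, 1.0, 101)
Gamma = 2.0 + 2.0 * k * rho0 * (1.0 - x)
print(Gamma.min(), Gamma.min() > 4.0 / 3.0)   # 2.0, True
```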
### Stability in the stationary state Another approach to validating the stability of the model (19) involves investigating the static state criterion proposed by Harrison, Zeldovich, and Novikov [91, 92, 93], who established that a stable configuration of a star necessitates a positive, increasing derivative of the mass with respect to the central density \(\rho(x\to 0)\), denoted as \(\frac{\partial M}{\partial\rho_{0}}>0\). By applying this condition, we can determine the specific form of the central density as follows: \[\rho(r\to 0)=\rho_{0}\,. \tag{31}\] Subsequently, utilizing Eq. (20), we can derive the mass corresponding to the central density: \[M(\rho_{0})=4\pi\int_{0}^{R}y^{2}\rho(y)dy=\frac{\pi\rho_{0}R^{3}}{3}\,. \tag{32}\] The derivative of the mass with respect to the central density is then: \[\frac{\partial M(\rho_{0})}{\partial\rho_{0}}=\frac{\pi R^{3}}{3}\,. \tag{33}\] Equations (32) and (33) guarantee that the stability condition of our model is verified. Figure 6: Plot of the gravitational and hydrostatic forces vs. the dimensionless \(x\). ## VI Discussion and conclusions The primary objective of this investigation is to analyze the static configuration of a compact star possessing spherical symmetry within the framework of mimetic gravity theory. This theory incorporates two main components: a scalar field and a Lagrange multiplier. In our formulation, we demonstrated the ability to construct a model that accurately replicates the profile of a given spherically symmetric spacetime, regardless of the equation of state (EoS) of the matter and the energy density. The shape of the density function \(\rho(x)\) is of utmost importance in determining the radius \(R\) and mass \(M\) of the compact star. By manipulating the Lagrange multiplier \(\lambda(\eta)\), it becomes feasible to establish a flexible connection between the mass and the radius \(R\) of the compact star. This creates a situation where the Lagrange multiplier \(\lambda(x)\) and the equation of state (EoS) describing the model exhibit a degenerate relationship. As a result, it is clear that relying solely on the mass-radius relation is inadequate for fully constraining the model. To illustrate this further, we take a closer look at the polytrope equation of state (EoS) given by (14). By selecting a specific form for the density in Eq. (19), we construct a practical isotropic model. We then proceed to investigate the physical characteristics of this model. Through rigorous analysis using different analytical techniques and validations, we carefully scrutinize the obtained analytic solution; this comprehensive examination enables us to observe and analyze the physical behavior manifested by our solution. It is important to highlight that the preceding discussion confirms that all the physical conditions are satisfied by the spherically symmetric interior spacetime configuration considered in this study within the framework of mimetic gravitational theory coupled with a Lagrange multiplier. However, it should be noted that an alternative form of mimetic gravitational theory that does not involve the coupling with a Lagrange multiplier may not support the existence of the isotropic model, as evidenced by equations (7), (8), and (9). Moreover, the field equations governing the equilibrium of rapidly rotating neutron stars in scalar-tensor theories of gravity, as well as representative numerical solutions, are discussed in [94]. New models of slowly rotating, perfect-fluid neutron stars are constructed by extending the classical Hartle-Thorne formalism to generic scalar-tensor theories of gravity [95]. Self-consistently slowly rotating neutron and strange stars in R-squared gravity are investigated in [96]. A study of static neutron stars in the framework of a class of non-minimally coupled inflationary potentials has been presented in [97]. Can all the previous studies presented in [94; 95; 96; 97] be discussed in the framework of the present study? This will be addressed elsewhere.
2310.11152
Revisiting representations of quark mixing matrix
Using unitarity, unlike the approaches available in the literature, we have constructed 9 independent representations of the CKM matrix, starting with each of the 9 elements of the matrix. The relationship of these independently constructed representations with the ones already available in the literature has been compared and discussed. Further, the implications of these representations have been explored for some of the CKM parameters such as \delta, J and \epsilon_k. Interestingly, we find that the PDG representation, which is equivalent to our first representation, seems to be the most appropriate for incorporating the hierarchy of the elements of the CKM matrix as well as for describing the related phenomenology.
Gurjit Kaur, Aakriti Bagai, Gulsheen Ahuja, Manmohan Gupta
2023-10-17T11:17:53Z
http://arxiv.org/abs/2310.11152v3
# Representations of quark mixing matrix ###### Abstract The recent observation of a \(2.2\sigma\) deviation from unitarity in the first row of the CKM matrix motivates one to take a fresh look at the unitarity-based analysis of CKM phenomenology. In the absence of a rigorous formulation of the representations of the CKM matrix, we have revisited these keeping in mind the constraints of unitarity. To this end, we have constructed the representations in an ab-initio manner as well as explored their inter-relation with the ones given in the literature. The implications of these representations, incorporating unitarity constraints, on some of the CKM parameters have been explored using the latest data. In the case of the CP violating parameter \(\epsilon_{k}\), some very interesting results emerge. ## 1. Introduction Over the last few decades, Cabibbo-Kobayashi-Maskawa (CKM) [1, 2] phenomenology has registered remarkable progress on the theoretical as well as the experimental front. On the theoretical front, the CKM paradigm has played a crucial role in understanding several important features of flavor physics. On the experimental front, remarkable progress has been made in generating a large amount of data for the measurement of various CKM parameters. Several groups like Particle Data Group (PDG) [3], CKMfitter [4], HFLAV [5], UTfit [6], etc., have been actively engaged in continuously updating their analyses to arrive at more and more refined conclusions. At present, we have several CKM parameters which are determined with a good deal of accuracy, e.g., the matrix element \(|V_{us}|=0.2243\pm 0.0008\) is determined within an accuracy of a fraction of a percent [3]. Also, there are several CKM parameters which are known within an error of a few percent, e.g., the angle \(\beta\) of the unitarity triangle, extracted from \(\sin 2\beta\) measurements, with its world average being \((22.2\pm 0.7)^{\circ}\) [3, 5]. Similarly, the angle \(\alpha\) of the unitarity triangle is also known at the few-percent level, with the world average being \((85.2^{+4.8}_{-4.3})^{\circ}\) [3]. In spite of the remarkable progress in the context of CKM phenomenology, at present we are saddled with several issues which need to be addressed. Recently, PDG [3] has reported a \(2.2\sigma\) deviation from unitarity in the first row of the mixing matrix, i.e., \[|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}=0.9985\pm 0.0007. \tag{1}\] Similarly, for a long time there has been a persistent discrepancy between the inclusive and exclusive values of the CKM matrix element \(V_{ub}\) [7, 8], the two values differing by \(3.9\sigma\) [9]. Also, in the literature [9] it has been emphasised that there is a perceptible deviation between the inclusive and exclusive values of the CKM matrix element \(V_{cb}\); however, recently the Belle collaboration [10] has advocated an accepted value \(|V_{cb}|=(40.6\pm 0.9)\times 10^{-3}\), also endorsed by P. Langacker [11] and G. Martinelli _et al._ [12]. Further, although the angles \(\alpha\) and \(\beta\) of the unitarity triangle are known with a good deal of precision, in the case of the angle \(\gamma\) one has divergent values yielded by different experiments. For example, incorporating the time-independent analysis of the \(B\to DK\) decay [13], one gets \(\gamma=(64.9^{+3.9}_{-4.5})^{\circ}\), in agreement with the PDG value [3]; however, the time-dependent \(B_{s}^{0}\to D_{s}^{\mp}K^{\pm}\) decay [9] yields \(\gamma=(131^{+17}_{-22})^{\circ}\), showing a \(3\sigma\) deviation.
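Before moving on, the arithmetic behind the deficit in Eq. (1) is simple to reproduce. The central values below are of the order quoted by PDG (\(|V_{ud}|\) from superallowed beta decays, \(|V_{us}|\) from kaon decays) and serve purely as an illustration:

```python
# First-row unitarity check, cf. Eq. (1); inputs are illustrative
# central values of the order quoted by PDG 2022.
Vud, Vus, Vub = 0.97373, 0.2243, 0.00382
row1 = Vud**2 + Vus**2 + Vub**2
print(f"{row1:.4f}")   # ~0.9985, about 2 sigma below exact unitarity
```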
Apart from the above mentioned developments regarding the CKM elements, several recent lattice-QCD-based theoretical developments have taken place which have implications for CKM phenomenological calculations. In particular, these have brought to the fore the underestimation of \(\epsilon_{k}\) within the Standard Model (SM) in case exclusive values of \(V_{cb}\) are used [14]. Similarly, the implications of the precise determinations of the ratio \(\left|\frac{V_{ub}}{V_{cb}}\right|\) and the angle \(\gamma\) of the unitarity triangle for the parameter \(\epsilon_{k}\) lead to incompatibility of \(\epsilon_{k}\) with the SM prediction [15]. A deeper investigation of the above mentioned issues may lead to signals beyond the SM; however, before reaching firm conclusions in this regard one needs to have a critical look at the unitarity-based key features of the CKM paradigm. One of the crucial aspects of the CKM paradigm is the quark mixing matrix, which is unitary and is characterized by three mixing angles and one non-removable CP violating phase. Interestingly, in the literature, several authors [16]-[18] have proposed different representations/parametrizations of the CKM matrix; however, their construction, interrelation as well as implications in explaining different CKM phenomena have not been explored in sufficient detail. Keeping the above issues in mind, the purpose of the present paper is to construct all possible independent parametrizations of the CKM matrix in a rigorous and ab-initio manner. The relationship of these independently constructed representations with the representations already available in the literature would also be explored. Further, we would like to find out the implications of unitarity as well as the suitability of specific representations in explaining the CP violating parameter \(\epsilon_{k}\), etc. ## 2 Revisiting representations of the CKM matrix To begin with, let us define the CKM matrix, e.g., \[\begin{pmatrix}d^{\prime}\\ s^{\prime}\\ b^{\prime}\end{pmatrix}=V_{CKM}\begin{pmatrix}d\\ s\\ b\end{pmatrix},\text{ where }V_{CKM}=\begin{pmatrix}V_{ud}&V_{us}&V_{ub}\\ V_{cd}&V_{cs}&V_{cb}\\ V_{td}&V_{ts}&V_{tb}\end{pmatrix}. \tag{2}\] The CKM matrix can have several representations which are equivalent as far as their physical implications are concerned; however, these could differ with respect to their suitability for particular applications. The commonly used representation of the CKM matrix is the one given by PDG, e.g., \[V_{CKM}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{ i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\end{pmatrix}, \tag{3}\] where \(c_{ij}=\cos\theta_{ij},\ s_{ij}=\sin\theta_{ij}\) for i, j=1, 2, 3, with \(\theta_{12},\ \theta_{23},\ \theta_{13}\) and \(\delta\) being the 3 mixing angles and the CP violating phase, respectively. The mixing matrix \(V_{CKM}\), being a \(3\times 3\) unitary matrix, can have only 9 independent representations. In the literature [16]-[18], several authors have constructed these representations, however without presenting their rigorous formulations. To begin with, we first discuss the methodology given by C. Jarlskog [16].
Therein, it has been noted that the matrix \(V_{CKM}\) can be written as a product of three rotation matrices, e.g., \(R_{12}\), \(R_{23}\) and \(R_{13}\), given by \[R_{12}(\theta_{12})=\begin{pmatrix}c_{12}&s_{12}&0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{pmatrix},\ \ R_{23}(\theta_{23})=\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix},\ \ R_{13}(\theta_{13})=\begin{pmatrix}c_{13}&0&s_{13} \\ 0&1&0\\ -s_{13}&0&c_{13}\end{pmatrix}, \tag{4}\] where \(s_{12}\), \(s_{23}\) and \(s_{13}\) denote the sines of the three mixing angles. The author mentions 12 different ways to arrange products of these rotation matrices, yielding \[\begin{array}{ll}R=R_{23}(\theta_{23})R_{13}(\theta_{13})R_{12}(\theta_{12}),&R= R_{23}(\theta_{23})R_{12}(\theta_{12})R_{13}(\theta_{13}),\\ R=R_{12}(\theta_{12})R_{23}(\theta_{23})R_{13}(\theta_{13}),&R=R_{12}(\theta_{12 })R_{13}(\theta_{13})R_{23}(\theta_{23}),\\ R=R_{13}(\theta_{13})R_{12}(\theta_{12})R_{23}(\theta_{23}),&R=R_{13}(\theta_{13 })R_{23}(\theta_{23})R_{12}(\theta_{12}),\\ R=R_{12}(\theta_{12})R_{23}(\theta_{23})R_{12}(\theta_{12}^{\prime}),&R=R_{12}( \theta_{12})R_{13}(\theta_{13})R_{12}(\theta_{12}^{\prime}),\\ R=R_{23}(\theta_{23})R_{12}(\theta_{12})R_{23}(\theta_{23}^{\prime}),&R=R_{23}( \theta_{23})R_{13}(\theta_{13})R_{23}(\theta_{23}^{\prime}),\\ R=R_{13}(\theta_{13})R_{12}(\theta_{12})R_{13}(\theta_{13}^{\prime}),&R=R_{13}( \theta_{13})R_{23}(\theta_{23})R_{13}(\theta_{13}^{\prime}),\end{array} \tag{5}\] where \(\theta_{ij}^{\prime}\neq\theta_{ij}\). To obtain a possible unitary representation of the CKM matrix, it was suggested that the phase factor \(\delta\) could be added in 3 different ways, leading to 36 representations of the CKM matrix; obviously, all of these cannot be independent. Considering the possibility \(R=R_{23}(\theta_{23})R_{13}(\theta_{13})R_{12}(\theta_{12})\), the phase factor \(\delta\) can be added in 3 possible ways as \(R_{23}(\theta_{23},\delta)\)\(R_{13}(\theta_{13},0)\)\(R_{12}(\theta_{12},0)\) or \(R_{23}(\theta_{23},0)\)\(R_{13}(\theta_{13},\delta)\)\(R_{12}(\theta_{12},0)\) or \(R_{23}(\theta_{23},0)\)\(R_{13}(\theta_{13},0)\)\(R_{12}(\theta_{12},\delta)\). For example, \[R_{23}(\theta_{23},\delta)=\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}e^{i\delta}\\ 0&-s_{23}e^{-i\delta}&c_{23}\end{pmatrix}\text{or}\,R_{13}(\theta_{13},\delta )=\begin{pmatrix}c_{13}&0&s_{13}e^{i\delta}\\ 0&1&0\\ -s_{13}e^{-i\delta}&0&c_{13}\end{pmatrix}\text{or}\,R_{12}(\theta_{12}, \delta)=\begin{pmatrix}c_{12}&s_{12}e^{i\delta}&0\\ -s_{12}e^{-i\delta}&c_{12}&0\\ 0&0&1\end{pmatrix}.\] H. Fritzsch and Z. Z. Xing [17], after an analysis of the 12 combinations given by Jarlskog mentioned in equation (5), found that only 9 of these are 'structurally' independent. They also noted that the phase factor \(\delta\) can be associated in 3 different manners with any of the rotation matrices; however, it can be shown that these are all equivalent. Making use of the rephasing invariance available in the case of the CKM matrix, they constructed 9 possible independent representations of the CKM matrix, as shown in Table 1. A. Rasin [18] also attempted to construct possible representations of the CKM matrix using its unitarity constraints. The author sketched the methodology of the construction of the representations; the different possibilities are presented in Table 2. We have closely examined these representations and find that only 6 of these are independent.
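As a quick numerical companion to the counting above, the sketch below builds the product \(R_{23}(\theta_{23})R_{13}(\theta_{13},\delta)R_{12}(\theta_{12})\), i.e., the option that reproduces the PDG form (3), and verifies its unitarity. The angle values are the ones obtained later in Sec. 4 and are used here purely for illustration.

```python
import numpy as np

# Rotation factors of Eq. (4), with the phase attached to R13 as in the
# PDG representation, Eq. (3).
def R12(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=complex)

def R23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)

def R13(t, d=0.0):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s * np.exp(-1j * d)],
                     [0, 1, 0],
                     [-s * np.exp(1j * d), 0, c]], dtype=complex)

t12, t23, t13 = 0.2262, 0.0406, 0.00345        # illustrative angles
delta = np.deg2rad(72.6)

V = R23(t23) @ R13(t13, delta) @ R12(t12)
assert np.allclose(V @ V.conj().T, np.eye(3))  # unitarity holds exactly
print(np.round(np.abs(V), 5))                  # moduli of the CKM elements
```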
For example, the representation 9 can be obtained from representation 4 by re-designating angle \(\theta_{23}\) of representation 4 as \(\theta_{13}\), as well as by changing the quadrant of the angles \(\theta_{12}\) and \(\theta_{12}^{\prime}\), the representation 4 thus becomes \[\begin{pmatrix}-s_{12}s_{12}^{\prime}+c_{12}c_{12}^{\prime}c_{13}e^{-i\delta}& s_{12}c_{12}^{\prime}+c_{12}c_{13}s_{12}^{\prime}e^{-i\delta}&s_{13}c_{12}e^{-i \delta}\\ -c_{12}s_{12}^{\prime}e^{i\delta}-c_{13}s_{12}c_{12}^{\prime}&c_{12}c_{12}^{ \prime}e^{i\delta}-c_{13}s_{12}s_{12}^{\prime}&-s_{12}s_{13}\\ -s_{13}c_{12}^{\prime}&-s_{13}s_{12}^{\prime}&c_{13}\end{pmatrix}. \tag{6}\] \begin{table} \begin{tabular}{|c|c|c c|} \hline S.No. & Product of rotation matrices & \multicolumn{2}{c|}{Resultant Matrix} \\ \hline 1 & \(R_{12}(\theta_{12})R_{23}(\theta_{23},\delta)R_{12}^{-1}(\theta_{12}^{\prime})\) & \(\begin{pmatrix}c_{23}s_{12}s_{12}^{\prime}+c_{12}c_{12}^{\prime}e^{-i\delta}&c_{23 }s_{12}c_{12}^{\prime}-c_{12}s_{12}^{\prime}e^{-i\delta}&s_{23}s_{12}\\ c_{23}c_{12}s_{12}^{\prime}-s_{12}c_{12}^{\prime}e^{-i\delta}&c_{23}c_{12}c_{12}^ {\prime}+s_{12}s_{12}^{\prime}e^{-i\delta}&s_{23}c_{12}\\ -s_{23}s_{12}^{\prime}&-s_{23}c_{12}^{\prime}&c_{23}\end{pmatrix}\) \\ \hline 2 & \(R_{23}(\theta_{23})R_{12}(\theta_{12},\delta)R_{23}^{-1}(\theta_{23}^{\prime})\) & \(\begin{pmatrix}c_{12}&s_{12}c_{23}^{\prime}&-s_{12}s_{23}^{\prime}\\ -s_{12}c_{23}&c_{12}c_{23}^{\prime}c_{23}+s_{23}^{\prime}s_{23}e^{-i\delta}&- c_{12}s_{23}^{\prime}c_{23}+c_{23}^{\prime}s_{23}e^{-i\delta}\\ s_{12}s_{23}&-c_{12}c_{23}^{\prime}s_{23}+s_{23}^{\prime}c_{23}e^{-i\delta}&c_ {12}s_{23}^{\prime}s_{23}+c_{23}^{\prime}c_{23}e^{-i\delta}\end{pmatrix}\) \\ \hline 3 & \(R_{23}(\theta_{23})R_{13}(\theta_{13},\delta)R_{12}(\theta_{12})\) & \(\begin{pmatrix}c_{13}c_{12}&c_{13}s_{12}&s_{13}\\ -s_{13}c_{12}s_{23}-s_{12}c_{23}e^{-i\delta}&-s_{13}s_{12}s_{23}+c_{12}c_{23}e ^{-i\delta}&c_{13}s_{23}\\ -s_{13}c_{12}c_{23}+s_{12}s_{23}e^{-i\delta}&-s_{13}s_{12}c_{23}-c_{12}s_{23}e ^{-i\delta}&c_{13}c_{23}\end{pmatrix}\) \\ \hline 4 & \(R_{12}(\theta_{12})R_{13}(\theta_{13},\delta)R_{23}^{-1}(\theta_{23})\) & \(\begin{pmatrix}c_{13}c_{12}&s_{13}c_{12}s_{23}+s_{12}c_{23}e^{-i\delta}&s_{13} c_{12}c_{23}-s_{12}s_{23}e^{-i\delta}\\ -c_{13}s_{12}&-s_{13}s_{12}s_{23}+c_{12}c_{23}e^{-i\delta}&-s_{13}s_{12}c_{23} -c_{12}s_{23}e^{-i\delta}\\ -s_{13}&c_{13}s_{23}&c_{13}c_{23}\end{pmatrix}\) \\ \hline 5 & \(R_{13}(\theta_{13})R_{12}(\theta_{12},\delta)R_{13}^{-1}(\theta_{13}^{\prime})\) & \(\begin{pmatrix}c_{12}c_{13}c_{13}^{\prime}+s_{13}s_{13}^{\prime}e^{-i\delta}&s_{ 12}c_{13}&-c_{12}c_{13}s_{13}^{\prime}+s_{13}c_{13}^{\prime}e^{-i\delta}\\ -s_{12}c_{13}^{\prime}&c_{12}&s_{12}s_{13}\\ -c_{12}s_{13}c_{13}^{\prime}+c_{13}s_{13}^{\prime}e^{-i\delta}&-s_{12}s_{13}&c _{12}s_{13}s_{13}^{\prime}+c_{13}c_{13}^{\prime}e^{-i\delta}\end{pmatrix}\) \\ \hline 6 & \(R_{12}(\theta_{12})R_{23}(\theta_{23},\delta)R_{13}(\theta_{13})\) & \(\begin{pmatrix}-s_{23}s_{13}s_{12}+c_{13}c_{12}e^{-i\delta}&c_{23}s_{12}&s_{23 }c_{13}s_{12}+s_{13}c_{12}e^{-i\delta}\\ -s_{23}s_{13}c_{12}-c_{13}s_{12}e^{-i\delta}&c_{23}c_{12}&s_{23}c_{13}c_{12}-s_ {13}s_{12}e^{-i\delta}\\ -c_{23}s_{13}&-s_{23}&c_{23}c_{13}\end{pmatrix}\) \\ \hline 7 & \(R_{23}(\theta_{23})R_{12}(\theta_{12},\delta)R_{13}^{-1}(\theta_{13})\) & \(\begin{pmatrix}c_{12}c_{13}&s_{12}&-c_{12}s_{13}\\ -s_{12}c_{13}c_{23}+s_{13}s_{23}e^{-i\delta}&c_{12}c_{23}&s_{12}s_{13}c_{23}+ c_{13}s_{23}e^{-i\delta}\\ 
s_{12}c_{13}s_{23}+s_{13}c_{23}e^{-i\delta}&-c_{12}s_{23}&-s_{12}s_{13}s_{23}+ c_{13}c_{23}e^{-i\delta}\end{pmatrix}\) \\ \hline 8 & \(R_{13}(\theta_{13})R_{12}(\theta_{12},\delta)R_{23}(\theta_{23})\) & \(\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}c_{23}-s_{13}s_{23}e^{-i\delta}&s_{12}c_ {13}s_{23}+s_{13}c_{23}e^{-i\delta}\\ -s_{12}c_{12}c_{23}&c_{12}s_{23}\\ -c_{12}s_{13}&-s_{12}s_{13}c_{23}-c_{13}s_{23}e^{-i\delta}&-s_{12}s_{13}s_{23}+ c_{13}c_{23}e^{-i\delta}\end{pmatrix}\) \\ \hline 9 & \(R_{13}(\theta_{13})R_{23}(\theta_{23},\delta)R_{12}^{-1}(\theta_{12})\) & \(\begin{pmatrix}-s_{23}s_{13}s_{12}+c_{13}c_{12}e^{-i\delta}&-s_{23}s_{13}c_{12}-c_ {13}s_{12}e^{-i\delta}&c_{23}s_{13}\\ c_{23}s_{12}&c_{13}c_{12}&s_{23}\\ -s_{23}c_{13}s_{12}-s_{13}c_{12}e^{-i\delta}&-s_{23}c_{13}c_{12}+s_{13}s_{12}e^{-i \delta}&c_{23}c_{13}\end{pmatrix}\) \\ \hline \end{tabular} \end{table} Table 1: Representations of the CKM matrix given by H. Fritzsch and Z. Z. Xing [17]. \begin{table} \begin{tabular}{|c|c c c|} \hline S.No. & \multicolumn{3}{|c|}{Resultant Matrices} \\ \hline 1 & \(\begin{pmatrix}c_{12}c_{13}&c_{13}s_{12}&s_{13}e^{-i\delta}\\ -s_{13}c_{12}s_{23}e^{i\delta}-s_{12}c_{23}&-s_{12}s_{23}s_{13}e^{i\delta}+c_{ 12}c_{23}&c_{13}s_{23}\\ -s_{13}c_{12}c_{23}e^{i\delta}+s_{12}s_{23}&-s_{12}s_{13}c_{23}e^{i\delta}-c_{ 12}s_{23}&c_{23}c_{13}\end{pmatrix}\) \\ \hline 2 & \(\begin{pmatrix}c_{12}c_{13}&s_{12}e^{-i\delta}&c_{12}s_{13}\\ -s_{12}c_{23}c_{13}e^{i\delta}-s_{13}s_{23}&c_{12}c_{23}&-s_{13}s_{12}c_{23}e^ {i\delta}+c_{13}s_{23}\\ s_{12}c_{13}s_{23}e^{i\delta}-s_{13}c_{23}&-c_{12}s_{23}&s_{13}s_{23}s_{12}e^{ i\delta}+c_{23}c_{13}\end{pmatrix}\) \\ \hline 3 & \(\begin{pmatrix}s_{12}s_{23}s_{13}e^{-i\delta}+c_{12}c_{23}&-s_{23}s_{13}c_{12}e^ {-i\delta}+c_{13}s_{12}&c_{23}s_{13}e^{-i\delta}\\ -c_{23}s_{12}&c_{12}c_{23}&s_{23}\\ s_{12}s_{23}c_{13}-s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}c_{13}-s_{12}s_{13}e^ {i\delta}&c_{13}c_{23}\end{pmatrix}\) \\ \hline 4 & \(\begin{pmatrix}-c_{23}s_{12}s_{12}^{\prime}e^{-i\delta}+c_{12}c_{12}^{\prime}&c_ {23}s_{12}c_{12}^{\prime}e^{-i\delta}+c_{12}s_{12}^{\prime}&s_{12}s_{23}e^{-i \delta}\\ -c_{12}c_{23}s_{12}^{\prime}-s_{12}c_{12}^{\prime}e^{i\delta}&c_{12}c_{23}c_{ 12}^{\prime}-s_{12}s_{12}^{\prime}e^{i\delta}&s_{23}c_{12}\\ s_{12}^{\prime}s_{23}&-s_{23}c_{12}^{\prime}&c_{23}\end{pmatrix}\) \\ \hline 5 & \(\begin{pmatrix}-c_{23}s_{13}s_{13}^{\prime}e^{-i\delta}+c_{13}c_{13}^{\prime}& -s_{13}s_{23}e^{-i\delta}&c_{13}^{\prime}s_{13}c_{23}e^{-i\delta}+c_{13}s_{13}^ {\prime}\\ -s_{13}^{\prime}s_{23}&c_{23}&s_{23}c_{13}^{\prime}\\ -c_{13}c_{23}s_{13}^{\prime}-s_{13}c_{13}^{\prime}e^{i\delta}&-s_{23}c_{13}& c_{13}c_{23}c_{13}^{\prime}-s_{13}s_{13}^{\prime}e^{i\delta}\end{pmatrix}\) \\ \hline 6 & \(\begin{pmatrix}c_{12}&s_{12}c_{23}^{\prime}&s_{12}s_{23}^{\prime}\\ -s_{12}c_{23}&c_{12}c_{23}c_{23}^{\prime}-s_{23}s_{23}^{\prime}e^{i\delta}&c_ {12}s_{23}^{\prime}c_{23}+c_{23}^{\prime}s_{23}e^{i\delta}\\ s_{12}s_{23}e^{-i\delta}&-c_{12}c_{23}^{\prime}s_{23}e^{-i\delta}-s_{23}^{ \prime}c_{23}&-c_{12}s_{23}s_{23}^{\prime}e^{-i\delta}+c_{23}c_{23}^{\prime} \end{pmatrix}\) \\ \hline 7 & \(\begin{pmatrix}c_{13}c_{12}c_{13}^{\prime}-s_{13}s_{13}^{\prime}e^{i\delta}& s_{12}c_{13}&c_{13}c_{12}s_{13}^{\prime}+s_{13}c_{13}^{\prime}e^{i\delta}\\ -s_{12}c_{13}^{\prime}&c_{12}&-s_{12}s_{13}^{\prime}\\ -c_{12}s_{13}c_{13}^{\prime}e^{-i\delta}-c_{13}s_{13}^{\prime}&-s_{13}s_{12}e^ {-i\delta}&-c_{12}s_{13}s_{13}^{\prime}e^{-i\delta}+c_{13}c_{13}^{\prime} \end{pmatrix}\) \\ \hline 8 & 
\(\begin{pmatrix}c_{13}&-s_{13}s_{23}^{\prime}&s_{13}c_{23}^{\prime}\\ -s_{13}s_{23}e^{-i\delta}&-c_{13}s_{23}s_{23}^{\prime}e^{i\delta}+c_{23}c_{23}^ {\prime}&c_{13}c_{23}^{\prime}s_{23}e^{i\delta}+s_{23}^{\prime}c_{23}\\ -s_{13}c_{23}&-c_{13}s_{23}^{\prime}c_{23}-c_{23}^{\prime}s_{23}e^{-i\delta}&c_ {13}c_{23}c_{23}^{\prime}-s_{23}s_{23}^{\prime}e^{-i\delta}\end{pmatrix}\) \\ \hline 9 & \(\begin{pmatrix}c_{12}c_{12}^{\prime}c_{13}-s_{12}s_{12}^{\prime}e^{i\delta}&c_ {12}c_{13}s_{12}^{\prime}+s_{12}c_{12}^{\prime}e^{i\delta}&s_{13}c_{12}\\ -c_{13}s_{12}c_{12}^{\prime}e^{-i\delta}-c_{12}s_{12}^{\prime}&-c_{13}s_{12}s_{12 }^{\prime}e^{-i\delta}+c_{12}c_{12}^{\prime}&-s_{12}s_{13}e^{-i\delta}\\ -s_{13}c_{12}^{\prime}&-s_{13}s_{12}^{\prime}&c_{13}\end{pmatrix}\) \\ \hline \end{tabular} \end{table} Table 2: Representations of the CKM matrix given by A. Rasin [18]. By multiplying the above matrix from the left side by the matrix \[\begin{pmatrix}e^{i\delta}&0&0\\ 0&e^{-i\delta}&0\\ 0&0&1\end{pmatrix} \tag{7}\] one obtains representation 9. Similarly, representations 5 and 6 are related to 7 and 8 respectively. ## 3. Cartesian derivation of independent representations of the CKM matrix To understand the issue of construction of 9 independent representations of the \(V_{CKM}\), we have attempted to carry out this task in a rigorous ab-initio manner, without involving the rotation matrices, henceforth these would be referred to as Cartesian representations. To this end, we follow an approach wherein the 9 independent representations are constructed using any individual element of a \(3\times 3\) complex unitary matrix V given by \[V=\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{pmatrix}. \tag{8}\] It may be noted that the CKM matrix is sandwiched between quark fields which allows 5 out of 6 phases of above \(3\times 3\) unitary matrix to be removed using rephasing invariance, leaving the matrix having 3 independent angles and 1 non removable phase. Further, the elements of the CKM matrix should obey the following unitarity constraints \[\underset{i=1}{\overset{3}{\sum}}a_{\alpha i}a_{\beta i}^{*}=\delta_{\alpha \beta},\qquad\qquad\underset{\alpha=1}{\overset{3}{\sum}}a_{\alpha i}a_{ \alpha j}^{*}=\delta_{ij}, \tag{9}\] where \(\alpha,\ \beta\equiv(1,\ 2,\ 3)\) and \(i,\ j\equiv(1,\ 2,\ 3)\). Taking into consideration the physical structure of CKM matrix, while constructing its representations one needs to consider the diagonal elements of matrix V, given in equation (8), to be nearly equal to unity whereas the off diagonal elements should be much smaller than unity. To facilitate construction of the representations in the Cartesian approach, therefore, when initiated from any of the diagonal elements one should consider cosines of the angles, whereas when it is initiated from any of the off diagonal elements one needs to start with sines of the angles. To illustrate our procedure, we consider an example wherein we begin with a complex element \(a_{21}\) of the matrix V, defined as \[a_{21}\equiv s_{1}e^{i\phi_{21}},\quad\text{where }s_{1}=\sin\theta_{1}. \tag{10}\] Following the unitarity constraints, one may introduce two more angles \(\theta_{2}\) and \(\theta_{3}\) such that \[a_{11}=c_{1}c_{2}e^{i\phi_{11}},\quad a_{31}=c_{1}s_{2}e^{i\phi_{31}},\quad a_{ 22}=c_{1}c_{3}e^{i\phi_{22}},\quad a_{23}=c_{1}s_{3}e^{i\phi_{23}}, \tag{11}\] where \(c_{i}=\cos\theta_{i}\) and \(s_{i}=\sin\theta_{i}\), with i = 1, 2, 3. 
It is interesting to note that this is a unique way to define the above elements in terms of the mixing angles; any other way would disturb the unitarity relations. After defining the above 5 CKM matrix elements and keeping in mind the following unitarity constraints \[\begin{array}{l}|a_{11}|^{2}+|a_{12}|^{2}+|a_{13}|^{2}=1,\\ |a_{31}|^{2}+|a_{32}|^{2}+|a_{33}|^{2}=1,\end{array} \tag{12}\] we get relations for the remaining 4 elements as \[\begin{array}{l}|a_{12}|^{2}+|a_{13}|^{2}=c_{1}^{2}s_{2}^{2}+s_{1}^{2}\quad \text{ or }\quad s_{1}^{2}c_{2}^{2}+s_{2}^{2},\\ |a_{32}|^{2}+|a_{33}|^{2}=s_{1}^{2}s_{2}^{2}+c_{2}^{2}\quad\text{ or }\quad c_{1}^{2}c_{2}^{2}+s_{1}^{2}.\end{array} \tag{13}\] It can easily be checked that, due to the unitarity constraints, the combinations \(s_{1}^{2}c_{2}^{2}+s_{2}^{2}\) and \(c_{1}^{2}c_{2}^{2}+s_{1}^{2}\), respectively, are not viable. It may be noted that, out of the 6 independent phases of the unitary matrix, 5 phases have already been introduced in equations (10) and (11); therefore, we are left with one phase which needs to be incorporated. To do so, keeping in mind the unitarity constraints, we introduce a phase \(\delta\) in the remaining elements of the matrix V, i.e., \[\begin{array}{l}a_{13}=-s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta},\quad\quad \ a_{12}=-s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta},\\ a_{33}=-s_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i\delta},\quad\quad a_{32}=-s_{1}s_{2}c _{3}-c_{2}s_{3}e^{-i\delta}.\end{array} \tag{14}\] Using equations (10), (11) and (14), we then obtain the following unitary matrix \[\begin{pmatrix}c_{1}c_{2}e^{i\phi_{11}}&-s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i \delta}&-s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}\\ s_{1}e^{i\phi_{21}}&c_{1}c_{3}e^{i\phi_{22}}&c_{1}s_{3}e^{i\phi_{23}}\\ c_{1}s_{2}e^{i\phi_{31}}&-s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i\delta}&-s_{1}s_{2}s_ {3}+c_{2}c_{3}e^{-i\delta}\end{pmatrix}. \tag{15}\] 5 of the 6 phases of the above matrix can be factored out, i.e., \[\begin{pmatrix}e^{i\phi_{11}}&0&0\\ 0&e^{i\phi_{21}}&0\\ 0&0&e^{i\phi_{31}}\end{pmatrix}\begin{pmatrix}c_{1}c_{2}&-s_{1}c_{2}c_{3}+s_ {2}s_{3}e^{-i\delta}&-s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}\\ s_{1}&c_{1}c_{3}&c_{1}s_{3}\\ c_{1}s_{2}&-s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i\delta}&-s_{1}s_{2}s_{3}+c_{2}c_{3}e ^{-i\delta}\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&e^{i(\phi_{12}-\phi_{11})}&0\\ 0&0&e^{i(\phi_{13}-\phi_{11})}\end{pmatrix}.\] The factored-out phases can be removed by using rephasing invariance, yielding the following representation of the CKM matrix in terms of 3 mixing angles and 1 non-removable phase \[\begin{pmatrix}c_{1}c_{2}&-s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta}&-s_{1}c_{2}s_{ 3}-s_{2}c_{3}e^{-i\delta}\\ s_{1}&c_{1}c_{3}&c_{1}s_{3}\\ c_{1}s_{2}&-s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i\delta}&-s_{1}s_{2}s_{3}+c_{2}c_{3}e ^{-i\delta}\end{pmatrix}. \tag{16}\] Similarly, the other independent representations can also be constructed by using a different starting element of the matrix V. In Table 3, we have summarized the 9 independent Cartesian parametrizations of the CKM matrix along with the corresponding starting element of the unitary matrix V.
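A numerical sanity check of the construction is immediate: the sketch below assembles the matrix of Eq. (16), i.e., representation 3 of Table 3, and confirms unitarity. The angle values are taken from the later analysis of Sec. 4 (with \(\theta_{2}\) reconstructed here from \(|V_{td}|\simeq c_{1}s_{2}\), an assumption made purely for illustration).

```python
import numpy as np

# Representation 3, Eq. (16), built from the starting element a_21;
# angle values are illustrative (theta_2 reconstructed from |V_td|).
t1, t2, t3 = 0.2261, 0.0089, 0.0417
d = np.deg2rad(158.361)

c1, s1 = np.cos(t1), np.sin(t1)
c2, s2 = np.cos(t2), np.sin(t2)
c3, s3 = np.cos(t3), np.sin(t3)
e = np.exp(-1j * d)

V = np.array([
    [c1 * c2, -s1 * c2 * c3 + s2 * s3 * e, -s1 * c2 * s3 - s2 * c3 * e],
    [s1,       c1 * c3,                     c1 * s3],
    [c1 * s2, -s1 * s2 * c3 - c2 * s3 * e, -s1 * s2 * s3 + c2 * c3 * e],
])
assert np.allclose(V @ V.conj().T, np.eye(3))   # exactly unitary
print(np.round(np.abs(V), 4))   # magnitudes close to those of Eq. (24)
```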
As a next step, it is desirable to explore their relationship with the representations available in the literature. For example, considering representation 1 of the CKM matrix given in Table 3, one can obtain the parametrization advocated by PDG, given in equation (3). To do so, we need to re-designate \(s_{1}\to s_{13}\), \(s_{2}\to s_{12}\) and \(s_{3}\to s_{23}\), as well as carry out rephasing of the quark fields using multiplication by the matrices \[\begin{pmatrix}e^{-i\delta}&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\qquad\text{and}\qquad\begin{pmatrix}e^{i\delta}&0&0\\ 0&e^{i\delta}&0\\ 0&0&1\end{pmatrix},\] respectively, on the left and right side of Cartesian representation 1. The Kobayashi-Maskawa (KM) [2] representation can be shown to be related to Cartesian representation 7. A look at the representations given in Table 3 and the ones given by Fritzsch and Xing in Table 1 reveals that our representations 1, 2, 3, 4, 5, 6, 7, 8 and 9 are related to 3, 7, 8, 9, 4, 6, 2, 5 and 1, respectively. Similarly, on comparison with the representations given by A. Rasin in Table 2, it is found that our parametrizations 1, 2, 4, 7, 8 and 9 are related to 1, 2, 3, 6, 7 and 4 of Table 2, respectively. It needs to be emphasized that there are no representations in Table 2 which correspond to Cartesian representations 3, 5 and 6. ## 4. Phenomenological analysis of the Cartesian representations To understand the significance of the various representations, we have carried out a unitarity-based phenomenological analysis of these using the latest data. It may be noted that all representations are equivalent as far as their physical implications are concerned; however, while calculating certain CKM parameters involving approximations, it is likely that some representations may be more useful for understanding a particular phenomenon. \begin{table} \begin{tabular}{|c|c|c c c|} \hline S.No. & Starting element of V & \multicolumn{4}{c|}{Resultant Matrix} \\ \hline 1 & \(a_{13}=s_{1}e^{i\phi_{13}}\) & \(\begin{pmatrix}c_{1}c_{2}&c_{1}s_{2}&s_{1}\\ -s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&-s_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i \delta}&c_{1}s_{3}\\ -s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta}&-s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i \delta}&c_{1}c_{3}\end{pmatrix}\) \\ \hline 2 & \(a_{12}=s_{1}e^{i\phi_{12}}\) & \(\begin{pmatrix}c_{1}c_{2}&s_{1}&c_{1}s_{2}\\ -s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta}&c_{1}c_{3}&-s_{1}s_{2}c_{3}-c_{2}s_{3} e^{-i\delta}\\ -s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&c_{1}s_{3}&-s_{1}s_{2}s_{3}+c_{2}c_{3} e^{-i\delta}\end{pmatrix}\) \\ \hline 3 & \(a_{21}=s_{1}e^{i\phi_{21}}\) & \(\begin{pmatrix}c_{1}c_{2}&-s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta}&-s_{1}c_{2}s_{ 3}-s_{2}c_{3}e^{-i\delta}\\ s_{1}&c_{1}c_{3}&c_{1}s_{3}\\ c_{1}s_{2}&-s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i\delta}&-s_{1}s_{2}s_{3}+c_{2}c_{3} e^{-i\delta}\end{pmatrix}\) \\ \hline 4 & \(a_{23}=s_{1}e^{i\phi_{23}}\) & \(\begin{pmatrix}-s_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i\delta}&-s_{1}s_{2}c_{3}-c_{2}s_{ 3}e^{-i\delta}&c_{1}s_{2}\\ c_{1}s_{3}&c_{1}c_{3}&s_{1}\\ -s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&-s_{1}c_{2}c_{3}+s_{2}s_{3}e^{-i\delta} &c_{1}c_{2}\end{pmatrix}\) \\ \hline 5 & \(a_{31}=s_{1}e^{i\phi_{31}}\) & \(\begin{pmatrix}c_{1}c_{2}&-s_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&-s_{1}c_{2}c _{3}+s_{2}s_{3}e^{-i\delta}\\ c_{1}s_{2}&-s_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i\delta}&-s_{1}s_{2}c_{3}-c_{2}s_{ 3}e^{-i\delta}\\ s_{1}&c_{1}s_{3}&c_{1}c_{3}\end{pmatrix}\) \\ \hline 6 & \(a_{32}=s_{1}e^{i\phi_{32}}\) & \(\begin{pmatrix}-s_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i\delta}&c_{1}s_{3}&-s_{1}c_{2}s_{ 3}-s_{2}c_{3}e^{-i\delta}\\ -s_{1}s_{2}c_{3}-c_{2}s_{3}e^{-i\delta}&c_{1}c_{3}&-s_{1}c_{2}c_{3}+s_{2}s_{ 3}e^{-i\delta}\\ c_{1}s_{2}&s_{1}&c_{1}c_{2}\end{pmatrix}\) \\ \hline 7 & \(a_{11}=c_{1}e^{i\phi_{11}}\) & \(\begin{pmatrix}c_{1}&s_{1}c_{2}&s_{1}s_{2}\\ -s_{1}c_{3}&c_{1}c_{2}c_{3}-s_{2}s_{3}e^{-i\delta}&c_{1}s_{2}c_{3}+c_{2}s_{ 3}e^{-i\delta}\\ s_{1}s_{3}&-c_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&-c_{1}s_{2}s_{3}+c_{2}c_{3} e^{-i\delta}\end{pmatrix}\) \\ \hline 8 & \(a_{22}=c_{1}e^{i\phi_{22}}\) & \(\begin{pmatrix}c_{1}c_{2}c_{3}-s_{2}s_{3}e^{-i\delta}&s_{1}c_{2}&-c_{1}c_{2}s_{ 3}-s_{2}c_{3}e^{-i\delta}\\ -s_{1}c_{3}&c_{1}&s_{1}s_{3}\\ c_{1}s_{2}c_{3}+c_{2}s_{3}e^{-i\delta}&s_{1}s_{2}&-c_{1}s_{2}s_{3}+c_{2}c_{3} e^{-i\delta}\end{pmatrix}\) \\ \hline 9 & \(a_{33}=c_{1}e^{i\phi_{33}}\) & \(\begin{pmatrix}-c_{1}s_{2}s_{3}+c_{2}c_{3}e^{-i\delta}&c_{1}s_{2}c_{3}+c_{2}s_{ 3}e^{-i\delta}&s_{1}s_{2}\\ -c_{1}c_{2}s_{3}-s_{2}c_{3}e^{-i\delta}&c_{1}c_{2}c_{3}-s_{2}s_{3}e^{-i\delta}& s_{1}c_{2}\\ s_{1}s_{3}&-s_{1}c_{3}&c_{1}\end{pmatrix}\) \\ \hline \end{tabular} \end{table} Table 3: Cartesian representations of the CKM matrix.
To begin with, we have calculated the mixing angles and CP violating phase for each parametrization, for numerical evaluation of the corresponding CKM matrix. As a first step, we have considered representation 1 of Table 3, this being equivalent to the PDG representation. Using the already mentioned well determined parameters \(|V_{us}|\), \(|V_{cb}|\), \(\alpha\) and \(\beta\), we have made an attempt to find the mixing angles \(\theta_{1}\), \(\theta_{2}\) and \(\theta_{3}\) as well as the CP violating phase \(\delta\). For this particular representation, to a very good approximation (better than a fraction of a percent), the CKM parameters \(V_{ub}\), \(V_{us}\) and \(V_{cb}\) can be considered equal to the sines of the three mixing angles, i.e., \(s_{1}\), \(s_{2}\) and \(s_{3}\), respectively. Therefore, considering the PDG 2022 value [3] of \(|V_{us}|\), one gets \[s_{2}\cong|V_{us}|=0.2243\pm 0.0008,\ \mbox{implying}\ \theta_{2}=0.2262\pm 0.0008.\] Considering the value of \(|V_{cb}|\) advocated by the Belle collaboration [10], we obtain \[s_{3}\cong|V_{cb}|=(40.6\pm 0.9)\times 10^{-3},\ \mbox{implying}\ \theta_{3}=0.0406\pm 0.0009.\] In order to find \(\theta_{1}\), we have evaluated \(|V_{ub}|\) using a unitarity-based analysis involving the 'db' triangle [7], shown in Figure 1. From this triangle, one gets \[|V_{ub}|\equiv\frac{|V_{cb}||V_{us}|\sin\beta}{\sin\alpha}. \tag{17}\] Using the values of the CKM matrix elements \(|V_{us}|\) and \(|V_{cb}|\) mentioned above and the values of \(\alpha\) and \(\beta\) as given by PDG 2022 [3], we get \[s_{1}\cong|V_{ub}|=(3.4529\pm 0.1312)\times 10^{-3},\ \mbox{implying}\ \theta_{1}=0.00345\pm 0.00013.\] Figure 1: The db unitarity triangle. This is a rigorous unitarity-based value of \(V_{ub}\), which is in agreement with the values given in Refs. [7, 19]. This value of \(V_{ub}\) implies the ratio \(\left|\frac{V_{ub}}{V_{cb}}\right|=0.08505\pm 0.00374\), in agreement with measurements from \(\Lambda_{b}\to p\mu\nu\) and \(B_{s}\to K\mu\nu\) decays [5].
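The numerical steps above are compact enough to script; the sketch below (inputs exactly as quoted in the text, central values only) extracts \(\theta_{2}\) and \(\theta_{3}\) directly and \(\theta_{1}\) through the 'db'-triangle relation (17).

```python
import numpy as np

# Angles of representation 1 from |V_us|, |V_cb| and Eq. (17); alpha and
# beta are the PDG unitarity-triangle angles quoted in the text.
Vus, Vcb = 0.2243, 0.0406
alpha, beta = np.deg2rad(85.2), np.deg2rad(22.2)

theta2 = np.arcsin(Vus)                              # ~0.2262
theta3 = np.arcsin(Vcb)                              # ~0.0406
Vub = Vcb * Vus * np.sin(beta) / np.sin(alpha)       # Eq. (17), ~3.45e-3
theta1 = np.arcsin(Vub)                              # ~0.00345
print(theta1, theta2, theta3, Vub / Vcb)             # ratio ~0.085
```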
As a next step, we evaluate the phase \(\delta\) corresponding to this representation, which is almost equal to the angle \(\gamma\) of the unitarity triangle. This may be checked by expressing the angle \(\gamma\) of the unitarity triangle in terms of the elements of the mixing matrix, i.e., \[\gamma=arg\left[-\frac{V_{ud}V^{*}{}_{ub}}{V_{cd}V^{*}{}_{cb}}\right] \tag{18}\] which can be further expressed as \[\gamma=\tan^{-1}\left[\frac{s_{2}c_{3}\sin\delta}{c_{2}s_{3}s_{1}+s_{2}c_{3} \cos\delta}\right]. \tag{19}\] From the above equation, one can find \[\sin\gamma=\frac{s_{2}c_{3}(\sin\delta\cos\gamma-\cos\delta\sin\gamma)}{s_{1} c_{2}s_{3}} \tag{20}\] from which one can deduce \[\sin(\delta-\gamma)=\frac{s_{1}c_{2}s_{3}\sin\gamma}{s_{2}c_{3}}. \tag{21}\] This can be simplified further to obtain an expression for \(\delta\) in terms of \(\gamma\): \[\delta=\gamma+\sin^{-1}\left(\sin\gamma\frac{s_{1}c_{2}s_{3}}{s_{2}c_{3}}\right). \tag{22}\] It can easily be checked numerically that \(\delta\) and \(\gamma\) differ only at the level of a fraction of a percent, implying \(\delta\cong\gamma\). For the present unitarity-based analysis, \(\gamma\) can be found using the closure property of the angles of the unitarity triangle, yielding \[\delta\cong\gamma=(72.6\pm 4.5541)^{\circ}. \tag{23}\] After having found the three mixing angles and the phase \(\delta\), we obtain the corresponding CKM matrix for this representation, i.e., \[V_{CKM}=\begin{pmatrix}0.97451\pm 0.00018&0.2243\pm 0.0008&0.00345\pm 0.00013 \\ 0.2242\pm 0.00080&0.97371\pm 0.00019&0.0406\pm 0.0009\\ 0.00871\pm 0.00033&0.0398\pm 0.00087&0.9992\pm 0.000037\end{pmatrix}. \tag{24}\] A look at this matrix reveals an excellent overlap with the one obtained by PDG [3] \[\begin{pmatrix}0.97435\pm 0.00016&0.22500\pm 0.00067&0.00369\pm 0.00011\\ 0.22486\pm 0.00067&0.97349\pm 0.00016&0.04182^{+0.00085}_{-0.00074}\\ 0.00857^{+0.00020}_{-0.00018}&0.04110^{+0.00083}_{-0.00072}&0.999118^{+0.00003 1}_{-0.000036}\end{pmatrix}. \tag{25}\] It needs to be emphasized that the matrix given in equation (24) has been obtained using well measured CKM parameters and the unitarity-based constraints. It is interesting to mention that, in the representation considered by us, the hierarchy of the CKM matrix elements is very well captured by the hierarchy of the mixing angles. After numerically constructing this representation of the CKM matrix, as a next step we have made an attempt to find the angles and phases of the remaining parametrizations in order to arrive at the numerical values of the corresponding matrix elements. The numerical evaluation of the other representations is not straightforward, as in these cases the CP violating phase cannot be considered to be nearly equal to the angle \(\gamma\) of the unitarity triangle. Therefore, for these representations, along with \(|V_{us}|\), \(|V_{cb}|\) and \(|V_{ub}|\) as inputs, instead of the phase \(\delta\) we consider the numerical value of the element \(|V_{td}|\) from the CKM matrix given in equation (24). It may be noted that the element \(|V_{td}|\) captures the effects of the CP violating phase \(\delta\), as is emphasized in the literature [8]. Using these inputs, for each Cartesian parametrization we can then find the values of the 3 mixing angles \(\theta_{1}\), \(\theta_{2}\), \(\theta_{3}\) and the CP violating phase \(\delta\); these have been presented in column 2 of Table 4. It may be noted that for the different parametrizations the magnitudes of the corresponding CKM matrix elements are not affected, as these are rephasing-invariant quantities. Corresponding to the different representations, we have also found the expressions for the CP violating rephasing-invariant Jarlskog parameter \(J\), defined through \[J\sum_{k,\gamma=1}^{3}\left(\epsilon_{ijk}\epsilon_{\alpha\beta\gamma}\right)=\left|{\rm Im}\left(V_{i \alpha}V_{j\beta}V^{*}_{i\beta}V^{*}_{j\alpha}\right)\right|.
\tag{26}\] For all the Cartesian representations of the CKM matrix, one can obtain an expression of \(J\) in terms of the corresponding mixing angles and CP violating phase, these have been given in column 3 of Table 4. On evaluating the parameter \(J\)[20], as expected, its value comes out to be same for all the representations, also being in agreement with the PDG value [3], i.e., \((3.08^{+0.15}_{-0.13})\times 10^{-5}\). To carry our analysis further, for different representations, we have evaluated \(\epsilon_{k}\), the CP violation defining parameter in the \(K-\bar{K}\) system. It would be interesting to check its evaluation for different representations as the usually considered formula to evaluate \(\epsilon_{k}\) involves short distance \begin{table} \begin{tabular}{|c|c|c|c|} \hline Representation & Mixing angles and phase & Jarlskog’s Invariant ‘J’ & \(\epsilon_{k}\) \\ \hline \multirow{4}{*}{1} & \(\theta_{1}=0.00345\pm 0.00013\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\((2.0690\pm 0.2468)\times 10^{-3}\)} \\ & \(\theta_{2}=0.2262\pm 0.0008\) & & \\ & \(\theta_{3}=0.0406\pm 0.0009\) & & \\ & \(\delta=(72.6\pm 4.5541)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{2} & \(\theta_{1}=0.2262\pm 0.0008\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\((2.0695\pm 0.1804)\times 10^{-3}\)} \\ & \(\theta_{2}=0.00354\pm 0.00013\) & & \\ & \(\theta_{3}=0.0408\pm 0.0009\) & & \\ & \(\delta=(108.437\pm 6.4154)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{3} & \(\theta_{1}=0.2261\pm 0.0008\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\((2.3437\pm 0.1105)\times 10^{-3}\)} \\ & \(\theta_{3}=0.0417\pm 0.0009\) & & \\ & \(\delta=(158.361\pm 0.9484)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{4} & \(\theta_{1}=0.0406\pm 0.0009\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\((2.3437\pm 0.1105)10^{-3}\)} \\ & \(\theta_{2}=0.00345\pm 0.00013\) & & \\ & \(\theta_{3}=0.2262\pm 0.0008\) & & \\ & \(\delta=(107.407\pm 6.4645)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{5} & \(\theta_{1}=0.0087\pm 0.00032\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\(0.1578\pm 0.0869\)} \\ & \(\theta_{3}=0.0398\pm 0.0009\) & & \\ & \(\delta=(22.701\pm 0.9967)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{6} & \(\theta_{1}=0.0398\pm 0.0009\) & \multirow{4}{*}{\(J=s_{1}s_{2}s_{3}c_{1}^{2}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\(0.1578\pm 0.0869\)} \\ & \(\theta_{2}=0.0087\pm 0.00032\) & & \\ & \(\theta_{3}=0.2264\pm 0.0008\) & & \\ & \(\delta=(157.331\pm 0.9958)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{7} & \(\theta_{1}=0.2262\pm 0.0008\) & \multirow{4}{*}{\(J=s_{1}^{2}s_{2}s_{3}c_{1}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\((2.0695\pm 0.1804)\times 10^{-3}\)} \\ & \(\theta_{3}=0.1792\pm 0.0039\) & & \\ & \(\delta=(178.902\pm 0.0688)^{\lx@math@degree}\) & & \\ \hline \multirow{4}{*}{9} & \(\theta_{1}=0.0408\pm 0.0009\) & \multirow{4}{*}{\(J=s_{1}^{2}s_{2}s_{3}c_{1}c_{2}c_{3}\;\sin\delta\)} & \multirow{4}{*}{\(0.1578\pm 0.0869\)} \\ & \(\theta_{2}=0.0848\pm 0.0037\) & & \\ & \(\theta_{3}=0.2155\pm 0.0095\) & & \\ & \(\delta=(93.165\pm 6.5224)^{\lx@math@degree}\) & & \\ \hline \end{tabular} \end{table} Table 4: Calculated parameters using different representations effects dominated by 't' quark. 
To this end, we use the following standard expression [3, 21] \[\epsilon_{k}=\frac{{G_{F}}^{2}{F_{k}}^{2}\hat{B_{k}}m_{k}{M_{W}}^{2}{\kappa_{ \epsilon}}e^{\epsilon\phi_{\epsilon}}}{12\sqrt{2}\pi^{2}\Delta m_{k}}[{\lambda^ {*}}_{c}^{2}\eta_{cc}S_{0}(x_{c})+{\lambda^{*}}_{t}^{2}\eta_{tt}S_{0}(x_{t})+2{ \lambda^{*}}_{c}^{*}\lambda^{*}\eta_{ct}S_{0}(x_{c},x_{t})], \tag{27}\] where \({\lambda^{*}}_{i}=V_{id}{V^{*}}_{is}\) for i=c,t, \(\kappa_{\epsilon}\) is the correction to \(\epsilon_{k}\) due to long distance effects with value \(\approx 0.94\pm 0.02\)[3]. Also, \(\hat{B_{k}}=0.717\pm 0.024\)[3] is the bag parameter determined from lattice QCD. \(\phi_{\epsilon}\approx(43.52\pm 0.05)^{\lx@math@degree}\)[3] and \(S_{0}\) are the Inami-Lim functions [22] defined as \[\begin{array}{l}S_{0}(x_{i})=\frac{4x_{i}-11x_{i}^{2}+x_{i}^{3}}{4(1-x_{i})^ {2}}-\frac{3x_{i}^{3}lnx_{i}}{2(1-x_{i})^{3}},\\ S_{0}(x_{i},x_{j})=x_{i}x_{j}\left[\frac{lnx_{j}}{x_{j}-x_{i}}\left(\frac{1}{4 }+\frac{3}{2(1-x_{j})}-\frac{3}{4(1-x_{j})^{2}}\right)+\frac{lnx_{i}}{x_{i}-x_ {j}}\left(\frac{1}{4}+\frac{3}{2(1-x_{i})}-\frac{3}{4(1-x_{i})^{2}}\right)- \frac{3}{4(1-x_{i})(1-x_{j})}\right],\end{array} \tag{28}\] where \(x_{i}=\frac{m_{i}^{2}}{m_{W}^{2}}\)[23] and \(\eta_{ij}\) are perturbative QCD corrections. The various other parameters used in calculation of \(|\epsilon_{k}|\) are given in Table 5. Using the Cartesian representation 1, expressing \(V_{cs}\), \(V_{cd}\), \(V_{ts}\) and \(V_{td}\) in terms of the corresponding mixing angles and \(\delta\) as well as using the numerical values of these inputs, we get \(\epsilon_{k}\) as mentioned in row 1 of column 4 in Table 4. This value is in complete agreement with the one given by PDG, i.e., \((2.228\pm 0.011)\times 10^{-3}\). The same excercise has been carried out for the remaining parametrizations and the values of \(\epsilon_{k}\) so obtained are mentioned in column 4 of Table 4. Intriguingly, we find that out of the 9 representations, 6 representations are able to provide an appropriate fit to the parameter \(\epsilon_{k}\), defined in equation (27). The other 3 representations, i.e., 5, 6 and 9 are very much off the mark. In principle, all the representations, being equivalent, should be providing equally appropriate fit to parameter \(\epsilon_{k}\). This, however, can be understood when one closely examines the expression for parameter \(\epsilon_{k}\) and the assumptions behind its derivation. Crucial ingredients of the formula are Inami-Lim functions \(\eta_{ij}\) which characterize the QCD corrections in the short distance \begin{table} \begin{tabular}{|c|c|c|} \hline Parameter & Value & Reference \\ \hline \(G_{F}\) & \(1.1663787(6)\times 10^{-2}\)\(GeV^{-5}\) & [3] \\ \(m_{k}\) & 497.611(13) & [3] \\ \(\Delta m_{k}\) & 3.484(6) & [3] \\ \(m_{W}\) & 80.356(6) & [24] \\ \(\eta_{cc}\) & 1.72(27) & [24] \\ \(\eta_{ct}\) & 0.496(47) & [24] \\ \(\eta_{tt}\) & 0.5765(65) & [24] \\ \(F_{k}\) & 155.7(3) \(MeV\) & [25] \\ \(m_{t}\) & 162.83(67) \(GeV\) & [25] \\ \(m_{c}\) & 1.279(13) \(GeV\) & [25] \\ \hline \end{tabular} \end{table} Table 5: Inputs used for evaluating \(|\epsilon_{k}|\). limit. It can be easily understood that in the short distance limit, the correction \(\eta_{cc}\), \(\eta_{tt}\) and \(\eta_{ct}\), characterizing the loops involving the c and t quark, would play a dominant role in the absence of the incorporation of the long distance effects. 
When this fact is coupled with the imaginary part of the product of the \(\lambda\) factors, one can easily understand why representations 5, 6 and 9 are not able to reproduce the parameter \(\epsilon_{k}\). ## 5. Summary and Conclusions The recent observation of a \(2.2\sigma\) deviation from unitarity in the first row of the CKM matrix, as well as a few persistent anomalies regarding some of the CKM matrix elements, provides an immediate motivation to revisit some of the basic aspects of CKM phenomenology. A key factor in the formulation of CKM phenomenology is the unitarity of the CKM matrix and its implications. Interestingly, when one revisits the representations of the CKM matrix, one finds that the entire issue needs deeper attention than it has been given in the literature. In the present paper, we have revisited the representations of the CKM matrix available in the literature, in the absence of their rigorous formulation. Using the Cartesian approach, we have constructed the representations, incorporating unitarity constraints, in an ab-initio manner, and we have also explored the inter-relation of the present representations with the ones given in the literature. The implications of these representations for various CKM parameters have been explored using the latest data. In the case of the CP violating parameter \(\epsilon_{k}\), some very interesting results emerge. ## Acknowledgements The authors would like to thank the Chairperson, Department of Physics, Panjab University, Chandigarh, for providing the facilities to work. Gurjit Kaur would also like to acknowledge CSIR, Government of India, Grant No. 09/135/(0851)/2019-EMR-I, for financial support.
2305.15182
HiTIN: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification
Hierarchical text classification (HTC) is a challenging subtask of multi-label classification as the labels form a complex hierarchical structure. Existing dual-encoder methods in HTC achieve weak performance gains with huge memory overheads, and their structure encoders heavily rely on domain knowledge. Under such observation, we investigate the feasibility of a memory-friendly model with strong generalization capability that could boost the performance of HTC without prior statistics or label semantics. In this paper, we propose Hierarchy-aware Tree Isomorphism Network (HiTIN) to enhance the text representations with only syntactic information of the label hierarchy. Specifically, we convert the label hierarchy into an unweighted tree structure, termed coding tree, with the guidance of structural entropy. Then we design a structure encoder to incorporate hierarchy-aware information in the coding tree into text representations. Besides the text encoder, HiTIN only contains a few multi-layer perceptrons and linear transformations, which greatly saves memory. We conduct experiments on three commonly used datasets and the results demonstrate that HiTIN could achieve better test performance and less memory consumption than state-of-the-art (SOTA) methods.
He Zhu, Chong Zhang, Junjie Huang, Junran Wu, Ke Xu
2023-05-24T14:14:08Z
http://arxiv.org/abs/2305.15182v2
# HiTIN: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification ###### Abstract Hierarchical text classification (HTC) is a challenging subtask of multi-label classification as the labels form a complex hierarchical structure. Existing dual-encoder methods in HTC achieve weak performance gains with huge memory overheads, and their structure encoders heavily rely on domain knowledge. Under such observation, we investigate the feasibility of a memory-friendly model with strong generalization capability that could boost the performance of HTC without prior statistics or label semantics. In this paper, we propose Hierarchy-aware Tree Isomorphism Network (HiTIN) to enhance the text representations with only syntactic information of the label hierarchy. Specifically, we convert the label hierarchy into an unweighted tree structure, termed coding tree, with the guidance of structural entropy. Then we design a structure encoder to incorporate hierarchy-aware information in the coding tree into text representations. Besides the text encoder, HiTIN only contains a few multi-layer perceptrons and linear transformations, which greatly saves memory. We conduct experiments on three commonly used datasets and the results demonstrate that HiTIN could achieve better test performance and less memory consumption than state-of-the-art (SOTA) methods. ## 1 Introduction Hierarchical text classification is a sub-task of text multi-label classification, which is commonly applied in scenarios such as news document classification (Lewis et al., 2004; Sandhaus, 2008), academic paper classification (Kowsari et al., 2017), and so on. Unlike traditional classification tasks, the labels of HTC have parent-child relationships forming a hierarchical structure. Due to the complex structure of the label hierarchy and the imbalanced frequency of labels, HTC becomes a challenging task in natural language processing. Recent studies in HTC typically utilize a dual-encoder framework (Zhou et al., 2020), which consists of a text encoder for text representations and a structure encoder to inject the information of labels into text. The text encoder could be a traditional backbone for text classification, for instance, TextRCNN (Lai et al., 2015) or BERT (Devlin et al., 2019). The structure encoder is a Graph Neural Network (GNN) that treats the label hierarchy as a Directed Acyclic Graph (DAG) and propagates the information among labels. To maximize the propagation ability of the structure encoder, Zhou et al. (2020) learn textual features of labels and count the prior probabilities between parent and child labels. Based on the dual-encoder framework, researchers further complicated the model by adding complementary networks and loss functions from different aspects, such as treating HTC as a matching problem (Chen et al., 2021) and introducing mutual information maximization (Deng et al., 2021). However, more complementary components result in more memory consumption, as shown in Figure 1. Figure 1: Micro-F1 score and the number of trainable parameters of our method and SOTAs with dual encoders on the Web Of Science dataset. On the other hand, their structure encoders still rely on the prior statistics (Zhou et al., 2020; Chen et al., 2021) or the representation of labels (Zhou et al., 2020; Deng et al., 2021). That is, their models require a mass of domain knowledge, which greatly reduces the generalization ability. To this end, we intend to design a more effective structure encoder with fewer parameters for HTC.
Instead of introducing domain knowledge, we try to take full advantage of the structural information embedded in label hierarchies. Inspired by Li and Pan (2016), we decode the essential structure of label hierarchies into coding trees with the guidance of structural entropy, which aims to measure the structural complexity of a graph. The coding tree is unweighted and could reflect the hierarchical organization of the original graph, which provides us with another view of the label hierarchy. To construct coding trees, we design an algorithm, termed CodIng tRee Construction Algorithm (CIRCA), by minimizing the structural entropy of label hierarchies. Based on the hierarchical structure of coding trees, we propose the Hierarchy-aware Tree Isomorphism Network (HiTIN). The document representations fetched by the text encoder are fed into a structure encoder, in which we iteratively update the node embeddings of the coding tree with a few multi-layer perceptrons. Finally, we produce a feature vector of the entire coding tree as the final representation of the document. Compared with SOTA methods of dual encoders on HTC tasks (Zhou et al., 2020; Chen et al., 2021; Deng et al., 2021; Wang et al., 2022), HiTIN shows superior performance gains with less memory consumption. Overall, the contributions of our work can be summarized as follows: * To improve the generalization capability of dual-encoder models in HTC, we decode the essential structure of label hierarchies with the guidance of structural entropy. * We propose HiTIN, which has fewer learnable parameters and requires less domain knowledge, to fuse the structural information of label hierarchies into text representations. * Numerous experiments are conducted on three benchmark datasets to demonstrate the superiority of our model. For reproducibility, our code is available at https://github.com/Rooooyy/HiTIN. ## 2 Related Work Hierarchical Text Classification. Existing works for HTC could be categorized into local and global approaches (Zhou et al., 2020). Local approaches build classifiers for a single label or labels at the same level in the hierarchy, while global approaches treat HTC as a flat classification task and build only one classifier for the entire taxonomy. Previous local studies mainly focus on transferring knowledge from models in the upper levels to models in the lower levels. Kowsari et al. (2017) first feed the whole corpus into the parent model and then input the documents with the same label marked by the parent model into a child model. In the next few years, researchers tried different techniques to deliver knowledge from high-level models to low-level models (Shimura et al., 2018; Huang et al., 2019; Banerjee et al., 2019). Global studies in HTC try to improve flat multi-label classification by introducing various information from the hierarchy. Gopal and Yang (2013) propose a recursive regularization function to make the parameters of adjacent categories have similar values. Peng et al. (2018) propose a regularized graph-CNN model to capture the non-consecutive semantics from texts. Besides, various deep learning techniques, such as sequence-to-sequence models (Yang et al., 2018; Rojas et al., 2020), attention mechanisms (You et al., 2019), capsule networks (Aly et al., 2019; Peng et al., 2021), reinforcement learning (Mao et al., 2019), and meta-learning (Wu et al., 2019), are also applied in global HTC. Recently, Zhou et al.
(2020) specially design an encoder for label hierarchies which could significantly improve performance. Chen et al. (2020) learn the word and label embeddings jointly in the hyperbolic space. Chen et al. (2021) formulate the text-label relationship as a semantic matching problem. Deng et al. (2021) introduce information maximization, which can model the interaction between text and label while filtering out irrelevant information. With the development of Pretrained Language Models (PLMs), BERT (Devlin et al., 2019) based contrastive learning (Wang et al., 2022), prompt tuning (Wang et al., 2022), and other methods (Jiang et al., 2022) have brought a huge performance boost to HTC. Structural Entropy. Structural entropy (Li and Pan, 2016) is a natural extension of Shannon entropy (Shannon, 1948) to graphs, as structural entropy could measure the structural complexity of a graph. The structural entropy of a graph is defined as the average length of the codewords obtained by a random walk under a specific coding scheme. The coding scheme, termed coding tree (Li and Pan, 2016), is a tree structure that encodes and decodes the essential structure of the graph. In other words, to minimize structural entropy is to remove the noisy information from the graph. In the past few years, structural entropy has been successfully applied in network security (Li et al., 2016), medicine (Li et al., 2016), bioinformatics (Li et al., 2018), graph classification (Wu et al., 2022a,b), text classification (Zhang et al., 2022), and graph contrastive learning (Wu et al., 2023). ## 3 Problem Definition Given a document \(D=\{w_{1},w_{2},\ldots,w_{n}\}\), where \(w_{i}\) is a word and \(n\) denotes the document length, hierarchical text classification aims to predict a subset \(\mathcal{Y}\) of the holistic label set \(Y\). Besides, every label in \(Y\) corresponds to a unique node on a directed acyclic graph, i.e., the label hierarchy. The label hierarchy is predefined and usually simplified as a tree structure. In the ground-truth label set, a non-root label \(y_{i}\) always co-occurs with its parent nodes; that is, for any \(y_{i}\in\mathcal{Y}\), the parent node of \(y_{i}\) is also in \(\mathcal{Y}\). ## 4 Methodology Following the dual-encoder scheme in HTC, the architecture of HiTIN, which consists of a text encoder and a structure encoder, is shown in Figure 2. The text encoder aims to capture textual information from the input document, while the structure encoder could model the label correlations in the hierarchy and inject the information from labels into text representations. ### Text Encoder In HTC, the text encoder generally has two choices, that is, the TextRCNN encoder and the BERT encoder. TextRCNN (Lai et al., 2015) is a traditional method in text classification, while BERT (Devlin et al., 2019) has shown its powerful ability in sequence feature extraction and has been widely applied in natural language processing in the past few years. TextRCNN Encoder. The given document \(D=\{w_{1},w_{2},\ldots,w_{n}\}\), which is a sequence of word embeddings, is firstly fed into a bidirectional GRU layer to extract sequential information. Then, multiple CNN blocks along with max pooling over time are adopted to capture n-gram features. Formally, \[H_{RCNN}=MaxPool(\Phi_{CNN}(\Phi_{GRU}(D))), \tag{1}\] where \(\Phi_{CNN}(\cdot)\) and \(\Phi_{GRU}(\cdot)\) respectively denote a CNN and a GRU layer, while \(MaxPool(\cdot)\) denotes the max pooling over time operation.
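As a concrete illustration of Eq. (1), a minimal PyTorch sketch of this encoder could look as follows. The hyper-parameter values mirror the implementation details of Section 5.1, but the class name and code are illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class TextRCNNEncoder(nn.Module):
    """Sketch of Eq. (1): BiGRU -> CNNs (kernel sizes 2/3/4) -> max-over-time."""
    def __init__(self, embed_dim=300, gru_hidden=128, d_c=100, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.gru = nn.GRU(embed_dim, gru_hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(2 * gru_hidden, d_c, k) for k in kernel_sizes)

    def forward(self, x):                   # x: (batch, seq_len, embed_dim)
        h, _ = self.gru(x)                  # (batch, seq_len, 2 * gru_hidden)
        h = h.transpose(1, 2)               # Conv1d expects (batch, channels, seq)
        feats = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return torch.cat(feats, dim=1)      # concatenated per-kernel features
```

With kernel sizes \(\{2,3,4\}\) and \(d_{C}=100\), the concatenated output has dimension \(3\times 100=300\), matching \(d_{H}\) in Section 5.1.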
Besides, \(H_{RCNN}\in\mathbb{R}^{n_{C}\times d_{C}}\), where \(n_{C}\) denotes the number of CNN kernels and \(d_{C}\) denotes the output channels of each CNN kernel. The final representation \(H\in\mathbb{R}^{n_{C}\cdot d_{C}}\) of document \(D\) is the concatenation of \(H_{RCNN}\). That is, \[H=Concat(H_{RCNN}). \tag{2}\] Figure 2: An example of HiTIN with \(K=2\). As shown in Section 4.1, the input document is first fed into the text encoder to generate text representations. Next, the label hierarchy is transformed into a coding tree via the Coding Tree Construction Algorithm proposed in Section 4.2. The text representations are mapped into the leaf nodes of the coding tree and we iteratively update the non-leaf node embeddings in Section 4.2. Finally, we produce a feature vector of the entire coding tree and calculate the classification probabilities in Section 4.3. Besides, HiTIN is supervised by binary cross-entropy loss and recursive regularization (Gopal and Yang, 2013). BERT Encoder. Recent works in HTC also utilize BERT for learning textual features (Chen et al., 2021; Wang et al., 2022). Since there are few changes made to the vanilla BERT, we only introduce the workflow of our model and omit the details of BERT. Given an input document \(D=\{w_{1},w_{2},\ldots,w_{n}\}\), we pad the document with two special tokens: \[\tilde{D}=\{[CLS],w_{1},w_{2},\ldots,w_{n},[SEP]\}, \tag{3}\] where \([CLS]\) and \([SEP]\) respectively denote the beginning and the end of the document. After padding and truncating, document \(\tilde{D}\) is fed into BERT. Then BERT generates embeddings for each token in the document: \[H_{BERT}=\Phi_{BERT}(\tilde{D}), \tag{4}\] where \(H_{BERT}\in\mathbb{R}^{(n+2)\times d_{B}}\), and \(\Phi_{BERT}(\cdot)\) denotes the BERT model. We adopt the \([CLS]\) embedding as the representation of the entire text sequence. Thus, the final representation \(H\) of document \(D\) is: \[H=H_{BERT}^{0},\quad H\in\mathbb{R}^{d_{B}}, \tag{5}\] where \(d_{B}\) is the hidden dimension. ### Structure Encoder The semantic information provided by the text encoder is then input into the structure encoder. Unlike previous works, we do not utilize the prior statistics or learn representations of the label hierarchy. Instead, we design a suite of methods guided by structural entropy (Li and Pan, 2016) to effectively incorporate the information of text and labels. Structural Entropy. Inspired by Li and Pan (2016), we try to simplify the original structure of the label hierarchy by minimizing its structural entropy. The structural entropy of a graph is defined as the average length of the codewords obtained by a random walk under a specific coding pattern named coding tree (Li and Pan, 2016). Given a graph \(G=(V_{G},E_{G})\), the structural entropy of \(G\) on coding tree \(T\) is defined as: \[H^{T}(G)=-\sum_{\alpha\in T}\frac{g_{\alpha}}{vol(G)}\log\frac{vol(\alpha)}{vol(\alpha^{-})}, \tag{6}\] where \(\alpha\) is a non-root node of coding tree \(T\) which represents a subset of \(V_{G}\), and \(\alpha^{-}\) is the parent node of \(\alpha\) on the coding tree. \(g_{\alpha}\) represents the number of edges with only one endpoint in \(\alpha\) and the other endpoint outside \(\alpha\), that is, the out degree of \(\alpha\). \(vol(G)\) denotes the volume of graph \(G\), while \(vol(\alpha)\) and \(vol(\alpha^{-})\) are the sums of the degrees of the nodes partitioned by \(\alpha\) and \(\alpha^{-}\), respectively. For a certain coding pattern, the height of the coding tree should be fixed.
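To make Eq. (6) concrete, the sketch below computes \(H^{T}(G)\) for a coding tree represented as a child-list dictionary whose leaves are the vertices of \(G\). It assumes the networkx library; the function and variable names are ours, and this is a minimal sketch rather than the paper's implementation:

```python
import math
import networkx as nx

def structural_entropy(G, tree_children, root):
    """Structural entropy H^T(G) of graph G under a coding tree (Eq. 6).

    tree_children maps each internal tree node to its list of children;
    leaves of the tree are the vertices of G."""
    vol_G = sum(d for _, d in G.degree())

    def leaves(a):  # graph vertices covered by tree node `a`
        if a not in tree_children or not tree_children[a]:
            return {a}
        return set().union(*(leaves(c) for c in tree_children[a]))

    parent = {c: p for p, cs in tree_children.items() for c in cs}
    H = 0.0
    for a in parent:                       # every non-root tree node
        part = leaves(a)
        g_a = nx.cut_size(G, part)         # edges leaving the vertex subset
        vol_a = sum(G.degree(v) for v in part)
        vol_pa = sum(G.degree(v) for v in leaves(parent[a]))
        if g_a > 0:
            H -= g_a / vol_G * math.log2(vol_a / vol_pa)
    return H
```

Minimizing this quantity over trees of bounded height is exactly the objective that CIRCA, introduced next, approximates greedily.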
Therefore, the \(K\)-dimensional structural entropy of the graph \(G\) determined by the coding tree \(T\) with a certain height \(K\) is defined as: \[H_{K}(G)=\min_{\{T|height(T)\leq K\}}H^{T}(G). \tag{7}\] Coding Tree Construction Algorithm. To minimize the structural entropy of graph \(G\), we design a CodIng tRee Construction Algorithm (CIRCA) to heuristically construct a coding tree \(T\) with a certain height no greater than \(K\). That is, \(T=CIRCA(G,K)\), where \(T=(V_{T},E_{T})\), \(V_{T}=(V_{T}^{0},\ldots,V_{T}^{h})\). To better illustrate CIRCA, we make some definitions as follows. **Definition 1**: _Let \(T=(V_{T},E_{T})\) be a coding tree for graph \(G=(V_{G},E_{G})\), and let \(v_{r}\) be the root node of \(T\). For any \((v_{i},v_{j})\in T\), if \(v_{i}\) is the direct child node of \(v_{j}\), denote that_ \[v_{i}\in v_{j}.children;\] _and \(v_{j}\) is equivalent to \(v_{i}.parent\)._ **Definition 2**: _Following Definition 1, given any two nodes \((v_{i},v_{j})\in T\), in which \(v_{i}\in v_{r}.children\) and \(v_{j}\in v_{r}.children\), define a member function \(merge(v_{i},v_{j})\) of \(T\). \(T.merge(v_{i},v_{j})\) inserts a new node \(v_{\epsilon}\) between \(v_{r}\) and \((v_{i},v_{j})\). Formally,_ \[v_{\epsilon}.children \gets v_{i};\] \[v_{\epsilon}.children \gets v_{j};\] \[v_{r}.children \gets v_{\epsilon};\] \[V_{T}^{v_{i}.height+1} \gets v_{\epsilon}; E_{T}\leftarrow(v_{\epsilon},v_{i}),(v_{\epsilon},v_{j});\] **Definition 3**: _Following Definition 1, given a node \(v_{i}\), define a member function \(delete(v_{i})\) of \(T\). \(T.delete(v_{i})\) deletes \(v_{i}\) from \(T\) and attaches the child nodes of \(v_{i}\) to its parent node. Formally,_ \[v_{i}.parent.children\gets v_{i}.children;\] \[V_{T} :=V_{T}-\{v_{i}\};\] \[E_{T} :=E_{T}-\{(v_{i}.parent,v_{i})\};\] \[E_{T} :=E_{T}-\{(v_{i},v)|v\in v_{i}.children\};\] **Definition 4**: _Following Definition 1, given any two nodes \((v_{i},v_{j})\), in which \(v_{i}\in v_{j}.children\), define a member function \(shift(v_{i},v_{j})\) of \(T\). \(T.shift(v_{i},v_{j})\) inserts a new node \(v_{\epsilon}\) between \(v_{i}\) and \(v_{j}\):_ \[v_{\epsilon}.children\gets v_{i}; v_{j}.children\gets v_{\epsilon};\] \[V_{T}^{v_{i}.height+1}\gets v_{\epsilon}; E_{T}\leftarrow\{(v_{j},v_{\epsilon}),(v_{\epsilon},v_{i})\};\] Based on the above definitions, the pseudocode of CIRCA can be found in Algorithm 1. More details about coding trees and CIRCA are shown in Appendix A. ``` 0: A graph \(G=(V_{G},E_{G})\), a positive integer \(K\) 0: Coding tree \(T=(V_{T},E_{T})\) of the graph \(G\) with height \(K\) 1:\(V_{T}^{0}:=V_{G}\); {Stage 1: Construct a full-height binary tree} 2:while\(|v_{r}.children|>2\)do 3:\((v_{i},v_{j})=argmax_{(v,v^{\prime})}\{H^{T}(G)-H^{T.merge(v,v^{\prime})}(G)\}\) 4:\(T.merge(v_{i},v_{j})\) 5:endwhile 6: {Stage 2: Squeeze \(T\) to height \(K\)} 7:while\(T.height>K\)do 8:\(v_{i}=argmin_{v}\{H^{T.delete(v)}(G)-H^{T}(G)\}\) 9:\(T.delete(v_{i})\) 10:endwhile 11: {Stage 3: Erase cross-layer links} 12:for\(v_{i}\in T\)do 13:if\(|v_{i}.parent.height-v_{i}.height|>1\)then 14:\(T.shift(v_{i},v_{i}.parent)\) 15:endif 16:endfor 17:return\(T\) ``` **Algorithm 1** Coding Tree Construction Algorithm **Hierarchy-aware Tree Isomorphism Network.** For representation learning, we reformulate the label hierarchy as a graph \(G_{L}=(V_{G_{L}},E_{G_{L}},X_{G_{L}})\), where \(V_{G_{L}}\) and \(E_{G_{L}}\) respectively denote the node set and the edge set of \(G_{L}\); \(V_{G_{L}}=Y\), while \(E_{G_{L}}\) is predefined in the corpus.
In our work, \(V_{G_{L}}\) and \(E_{G_{L}}\) are represented by the unweighted adjacency matrix of \(G_{L}\). \(X_{G_{L}}\) is the node embedding matrix of \(G_{L}\). Instead of learning the concept of labels, we directly broadcast the text representation to the label structure. Specifically, \(X_{G}\) is transformed from the text representation \(H\) by duplication and projection. Formally, \[X_{G}=W_{d}HW_{p}+B_{H}, \tag{8}\] where \(W_{d}\in\mathbb{R}^{|Y|\times 1}\) and \(W_{p}\in\mathbb{R}^{d_{H}\times d_{V}}\) are learnable weights for the duplication and projection. \(|Y|\) is the size of the label set, while \(d_{H}\) and \(d_{V}\) respectively denote the dimensions of the text and node embeddings. \(B_{H}\) indicates the learnable bias, \(B_{H}\in\mathbb{R}^{|Y|\times d_{V}}\). Next, we simplify the structure of the label hierarchy into a coding tree with the guidance of structural entropy. Given a certain height \(K\), the coding tree \(T_{L}=(V_{T_{L}},E_{T_{L}},X_{T_{L}})\) of the label hierarchy could be constructed by CIRCA, \[(V_{T_{L}},E_{T_{L}})=CIRCA(G_{L},K), \tag{9}\] where \(V_{T_{L}}=\{V_{T_{L}}^{0},V_{T_{L}}^{1},\ldots,V_{T_{L}}^{K}\}\) are the layer-wise node sets of coding tree \(T_{L}\), while \(X_{T_{L}}=\{X_{T_{L}}^{0},X_{T_{L}}^{1},\ldots,X_{T_{L}}^{K}\}\) represents the node embeddings of \(V_{T_{L}}^{i}\), \(i\in[0,K]\). The coding tree \(T_{L}\) encodes and decodes the essential structure of \(G_{L}\), which provides multi-granularity partitions for \(G_{L}\). The root node \(v_{r}\) is the coarsest partition, which represents the whole node set of \(G_{L}\), so \(V_{T_{L}}^{K}=\{v_{r}\}\). For every node \(v\) and its child nodes \(\{v_{1},v_{2},\ldots,v_{z}\}\), the nodes \(v_{1},v_{2},\ldots,v_{z}\) form a partition of \(v\). Moreover, the leaf nodes in \(T_{L}\) are an element-wise partition of \(G_{L}\), that is, \(V_{T_{L}}^{0}=V_{G_{L}}\), \(X_{T_{L}}^{0}=X_{G_{L}}\). Note that \(\{V_{T_{L}}^{i}|i\in[1,K]\}\) is given by CIRCA, while the node embeddings \(\{X_{T_{L}}^{i}|i\in[1,K]\}\) remain empty till now. Thus, we intend to update the as-yet-uncomputed node representations of coding tree \(T_{L}\). Following the message passing mechanism in the Graph Isomorphism Network (GIN) [11], we design the Hierarchy-aware Tree Isomorphism Network (HiTIN). For \(x_{v}^{i}\in X_{T_{L}}^{i}\) in the \(i\)-th layer, \[x_{v}^{i}=\Phi_{MLP}^{i}(\sum\nolimits_{n\in C(v)}x_{n}^{i-1}), \tag{10}\] where \(v\in V_{T}^{i}\), \(x_{v}^{i}\in\mathbb{R}^{d_{V}}\) is the feature vector of node \(v\), and \(C(v)\) represents the child nodes of \(v\) in coding tree \(T_{L}\). \(\Phi^{i}_{MLP}(\cdot)\) denotes a two-layer multi-layer perceptron (MLP) with BatchNorm (Ioffe and Szegedy, 2015) and a ReLU function. The learning stage starts from the leaf nodes (layer 0) and learns the representation of each node layer by layer until reaching the root node (layer \(K\)). Finally, a read-out function is applied to compute a representation of the entire coding tree \(T_{L}\): \[H_{T}=Concat\big{(}Pool(\{x_{v}^{i}\mid v\in V_{T_{L}}^{i}\})\mid i\in[0,K]\big{)}, \tag{11}\] where \(Concat(\cdot)\) indicates the concatenation operation. \(Pool(\cdot)\) in Eq. 11 can be replaced with a summation, averaging, or maximization function. \(H_{T}\in\mathbb{R}^{d_{T}}\) denotes the final representation of \(T_{L}\). ### Classification and Loss Function Similar to previous studies (Zhou et al., 2020; Wang et al., 2022), we flatten the hierarchy by attaching a unique multi-label classifier.
\(H_{T}\) is fed into a linear layer along with a sigmoid function to generate the classification probability: \[P=Sigmoid(H_{T}\cdot W_{c}+b_{c}), \tag{12}\] where \(W_{c}\in\mathbb{R}^{d_{T}\times|Y|}\) and \(b_{c}\in\mathbb{R}^{|Y|}\) are the weights and bias of the linear layer, while \(|Y|\) is the size of the label set. For multi-label classification, we adopt the Binary Cross-Entropy Loss as the classification loss: \[L^{C}=-\frac{1}{|Y|}\sum_{j=1}^{|Y|}\big{[}y_{j}\log(p_{j})+(1-y_{j})\log(1-p_{j})\big{]}, \tag{13}\] where \(y_{j}\) is the ground truth of the \(j\)-th label, while \(p_{j}\) is the \(j\)-th element of \(P\). Considering hierarchical classification, we use recursive regularization (Gopal and Yang, 2013) to constrain the weights of adjacent classes to be in similar distributions, as formulated in Eq. 14: \[L^{R}=\sum_{p\in Y}\sum_{q\in child(p)}\frac{1}{2}\left\|w_{p}-w_{q}\right\|^{2}, \tag{14}\] where \(p\) is a non-leaf label in \(Y\) and \(q\) is a child of \(p\), with \(w_{p},w_{q}\in W_{c}\). We use a hyper-parameter \(\lambda\) to control the strength of the recursive regularization. Thus, the final loss function can be formulated as: \[L=L^{C}+\lambda\cdot L^{R}. \tag{15}\] ## 5 Experiments ### Experiment Setup Datasets and Evaluation Metrics. We conduct experiments on three benchmark datasets in HTC. RCV1-v2 (Lewis et al., 2004) and NYT (Sandhaus, 2008) respectively consist of news articles published by Reuters, Ltd. and the New York Times, while WOS (Kowsari et al., 2017) includes abstracts of academic papers from the Web of Science. Each of these datasets is annotated with ground-truth labels in a given hierarchy. We split and pre-process these datasets following Zhou et al. (2020). The statistics of these datasets are shown in Table 1. The experimental results are measured with Micro-F1 and Macro-F1 (Gopal and Yang, 2013). Micro-F1 is the harmonic mean of the overall precision and recall of all the test instances, while Macro-F1 is the average F1-score of each category. Thus, Micro-F1 reflects the performance on more frequent labels, while Macro-F1 treats labels equally. Implementation Details. The text embeddings fed into the TextRCNN encoder are initialized with GloVe (Pennington et al., 2014). The TextRCNN encoder consists of a two-layer BiGRU with hidden dimension 128 and CNN layers with kernel sizes [2, 3, 4] and \(d_{C}=100\). Thus, the hidden dimension of the final text representation is \(d_{H}=n_{C}\cdot d_{C}=3\times 100=300\). The height \(K\) of the coding tree is 2 for all three datasets. The hidden dimension \(d_{V}\) of the node embedding \(X_{G}\) is set to 512 for RCV1-v2 and 300 for WOS and NYTimes. \(Pool(\cdot)\) in Eq. 11 is summation for all the datasets. The balance factor \(\lambda\) for \(L^{R}\) is set to 1e-6. The batch size is set to 16 for RCV1-v2 and 64 for WOS and NYTimes. The model is optimized by Adam (Kingma and Ba, 2014) with a learning rate of 1e-4. For the BERT text encoder, we use the BertModel of bert-base-uncased, with some negligible changes to make it compatible with our method; \(d_{B}=d_{H}=d_{V}=768\). The height \(K\) of the coding tree is 2 and \(Pool(\cdot)\) in Eq. 11 is averaging. The batch size is set to 12, and the BertModel is fine-tuned by Adam (Kingma and Ba, 2014) with a learning rate of 2e-5.
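To summarize Sections 4.2 and 4.3 in code, the sketch below outlines the per-level update of Eq. (10), the read-out of Eq. (11) with summation pooling, and the loss of Eqs. (13)-(15) in PyTorch. The tree is assumed to be given as per-layer child-index lists produced by CIRCA; all names are illustrative and this is a simplified sketch, not the released implementation:

```python
import torch
import torch.nn as nn

class HiTINLayer(nn.Module):
    """One coding-tree level of Eq. (10): sum child features, then a 2-layer MLP."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim),
                                 nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x_children, child_index):
        # child_index[j] holds the row indices of node j's children in x_children
        summed = torch.stack([x_children[idx].sum(dim=0) for idx in child_index])
        return self.mlp(summed)

def tree_readout(layer_feats):
    """Eq. (11): pool every layer (summation here) and concatenate."""
    return torch.cat([x.sum(dim=0) for x in layer_feats], dim=-1)

def hitin_loss(logits, targets, W_c, child_pairs, lam=1e-6):
    """Eqs. (13)-(15): BCE (sigmoid of Eq. (12) folded into the loss) plus
    recursive regularization over classifier columns; child_pairs lists
    (parent, child) label indices from the hierarchy."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    reg = sum(0.5 * torch.sum((W_c[:, p] - W_c[:, q]) ** 2)
              for p, q in child_pairs)
    return bce + lam * reg
```

Stacking \(K\) such layers from the leaves to the root and applying the read-out yields \(H_{T}\); note that the BatchNorm in each MLP presumes more than one node per level during training.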
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Dataset & \(|Y|\) & \(Avg(y_{i})\) & Depth & \# Train & \# Dev & \# Test \\ \hline WOS & 141 & 2.0 & 2 & 30,070 & 7,518 & 9,397 \\ RCV1-v2 & 103 & 3.24 & 4 & 20,833 & 2,316 & 781,265 \\ NYTimes & 166 & 7.6 & 8 & 23,345 & 5,834 & 7,292 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics of datasets. Baselines. We compare HiTIN with SOTAs including HiAGM (Zhou et al., 2020), HTCInfoMax (Deng et al., 2021), HiMatch (Chen et al., 2021), and HGCLR (Wang et al., 2022). HiAGM, HTCInfoMax, and HiMatch use different fusion strategies to model text-hierarchy correlations. Specifically, HiAGM proposes a multi-label attention and a text feature propagation technique to get hierarchy-aware representations. HTCInfoMax enhances HiAGM-LA with information maximization to model the interaction between text and hierarchy. HiMatch treats HTC as a matching problem by mapping text and labels into a joint embedding space. HGCLR directly incorporates the hierarchy into BERT with contrastive learning. ### Experimental Results The experimental results with different types of text encoders are shown in Table 2 and Table 3. HiAGM is the first method to apply the dual-encoder framework and outperforms TextRCNN on all the datasets. HTCInfoMax improves HiAGM-LA (Zhou et al., 2020) by introducing mutual information maximization but is still weaker than HiAGM-TP. HiMatch treats HTC as a matching problem and surpasses HiAGM-TP (Zhou et al., 2020) on WOS and RCV1-v2. Different from these methods, HiTIN could further extract the information in the text without counting the prior probabilities between parent and child labels or building feature vectors for labels. As shown in Table 2, when using TextRCNN as the text encoder, our model outperforms all baselines on the three datasets. Based on TextRCNN, HiTIN brings 3.55% and 4.72% improvements in Micro-F1 and Macro-F1 on average. As for pretrained models in Table 3, our model also beats existing methods on all three datasets. Compared with vanilla BERT, our model can significantly refine the text representations by respectively achieving 1.2% and 3.1% average improvements in Micro-F1 and Macro-F1 on the three datasets. In addition, our method can achieve a 3.69% improvement in Macro-F1 on NYT, which has the deepest label hierarchy of the three datasets. This demonstrates the superiority of our model on datasets with a complex hierarchy. Compared with BERT-based HTC methods, our model observes a 1.12% average improvement in Macro-F1 against HGCLR. On RCV1-v2, the performance boost in Macro-F1 even reaches 1.64%. The improvement in Macro-F1 shows that our model could effectively capture the correlation between parent and child labels even without their prior probabilities. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Ablation Models} & \multicolumn{2}{c}{WOS} & \multicolumn{2}{c}{RCV1-v2} & \multicolumn{2}{c}{NYTimes} \\ \cline{2-7} & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline HiTIN (Random) & 84.74 & 77.90 & 82.41 & 61.46 & 71.99 & 58.26 \\ w/o \(L^{R}\) & 86.48 & 80.48 & 84.14 & 63.12 & 79.93 & 59.95 \\ HiTIN & **86.66** & **81.11** & **84.51** & **64.37** & **75.13** & **61.09** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance when replacing or removing a component of HiTIN. HiTIN (Random) denotes the results produced by HiTIN with the random algorithm; w/o \(L^{R}\) indicates that the parameter \(\lambda\) is set to 0.
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Hierarchy-aware Models} & \multicolumn{2}{c}{WOS} & \multicolumn{2}{c}{RCV1-v2} & \multicolumn{2}{c}{NYTimes} & \multicolumn{2}{c}{Average} \\ \cline{2-9} & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline TextRCNN (Zhou et al., 2020) & 83.55 & 76.99 & 81.57 & 59.25 & 70.83 & 56.18 & 78.65 & 64.14 \\ HiAGM (Zhou et al., 2020) & 85.82 & 80.28 & 83.96 & 63.35 & 74.97 & 60.83 & 81.58 & 68.15 \\ HTCInfoMax (Deng et al., 2021) & 85.58 & 80.05 & 83.51 & 62.71 & - & - & - & - \\ HiMatch (Chen et al., 2021) & 86.20 & 80.53 & 84.73 & 64.11 & - & - & - & - \\ \hline HiTIN & **86.66** & **81.11** & **84.81** & **64.37** & **75.13** & **61.09** & **82.20** & **68.86** \\ \hline \hline \end{tabular} \end{table} Table 2: Main Experimental Results with the TextRCNN encoder. All baselines above and our method utilize GloVe embeddings (Pennington et al., 2014) to initialize documents and encode them with TextRCNN (Lai et al., 2015). \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Pretrained Language Models} & \multicolumn{2}{c}{WOS} & \multicolumn{2}{c}{RCV1-v2} & \multicolumn{2}{c}{NYTimes} & \multicolumn{2}{c}{Average} \\ \cline{2-9} & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline BERT \(\dagger\) & 85.63 & 79.07 & 85.65 & 67.02 & 78.24 & 65.62 & 83.17 & 70.57 \\ BERT+HiAGM \(\dagger\) & 86.04 & 80.19 & 85.58 & 67.93 & 78.64 & 66.76 & 83.42 & 71.63 \\ BERT+HTCInfoMax \(\dagger\) & 86.30 & 79.97 & 85.53 & 67.09 & 78.75 & 67.31 & 83.53 & 71.46 \\ BERT+HiMatch (Chen et al., 2021) & 86.70 & 81.06 & 86.33 & 68.66 & - & - & - & - \\ HGCLR (Wang et al., 2022) & 87.11 & 81.20 & 86.49 & 68.31 & 78.86 & 67.96 & 84.15 & 72.49 \\ \hline HiTIN & **87.19** & **81.57** & **86.71** & **69.95** & **79.65** & **69.31** & **84.52** & **73.61** \\ \hline \hline \end{tabular} \end{table} Table 3: Main Experimental Results with the BERT encoder. All baselines above and our method adopt BERT (Devlin et al., 2019) as the text encoder. \(\dagger\) denotes that the results are reported by Wang et al. (2022). ### The Necessity of CIRCA In this subsection, we illustrate the effectiveness of CIRCA by comparing it to a random algorithm. The random algorithm generates a coding tree of the original graph \(G\) with a certain height \(K\), just like CIRCA. First, the random algorithm also takes all nodes of graph \(G\) as leaf nodes of the tree. But different from CIRCA, for each layer, every two nodes are randomly paired and then connected to their parent node. Finally, all nodes in the \((K-1)\)-th layer are connected to a root node. We generate coding trees with the random algorithm and then feed them into our model. As shown in Table 4, the results demonstrate that the random algorithm leads to a negative impact which destroys the original semantic information. Thus, it is difficult for the downstream model to extract useful features. On the contrary, the coding tree constructed by CIRCA can retain the essential structure of the label hierarchy and make the learning procedure more effective. Besides, our model could achieve good performance without Eq. 14, which proves that CIRCA could retain the information of low-frequency labels while minimizing the structural entropy of label hierarchies. ### The Height of Coding Tree The height of the coding tree directly affects the performance of our model.
The higher the coding tree, the more information is compressed. To investigate the impact of \(K\), we run HiTIN with different heights \(K\) of the coding tree while keeping other settings the same. Figure 3 shows the test performance of different-height coding trees on WOS, RCV1-v2, and NYTimes. As \(K\) grows, the performance of HiTIN is severely degraded. Despite the different depths of the label hierarchies, the optimal height of the coding tree for the three datasets is always 2. A probable reason is that the 2-dimensional structural entropy roughly corresponds to objects in the 2-dimensional space, as the text and label are both represented with 2-D tensors. On the other hand, as \(K\) grows, more noisy information is eliminated, but more useful information is also compressed. ### The Memory-saving Feature of HiTIN In this subsection, we compare the number of learnable parameters of HiTIN with that of the baselines. We set \(K\) to 2 and run these models on WOS while keeping the other hyper-parameters the same. The numbers of trainable parameters are counted by the \(numel(\cdot)\) function in PyTorch (Paszke et al., 2019). As shown in Figure 4, we can observe that the parameter count of our model is slightly greater than that of TextRCNN (Zhou et al., 2020) but significantly smaller than those of HiAGM (Zhou et al., 2020), HiMatch (Chen et al., 2021), and HTCInfoMax (Deng et al., 2021). One important reason is the simple and efficient architecture of HiTIN, which contains only a few MLPs and linear transformations. On the contrary, HiAGM-LA (Zhou et al., 2020) needs extra memory for label representations, HiAGM-TP uses a space-consuming method for text-to-label transformation, and both of them utilize a gated network as the structure encoder, which further aggravates memory usage. HiMatch (Chen et al., 2021) and HTCInfoMax (Deng et al., 2021) respectively introduce auxiliary neural networks based on HiAGM-TP and HiAGM-LA. Thus, their memory usages are even larger. Figure 3: Test performance of HiTIN with different heights \(K\) of the coding tree on three datasets. Figure 4: The number of trainable parameters of HiTIN and baseline models on WOS. ## 6 Conclusion In this paper, we propose a suite of methods to address the limitations of existing approaches to HTC. In particular, aiming to minimize structural entropy, we design CIRCA to construct coding trees for the label hierarchy. To further extract textual information, we propose HiTIN to update the node embeddings of the coding tree iteratively. Experimental results demonstrate that HiTIN could enhance text representations with only the structural information of the label hierarchy. Our model outperforms existing methods while greatly reducing memory increments. ## Limitations For text classification tasks, the text encoder is more important than the other components. Due to the lack of label semantic information and the simplified learning procedure, the robustness of the text encoder directly affects the performance of our model. From Tables 2 and 3, we could observe that BERT has already surpassed TextRCNN by 4.52% and 6.43% on Micro-F1 and Macro-F1. Besides, BERT beats all the TextRCNN-based methods on RCV1-v2 and NYTimes. However, when applying BERT as the text encoder, our model makes only slight improvements to Micro-F1, especially on WOS. A probable reason is that BERT was pre-trained on a news corpus while WOS consists of academic papers. ## Acknowledgements This research was supported by NSFC (Grant No. 61932002).
2304.13802
Green UAV-enabled Internet-of-Things Network with AI-assisted NOMA for Disaster Management
Unmanned aerial vehicle (UAV)-assisted communication is becoming a streamlined technology in providing improved coverage to the internet-of-things (IoT) based devices. Rapid deployment, portability, and flexibility are some of the fundamental characteristics of UAVs, which make them ideal for effectively managing emergency-based IoT applications. This paper studies a UAV-assisted wireless IoT network relying on non-orthogonal multiple access (NOMA) to facilitate uplink connectivity for devices spread over a disaster region. The UAV setup is capable of relaying the information to the cellular base station (BS) using the decode and forward relay protocol. By jointly utilizing the concepts of unsupervised machine learning (ML) and solving the resulting non-convex problem, we can maximize the total energy efficiency (EE) of IoT devices spread over a disaster region. Our proposed approach uses a combination of k-medoids and Silhouette analysis to perform resource allocation, whereas power optimization is performed using iterative methods. In comparison to the exhaustive search method, our proposed scheme solves the EE maximization problem with much lower complexity and at the same time improves the overall energy consumption of the IoT devices. Moreover, in comparison to a modified version of the greedy algorithm, our proposed approach improves the total EE of the system by 19% for a fixed 50k target number of bits.
Muhammad Ali Jamshed, Ferheen Ayaz, Aryan Kaushik, Carlo Fischione, Masood Ur-Rehman
2023-04-26T19:51:11Z
http://arxiv.org/abs/2304.13802v2
# Green UAV-enabled Internet-of-Things Network with AI-assisted NOMA for Disaster Management ###### Abstract Unmanned aerial vehicle (UAV)-assisted communication is becoming a streamlined technology in providing improved coverage to the internet-of-things (IoT) based devices. Rapid deployment, portability, and flexibility are some of the fundamental characteristics of UAVs, which make them ideal for effectively managing emergency-based IoT applications. This paper studies a UAV-assisted wireless IoT network relying on non-orthogonal multiple access (NOMA) to facilitate uplink connectivity for devices spread over a disaster region. The UAV setup is capable of relaying the information to the cellular base station (BS) using the decode and forward relay protocol. By jointly utilizing the concepts of unsupervised machine learning (ML) and solving the resulting non-convex problem, we can maximize the total energy efficiency (EE) of IoT devices spread over a disaster region. Our proposed approach uses a combination of k-medoids and Silhouette analysis to perform resource allocation, whereas power optimization is performed using iterative methods. In comparison to the exhaustive search method, our proposed scheme solves the EE maximization problem with much lower complexity and at the same time improves the overall energy consumption of the IoT devices. Moreover, in comparison to a modified version of the greedy algorithm, our proposed approach improves the total EE of the system by 19% for a fixed 50k target number of bits. Unmanned aerial vehicle (UAV), non-orthogonal multiple access (NOMA), disaster management, internet-of-things (IoT), energy efficiency (EE). ## I Introduction Transportation and ground infrastructure are often prone to destruction by natural disasters. In 2019, about 27% of the world's roads and railways were affected by at least one kind of disaster [1]. Timely warnings and relief operations can play a significant role in mitigating damages caused by disasters, which require effective disaster management and rescue operations. Advanced disaster management systems can potentially utilize emerging 6G technologies and internet-of-things (IoT) networks [2] for communications and control. The future 6G era envisions seamless intelligence and connectivity of ground and aerial users, including vehicles, smartphones, unmanned aerial vehicles (UAVs) and millions of other IoT devices, which can be used for monitoring, surveillance and telemetry. Specifically, UAVs have features such as low weight, small structure, ease of mobility and aerial outreach, which are beneficial for surveillance in areas where access for humans or ground nodes is challenging [3]. For example, UAVs can be used to monitor disaster-affected harsh environments when ground infrastructure is either damaged or prone to destruction [4]. Real-time monitoring and surveillance of affected areas through electronic sensors and communicating warnings via UAVs have been discussed as one of the feasible solutions to disaster-related challenges [5]. Furthermore, UAVs can also play an important role in extending their service as a mobile base station (BS) and offering low-cost deployment without geographical limitations to support ubiquitous coverage and the growing traffic needs of 6G [6, 7]. This feature of UAVs is particularly significant in disaster-affected areas where the BS cannot be reached due to destruction.
The mode of communications and associated technologies are crucially important to provide reliable connectivity during disaster management. It has already been studied that the massive increase in data traffic, high data rate requirements and energy demands are extremely challenging for the practical implementation of 6G IoT networks [8]. To cater to these requirements, one of the promising 6G technologies is non-orthogonal multiple access (NOMA), known for its higher spectral efficiency (SE) compared to conventional schemes employing orthogonal multiple access (OMA). Furthermore, its successive interference cancellation (SIC) feature, which is implemented at the receiver to mitigate interference, supports multiple users on the same time-frequency resource [9]. In addition, it also mitigates the latency issues of existing OMA-based 5G technologies. There have also been recent advances in multiple access approaches, such as NOMA-based multicast systems [10], index modulation based reconfigurable intelligent surface-aided multiple access [11] and rate splitting multiple access (RSMA) [12]. Nevertheless, the roadmap of NOMA has already been prepared by technical organizations, such as the 3rd Generation Partnership Project (3GPP) [13]. Motivated by the applications of NOMA and UAVs in 6G, we study a NOMA-based UAV-assisted IoT network in this paper as a disaster management solution. However, NOMA systems are associated with a trade-off between SE and energy efficiency (EE). The increased energy requirement is already a challenge in 6G networks due to the rapid rise in data traffic, network capacity and high frequency band operation [14]. Furthermore, the additional computational load to support the ubiquitous intelligence envisioned in 6G requires highly energy-consuming machine learning (ML) algorithms. Therefore, green computing and communications are now emerging as potential solutions to reduce energy consumption [15]. Energy-efficient computing and communication models for green networks are not only helpful to reduce electricity costs but also lead towards a sustainable system suited to future environmental goals by balancing the demand and supply of energy. This paper aims to propose a green and sustainable NOMA-based UAV-assisted IoT network, particularly for disaster management. Specifically, we study the EE maximization problem of a NOMA-based IoT network including UAVs to provide ubiquitous coverage during a disaster situation. The main contributions of the paper are as follows: * We propose a new EE optimization framework to improve the energy utilization of IoT devices available in a disaster region. The proposed framework uses a UAV setup to relay the uplink information of IoT devices using a green artificial intelligence (AI) approach and a NOMA-based scheme. * We utilize k-medoids, an unsupervised ML technique, to cluster the IoT devices, whereas the Silhouette analysis is used to select the best number of IoT devices per subcarrier by exploiting the NOMA characteristics. Furthermore, the power allocation is performed by solving the resulting non-convex problem. * We provide a thorough step-wise complexity analysis to validate the enhanced effectiveness of our proposed GREEN-AI algorithm. The computational complexity of the proposed algorithm turns out to be lower than that of the exhaustive search-based optimal method. * We compare our GREEN-AI based proposed solution with a modified greedy algorithm.
Simulation results show that the GREEN-AI approach improves the total EE of the system by 19% for a fixed 50k target number of bits. The rest of the paper is organized as follows: Section II discusses related works, Section III presents the system model and problem definition, Section IV explains the proposed solution, Section V presents simulation findings, and Section VI concludes the paper. Moreover, the notations used in the paper are listed in Table I. ## II Related Works ### _UAVs in Disaster Management_ UAVs can be used for multiple applications in disaster management, including monitoring, surveillance, forecasting, early warning predictions and notifications, logistics and evacuation support, search and rescue missions, providing medical aid, and supporting infrastructure and communication systems. In [16], UAVs are used to communicate data related to early warning disaster notifications and act as supporters of resource-constrained wireless sensor networks, which play a lead role in sensing geophysical, meteorological or climatic conditions because of their high precision and low operational time. Additionally, UAV networks are also proposed to accomplish a critical task in post-disaster operations by establishing short-distance cellular connectivity with the affected users and then transferring data to the backbone cellular infrastructure via a relay network. The challenges of UAV-assisted disaster management are discussed in [17], where energy availability is highlighted as one of the open issues of sustainable UAV systems. ### _Energy Efficiency in UAVs_ Generally, EE is one of the crucial challenges of IoT networks and has been widely discussed in the existing literature. The EE perspective of UAVs in emergency situations is addressed in [18] through simultaneous wireless information and power transfer. A dense deployment of IoT devices is considered, and UAVs are equipped with multi-beam antenna arrays for the simultaneous transfer of energy to IoT devices. As a future direction, it is suggested to adopt the NOMA scheme and model energy optimization as a same-rate maximization problem. In [19] and [20], EE is treated as an optimization problem in UAV-enabled NOMA networks, but the disaster situation, where IoT devices are often unable to communicate with the BS due to infrastructure damage, is not considered. NOMA is compared with RSMA in [12]. The EE of RSMA is found to be better than that of NOMA in the downlink mmWave transmission of a cellular-connected UAV network. However, the standardization of RSMA has not been considered by 3GPP [21]. Apart from NOMA-based communication systems, various other solutions related to UAV-enabled applications, such as task scheduling and offloading [22] and trajectory optimization [23], propose to model EE as an optimization problem resolved by simple heuristic algorithms based on a greedy approach [24].
\begin{table} \begin{tabular}{|c|c|} \hline **Notation** & **Definition** \\ \hline \(EE\) & Energy efficiency \\ \(k\) & IoT device index \\ \(n\) & Subcarrier index \\ \(K\) & Total number of IoT devices \\ \(N\) & Total number of subcarriers \\ \(g_{k,n}\) & Channel gain \\ \(\beta_{o}\) & Channel gain power at one meter reference \\ \(h_{k,n}\) & Fading coefficient \\ \(d_{k}\) & Distance between UAV and \(k^{th}\) device \\ \(I_{k,n}\) & Interference of \(k^{th}\) device \\ \(p_{k,n}\) & Transmit power of \(k^{th}\) device \\ \(\zeta\) & Reference threshold \\ \(bt_{k,n}\) & Achievable data rate \\ \(B^{Sum}\) & Total data rate of \(K\) devices over \(N\) subcarriers \\ \(Bt_{n}\) & Total data rate of \(K\) devices over \(n^{th}\) subcarrier \\ \(\alpha_{k,n}\) & Subcarrier allocation index \\ \(w\) & Subcarrier bandwidth \\ \(\sigma^{2}\) & Noise variance \\ \(P_{f}\) & Circuit power \\ \(P_{t}\) & Transmit power of \(K\) devices \\ \(P_{k}^{\max}\) & Maximum allowable power of \(k^{th}\) device \\ \(U_{k}\) & Maximum permitted devices on a subcarrier \\ \(X\), \(Y\), \(Z\) & Position coordinates \\ \(\lambda_{k}\), \(\mu_{k}\) and \(\delta_{k,n}\) & Lagrange multipliers \\ \(O\) & Optimization problem \\ \(C\) & Number of clusters \\ \(\mathcal{O}\) & At most (in complexity analysis) \\ \hline \end{tabular} \end{table} Table I: List of notations and their definitions. ### _Artificial Intelligence (AI) based Energy Efficiency Approaches_ AI-based solutions involving Reinforcement Learning (RL) [23] and Deep RL (DRL) [25] combined with greedy algorithms are usually proposed in the literature to achieve EE in UAV-assisted applications. In [25], only the time complexity of a deep neural network during the testing phase is analyzed, while the computational complexity during learning is not discussed. An energy-efficient and sustainable NOMA-based UAV network is formulated in [26] by utilizing DRL. However, the complexity analysis of the solution is not evaluated in [26]. Another AI-based scheme, integrating fuzzy inference and RL, is presented as a solution to enhance EE in NOMA-based UAV networks in [27], whose complexity is proportional to the learning cycles. The EE approaches in UAV networks are summarized in Table II. Keeping in view the limitations of related works, the EE solution for NOMA-based UAV networks, particularly for disaster management, is worth investigating. ## III System Model Consider an uplink communication scenario for \(K\) IoT devices placed in a disaster region, as shown in Fig. 1. We assume a single cellular-type disaster region, where the \(K\) deployed IoT devices are unable to communicate directly with the BS. A UAV at a fixed height \(Z_{UAV}\) is considered to assist the IoT devices in uploading their data to the BS. In the proposed model, we take into account a half-duplex connection in which the NOMA protocol sends data from IoT devices to the BS in two time slots. More specifically, the UAV collects the data from the \(K\) IoT devices using the NOMA protocol and stores them in the buffer in the first time slot. In the second time slot, the UAV re-transmits the superimposed signal to the BS. The assumptions made for the system are as follows: * The system has perfect channel state information (CSI). * A single omnidirectional antenna for uplink communication is assigned to each IoT device. * The channels among devices are independent and suffer from Rayleigh fading. * The position of the UAV is static. * We consider the transmission in only the first time slot.
The transmission between the IoT devices and the UAV takes place in the first time slot. Let \(g_{k,n}\) denote the channel gain of device \(k\) over subcarrier \(n\). Incorporating the path loss and fading effect, it can be mathematically defined as \[g_{k,n}=\frac{\beta_{o}|h_{k,n}|}{d_{k}^{2}}, \tag{1}\] where \(\beta_{o}\) is the channel gain power at a distance of one meter as reference, \(h_{k,n}\) is the coefficient of fading and \(d_{k}\) is the distance between the UAV and the \(k^{th}\) device. \(d_{k}^{2}\) is defined as \[d_{k}^{2}=\Big{[}(X_{k}-X_{UAV})^{2}+(Y_{k}-Y_{UAV})^{2}+Z_{UAV}^{2}\Big{]}, \tag{2}\] where \(X\) and \(Y\) denote the x-coordinate and y-coordinate of the locations of the UAV and IoT devices, respectively. Since we are using the NOMA protocol, the UAV receives a superimposed signal and decodes it using traditional SIC. The multiplexing of multiple devices on a single subcarrier due to NOMA results in the following interference on the \(k^{\text{th}}\) device: \[I_{k,n}=\sum_{j=1,\,j\neq k}^{U_{k}}p_{j,n}\cdot g_{j,n}, \tag{3}\] where \(p_{k,n}\) is the transmit power of device \(k\) over subcarrier \(n\) and \(U_{k}\) represents the limit of allowable devices on a single subcarrier. SIC is conducted at the receiver side to perform the decoding of the multiplexed devices. In the NOMA uplink situation, the devices with the best channel gain are demultiplexed first, followed by the decoding of the devices with the poorest channel gain. The multiplexing approach used in NOMA heavily relies on SIC's ability to effectively decode a multiplexed signal. This is possible if the following condition is satisfied: \[(p_{k,n}g_{k,n})/I_{k,n}\geq\zeta, \tag{4}\] where \(\zeta\geq 1\) is the reference threshold [28]. The achievable data rate is given as \[bt_{k,n}(\alpha_{k,n},p_{k,n})=w\alpha_{k,n}\log_{2}\Bigg{(}1+\frac{p_{k,n}g_{k,n}}{\sigma^{2}+I_{k,n}}\Bigg{)}, \tag{5}\] where \(\alpha_{k,n}\) is the subcarrier allocation index, \(w\) is the subcarrier bandwidth and \(\sigma^{2}\) is the noise variance. Thus, the total EE of the \(K\) devices is defined as \[EE=\frac{B^{\text{Sum}}}{P_{f}+P_{t}}, \tag{6}\] where \(B^{\text{Sum}}=\sum_{k=1}^{K}\sum_{n=1}^{N}bt_{k,n}\) is the total data rate of the \(K\) devices over the \(N\) subcarriers, \(Bt_{n}=\sum_{k=1}^{K}bt_{k,n}\) is the total data rate of the \(K\) devices over the \(n^{\text{th}}\) subcarrier, \(P_{f}\) is the circuit power and \(P_{t}\) is the flexible transmit power of the \(K\) devices [29, 30]. \begin{table} \begin{tabular}{|c|c|} \hline **Solution** & **Limitation** \\ \hline Simultaneous wireless information and power transfer [18] & Only considers NOMA as a future approach \\ \hline EE optimization problem [19, 20, 22, 23] & Does not consider damages due to disaster \\ \hline RSMA-based communications [12] & Not standardized by 3GPP \\ \hline DRL based algorithm [25, 26] & Computational complexity not thoroughly analyzed \\ \hline Fuzzy inference and RL based algorithm [27] & Complexity proportional to learning cycles \\ \hline \end{tabular} \end{table} Table II: Existing EE approaches for UAV networks and their limitations. ### _Problem Formulation_ We define the problem of maximizing the EE as follows: \[O1:\underset{\mathbf{\alpha},\mathbf{p}}{\text{max}}\;EE \tag{7}\] s.t.
\[\begin{split}&\mathcal{C}1:\sum_{n=1}^{N}bt_{k,n}(\mathbf{\alpha},\mathbf{p})=Bt_{n}\quad\forall k,\\ &\mathcal{C}2:\sum_{n=1}^{N}\alpha_{k,n}p_{k,n}\leq P_{k}^{\max}\quad\forall k,\\ &\mathcal{C}3:\sum_{k=1}^{K}\alpha_{k,n}\leq U_{k}\quad\forall n,\\ &\mathcal{C}4:(p_{k,n}g_{k,n})/I_{k,n}\geq\zeta\quad\forall k,\forall n,\end{split} \tag{9}\] where the total EE of the \(K\) devices is defined by \(EE(\mathbf{\alpha},\mathbf{p})\) in the objective function \(O1\). Firstly, the constraint \(\mathcal{C}1\) safeguards the quality of service (QoS) of each device. Secondly, constraint \(\mathcal{C}2\) is associated with the maximum allowable power \(P_{k}^{\max}\) of each device. Thirdly, the constraint \(\mathcal{C}3\) represents the maximum number of devices permitted on a particular subcarrier, denoted by \(U_{k}\). Finally, the constraint \(\mathcal{C}4\) is relevant to the effective implementation of SIC at the receiver end. The binary nature of \(\alpha_{k,n}\) and the non-affine nature of constraint \(\mathcal{C}1\) make the problem non-convex [31]. To deal with the binary nature of \(\alpha_{k,n}\), a standard relaxation procedure can be used. It can be attained by employing a sequential subcarrier and power allocation strategy, which involves subcarrier allocation for a certain fixed power and vice-versa [32]. Furthermore, the CSI is expected to be available via utilizing uplink pilot signals. ``` 1:INPUT (\(K\), \(N\), \(g_{k,n}\), \(\mathbf{\alpha}\), \(\zeta\), \(P_{k}^{\max}\), \(Bt_{n}\), \(\sigma^{2}\), \(w\)) 2:Step A: Allocating devices to a group 3:for\(C\) = 2 : \(K-1\)do 4: k-medoids based clustering of \(\mathbf{g}_{n}=[g_{1,n},...,g_{K,n}]\); 5: Set \(U_{n}=C\) according to Silhouette analysis; 6:endfor 7:Step B: Allocating subcarrier 8: Use \(G_{k,n}=\frac{g_{k,n}}{\hat{g}_{k}}\ \forall k,n\), where \(\hat{g}_{k}\) represents the mean channel gain. 9: Use \(\text{max}(G_{k,n})\) within each cluster for each device. 10: Assign \(S=\frac{K}{U}\) subcarriers to each device. 11:Step C: Optimizing power levels 12: Set \(p_{k,n}\)=\(P_{k}^{\max}/S\) to evaluate the initial interference; 13:repeat 14:for\(k\) = 1 : \(K\)do 15: Estimate \(r_{k,n}\) employing water-filling and adhering to the SIC constraint; 16: Re-evaluate \(I_{k,n}\) utilizing (3); 17: Estimate \(EE_{k}\) utilizing (6); 18:endfor 19:until convergence 20:Compute \(EE\) utilizing (\(O2\)); 21:OUTPUT\(EE\); ``` **Algorithm 1** Learning based NOMA and UAV-assisted Energy Efficient IoT Framework for Disaster Management (**GREEN-AI**). ## IV The Proposed Energy Efficient Solution This section includes a description of the subcarrier allocation, the power allocation algorithm, and the complexity analysis of the proposed solution. Algorithm 1 describes the detailed procedure of subcarrier and power allocation in the proposed solution. ### _Subcarrier Allocation_ NOMA allows \(U_{k}\) devices with varying channel properties to be allocated on a single subcarrier and uses the levels of received power for demultiplexing, which results in achieving a high SE of the communication system. The capability of the receiver to distinguish among power levels governs the subcarrier assignment [28]. Figure 1: UAV-assisted IoT devices in a disaster situation. An efficient subcarrier assignment is implemented when the devices or users on each subcarrier are grouped effectively. To reach greater effectiveness and a higher probability of convergence than heuristic techniques, an ML technique is used to perform subcarrier assignment by exploiting a clustering algorithm for grouping devices or users [33].
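Before turning to the clustering step, the toy Python sketch below puts the link model of Eqs. (1)-(6) and the SIC feasibility condition of Eq. (4) together for one subcarrier. The reference gain, the decoding-order convention (strongest received power first, with the residual sum acting as interference) and all names are illustrative assumptions, not the paper's MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(0)

K, W = 4, 10e6                       # devices on one subcarrier; bandwidth (Hz)
sigma2 = 10**(-174/10) * 1e-3 * W    # noise power for -174 dBm/Hz over W (Watt)
beta0, z_uav = 1e-3, 100.0           # assumed 1 m reference gain; UAV height (m)

xy = rng.uniform(-500, 500, size=(K, 2))          # device positions (m)
d2 = (xy ** 2).sum(axis=1) + z_uav ** 2           # squared distances, Eq. (2)
g = beta0 * rng.rayleigh(np.sqrt(0.5), K) / d2    # channel gains, Eq. (1)
p = np.full(K, 0.2)                               # transmit powers (W)

rx = p * g                       # received power levels at the UAV
I = rx.sum() - rx                # interference seen by each device, cf. Eq. (3)

# SIC check of Eq. (4): each signal, taken in decreasing received-power
# order, must exceed zeta times the residual (not-yet-decoded) interference.
order = np.argsort(rx)[::-1]
residual = np.cumsum(rx[order][::-1])[::-1]
sic_ok = np.all(rx[order][:-1] >= 1.0 * residual[1:])

rates = W * np.log2(1 + rx / (sigma2 + I))        # Eq. (5)
EE = rates.sum() / (0.1 + p.sum())                # Eq. (6) with P_f = 0.1 W
print(sic_ok, rates.sum(), EE)
```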
### _Subcarrier Allocation_ NOMA allows \(U_{k}\) devices to be allocated on a single subcarrier with varying channel properties and uses the levels of received power for demultiplexing, which results in a high SE of the communication system. The capability of the receiver to distinguish among power levels governs the subcarrier assignment [28]. An efficient subcarrier assignment is implemented when the devices or users on each subcarrier are grouped effectively. To reach greater effectiveness and a higher probability of convergence than heuristic techniques, an ML technique is used to perform subcarrier assignment by exploiting a clustering algorithm for grouping devices or users [33]. Figure 1: UAV-assisted IoT devices in a disaster situation. Two simple and popular clustering algorithms are k-means and k-medoids clustering. The k-means algorithm performs sorting according to the nearest mean value and is therefore highly affected by outliers. The k-medoids algorithm performs sorting around medoids, i.e., designated centres of the clusters [34]. In [34], k-medoids is shown to be an algorithm with a lower number of computations. To achieve lower complexity than the solution proposed in [33], we utilize k-medoids based clustering instead of k-means. Furthermore, a Silhouette analysis is proposed instead of the traditional elbow method for predicting the optimum number of clusters. The reason for not using the elbow technique is that it offers a range based on elbow criteria, which creates ambiguity in determining the optimal value of the cluster number \(C\). On the other hand, the Silhouette analysis offers enhanced robustness and eliminates this ambiguity. ### _Power Allocation_ After the ML based subcarrier allocation, an optimized power allocation for each device needs to be achieved to attain the maximum \(EE\) of the \(K\) devices available in a disaster-affected region. Since the constraint \(\mathcal{C}1\) in \(O1\) is non-convex, (5) is transformed to overcome its non-linearity as \[p_{k,n}=\frac{(\sigma^{2}+I_{k,n})(2^{r_{k,n}}-1)}{g_{k,n}}, \tag{10}\] where \[r_{k,n}=\frac{bt_{k,n}}{w\alpha_{k,n}}. \tag{11}\] The problem \(O1\) can now be converted into the equivalent power-minimization problem \[O2:\underset{\mathbf{r}}{\text{min}}\,\sum_{k=1}^{K}\sum_{n=1}^{N}\frac{\alpha_{k,n}(\sigma^{2}+I_{k,n})(2^{r_{k,n}}-1)}{g_{k,n}}, \tag{12}\] \[s.t.\] \[\mathcal{C}5:\,w\sum_{n=1}^{N}\alpha_{k,n}r_{k,n}=Bt_{n},\] \[\mathcal{C}6:\,\sum_{n=1}^{N}\frac{\alpha_{k,n}(\sigma^{2}+I_{k,n})(2^{r_{k,n}}-1)}{g_{k,n}}\leq P_{k}^{\max},\] \[\mathcal{C}7:\,\frac{\alpha_{k,n}(2^{r_{k,n}}-1)(\sigma^{2}+I_{k,n})}{I_{k,n}}\geq\zeta. \tag{13}\] Contrary to \(O1\), the problem \(O2\) is convex and its constraints are affine. Therefore, its Lagrangian can be easily obtained, and the corresponding stationarity condition yields the optimal rate \[r_{k,n}=\max\Bigg{(}0,\log_{2}\chi+\log_{2}\bigg{(}\frac{w\,g_{k,n}}{\ln(2)(\sigma^{2}+I_{k,n})}\bigg{)}\Bigg{)}, \tag{14}\] where \(\chi\) is expressed as \[\chi=\frac{\lambda_{k}^{\star}}{1-(\mu_{k}^{\star}+\delta_{k,n}^{\star}g_{k,n})/I_{k,n}}, \tag{15}\] where \(\lambda_{k}\), \(\mu_{k}\) and \(\delta_{k,n}\) are the Lagrange multipliers associated with \(\mathcal{C}5\), \(\mathcal{C}6\) and \(\mathcal{C}7\). Different iterative solutions can be used to solve (14), as it is a water-filling based equation [33] (a short numerical sketch is given at the end of this section). ### _Complexity Analysis_ The proposed ML-based NOMA and UAV-assisted method consists of three steps. Step A is responsible for grouping devices while relying on the k-medoids and Silhouette methods. The worst-case computational complexity of the k-medoids method is \(\mathcal{O}(K^{2}CN)\). On the other hand, the computational complexity of the Silhouette method used within k-medoids is \(\mathcal{O}(K^{2})\). Step B is responsible for allocating subcarriers to the \(K\) users. The worst-case computational complexity of Step B is \(\mathcal{O}(KN)\). Step C is responsible for finding the optimum rate value to meet the required QoS for the \(K\) devices. The worst-case computational complexity of Step C is \(\mathcal{O}((KN)^{2})\). Overall, the worst-case computational complexity of Algorithm 1, i.e., our proposed solution, is \(\mathcal{O}((KN)^{2})\). If \(O1\) is instead solved directly by an exhaustive search technique, a worst-case computational complexity of \(\mathcal{O}(2^{KN/2})\) is required, which is much higher than that of the proposed algorithm. 
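To make the power-allocation step concrete, the following minimal sketch applies the water-filling update of Eq. (14) for a single device, finding \(\lambda_{k}\) by bisection so that the rate target is met, and assuming the multipliers \(\mu_{k}\) and \(\delta_{k,n}\) of the power and SIC constraints are inactive (zero). All numerical inputs are illustrative placeholders, not values from the simulations.

```python
import numpy as np

w = 10e6                                # subcarrier bandwidth (Hz)
g = np.array([2e-10, 5e-11, 8e-11])     # gains on assigned subcarriers (toy)
I = np.array([1e-13, 3e-13, 2e-13])     # current interference estimates (toy)
sigma2 = 4e-14                          # noise power (toy)
Bt = 4e5                                # illustrative rate target (bits/s)

def rates(lam):
    # Eq. (14) with chi = lambda_k (mu_k = delta_kn = 0 assumed)
    return np.maximum(0.0, np.log2(lam) +
                      np.log2(w * g / (np.log(2) * (sigma2 + I))))

lo, hi = 1e-12, 1e12
for _ in range(200):                    # bisection on the water level
    lam = np.sqrt(lo * hi)
    if (w * rates(lam)).sum() > Bt:
        hi = lam
    else:
        lo = lam

r = rates(lam)
p = (sigma2 + I) * (2**r - 1) / g       # Eq. (10): power implied by the rates
print("rates (bits/s/Hz):", np.round(r, 3))
print("powers (W):", p)
```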
## V Results and Discussions In this section, we discuss the MATLAB simulations performed to evaluate the performance of our proposed GREEN-AI as an energy-efficient solution for the NOMA based UAV-assisted framework and compare it with the greedy algorithm. The IoT devices are placed uniformly in a disaster region. The simulation parameters used are listed in Table III. To model propagation effects, we assume Rayleigh fading. The path loss models used are defined in [35]. \begin{table} \begin{tabular}{|c|c|} \hline **Parameter** & **Value** \\ \hline Coverage radius & 500 m \\ \(Z_{UAV}\) & 100 m \\ \(K\) & 70 \\ \(w\) & 10 MHz \\ \(\sigma^{2}\) & -174 dBm/Hz \\ \(P_{k}^{max}\) & 0.2 W \\ \(\zeta\) & 1 \\ \hline \end{tabular} \end{table} Table III: Simulation parameters. In Fig. 2, we show the effectiveness of the proposed GREEN-AI solution for a fixed uplink power level at each \(k^{th}\) device and a fixed target number of bits, \(Bt_{n}\). The total \(EE\) of the system is observed with respect to \(P_{f}\). In order to perform a fair comparison, we have used the greedy algorithm as a benchmark. The modified version of the greedy algorithm adopted for comparison follows the same steps as defined in Algorithm 1, omitting line 8, which is used to maintain fairness among all devices. In general, increasing the value of \(P_{f}\) results in a reduction in the total \(EE\) of the \(K\) IoT devices present in the disaster region. In comparison to the modified version of the greedy algorithm, our proposed scheme improves \(EE\) by 36.5% when \(P_{f}=0.1003\), and by 19% when \(P_{f}=1.4002\). It is noted that for a very high circuit power, the performance gap between the two algorithms starts to reduce because higher values of \(P_{f}\) ultimately result in a lower \(EE\) of the IoT devices, irrespective of the approach. In Fig. 3, the \(EE\) achieved by the proposed solution is illustrated with a varying target number of bits, \(Bt_{n}\), whereas the uplink power level \(P_{k}^{max}\), the circuit power \(P_{f}\), and the number of devices are fixed. In general, by increasing \(Bt_{n}\) for a fixed \(K=70\) devices, the total \(EE\) decreases, which shows the proper functioning of the proposed solution. In comparison to the modified greedy algorithm, our proposed GREEN-AI improves the \(EE\) consistently for all values of \(Bt_{n}\). The performance gap remains approximately the same throughout Fig. 3, which is due to the fixed value of \(P_{f}\). It is noted that \(P_{f}\) has a predominant effect in achieving an increased value of \(EE\). Nevertheless, our proposed algorithm can improve the \(EE\) of a large number of IoT devices in the disaster region. It can allow them to upload more data while still relying on a minimum fixed power level, i.e., \(P_{k}^{max}=0.2\) W. In Fig. 4, we have studied the effectiveness of our proposed algorithm with respect to a varying number of devices available in a disaster region, whereas the target number of bits is fixed at \(Bt_{n}=50\) kbits, and the circuit power is fixed at \(P_{f}=1.4002\). As shown in Fig. 4, the increasing number of devices results in a rising \(EE\) of the system, forming an exponential curve, which also verifies the complexity analysis of our proposed algorithm. 
In comparison to the modified version of the greedy algorithm, our GREEN-AI scheme improves the \(EE\) by a fair margin, and the performance gap widens significantly with an increase in the number of IoT devices available in the disaster region. The proposed GREEN-AI improves the \(EE\) by at least 19% in comparison to the modified version of the greedy algorithm when \(K=70\). ## VI Conclusion In this paper, we have presented an energy-efficient NOMA-enabled relay-based UAV-assisted communication model to support the uplink communication of IoT devices situated in a disaster region with the assistance of an ML algorithm. Specifically, we have utilized k-medoids clustering to perform resource allocation. The Silhouette analysis is used to find the best number of IoT devices per subcarrier. Finally, the power allocation is performed using iterative methods. Overall, the proposed approach maximizes EE with much lower complexity in comparison to an exhaustive search. Also, our proposed GREEN-AI scheme improves the total \(EE\) of the \(K\) IoT devices by at least 19% for a high value of circuit power as compared to the greedy approach. All the simulation results clearly demonstrate the effectiveness of our proposed UAV-assisted NOMA strategy over the modified baseline greedy algorithm, which is unable to achieve a high EE with a large number of IoT devices. In the future, we aim to further present solutions for multiple UAVs and heterogeneous network scenarios. ## Acknowledgement This work is supported by the UKRI Higher Education Innovation Fund projects "Net Zero and Sustainable 6G: Communications, Sensing and Computing," and "Green and Autonomous UAVs for Advanced Airborne Ecosystem".
2306.05819
Binning is Sinning: Redemption for Hubble Diagram using Photometrically Classified Type Ia Supernovae
Bayesian Estimation Applied to Multiple Species (BEAMS) is implemented in the BEAMS with Bias Corrections (BBC) framework to produce a redshift-binned Hubble diagram (HD) for Type Ia supernovae (SNe Ia). BBC corrects for selection effects and non-SNIa contamination, and systematic uncertainties are described by a covariance matrix with dimension matching the number of BBC redshift bins. For spectroscopically confirmed SN Ia samples, a recent "Binning is Sinning" article (BHS21, arxiv:2012.05900) showed that an unbinned HD and covariance matrix reduces the systematic uncertainty by a factor of ~1.5 compared to the binned approach. Here we extend their analysis to obtain an unbinned HD for a photometrically identified sample processed with BBC. To test this new method, we simulate and analyze 50 samples corresponding to the Dark Energy Survey (DES) with a low-redshift anchor; the simulation includes SNe Ia, and contaminants from core-collapse SNe and peculiar SNe Ia. The analysis includes systematic uncertainties for calibration, and measures the dark energy equation of state parameter (w). Compared to a redshift-binned HD, the unbinned HD with nearly 2000 events results in a smaller systematic uncertainty, in qualitative agreement with BHS21, and averaging results among the 50 samples we find no evidence for a w-bias. To reduce computation time for fitting an unbinned HD with large samples, we propose an HD-rebinning method that defines the HD in bins of redshift, color, and stretch; the rebinned HD results in similar uncertainty as the unbinned case, and shows no evidence for a w-bias.
Richard Kessler, Maria Vincenzi, Patrick Armstrong
2023-06-09T11:41:23Z
http://arxiv.org/abs/2306.05819v2
Binning is Sinning: Redemption for Hubble Diagram using Photometrically Classified Type Ia Supernovae ###### Abstract Bayesian Estimation Applied to Multiple Species (BEAMS) is implemented in the BEAMS with Bias Corrections (BBC) framework to produce a redshift-binned Hubble diagram (HD) for Type Ia supernovae (SNe Ia). BBC corrects for selection effects and non-SNIa contamination, and systematic uncertainties are described by a covariance matrix with dimension matching the number of BBC redshift bins. For spectroscopically confirmed SN Ia samples, a recent "Binning is Sinning" article (BHS21, arxiv:2012.05900) showed that an unbinned HD and covariance matrix reduces the systematic uncertainty by a factor of \(\sim\)1.5 compared to the binned approach. Here we extend their analysis to obtain an unbinned HD for a photometrically identified sample processed with BBC. To test this new method, we simulate and analyze 50 samples corresponding to the Dark Energy Survey (DES) with a low-redshift anchor; the simulation includes SNe Ia, and contaminants from core-collapse SNe and peculiar SNe Ia. The analysis includes systematic uncertainties for calibration, and measures the dark energy equation of state parameter (\(w\)). Compared to a redshift-binned HD, the unbinned HD with nearly 2000 events results in a smaller systematic uncertainty, in qualitative agreement with BHS21, and averaging results among the 50 samples we find no evidence for a \(w\)-bias. To reduce computation time for fitting an unbinned HD with large samples, we propose an HD-rebinning method that defines the HD in bins of redshift, color, and stretch; the rebinned HD results in similar uncertainty as the unbinned case, and shows no evidence for a \(w\)-bias. Subject headings: cosmology: supernovae ## 1. Introduction Following the discovery of cosmic acceleration using a few dozen Type Ia supernovae (SNe Ia) (Riess et al., 1998; Perlmutter et al., 1999), increasingly large SN Ia samples have been used to improve measurements of the dark energy equation of state parameter, \(w\). While the most precise \(w\) measurements are based on spectroscopically confirmed samples, photometric imaging surveys have been discovering far more supernovae than spectroscopic resources can observe. Existing SN surveys include the Sloan Digital Sky Survey-II (SDSS), Supernova Legacy Survey (SNLS), Panoramic Survey Telescope and Rapid Response System-1 (PS1),2 and Dark Energy Survey (DES);3 future wide-area surveys that will overwhelm spectroscopic resources include the Legacy Survey of Space and Time (LSST)4 and the Nancy Grace Roman Space Telescope.5 Footnote 2: [https://panstarrs.stsci.edu](https://panstarrs.stsci.edu) Footnote 3: [https://www.darkenergysurvey.org](https://www.darkenergysurvey.org) Footnote 4: [https://www.lsst.org](https://www.lsst.org) Footnote 5: [https://roman.gsfc.nasa.gov](https://roman.gsfc.nasa.gov) To make full use of these large SN Ia samples, photometric identification using broadband filters has been developed over the past decade. Photometric methods include template matching (Sako et al., 2011) and machine learning (Lochner et al., 2016; Moller & de Boissiere, 2020; Qu et al., 2021), and they determine the probability (\(P_{\rm Ia}\)) for each event to be an SN Ia. A framework to incorporate the resulting \(P_{\rm Ia}\) was developed to measure cosmological parameters. 
This framework, called "Bayesian Estimation Applied to Multiple Species" (BEAMS: Kunz et al. (2007); Hlozek et al. (2012)), was first used in a SN-cosmology analysis for the PS1 photometric sample (Jones et al., 2018). As part of the DES SN-cosmology analysis, BEAMS was extended to "BEAMS with Bias Corrections" (BBC: Kessler & Scolnic (2017); hereafter KS17), a fitting procedure designed to produce a Hubble diagram (HD) that is corrected for selection biases and for contamination from core-collapse SNe (SNCC) and peculiar SNe Ia. BBC has been used in the SN Ia cosmology analysis for spectroscopic samples from Pantheon (Scolnic et al., 2018), DES (DES Collaboration, 2019), and Pantheon+ (Brout et al., 2022). BBC has also been used on a photometric sample from PS1 (Jones et al., 2019), and to examine contamination biases for the photometric DES sample (Vincenzi et al., 2023). The BBC fit is performed in redshift bins to determine nuisance parameters and SN Ia distances that are independent of cosmological parameters, which enables more flexible use of cosmology-fitting programs. BBC therefore produces both a redshift-binned and an unbinned HD. Previous analyses using BBC took advantage of the binned HD to reduce computation time in cosmology-fitting programs. However, Brout et al. (2021) (hereafter BHS21) showed that while the statistical uncertainty is the same using a binned or unbinned HD, the systematic uncertainty is a factor of \(\sim\)1.5 smaller using an unbinned HD and unbinned covariance matrix. The uncertainty reduction is from an effect known as self-calibration (Faccioli et al., 2011). BHS21 demonstrated the uncertainty reduction using BBC with a spectroscopically confirmed sample. Here we expand the use of unbinned HDs to photometric samples where BEAMS is used. In anticipation of very large samples in future analyses, we also explore the possibility of reducing computation time with a smaller HD and covariance matrix that still benefits from self-calibration: a rebinned HD in the space of redshift, color, and stretch. This choice of variables is motivated by the color-dependent systematic explored in BHS21. While the unbinned approach is optimal, the rebinned approach may be useful for the many intermediate simulation tests prior to unblinding. We validate the unbinning and rebinning methods using simulations of DES that include SNe Ia, SNCC, and peculiar SNe Ia. The simulation and analysis presented here are similar to those in Vincenzi et al. (2023), and all analysis software used in this analysis is publicly available. The software for simulations, light-curve fitting, BBC, and cosmology fitting is from the **S**uper**N**ova **AN**alysis package (SNANA; Kessler et al. (2009)).6 The photometric classification software is from SuperNNova (SNN; Moller & de Boissiere (2020)).7 For workflow orchestration we used Pippin (Hinton & Brout, 2020).8 Footnote 6: [https://github.com/RickKessler/SNANA](https://github.com/RickKessler/SNANA) Footnote 7: [https://github.com/supernnova/SuperNNova](https://github.com/supernnova/SuperNNova) Footnote 8: [https://github.com/dessn/Pippin](https://github.com/dessn/Pippin) The outline of this Letter is as follows. The SALT2 and BBC formalism is reviewed in §2. The unbinning and rebinning procedures are presented in §3. The validation analysis is described in §4, and the validation results are given in §5. 
## 2. Review of SALT2 and BBC Using the SALT2 framework (Guy et al., 2010), BBC is a fitting procedure that delivers an HD corrected for selection effects and for contamination. BBC incorporates three main features: (1) BEAMS (Kunz et al., 2007), (2) fitting in redshift bins to avoid dependence on cosmological parameters (Marriner et al., 2011), and (3) detailed simulation to correct distance biases (Kessler et al., 2019). For each event, a SALT2 light-curve fit determines the time of peak brightness (\(t_{0}\)), stretch (\(x_{1}\)), color parameter (\(c\)), and amplitude (\(x_{0}\)) with \(m_{B}\equiv-2.5\log_{10}(x_{0})\). Within BBC, the measured distance modulus is defined by the Tripp equation (Tripp, 1998): \[\mu=m_{B}+\alpha x_{1}-\beta c+\mathcal{M}-\Delta\mu_{\rm bias}\, \tag{1}\] where \(\alpha\) and \(\beta\) are the stretch- and color-luminosity parameters, \(\mathcal{M}\) is a global offset, and \(\Delta\mu_{\rm bias}\) is a distance-bias correction for each event, \(\mu-\mu_{\rm true}\), determined from a large simulation. The \(\Delta\mu_{\rm bias}\) value for each event is evaluated by interpolating in a 5-dimensional space of \(\{z,x_{1},c,\alpha,\beta\}\). In Eq. 1, there is an implicit SN index \(i\) for \(\mu,m_{B},x_{1},c\), and \(\Delta\mu_{\rm bias}\); this index is suppressed for readability. To simplify this study, host-SN correlations have been ignored in Eq. 1, and also in the simulations used for validation. Following Section 5 of KS17 and making a few simplifications for this review, the BBC fit maximizes a likelihood of the form \(\mathcal{L}=\prod_{i=1}^{N}\mathcal{L}_{i}\), where \(\mathcal{L}_{i}\) for event \(i\) is \[\mathcal{L}_{i}=P_{{\rm Ia},i}D_{{\rm Ia},i}+(1-P_{{\rm Ia},i})D_{{\rm CC},i}\, \tag{2}\] where \(P_{{\rm Ia},i}\) is the photometric classification probability for event \(i\) to be an SN Ia. The SN Ia component of \(\mathcal{L}_{i}\) is \(D_{\rm Ia}\sim\exp[-\chi^{2}_{\rm HR}/2]\), where \(\chi^{2}_{\rm HR}={\rm HR}^{2}/\sigma_{\mu}^{2}\), HR is a Hubble residual described below, and \(\sigma_{\mu}\) is the uncertainty on \(\mu\) in Eq. 1 as shown in Eq. 3 of KS17. The non-SNIa (contamination) component, \(D_{\rm CC}\), is evaluated from a simulation. To remove the dependence on cosmological parameters in the BBC fit, we follow Marriner et al. (2011) and define the Hubble residual for the \(i\)'th SN (\({\rm HR}_{i}\)) as \[{\rm HR}_{i}\equiv\mu_{i}-[\mu_{\rm ref}(z_{i},\vec{\mathcal{C}}_{\rm ref})+M_{\zeta}] \tag{3}\] where \(\mu_{\rm ref}=\mu_{\rm ref}(z_{i},\vec{\mathcal{C}}_{\rm ref})\) is a reference distance computed from redshift \(z_{i}\) and an arbitrary choice of reference cosmology parameters denoted by \(\vec{\mathcal{C}}_{\rm ref}\). Our choice for \(\vec{\mathcal{C}}_{\rm ref}\) is flatness and \[\vec{\mathcal{C}}_{\rm ref}\equiv\{\Omega_{\rm M}=0.3,{\rm w}=-1\}. \tag{4}\] \(M_{\zeta}\) are fitted distance offsets in redshift bins denoted by \(\zeta\). An important concept is that using a cosmological model for \(\mu_{\rm ref}\) in Eq. 3 is a convenience, not a necessity. For example, \(\mu_{\rm ref}\) could be replaced with a polynomial function of redshift or any function that approximates the distance-redshift relation within each redshift bin. The BBC fit determines \(\alpha\), \(\beta\), \(\gamma\), \(M_{\zeta}\), and an intrinsic scatter term (\(\sigma_{\rm int}\)) added to the distance uncertainties (§3) that results in a reduced \(\chi^{2}\) of one. 
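To make the BEAMS structure explicit, the following minimal Python sketch evaluates the per-event likelihood of Eq. 2 together with the BEAMS probability defined later in Eq. 9. The classification probabilities, Hubble residuals, and \(D_{\rm CC}\) values below are illustrative placeholders rather than outputs of SNN or a BBC simulation.

```python
import numpy as np

def beams_terms(P_Ia, HR, sigma_mu, D_CC):
    """Per-event BEAMS likelihood (Eq. 2) and BEAMS Ia probability (Eq. 9)."""
    D_Ia = np.exp(-0.5 * (HR / sigma_mu)**2)   # SN Ia component ~ exp(-chi2/2)
    L = P_Ia * D_Ia + (1.0 - P_Ia) * D_CC      # Eq. 2
    P_beams = P_Ia * D_Ia / L                  # Eq. 9
    return L, P_beams

# Three toy events: likely Ia, ambiguous, likely contaminant (all made up)
P_Ia = np.array([0.99, 0.70, 0.10])
HR = np.array([0.05, 0.40, 1.20])              # Hubble residuals (mag)
sigma_mu = np.array([0.12, 0.15, 0.15])        # distance uncertainties (mag)
D_CC = np.array([0.02, 0.05, 0.30])            # contamination term (from sim)

L, P_beams = beams_terms(P_Ia, HR, sigma_mu, D_CC)
print("L_i       =", np.round(L, 4))
print("P_B(Ia),i =", np.round(P_beams, 4))
```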
The final binned HD is obtained as follows. First, each binned redshift (\(z_{\zeta}\)) is computed from \[z_{\zeta}=\mu^{-1}[\overline{\mu_{\rm ref}}_{\zeta}] \tag{5}\] where \(\mu^{-1}\) is an inverse-distance function that numerically determines the redshift from the weighted average of \(\mu_{\rm ref}\) in redshift bin \(\zeta\). The weight for each event is \(\sigma_{\mu}^{-2}\). Next, the BBC-fitted distance in each redshift bin (\(\mu_{\zeta}\)) is \[\mu_{\zeta}=\overline{\mu_{\rm ref}}_{\zeta}+M_{\zeta}, \tag{6}\] and the collection of \(\{z_{\zeta},\mu_{\zeta}\}\) is the binned HD corrected for selection effects and contamination. The uncertainty on \(\mu_{\zeta}\) is the BBC-fitted uncertainty for \(M_{\zeta}\). If a different choice of \(\vec{\mathcal{C}}\) is used for \(\mu_{\rm ref}\), the fitted \(M_{\zeta}\) will change but the \(\mu_{\zeta}\) remain the same. For spectroscopically confirmed samples with all \(P_{{\rm Ia},i}=1\), the unbinned HD is the collection of \(\{z_{i},\mu_{i}\}\), where the \(\mu_{i}\) are computed from Eq. 1 using the BBC-fitted parameters, and each distance uncertainty (\(\sigma_{\mu,i}\)) is computed from Eq. 3 in KS17. This procedure is an approximation that we rigorously test (§5.1) with high-statistics simulations. ## 3. Unbinning and Rebinning After BBC Fit For an unbinned HD, we use the BBC-fitted parameters and compute the distances defined in Eq. 1. The unbinned distance uncertainties (\(\sigma_{\mu,\mathrm{unbin},i}\)), however, are not the naively computed distance uncertainties (\(\sigma_{\mu,i}\)) for a spectroscopically confirmed sample. To determine \(\sigma_{\mu,\mathrm{unbin},i}\), we require that the weighted average uncertainty in each redshift bin is equal to \(\sigma_{M,\zeta}\), the BBC-fitted uncertainty on \(M_{\zeta}\): \[1/\sigma_{M,\zeta}^{2}= \sum_{i\in\zeta}1/\sigma_{\mu,\mathrm{unbin},i}^{2} \tag{7}\] \[= \sum_{i\in\zeta}\mathcal{P}_{\mathrm{B(Ia)},i}/[S_{\zeta}\sigma_{\mu,i}]^{2} \tag{8}\] where \(i\) is the SN index within redshift bin \(\zeta\), \[\mathcal{P}_{\mathrm{B(Ia)},i}=\frac{P_{\mathrm{Ia},i}D_{\mathrm{Ia},i}}{P_{\mathrm{Ia},i}D_{\mathrm{Ia},i}+(1-P_{\mathrm{Ia},i})D_{\mathrm{CC},i}} \tag{9}\] is the BEAMS probability for event \(i\) to be an SN Ia, and \(S_{\zeta}\) is a \(\zeta\)-dependent uncertainty scale that is computed to satisfy Eq. 8. We find that \(S_{\zeta}\) is a few percent greater than 1 because of small correlations between the fitted parameters (\(\alpha\),\(\beta\),\(M_{\zeta}\)). Eq. 8 is an ad hoc assumption and does not have a rigorous derivation. From Eqs. 7-8, the unbinned distance uncertainty is \[\sigma_{\mu,\mathrm{unbin},i}=S_{\zeta}\sigma_{\mu,i}/\sqrt{\mathcal{P}_{\mathrm{B(Ia)},i}}. \tag{10}\] As a crosscheck, the weighted average of the Hubble residuals (\(\langle\mathrm{HR}_{\zeta}\rangle\)) should be zero for each redshift bin: \[\langle\mathrm{HR}_{\zeta}\rangle=\Big{[}\sum_{i\in\zeta}\mathrm{HR}_{i}W_{i}\Big{]}\Big{/}\Big{[}\sum_{i\in\zeta}W_{i}\Big{]} \tag{11}\] where \(W_{i}=\sigma_{\mu,\mathrm{unbin},i}^{-2}\) and \(\mathrm{HR}_{i}\) is defined in Eq. 3. To limit the size of the HD and still benefit from reduced systematics, we propose rebinning in the space of redshift, stretch, and color, denoted by \(\vec{\zeta}=\{z,x_{1},c\}\). The distance modulus in each 3D \(\vec{\zeta}\) cell is a weighted average of the distances in the cell, \[\mu_{\vec{\zeta}}=\sum_{i\in\vec{\zeta}}\mu_{i}W_{i}\Big{/}\sum_{i\in\vec{\zeta}}W_{i} \tag{12}\] and following Eq. (7), the uncertainty on \(\mu_{\vec{\zeta}}\) is
\[1/\sigma_{\mu,\vec{\zeta}}^{2}=\sum_{i\in\vec{\zeta}}1/\sigma_{\mu,\mathrm{unbin},i}^{2}. \tag{13}\] ## 4. Validation I: Simulation and Analysis We test the unbinning and rebinning procedures (§3) by analyzing 50 simulated data-sized samples that closely follow Vincenzi et al. (2023). Each simulation corresponds to the 5-year DES photometric sample for events with an accurate spectroscopic redshift of the host galaxy, combined with a spectroscopically confirmed low-redshift (LOWZ) sample (\(z<0.1\)). The LOWZ sample uses the cadence and signal-to-noise ratio (S/N) properties of the Carnegie Supernova Project (CSP),9 Center for Astrophysics (CFA3, CFA4; Hicken et al. (2009, 2012)), and Foundation Supernova Survey (Foley et al., 2018). Footnote 9: [https://csp.obs.carnegiescience.edu](https://csp.obs.carnegiescience.edu) Footnote 10: [https://github.com/RutgersSN/SNIax-PLAsTiCC](https://github.com/RutgersSN/SNIax-PLAsTiCC) The simulated models include: * SNe Ia generated from the SALT2 model in Guy et al. (2010) using trained model parameters from Betoule et al. (2014); * SNCC generated from spectral energy distribution (SED) templates in Vincenzi et al. (2019); * Peculiar SN Iax using the SED model from Kessler et al. (2019)10 and the extinction correction from Vincenzi et al. (2021); * Peculiar 91bg-like SNe Ia using the SED model from Kessler et al. (2019); and * \(\Lambda\)CDM with \(\Omega_{\rm M}=0.311\), \(\Omega_{\Lambda}=0.689\), and \(w=-1\). All simulated events are analyzed as follows: * Use the SALT2 light-curve fit for each event to determine \(\{t_{0},m_{B},x_{1},c\}\). * Apply selection requirements (cuts): * at least two passbands with maximum signal-to-noise ratio SNR\(>5\); * at least one observation before \(t_{0}\); * at least one observation \(>10\) days after \(t_{0}\) (rest frame); * \(|x_{1}|<3\) and \(|c|<0.3\); * fitted uncertainties \(\sigma_{x_{1}}<1.0\) and \(\sigma_{t_{0}}<2\) days; * SALT2 light curve fit probability \(>0.001\); * valid bias correction in the BBC fit (see below). For the 50 samples, the average number of events passing cuts is 1897 (1622, 172, and 103 for DES, Foundation, and LOWZ, respectively). * Determine \(P_{\rm Ia}\) using the "SuperNNova" photometric classification (Moller & de Boissiere, 2020)11 based on recurrent neural networks. Footnote 11: [https://github.com/supernnova](https://github.com/supernnova) * Use the BBC fit to determine the redshift-binned HD corrected for selection effects and non-SNIa contamination. We use 20 \(z\)-bins, with bin size proportional to \((1+z)^{3}\) so that there is finer \(z\)-binning at lower redshift. * Create the statistical+systematic covariance matrix as in Conley et al. (2011). * Use the methods from §3 to produce an unbinned HD and two rebinned HDs. The first rebinned HD has 2 stretch and 4 color bins (Rebin2x4), and the total number of HD bins is \(20\times 2\times 4=160\). The second rebinned HD has 4 stretch and 8 color bins (Rebin4x8), and a total of 640 HD bins. * Perform a cosmology fit using a fast minimization program that combines an SN Ia HD with a cosmic microwave background (CMB) prior that uses an \(R\)-shift parameter computed from the same cosmology parameters as in the simulated samples. To match the constraining power from Planck Collaboration et al. 
(2020), the \(R\)-uncertainty is \(\sigma_{R}=0.006\). We fit for \(\Omega_{\rm M}\) and \(w\) using the \(w\)CDM model, and we also fit for \(\Omega_{\rm M},w_{0},w_{a}\) using the \(w_{0}w_{a}\)CDM model. * For the \(w_{0}w_{a}\)CDM model, the figure of merit (FoM) is computed based on the dark energy task force (DETF) definition in Albrecht et al. (2006), \[{\rm FoM}=[\sigma(w_{0})\times\sigma(w_{a})\times\sqrt{1-\rho^{2}}]^{-1}\] (14) where \(\rho\) is the reduced covariance between \(w_{0}\) and \(w_{a}\). Here we consider 70 systematic uncertainties that include the following: * For each of the 34 passbands, shift the zero point using the uncertainty from Brout et al. (2022b); * For each of the 34 passbands, shift the filter transmission wavelength using the uncertainty from Brout et al. (2022b); * A correlated zero-point shift, \(0.00714\lambda/{\rm micron}\), corresponding to the Hubble Space Telescope calibration uncertainty for the primary reference C26202; * A Galactic extinction uncertainty of 4%. For each of the 68 zero-point and wavelength systematics, the SALT2 model is retrained and the shift is propagated in the simulated data. M. Vincenzi et al. (in preparation) present the complete set of systematic uncertainties that includes calibration covariances. ## 5. Validation II: Results ### BBC Sensitivity to Reference Cosmology We begin by evaluating the sensitivity of the fitted BBC distances to the reference cosmology \(\vec{\mathcal{C}}_{\rm ref}\) defined in Eq. 3. Using the \(w\)CDM model to vary \(\vec{\mathcal{C}}_{\rm ref}\), we vary \(\Omega_{\rm M}\) up to \(\pm 0.1\) with fixed \(w=-1\), and we vary \(w\) up to \(\pm 0.2\) with fixed \(\Omega_{\rm M}=0.3\). We define a sensitivity metric to be \[{\rm STD}_{\mu}={\rm STD}(\mu_{i,\vec{\mathcal{C}}}-\mu_{i,\vec{\mathcal{C}}_{\rm ref}}) \tag{15}\] where STD is the standard deviation, \(\mu_{i,\vec{\mathcal{C}}_{\rm ref}}\) are unbinned distances (Eq. 1) using \(\vec{\mathcal{C}}_{\rm ref}\) (Eq. 4) in the BBC fit, and \(\mu_{i,\vec{\mathcal{C}}}\) are unbinned distances from using a different \(\vec{\mathcal{C}}\) in the BBC fit. Results for 8 \(w\)CDM model variants are shown in Table 1, and we find \({\rm STD}_{\mu}\sim 10^{-4}\) mag, which is about 1000 times smaller than the intrinsic scatter. The last 3 rows of Table 1 are based on a polynomial function of redshift for \(\mu_{i,\vec{\mathcal{C}}}\) to illustrate the BBC performance with poorer \(\mu_{i,\vec{\mathcal{C}}}\) estimates. A constant \(\mu_{i,\vec{\mathcal{C}}}\) (p0) results in \({\rm STD}_{\mu}\) that is more than 1 order of magnitude larger than for the \(w\)CDM models, but is still well below 0.01 mag and thus works surprisingly well for such a poor \(\mu_{i,\vec{\mathcal{C}}}\) estimate. Using 3rd- and 6th-order polynomial fits to the baseline \(\Lambda\)CDM model (p3 and p6) works much better than a constant \(\mu_{i,\vec{\mathcal{C}}}\), but still not quite as well as for the \(w\)CDM models. ### Uncertainty Scale and HR check For the Ia+CC samples, the uncertainty scale (\(S_{\zeta}\) in Eqs. 8 and 10) vs. redshift bin is shown in the upper panel of Fig. 1, averaged over the 50 data samples. The average \(S_{\zeta}\) value is \(\sim 1.01\) at all redshifts, and is thus a small correction. The Hubble residual crosscheck (\(\langle{\rm HR}_{\zeta}\rangle\) defined in Eq. 11) vs. redshift is shown in the lower panel of Fig. 1. The values are within 0.001 mag of the expected value of zero. 
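A compact numerical sketch of the unbinning and rebinning algebra of §3 is shown below; the per-event uncertainties, BEAMS probabilities, and distances are invented placeholders. The script solves Eq. 8 for \(S_{\zeta}\) in one redshift bin, verifies Eq. 7, and applies Eqs. 12-13 to one \(\vec{\zeta}\) cell.

```python
import numpy as np

# Toy inputs for one redshift bin (all values are placeholders)
sigma_mu = np.array([0.12, 0.15, 0.10, 0.18])  # naive per-event errors (mag)
P_beams = np.array([0.99, 0.95, 0.90, 0.99])   # BEAMS Ia probabilities, Eq. 9
sigma_M = 0.065                                # BBC-fitted error on M_zeta

# Eq. 8 => S^2 = sigma_M^2 * sum_i P_i / sigma_i^2
S = sigma_M * np.sqrt(np.sum(P_beams / sigma_mu**2))
sigma_unbin = S * sigma_mu / np.sqrt(P_beams)  # Eq. 10

# Consistency check of Eq. 7: weighted combination reproduces sigma_M
assert np.isclose(1 / sigma_M**2, np.sum(1 / sigma_unbin**2))

# Rebinned distance in one (z, x1, c) cell, Eqs. 12-13
mu = np.array([36.71, 36.75, 36.69, 36.80])    # toy distance moduli (mag)
W = 1 / sigma_unbin**2
mu_cell = np.sum(mu * W) / np.sum(W)
sigma_cell = 1 / np.sqrt(np.sum(W))
print(f"S_zeta = {S:.3f}, mu_cell = {mu_cell:.3f} +/- {sigma_cell:.3f}")
```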
### Bias Results for \(w\)CDM We define the \(w\)-bias to be \(\Delta w\equiv w-w_{\rm true}\), and we define \(\langle w\)-bias\(\rangle\) to be the average over the 50 simulated data samples. The \(\langle w\)-bias\(\rangle\) uncertainty is the standard deviation of the \(\Delta w\) values divided by \(\sqrt{50}\). We begin with an SN Ia only sample that has no contamination and \(P_{\rm Ia}=1\) for all events. The \(\langle w\)-bias\(\rangle\) results are shown in the upper half of Table 2 with no systematics (i.e., only statistical uncertainties), with systematics included, and with four binning options: (i) binned, (ii) unbinned, (iii) rebinned with 2 stretch bins and 4 color bins (Rebin2x4), and (iv) rebinned with 4 stretch bins and 8 color bins (Rebin4x8). \begin{table} \begin{tabular}{|l|c|} \hline \(\vec{\mathcal{C}}\) Variant & \({\rm STD}_{\mu}\times 10^{4}\) \\ \hline \(\Omega_{\rm M}=0.20\) & 1.0 \\ \(\Omega_{\rm M}=0.25\) & 1.7 \\ \(\Omega_{\rm M}=0.35\) & 0.4 \\ \(\Omega_{\rm M}=0.40\) & 0.7 \\ \(w=-1.2\) & 0.7 \\ \(w=-1.1\) & 0.5 \\ \(w=-0.9\) & 0.3 \\ \(w=-0.8\) & 0.6 \\ \hline p0 (\(\mu_{i,\vec{\mathcal{C}}}\)= 40) & 59 \\ p3 & 6.8 \\ p6 & 3.1 \\ \hline \end{tabular} \end{table} Table 1. \({\rm STD}_{\mu}\) for different \(\vec{\mathcal{C}}\) choices. Figure 1.— \(S_{\zeta}\) vs. redshift (upper panel) and \(\langle{\rm HR}_{\zeta}\rangle\) vs. redshift (lower panel). The error bars show the rms among the 50 simulated data samples. A bias consistent with zero at the 2\(\sigma\) level (\(N_{\sigma}\) in Table 2) is considered to be unbiased. All \(\langle w\)-bias\(\rangle\) results are unbiased except for a 2.4\(\sigma\) bias for Ia-Only using an unbinned HD and systematics. By averaging results over the 50 samples, the bias uncertainty, and thus the bias constraint, is nearly 1 order of magnitude smaller than the average \(w\)-uncertainty (\(\langle\sigma_{w}\rangle\) in Table 2) for a single data sample. The primary motivation for an unbinned HD is to reduce the total uncertainty. With systematics, the average total uncertainty (\(\langle\sigma_{w}\rangle\) column in Table 2) is reduced by \(\sim\)7% compared to the binned result. Naively subtracting the no-syst contribution in quadrature, the systematic uncertainties are 0.0182 and 0.0143 for the binned and unbinned cases, respectively, resulting in an \(\sim\)20% reduction in the systematic uncertainty. The rebinned uncertainties are comparable to that of the unbinned result. Our 20% reduction in systematic uncertainty is smaller than the 50% reduction in BHS21 because we did not include the intrinsic scatter systematic, which is reduced by more than a factor of 3. For the calibration systematics used in both analyses, the reduction in BHS21 (see their Table 2) is similar to ours. ### Bias Results for \(w_{0}w_{a}\)CDM For the \(w_{0}w_{a}\)CDM model, the bias summary is shown in Table 3. The biases are consistent with zero at the 2\(\sigma\) level, and the bias uncertainty and constraint are nearly an order of magnitude smaller than the average single-sample uncertainty (\(\langle\sigma_{w0}\rangle\) and \(\langle\sigma_{wa}\rangle\) in Table 3). For the Ia+CC sample, the average FoM is \(\langle\)FoM\(\rangle\)= 45 for the binned option, and the unbinned option increases \(\langle\)FoM\(\rangle\) to 55. The Rebin2x4 option results in \(\langle\)FoM\(\rangle\)= 51, which lies between the binned and unbinned \(\langle\)FoM\(\rangle\). The Rebin4x8 option results in \(\langle\)FoM\(\rangle\)= 54, which is very close to the unbinned \(\langle\)FoM\(\rangle\). 
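For reference, the DETF FoM of Eq. 14 follows directly from the \(w_{0}\)-\(w_{a}\) covariance. The sketch below uses a toy covariance whose uncertainties and correlation are chosen only to give a FoM comparable to the binned Ia+CC entries of Table 3; none of these numbers come from an actual fit.

```python
import numpy as np

# Toy (w0, wa) covariance: sigma(w0)=0.123, sigma(wa)=0.574, rho=-0.95
rho = -0.95
sig_w0, sig_wa = 0.123, 0.574
cov = np.array([[sig_w0**2, rho * sig_w0 * sig_wa],
                [rho * sig_w0 * sig_wa, sig_wa**2]])

# Recover the pieces of Eq. 14 from the covariance matrix
s0, sa = np.sqrt(np.diag(cov))
r = cov[0, 1] / (s0 * sa)
FoM = 1.0 / (s0 * sa * np.sqrt(1 - r**2))      # Eq. 14
print(f"sigma(w0)={s0:.3f}  sigma(wa)={sa:.3f}  rho={r:.2f}  FoM={FoM:.0f}")
```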
### Binning Option Consistency Without systematics, the \(w\)CDM \(w\)-results for all binning options, with and without contamination, agree to within \(<0.001\). For the \(w_{0}w_{a}\)CDM model and Ia-Only, all binning options agree to within 0.001 and \(<0.01\) for \(w_{0}\) and \(w_{a}\), respectively. With contamination, the rebin results differ by 0.003 and 0.01 for \(w_{0}\) and \(w_{a}\), respectively, suggesting a subtle bias in the rebin procedure. With systematics, the binned and rebinned \(w\)CDM results agree to within 0.001 in \(w\), while the unbinned \(w\)-result differs significantly, by \(0.006\pm 0.001\), which corresponds to 20% of the total uncertainty. This comparison is the same with and without contamination. For the \(w_{0}w_{a}\)CDM model, the binned and rebinned results agree to within 0.002 in \(w_{0}\); the \(w_{a}\) results agree to within \(<0.01\) for Ia-Only and differ by up to nearly 0.03 with contamination. The unbinned results differ by \(\sim\)0.01 and 0.06 for \(w_{0}\) and \(w_{a}\), respectively. While all binning options show unbiased cosmology results, there is evidence for a small difference between the binned and unbinned results. This difference is present with or without contamination. ### Impact of \(\mathcal{P}_{\rm B(Ia)}\) Term To check the impact of the \(\mathcal{P}_{\rm B(Ia)}\) term in Eq. 10, we forced \(\mathcal{P}_{\rm B(Ia)}=1\) and reevaluated the \(w_{0}w_{a}\)CDM bias for an unbinned HD. We find a bias of more than 10\(\sigma\), which illustrates the necessity of accurately evaluating this term. ## 6. Conclusion As a follow-up to the original binned Hubble diagram (HD) from BBC, we have developed methods to derive cosmological results from an unbinned HD, and from a rebinned HD in the space of redshift, stretch, and color. Averaging analysis results from 50 simulated data samples, we find biases consistent with zero and bias constraints almost 1 order of magnitude smaller than the single-sample uncertainty. We also find that using an unbinned HD results in a reduced total uncertainty, consistent with BHS21. This conclusion holds for both the \(w\)CDM and \(w_{0}w_{a}\)CDM models, and we find the same results with or without photometric contamination. Using a rebinned HD with 2 stretch and 4 color bins (Rebin2x4), we recover unbiased cosmology results and also benefit from the reduced uncertainty of the unbinned HD. Using more bins (4 stretch and 8 color bins), there is still no bias and the total uncertainty is similar to the unbinned case. With \(\sim\)2000 events in the DES unbinned HD, the rebinned cosmology-fitting speed is only a factor of a few faster compared to the unbinned case. With anticipated future samples in the 10\({}^{4}\)-10\({}^{5}\) range, the rebinned HD size does not increase, and therefore the cosmology-fitting speed improvement will be much more significant. While our unbiased results are encouraging, we note that Mitra et al. (2023) reported a significant cosmology bias using an unbinned HD from a simulated LSST data sample of pure SNe Ia. We therefore recommend repeating our bias tests on simulated data for future analyses. ## 7. Acknowledgements R.K. is supported by DOE grant DE-SC0009924. P.A. acknowledges that parts of this research were carried out on the traditional lands of the Ngunnawal people. We pay our respects to their elders past, present, and emerging. P.A. 
was supported by an Australian Government Research Training Program (RTP) Scholarship. We acknowledge the University of Chicago's Research Computing Center for their support of this work. \begin{table} \begin{tabular}{|l|l|l|l l l|} \hline SN types & Syst & Bin Option & \(\langle w\)-bias\(\rangle\) & \(N_{\sigma}\) & \(\langle\sigma_{w}\rangle\) \\ \hline \hline Ia-Only & None & Binned & \(+0.0002\pm 0.0030\) & 0.1 & 0.0249 \\ & & Unbin & \(+0.0004\pm 0.0030\) & 0.1 & 0.0250 \\ & & Rebin2x4 & \(+0.0004\pm 0.0030\) & 0.1 & 0.0249 \\ & & Rebin4x8 & \(+0.0005\pm 0.0030\) & 0.2 & 0.0250 \\ \hline Ia-Only & All & Binned & \(+0.0008\pm 0.0031\) & 0.3 & 0.0308 \\ & & Unbin & \(-0.0069\pm 0.0029\) & 2.4 & 0.0288 \\ & & Rebin2x4 & \(+0.0009\pm 0.0031\) & 0.3 & 0.0293 \\ & & Rebin4x8 & \(+0.0010\pm 0.0030\) & 0.3 & 0.0286 \\ \hline \hline Ia+CC & None & Binned & \(+0.0020\pm 0.0030\) & 0.7 & 0.0250 \\ & & Unbin & \(+0.0026\pm 0.0030\) & 0.9 & 0.0250 \\ & & Rebin2x4 & \(+0.0022\pm 0.0030\) & 0.7 & 0.0250 \\ & & Rebin4x8 & \(+0.0022\pm 0.0030\) & 0.7 & 0.0250 \\ \hline Ia+CC & All & Binned & \(+0.0024\pm 0.0032\) & 0.7 & 0.0309 \\ & & Unbin & \(-0.0044\pm 0.0030\) & 1.5 & 0.0288 \\ & & Rebin2x4 & \(+0.0025\pm 0.0031\) & 0.8 & 0.0294 \\ & & Rebin4x8 & \(+0.0019\pm 0.0032\) & 0.6 & 0.0286 \\ \hline \end{tabular} \end{table} Table 2. Average \(w\)-bias, significance, and uncertainty vs. redshift binning option (\(w\)CDM). \begin{table} \begin{tabular}{|l|l|l|l l l l l l l|} \hline SN types & Syst & Bin Option & \(\langle w_{0}\)-bias\(\rangle\) & \(N_{\sigma}\) & \(\langle\sigma_{w_{0}}\rangle\) & \(\langle w_{a}\)-bias\(\rangle\) & \(N_{\sigma}\) & \(\langle\sigma_{w_{a}}\rangle\) & \(\langle\)FoM\(\rangle\) \\ \hline Ia-Only & None & Binned & \(-0.005\pm 0.014\) & 0.3 & 0.101 & \(-0.022\pm 0.063\) & 0.3 & 0.482 & 78 \\ & & Unbin & \(-0.004\pm 0.014\) & 0.3 & 0.100 & \(-0.023\pm 0.063\) & 0.4 & 0.480 & 78 \\ & & Rebin2x4 & \(-0.005\pm 0.014\) & 0.4 & 0.101 & \(-0.017\pm 0.064\) & 0.3 & 0.482 & 78 \\ & & Rebin4x8 & \(-0.006\pm 0.014\) & 0.4 & 0.101 & \(-0.015\pm 0.064\) & 0.2 & 0.481 & 78 \\ \hline Ia-Only & All & Binned & \(-0.003\pm 0.015\) & 0.2 & 0.138 & \(-0.049\pm 0.069\) & 0.7 & 0.634 & 45 \\ & & Unbin & \(+0.008\pm 0.014\) & 0.5 & 0.122 & \(-0.122\pm 0.066\) & 1.9 & 0.575 & 56 \\ & & Rebin2x4 & \(-0.002\pm 0.015\) & 0.1 & 0.130 & \(-0.047\pm 0.069\) & 0.7 & 0.595 & 51 \\ & & Rebin4x8 & \(-0.001\pm 0.014\) & 0.1 & 0.125 & \(-0.044\pm 0.065\) & 0.7 & 0.576 & 55 \\ \hline \hline Ia+CC & None & Binned & \(-0.010\pm 0.014\) & 0.7 & 0.102 & \(+0.013\pm 0.065\) & 0.2 & 0.481 & 77 \\ & & Unbin & \(-0.010\pm 0.014\) & 0.7 & 0.101 & \(+0.013\pm 0.065\) & 0.2 & 0.479 & 78 \\ & & Rebin2x4 & \(-0.013\pm 0.014\) & 0.9 & 0.102 & \(+0.025\pm 0.066\) & 0.4 & 0.481 & 77 \\ & & Rebin4x8 & \(-0.013\pm 0.014\) & 0.9 & 0.102 & \(+0.025\pm 0.065\) & 0.4 & 0.480 & 77 \\ \hline Ia+CC & All & Binned & \(-0.004\pm 0.015\) & 0.3 & 0.139 & \(-0.039\pm 0.069\) & 0.6 & 0.631 & 45 \\ & & Unbin & \(+0.005\pm 0.015\) & 0.3 & 0.123 & \(-0.100\pm 0.067\) & 1.5 & 0.574 & 55 \\ & & Rebin2x4 & \(-0.003\pm 0.016\) & 0.2 & 0.131 & \(-0.037\pm 0.072\) & 0.5 & 0.595 & 51 \\ & & Rebin4x8 & \(-0.008\pm 0.015\) & 0.5 & 0.126 & \(-0.012\pm 0.069\) & 0.2 & 0.575 & 54 \\ \hline \end{tabular} \end{table} Table 3. Average \(w_{0}\)-bias, \(w_{a}\)-bias, significance, uncertainty, and FoM vs. redshift binning option (\(w_{0}w_{a}\)CDM).
2301.11613
Broadband three-mode converter and multiplexer based on cascaded symmetric Y-junctions and subwavelength engineered MMI and phase shifters
Mode-division multiplexing has emerged as a promising route for increasing transmission capacity while maintaining the same level of on-chip integration. Despite the large number of on-chip mode converters and multiplexers reported for the silicon-on-insulator platform, scaling the number of multiplexed modes is still a critical challenge. In this paper, we present a novel three-mode architecture based on multimode interference couplers, passive phase shifters and cascaded symmetric Y-junctions. This architecture can readily operate up to the third-order mode by including a single switchable phase shifter. Moreover, we exploit subwavelength grating metamaterials to overcome bandwidth limitations of multimode interference couplers and phase shifters, resulting in a simulated bandwidth of 161 nm with insertion loss and crosstalk below 1.18 dB and -20 dB, respectively.
David González-Andrade, Irene Olivares, Raquel Fernández de Cabo, Jaime Vilas, Antonio Dias, Aitor V. Velasco
2023-01-27T09:30:43Z
http://arxiv.org/abs/2301.11613v1
Broadband three-mode converter and multiplexer based on cascaded symmetric Y-junctions and subwavelength engineered MMI and phase shifters ###### Abstract Mode-division multiplexing has emerged as a promising route for increasing transmission capacity while maintaining the same level of on-chip integration. Despite the large number of on-chip mode converters and multiplexers reported for the silicon-on-insulator platform, scaling the number of multiplexed modes is still a critical challenge. In this paper, we present a novel three-mode architecture based on multimode interference couplers, passive phase shifters and cascaded symmetric Y-junctions. This architecture can readily operate up to the third-order mode by including a single switchable phase shifter. Moreover, we exploit subwavelength grating metamaterials to overcome bandwidth limitations of multimode interference couplers and phase shifters, resulting in a simulated bandwidth of 161 nm with insertion loss and crosstalk below 1.18 dB and -20 dB, respectively. ## 1 Introduction The relentless growth of global Internet traffic has been driven in recent years by the emergence of data-hungry services and their mass adoption by an increasingly interconnected society [1, 2, 3]. Moreover, the cloud nature of many new applications such as machine learning or artificial intelligence requires large data sets to be processed on internal servers or transferred between data centers. This resource-intensive paradigm for accessing, computing, and storing data has led to the creation of hyperscale data centers consisting of thousands of servers located in the same physical facility [4]. To cope with the resulting zettabyte scale of annual data flow, modern data centers have been relying on optical technologies for both long-haul and few-meter interconnects. Compared to their electronic counterparts, these optical technologies offer higher processing speeds, broader bandwidths, and lower latency and energy consumption. Silicon photonics, leveraging the mature fabrication facilities of the microelectronics industry, plays a key role in the optical interconnect industry due to its capacity for high-yield and low-cost mass production of high-performance optoelectronic circuits [5, 6]. However, the development of next-generation data centers for Tbps communications and exascale computing systems is not feasible by scaling infrastructures alone and requires increasingly efficient optical interconnects for short-reach distances [7]. As single-mode transmission approaches its fundamental limits, space-division multiplexing has emerged as a promising way to further improve the transmission capacity of optical interconnects through the use of multicore or multimode waveguides [8]. The latter, which is also called mode-division multiplexing (MDM), has attracted increasing interest as it leverages the orthogonality of the eigenmodes supported by a single multimode waveguide, thus allowing to maintain the same level of on-chip integration [9, 10]. That is, MDM enables encoding different data channels into specific spatial modes, increasing capacity proportionally to the number of modes used. Numerous on-chip mode converters and multiplexers/demultiplexers (MCMD) have been proposed for the silicon-on-insulator (SOI) platform to date. Asymmetric Y-junctions are based on the principle of mode evolution in adiabatic structures, which results in broad operating bandwidths but also in long device lengths [11, 12, 13]. 
The minimum feature size of current lithography processes also has a significant impact in these devices, since the finite resolution at which the tip can be fabricated severely hampers their performance. Asymmetric directional couplers (ADCs) [14], relying on evanescent coupling between adjacent waveguides, are well suited for implementing high-channel count MDM systems, but they typically exhibit narrow bandwidths, and their performance is highly susceptible to fabrication errors. Adiabatic tapers have been employed in the coupling region of ADCs to improve the bandwidth and the resilience against fabrication deviations [15]. MCMDs building upon multimode interference (MMI) couplers and other auxiliary components such as phase shifters (PSs) and symmetric Y-junctions have been proposed as well [16, 17], yielding low losses and low crosstalk over a relatively broad wavelength range (\(\sim\)100 nm). The patterning of silicon at the subwavelength scale has proven to be a simple yet powerful tool to tailor the medium optical properties while inhibiting diffractive effects [35]. More specifically, subwavelength (SWG) metamaterials can behave as a homogeneous metamaterial that provides flexible dispersion and anisotropy engineering, non-feasible in conventional strip and rib waveguides. These properties have led to the realization of Si devices with unprecedented performance over the past 15 years [36, 37, 38]. In the MDM field, MCMDs based on subwavelength pixelated structures have demonstrated ultra-compact footprints [18]. SWGs have also been applied to ADCs and triple-waveguide couplers to improve fabrication tolerances and extend the operation bandwidth of conventional counterparts [19, 20, 21]. Furthermore, low losses and low crosstalk within ultra-broad bandwidths have also been reported using subwavelength engineered MMI couplers and PSs, and SWG-slot-assisted adiabatic couplers [22, 23, 24, 25, 26]. Despite the large number of available two-mode MCMDs, scaling the number of multiplexed modes beyond the fundamental and first-order modes is of great importance to multiply the capacity of next-generation datacom systems. Although it is fairly straightforward to extend operation to a larger number of modes in asymmetric Y-junctions and conventional and tapered ADCs [27], three- and four-mode MCMD based on MMI couplers have only recently been reported [28, 29, 30, 31, 32]. However, the proposed architectures are still limited by narrow operating bandwidths and high crosstalk values. In this work, we propose a novel MCMD architecture based on a 4\(\times\)4 MMI, three phase shifters and four symmetric 1\(\times\)2 Y-junctions arranged in a conventional cascaded configuration. The device operates as a three-mode MCMD with passive phase shifters but can readily convert up to the third-order mode by including a single switchable phase shifter. Moreover, we demonstrate loss and crosstalk reduction in a broad bandwidth by SWG-engineering of both the MMI coupler and phase shifters. Simulations show operation bandwidth of 161 nm with insertion loss and crosstalk below 1.18 dB and -20 dB, respectively. ## 2 Principle of operation and device design To explain the operation principle and the device design, let us first focus on the nanophotonic structure shown in Fig. 1(a) consisting of a conventional 4\(\times\)4 MMI, three phase shifters (PS1, PS2 and PS3) and four symmetric 1\(\times\)2 Y-junctions (three identical Y1 and a different one Y2). 
SWG enhancement of the proposed architecture, shown in Fig. 1(b) and Fig. 1(d), will be discussed in Sections 4 and 5. An SOI platform with a thin Si wire surrounded by a SiO\({}_{2}\) bottom layer and upper cladding is considered. A schematic view of the waveguide cross-section is shown in Fig. 1(c) for clarity. Figure 1: Three-dimensional schematic of the proposed three-mode converter and multiplexer/demultiplexer comprising a 4\(\times\)4 MMI, three phase shifters and four symmetric Y-junctions implemented with (a) conventional homogeneous and (b) SWG metamaterial waveguides. (c) Cross-sectional view of the SOI strip waveguides with a SiO\({}_{2}\) cladding. (d) Top view of the SWG waveguides with their main geometrical parameters. In order to illustrate the operation of the MCMD, let us focus on the mode evolution and phase relations in each individual constituent of the MCMD. Here, we aim at mode conversion and multiplexing of the first four modes for transverse-electric (TE) polarization, that is, the fundamental mode (TE\({}_{0}\)), the first-order mode (TE\({}_{1}\)), the second-order mode (TE\({}_{2}\)) and the third-order mode (TE\({}_{3}\)). Our MCMD includes two types of symmetric multimode 1\(\times\)2 Y-junctions: Y1, with a stem supporting up to two modes; and Y2, with a wider stem supporting up to four modes. In general, multimode symmetric 1\(\times\)2 Y-junctions transform the two in-phase \(m^{th}\)-order modes in the arms into the \((2m)^{th}\)-order mode in the stem when \(m\) is even, and into the \((2m+1)^{th}\)-order mode in the stem when \(m\) is odd [33]. Likewise, two anti-phase \(m^{th}\)-order modes in the arms are transformed into the \((2m+1)^{th}\)-order mode in the stem when \(m\) is even, and into the \((2m)^{th}\)-order mode in the stem when \(m\) is odd. Figure 2(a) illustrates how this principle affects Y1 operation. Since only two modes are supported by the Y1 stem, a TE\({}_{0}\) (red) mode at the stem results in two in-phase TE\({}_{0}\) modes at the arms, whereas a TE\({}_{1}\) (orange) mode at the stem results in two anti-phase TE\({}_{0}\) modes at the arms. Figure 2(b) shows the extension of this behavior to four-mode operation in Y2. Operation for TE\({}_{0}\) (red) and TE\({}_{1}\) (orange) is the same as in Y1, whereas injection of the TE\({}_{2}\) (green) and TE\({}_{3}\) (purple) modes through the stem waveguide generates two anti-phase TE\({}_{1}\) or two in-phase TE\({}_{1}\) modes at the arms, respectively. Therefore, by cascading Y1 and Y2, and judiciously tailoring the phase relations induced by the rest of the MCMD, mode conversion and multiplexing between up to four modes can be achieved. 
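The even/odd mapping quoted above from [33] is compact enough to express programmatically. The following short Python sketch, which we add purely for illustration, reproduces the four cases of Fig. 2(b):

```python
# Symmetric Y-junction mode-mapping rules (after [33]): two m-th order arm
# modes combine into a stem mode whose order depends on the parity of m and
# on whether the arm modes are in phase or in anti-phase.

def stem_mode(m: int, in_phase: bool) -> int:
    """Order of the stem mode generated by two m-th order arm modes."""
    if m % 2 == 0:                                # m even
        return 2 * m if in_phase else 2 * m + 1
    return 2 * m + 1 if in_phase else 2 * m      # m odd

# Reproduce the four cases of Fig. 2(b) for the four-mode stem (Y2)
for m, phase in [(0, True), (0, False), (1, False), (1, True)]:
    label = "in-phase" if phase else "anti-phase"
    print(f"two {label} TE{m} arm modes -> TE{stem_mode(m, phase)} stem mode")
```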
We will hence study the phase shift induced by the 4\(\times\)4 MMI coupler, and subsequently design a phase shifter architecture that satisfies the phase distributions imposed by the cascaded Y-junctions. Bachmann _et al._ already derived a set of equations to calculate the phase relations of \(N\times N\) MMI couplers [34]. At this point, it is important to mention that the definition of the phase in this work is \(\varphi=\beta x-\omega t\), where \(\beta\) is the phase constant (also known as propagation constant), \(x\) is the propagation direction and the term \(-\omega t\) corresponds to the temporal dependence. As the authors in [34] used the opposite phase convention, i.e., \(\varphi=\omega t-\beta x\), their equations can be rewritten as follows: \[i+j\text{ even: }\varphi_{ij}=-\varphi_{0}-\pi-\frac{\pi(j-i)(2N-j+i)}{4N}, \tag{1}\] \[i+j\text{ odd: }\varphi_{ij}=-\varphi_{0}-\frac{\pi(i+j-1)(2N-i-j+1)}{4N}, \tag{2}\] where \(\varphi_{0}\) is a constant phase, and \(i\) and \(j\) are the indices of the \(N\) inputs and outputs, respectively. Using Eqs. (1) and (2), the phase relations of a 4\(\times\)4 MMI coupler can be calculated as shown in Table 1. Please note the input and output numbering in Fig. 3. We then calculate, for each input port \(i\), the resulting phase difference at the two upper output ports (\(\Delta\varphi_{12}\)) and the two lower ports (\(\Delta\varphi_{34}\)) as: \[\Delta\varphi_{12}=\varphi_{i1}-\varphi_{i2}, \tag{3}\] \[\Delta\varphi_{34}=\varphi_{i3}-\varphi_{i4}. \tag{4}\] The calculated phase differences are shown in Table 2. Since the phase evolution at both the MMI and the Y-splitters is fixed, we then need to design a combination of PSs (placement and phase shift values) that results in the required phase relations. As shown in Figure 1(a), we achieve this goal by including a first phase shifter (PS1) between inputs 3 and 2 of the MMI, with a phase shift of \(\pi/2\); a second phase shifter (PS2) between outputs 2 and 1, with a phase shift of \(-\pi/4\); and a third phase shifter (PS3) between outputs 3 and 4, with a phase shift of \(3\pi/4\). An additional two-mode Y-junction (Y1) is included at MCMD port 2 to satisfy the even-order mode phase conditions, as discussed hereunder. \begin{table} \begin{tabular}{l c c c c} \hline \hline \(i\backslash j\) & \(1\) & \(2\) & \(3\) & \(4\) \\ \hline 1 & \(-\pi\) & \(-3\pi/4\) & \(-7\pi/4\) & \(-\pi\) \\ 2 & \(-3\pi/4\) & \(-\pi\) & \(-\pi\) & \(-7\pi/4\) \\ 3 & \(\pi/4\) & \(-\pi\) & \(-\pi\) & \(-3\pi/4\) \\ 4 & \(-\pi\) & \(\pi/4\) & \(-3\pi/4\) & \(-\pi\) \\ \hline \hline \end{tabular} \end{table} Table 1: Calculated phase relations \(\varphi_{ij}\) of a 4\(\times\)4 MMI coupler. Figure 3: Schematic of a 4\(\times\)4 MMI coupler, illustrating port and phase notations. Figure 2: Schematic and principle of operation of a multimode symmetric 1\(\times\)2 Y-junction for (a) a two-mode stem, and (b) a four-mode stem. 
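As a quick consistency check, the short Python sketch below evaluates Eqs. (1) and (2) for \(N=4\) with \(\varphi_{0}=0\); its output reproduces the entries of Tables 1 and 2 (in units of \(\pi\)). This is an illustrative verification we add here, not part of the original design flow.

```python
import numpy as np

N = 4

def phi(i, j):
    """MMI output phase for input i, output j (phi_0 = 0)."""
    if (i + j) % 2 == 0:  # Eq. (1), i + j even
        return -np.pi - np.pi * (j - i) * (2 * N - j + i) / (4 * N)
    return -np.pi * (i + j - 1) * (2 * N - i - j + 1) / (4 * N)  # Eq. (2)

for i in range(1, N + 1):
    row = [phi(i, j) / np.pi for j in range(1, N + 1)]  # Table 1 row
    d12 = (phi(i, 1) - phi(i, 2)) / np.pi               # Eq. (3), Table 2
    d34 = (phi(i, 3) - phi(i, 4)) / np.pi               # Eq. (4), Table 2
    print(f"i={i}: phi_ij/pi = {np.round(row, 2)}, "
          f"dphi12 = {d12:+.2f}*pi, dphi34 = {d34:+.2f}*pi")
```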
Figure 4 shows the operation of the device working in multiplexer configuration, including the values of the phase relations at different locations for clarity. It should be noted that the phase values have been calculated with respect to the mode phase at the input ports, which is considered to be zero for simplicity. When light is injected through MCMD port 1 [Fig. 4(a)], the combination of all the aforementioned phase relations results in all modes arriving in-phase at the arms of the cascaded Y-junctions. Thus, two in-phase TE\({}_{0}\) modes are coupled into the Y1 stems, which subsequently generate the desired TE\({}_{0}\) mode at the multimode stem waveguide of Y2 (MCMD port 4). When light is injected through MCMD port 2 [Fig. 4(b)], the combination of Y1 and PS1 results in simultaneous light coupling to MMI input ports 2 and 3, but with a \(\pi/2\) phase difference. This in turn generates in-phase modes in the upper arms that are in anti-phase with the two in-phase modes in the lower arms at their arrival at the cascaded Y-junctions. This combination results in TE\({}_{1}\) generation at the MCMD output. Finally, when light is injected through MCMD port 3 [Fig. 4(c)], that is, MMI input port 4, in-phase modes are generated in the middle arms, which are in anti-phase with the two in-phase modes generated in the top and bottom arms, before the cascaded Y-junctions. This results in anti-phase TE\({}_{1}\) modes at the output of the Y1 stems, which subsequently generate the TE\({}_{2}\) mode at the MCMD output. So far, we have only considered passive PSs, that is, PSs with a fixed phase shift. However, if the phase introduced by PS1 is \(3\pi/2\) instead of \(\pi/2\), it is possible to generate the TE\({}_{3}\) mode at the MCMD output [Fig. 4(d)]. For illustrative purposes, we represent this phase shift by switching the position of the tapers in PS1. This feature opens the possibility of extending MCMD operation to four modes using a single switchable PS. ## 3 Proof-of-concept results To verify the principle of operation explained in the previous section, we first optimized each constituent (i.e., MMI, phase shifters and Y-junctions) for a design wavelength of 1550 nm. We chose a standard silicon thickness of \(H=220\) nm and an interconnection waveguide width of \(W_{I}=400\) nm. Thus, the symmetric Y-junctions are designed with stem widths of \(2W_{I}=800\) nm for Y1 and \(4W_{I}=1600\) nm for Y2. The geometrical parameters of the 4\(\times\)4 MMI coupler, the phase shifters and the symmetric Y-junctions are summarized in Table 3. In order to evaluate the performance of each constituent element, the figures of merit for the MMI are the excess loss (EL), imbalance (IB) and phase error (PE): \[\text{EL}_{i}\left[\text{dB}\right]=-10\log_{10}\Big{(}\sum_{j}|S_{ji}|^{2}\Big{)}, \tag{5}\] \[\text{IB}_{i}^{jk}\left[\text{dB}\right]=10\log_{10}\left(|S_{ji}|^{2}/|S_{ki}|^{2}\right), \tag{6}\] \[\text{PE}_{i}^{jk}\left[^{\circ}\right]=\left[\angle(S_{ji}/S_{ki})-\varphi_{ideal}\right]\cdot 180/\pi, \tag{7}\] where \(S_{ji}\) and \(S_{ki}\) are the scattering parameters for input \(i\) and outputs \(j\) and \(k\), and \(\varphi_{ideal}\) is the ideal phase relation depending on the selected input and output ports, as shown in Table 1. \begin{table} \begin{tabular}{l l l} \hline \(i\) & \(\Delta\varphi_{12}\) & \(\Delta\varphi_{34}\) \\ \hline 1 & \(-\pi/4\) & \(-3\pi/4\) \\ 2 & \(\pi/4\) & \(3\pi/4\) \\ 3 & \(5\pi/4\) & \(-\pi/4\) \\ 4 & \(-5\pi/4\) & \(\pi/4\) \\ \hline \end{tabular} \end{table} Table 2: Calculated phase differences between MMI output pairs for each input. Figure 4: Principle of operation of the proposed three-mode converter and multiplexer/demultiplexer for (a) TE\({}_{0}\), (b) TE\({}_{1}\), (c) TE\({}_{2}\) and (d) TE\({}_{3}\) mode multiplexing. 
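To illustrate how Eqs. (5)-(7) are applied, the following sketch computes EL, IB and PE for one toy row of the scattering matrix (input 1 of a near-ideal 4\(\times\)4 MMI); the complex amplitudes are invented for illustration and do not correspond to the simulated device.

```python
import numpy as np

# Toy S-parameters S_j1 (outputs j = 1..4 for input i = 1), invented values
S = np.array([0.48 * np.exp(-1j * np.pi),         # S_11
              0.50 * np.exp(-1j * 0.74 * np.pi),  # S_21
              0.51 * np.exp(-1j * 1.76 * np.pi),  # S_31
              0.50 * np.exp(-1j * 1.01 * np.pi)]) # S_41

EL = -10 * np.log10(np.sum(np.abs(S)**2))                    # Eq. (5)
IB = 10 * np.log10(np.abs(S[0])**2 / np.abs(S[1])**2)        # Eq. (6), j,k = 1,2
ideal = -np.pi / 4                                           # dphi12 for i = 1
PE = (np.angle(S[0] / S[1]) - ideal) * 180 / np.pi           # Eq. (7)
print(f"EL = {EL:.2f} dB, IB = {IB:.2f} dB, PE = {PE:.2f} deg")
```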
\begin{table} \begin{tabular}{l l l l} \hline Constituent & Parameter & Symbol & Value \\ \hline Waveguides & Width & \(W_{I}\) & \(400\) nm \\ MMI & Separation & \(W_{S}\) & \(500\) nm \\ & Access width & \(W_{A}\) & \(1.3\) μm \\ & Taper length & \(L_{T}\) & \(6\) μm \\ & MMI width & \(W_{MMI}\) & \(7.2\) μm \\ & MMI length & \(L_{MMI}\) & \(91\) μm \\ Y1 & Arm width & \(W_{I}\) & \(400\) nm \\ & Arm length & \(L_{Y1}\) & \(5\) μm \\ & Stem width & \(2W_{I}\) & \(800\) nm \\ Y2 & Arm width & \(2W_{I}\) & \(800\) nm \\ & Arm length & \(L_{Y2}\) & \(20\) μm \\ & Stem width & \(4W_{I}\) & \(1.6\) μm \\ PS1 & PS width & \(W_{PS1}\) & \(600\) nm \\ & PS length & \(L_{PS1}\) & \(2.41\) μm \\ PS2 & PS width & \(W_{PS2}\) & \(600\) nm \\ & PS length & \(L_{PS2}\) & \(8.38\) μm \\ PS3 & PS width & \(W_{PS3}\) & \(600\) nm \\ & PS length & \(L_{PS3}\) & \(3.61\) μm \\ \hline \end{tabular} \end{table} Table 3: Geometrical parameters of the three-mode converter and multiplexer/demultiplexer with homogeneous waveguides. 0.54 dB, IB \(<\pm\)0.4 dB and PE \(<\pm\)0.32\({}^{\circ}\) at the wavelength of \(\lambda_{0}=1550\) nm. Regarding the spectral response, EL \(<\) 2.15 dB, IB \(<\pm\)8.1 dB and PE \(<\pm\)46.03\({}^{\circ}\) are attained in the entire simulated wavelength range (1.45 - 1.65 \(\mu\)m). The designed phase shifters introduce small phase deviations of only 0.12\({}^{\circ}\) for PS1, 0.13\({}^{\circ}\) for PS2 and 0.16\({}^{\circ}\) for PS3, with respect to their target phase difference at 1550 nm. However, considering the simulated bandwidth of 200 nm, phase errors increase up to 9.84\({}^{\circ}\) for PS1, 22.28\({}^{\circ}\) for PS2, and 12.12\({}^{\circ}\) for PS3. Symmetric Y-junctions Y1 and Y2 were also designed, showing negligible excess losses and power imbalance between output ports at the design wavelength. More specifically, Y1 losses are lower than 0.01 dB for both TE\({}_{0}\) and TE\({}_{1}\) mode operation in the 1.45 - 1.65 \(\mu\)m wavelength range. Conversely, calculated excess losses for Y2 are below 0.15 dB for TE\({}_{0}\), TE\({}_{1}\), TE\({}_{2}\) and TE\({}_{3}\) mode operation within the same bandwidth. Once all elements were optimized, two-dimensional finite-difference time-domain (FDTD) simulations of the whole MCMD were performed by applying the effective index method to the original three-dimensional structure [see Fig. 1(a)]. The simulated field distribution of the three-mode MCMD is shown in Fig. 5(a)-(d), demonstrating the successful implementation of the phase relations described in Section 2. Some ripples can be observed for TE\({}_{0}\) and TE\({}_{2}\) mode multiplexing in the stem waveguide of Y-junction Y2 [see Figs. 5(a) and 5(c)], which we attribute to a higher crosstalk between both modes compared to TE\({}_{1}\) and TE\({}_{3}\) mode multiplexing. The transmittance as a function of the wavelength was computed for the complete MCMD [see Fig. 5(e)-(h)]. At the central wavelength of \(\lambda_{0}=1550\) nm, insertion losses are lower than 0.53 dB, 0.79 dB and 0.59 dB for the generation of TE\({}_{0}\), TE\({}_{1}\) and TE\({}_{2}\) modes in the stem waveguide, respectively. Our device also exhibits a low crosstalk at the same wavelength, with values below -21.61 dB for TE\({}_{0}\), -28.94 dB for TE\({}_{1}\) and -21.11 dB for TE\({}_{2}\). By tuning the value of PS1 to \(3\pi/2\), the TE\({}_{3}\) mode (instead of the TE\({}_{1}\) mode) can be generated. 
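The figures of merit of Eqs. (5)-(7) can be computed directly from any simulated scattering matrix. The following sketch is illustrative only; the matrix `S` below is placeholder data standing in for a simulated response, not values from this device:

```python
import numpy as np

def mmi_figures_of_merit(S, i, j, k, phi_ideal):
    """Evaluate Eqs. (5)-(7) for input port i and output ports j, k.

    S is the complex scattering matrix at a single wavelength, indexed
    as S[output, input] with 0-based port numbers.
    """
    el = -10 * np.log10(np.sum(np.abs(S[:, i]) ** 2))                # EL, dB
    ib = 10 * np.log10(np.abs(S[j, i]) ** 2 / np.abs(S[k, i]) ** 2)  # IB, dB
    # np.angle returns values in (-pi, pi]; wrap further if needed
    pe = (np.angle(S[j, i] / S[k, i]) - phi_ideal) * 180 / np.pi     # PE, deg
    return el, ib, pe

# Placeholder 4x4 matrix standing in for simulated data (ideal -6 dB split):
S = np.full((4, 4), 0.5 * np.exp(-1j * np.pi))
print(mmi_figures_of_merit(S, i=0, j=0, k=1, phi_ideal=0.0))
```

Sweeping this evaluation over wavelength yields the spectral EL/IB/PE curves quoted in the text.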
In this case, insertion losses are below 0.75 dB, and the crosstalk is better than -28.79 dB, both at 1550 nm. These results corroborate the higher crosstalk for TE\({}_{2}\) mode operation, which leads to a slight ripple in the field distribution at port 4. Regarding performance across the spectrum, insertion losses lower than 1 dB are attained for a 55 nm bandwidth (1542 - 1597 nm), whereas the crosstalk is below -20 dB for a 60 nm bandwidth (1537 - 1597 nm), as shown with vertical lines in Fig. 5. These results prove the correct operation of the proposed architecture, but the overall bandwidth is significantly limited by the narrow spectral response of both the MMI and the PSs. ## 4 SWG performance enhancement To overcome these bandwidth limitations, we propose the MCMD with SWG metamaterials shown in Fig. 1(b). The design of each of the constituents of the SWG MCMD was performed by individual three-dimensional FDTD simulations. The three symmetric Y-junctions labeled Y1 maintain the same geometrical dimensions as those used for the conventional multiplexer for the arm and stem widths (see Table 3), but the arm length was shortened to \(L_{Y1}=2\)\(\mu\)m. Y-junction Y2 was slightly redesigned to reduce the crosstalk between TE\({}_{0}\) and TE\({}_{2}\) modes by increasing the length of the arms to \(L_{Y2}=40\)\(\mu\)m. A procedure similar to those already reported in [39, 40] was followed for the optimization of the 4\(\times\)4 SWG MMI. We restrict the value of the duty cycle (DC = \(a/\Lambda\)) to 0.5 in order to maximize the minimum feature size for a given period (\(\Lambda\)) [see Fig. 1(d)]. We then explored different periods and found that \(\Lambda=222\) nm significantly flattens the beat length across the spectrum. Compared to the conventional MMI section design, the width \(W_{SMMI}\) is increased by 0.8 \(\mu\)m but the length \(L_{SMMI}\) is reduced by more than half, to \(\sim\)41.3 \(\mu\)m. To increase the quality of the interferometric patterns formed in the MMI, the access width is \(W_{B}=1.7\)\(\mu\)m and the separation is reduced to \(W_{R}\) = 0.3 \(\mu\)m. Figure 5: Electric field amplitude \(|E|\) in the XY plane at the middle of the silicon layer for (a) TE\({}_{0}\), (b) TE\({}_{1}\), (c) TE\({}_{2}\) and (d) TE\({}_{3}\) mode multiplexing. Simulated transmittance to output port 4 as a function of the wavelength when TE\({}_{0}\) mode is launched into (e) input port 1, (f) input port 2 with PS1 = \(\pi/2\), (g) input port 3 and (h) input port 2 with PS1 = \(3\pi/2\). Vertical lines indicate the bandwidth where IL \(<\) 1 dB (55 nm) and XT \(<\) -20 dB (60 nm) are achieved for all modes simultaneously. The transition between the interconnection waveguides (\(W_{I}\) = 400 nm) and the access to the MMI section (\(W_{B}\)) is performed by means of adiabatic SWG tapers with a length \(L_{ST}\) = 13.32 \(\mu\)m. The performance of the 4\(\times\)4 SWG MMI is shown in Fig. 6(a)-(c). Owing to the symmetry of the structure, only the results obtained when injecting light into input ports 1 and 2 are depicted. It is observed that the device exhibits EL \(<\) 0.77 dB, IB \(<\pm\)1 dB and PE \(<\pm\)8.02\({}^{\circ}\) within a broad bandwidth of 200 nm (1.45 - 1.65 \(\mu\)m). To drastically extend the operating bandwidth of the nanophotonic phase shifters, we build upon the strategy we recently reported in [41] to develop the SWG phase shifters SPS1, SPS2 and SPS3. Notwithstanding, here we employ four parallel SWG waveguides of two different widths to implement SPS2 and SPS3. 
That is, each PS has three identical reference SWG waveguides with width \(W_{D}\), and one dissimilar SWG waveguide with width \(W_{R}\). Both the reference and dissimilar waveguides have a length of \(L_{SPS}\). Note that for SPS1 this configuration is not necessary, as only two MMI inputs are illuminated for TE\({}_{1}\) and TE\({}_{3}\) mode generation. Analogous to the 4\(\times\)4 SWG MMI, a flat phase shift can be achieved by judiciously selecting the SWG period and duty cycle. A duty cycle of 0.5 was fixed to maximize the minimum feature size, while a period of 200 nm resulted in minimum phase shift deviation. In order to induce \(\pi/4\), \(\pi/2\), and \(3\pi/4\) phase shifts, we selected respectively \(W_{D2}\) = 1.8 \(\upmu\)m, \(W_{R2}\) = 1.6 \(\upmu\)m, \(L_{SPS2}\) = 6.2 \(\upmu\)m and \(L_{ST2}\) = 3.0 \(\upmu\)m for SPS2; \(W_{D1}\) = 1.8 \(\upmu\)m, \(W_{R1}\) = 1.6 \(\upmu\)m, \(L_{SPS1}\) = 16.8 \(\upmu\)m and \(L_{ST1}\) = 3.0 \(\upmu\)m for SPS1; and \(W_{D3}\) = 1.8 \(\upmu\)m, \(W_{R3}\) = 1.6 \(\upmu\)m, \(L_{SPS3}\) = 28.2 \(\upmu\)m and \(L_{ST3}\) = 3.0 \(\upmu\)m for SPS3. The simulated phase shifts are shown in Fig. 6(d). Negligible deviations can be appreciated, with phase shift errors as small as 2.29\({}^{\circ}\) for SPS1, and 1.15\({}^{\circ}\) for SPS2 and SPS3 within the entire 1.45 - 1.65 \(\upmu\)m wavelength range. ## 5 SWG results The simulation of the entire MCMD is quite resource-intensive and time-consuming due to the device footprint and the need for a fine mesh to simulate SWG-based devices. Thus, instead of performing the full device simulation, we leverage the S-parameter matrices calculated during the design process and concatenate all of them using a circuit simulator to obtain the S-parameter matrix, and hence the spectral response, of the complete device. Figure 6: Simulated performance of the 4\(\times\)4 SWG MMI including (a) excess loss, (b) imbalance and (c) phase error between output ports. (d) Phase error of each SWG PS as a function of the wavelength. Figure 7: Simulated transmittance as a function of the wavelength of the MCMD with SWG metamaterials when TE\({}_{0}\) mode is launched into (a) input port 1, (b) input port 2 with SPS1 = \(\pi/2\), (c) input port 3 and (d) input port 2 with SPS1 = \(3\pi/2\). Vertical lines indicate the bandwidth where IL \(<\) 1 dB (183 nm) and XT \(<\) -20 dB (161 nm) are achieved for all modes simultaneously. The circuit simulator enables bidirectional signals to be accurately simulated, including the coupling of modes in the single elements. Figure 7 shows the overall transmittance of the SWG MCMD. Insertion losses (ILs) are lower than 0.37 dB, 0.47 dB and 0.37 dB for TE\({}_{0}\), TE\({}_{1}\) and TE\({}_{2}\) multiplexing, respectively, at the central wavelength of \(\lambda_{0}=\) 1550 nm. Moreover, a low crosstalk (XT) is achieved at the same wavelength, with values below -21.54 dB for TE\({}_{0}\), -32.89 dB for TE\({}_{1}\) and -21.24 dB for TE\({}_{2}\) multiplexing. When SPS1 takes the value of \(3\pi/2\), insertion losses for TE\({}_{3}\) multiplexing reach a low value of 0.47 dB at 1550 nm, while crosstalk values are lower than -39.48 dB at the same wavelength. This design also shows an excellent performance over a broad bandwidth (BW) of 200 nm, with insertion losses lower than 1.18 dB and crosstalk below -16.53 dB. Insertion losses decrease below 1 dB when the bandwidth is restricted to 183 nm (1450 - 1633 nm), whereas a crosstalk below -20 dB is achieved over a 161 nm bandwidth (1489 - 1650 nm). 
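Bandwidth figures such as the 183 nm and 161 nm ranges above can be extracted mechanically from the simulated spectra. A small helper sketch, assuming the worst-case IL/XT curves over all modes have been exported as placeholder arrays:

```python
import numpy as np

def operating_bandwidth(wl_nm, il_db, xt_db, il_max=1.0, xt_max=-20.0):
    """Contiguous wavelength range (nm) around 1550 nm where the worst-case
    insertion loss stays below il_max and crosstalk below xt_max.

    wl_nm, il_db, xt_db are 1-D arrays of placeholder spectral data, e.g.
    exported from the circuit-level simulation of the complete device.
    """
    ok = (il_db < il_max) & (xt_db < xt_max)
    i0 = int(np.argmin(np.abs(wl_nm - 1550.0)))
    if not ok[i0]:
        return None
    lo, hi = i0, i0
    while lo > 0 and ok[lo - 1]:
        lo -= 1
    while hi < len(ok) - 1 and ok[hi + 1]:
        hi += 1
    return float(wl_nm[lo]), float(wl_nm[hi])
```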
For the sake of comparison, Table 4 summarizes the performance of other three- and four-mode MCMDs based on MMI couplers that have been reported in the state of the art. To the best of our knowledge, this is the first time such low losses and crosstalk are achieved in an outstanding 161 nm wavelength range. ## 6 Conclusions In this work, we have proposed a novel architecture to scale the number of multiplexed modes of mode converters and multiplexers based on MMI couplers. Unlike other reported architectures that use unconventional 1\(\times\)4 Y-junctions or 1\(\times\)3 \(\Psi\)-junctions, here we employ symmetric 1\(\times\)2 Y-junctions arranged in a conventional cascaded configuration. The design methodology was proposed on the basis of a two-dimensional model with conventional homogeneous components (i.e., without patterning the silicon waveguide). The conventional mode converter and multiplexer features sub-decibel insertion loss and crosstalk better than -20 dB in the 1542 - 1597 nm wavelength range. Once the principle of operation was verified, we redesigned and optimized the mode converter and multiplexer by incorporating subwavelength grating metamaterials to leverage the additional degrees of freedom they introduce into the design. A broad design bandwidth of 161 nm for insertion losses below 1.18 dB and crosstalk lower than -20 dB was confirmed by 3D FDTD simulations, comparing very favorably to state-of-the-art three- and four-mode converters and multiplexers. The crosstalk between TE\({}_{0}\) and TE\({}_{2}\) modes could be further reduced by including optimized Y-junction geometries that mitigate the effect of the non-perfect tip at the junction [42-44]. We believe that our design strategy will open promising prospects for the development of high-performance mode converters and multiplexers based on MMI couplers with a high channel count. ## Credit authorship contribution statement **David Gonzalez-Andrade:** Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing - original draft, Visualization. **Irene Olivares:** Methodology, Software, Validation, Formal analysis, Data curation, Writing - review & editing. **Raquel Fernandez de Cabo:** Software, Validation, Data curation, Writing - review & editing. **Jaime Vilas:** Writing - review & editing. **Antonio Dias:** Resources, Writing - review & editing, Project administration, Funding acquisition. **Aitor V. Velasco:** Resources, Writing - review & editing, Supervision, Project administration, Funding acquisition. ## Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements This work has been funded in part by the Spanish Ministry of Science and Innovation (MICINN) under grants RTI2018-097957-B-C33 and PID2020-115353RAI00; the Spanish State Research Agency (MCIN/AEI/10.13039/501100011033); the Community of Madrid - FEDER funds (S2018/NMT-4326); the European Union - NextGenerationEU through the Recovery, Transformation and Resilience Plan (DIN2020-011488, PTQ2021-011974); and the European Union's Horizon Europe research and innovation program under the Marie Sklodowska-Curie grant agreement No. 101062518.
2305.12835
Open-Domain Event Graph Induction for Mitigating Framing Bias
Researchers have proposed various information extraction (IE) techniques to convert news articles into structured knowledge for news understanding. However, none of the existing methods have explicitly addressed the issue of framing bias that is inherent in news articles. We argue that studying and identifying framing bias is a crucial step towards trustworthy event understanding. We propose a novel task, neutral event graph induction, to address this problem. An event graph is a network of events and their temporal relations. Our task aims to induce such structural knowledge with minimal framing bias in an open domain. We propose a three-step framework to induce a neutral event graph from multiple input sources. The process starts by inducing an event graph from each input source, then merging them into one merged event graph, and lastly using a Graph Convolutional Network to remove event nodes with biased connotations. We demonstrate the effectiveness of our framework through the use of graph prediction metrics and bias-focused metrics.
Siyi Liu, Hongming Zhang, Hongwei Wang, Kaiqiang Song, Dan Roth, Dong Yu
2023-05-22T08:57:42Z
http://arxiv.org/abs/2305.12835v1
# Open-Domain Event Graph Induction for Mitigating Framing Bias ###### Abstract Researchers have proposed various information extraction (IE) techniques to convert news articles into structured knowledge for news understanding. However, none of the existing methods have explicitly addressed the issue of _framing bias_ that is inherent in news articles. We argue that studying and identifying framing bias is a crucial step towards trustworthy event understanding. We propose a novel task, _neutral event graph induction_, to address this problem. An event graph is a network of events and their temporal relations. Our task aims to induce such structural knowledge with minimal framing bias in an open domain. We propose a three-step framework to induce a neutral event graph from multiple input sources. The process starts by inducing an event graph from each input source, then merging them into one merged event graph, and lastly using a Graph Convolutional Network to remove event nodes with biased connotations. We demonstrate the effectiveness of our framework through the use of graph prediction metrics and bias-focused metrics1. Footnote 1: Our code and data will be available at [https://github.com/liusiyi641/Neutral-Event-Graph](https://github.com/liusiyi641/Neutral-Event-Graph) ## 1 Introduction _News editorials_ are a form of persuasive text that express the opinions of an editor on a controversial topic. Different authors, with varying political or social beliefs, may present the same topic with distinct events, despite being based on the same set of facts. For instance, in the case of "Uvalde shooting" (Figure 1), a right-leaning source (top) may focus on events such as "Top Republicans resist gun control", while a left-leaning source (bottom) may highlight events like "calls for gun control grow". This phenomenon is known as _framing bias_(Goffman, 1974; Entman, 1993; Groeling, 2013), where authors can either describe the same event with different linguistic attributes and sentiments (lexical bias), or deliberately include or exclude certain events to promote a particular interpretation (informational bias) Entman (1993); Fan et al. (2019); Lee et al. (2022). Researchers have proposed various information extraction (IE) methods for understanding news article, for example, encoding events into _sequences_(Schank and Abelson, 1977; Chambers and Jurafsky, 2008, 2009; Mostafazadeh et al., 2016; Jans et al., 2012) or _graphs_(Wanzare et al., 2016; Li et al., 2020, 2021) based on their temporal, causal, and narrative orders. However, none of these methods have specifically addressed the issue of framing bias or attempted to alleviate this problem. To address this, we propose a new event representation called _neutral event graph_. A neutral event graph is a network of events (nodes) and their relations (edges) that aim to induce structural knowledge with minimal framing bias. It is noteworthy that our formulation of event graph is distinct from previous graph-based representations (Wanzare et al., Figure 1: An example of framing bias in news articles. While both news articles are discussing the same event “Uvalde shooting”, a right-leaning article (top) may argue for increased presence of armed teachers, while a left-leaning article (bottom) may advocate for stricter gun control measures. This illustrates how authors with different political or social beliefs can present the same topic with disparate viewpoints, despite being based on the same set of facts. 
2016; Li et al., 2020, 2021) in that we do not require a predefined ontology for event types, unlike previous work Li et al. (2020) which uses event types to represent nodes. This is because our approach is tailored for the open-domain scenario. We propose a three-stage approach for neutral event graph induction: (1) **Event graph induction for a single news article**. We induce an event graph from each individual news article by using a pretrained salience allocation model to extract the most salient sentences, which are treated as atom events. The temporal orders among these events are then calculated using an event temporal relation model, resulting in an event graph. (2) **Event graph merging**. We match event nodes across event graphs using a cross-document event coreference method. This involves merging event graphs on the same side into one graph, followed by further merging these representative graphs into the final neutral graph. A sentence neutralizer is used to rewrite event descriptions when merging coreferential events from different sides, which removes any biased linguistic attributes (i.e., the lexical bias). (3) **Framing bias pruning**. We use graph neural networks to train a binary node classification model that decides which nodes should be removed from the final neutral graph. This helps to alleviate informational bias by removing events that are deliberately included to promote certain ideologies. We have remolded an existing news dataset to facilitate the induction of neutral event graphs. Our comprehensive experiments have demonstrated the effectiveness of our framework through the use of both graph prediction metrics and evaluations that focus on bias. Our contributions in this paper are as follows: * This is the first study that examines framing bias in event representations and highlights the importance of addressing such bias in event graph induction. * We have proposed a novel method for inducing event graphs, called "neutral event graph", which focuses on extracting structured, unbiased knowledge from multiple input documents. * We have developed a three-stage framework and novel evaluation schemes to demonstrate the effectiveness of our approach in reducing framing bias. ## 2 Related Work ### Framing Bias Framing is a subtle form of media manipulation in which some aspects of an issue are highlighted to promote a particular interpretation (Goffman, 1974; Entman, 1993). In a polarized media environment, journalists make biased decisions regarding which events to cover (informational bias) and how to cover them (lexical bias) to advance certain political agendas Gentzkow and Shapiro (2006); Jamieson et al. (2007); Levendusky (2013); Fan et al. (2019). In natural language processing (NLP), most previous efforts concerning framing bias focus on automatic bias detection and on mitigating it in downstream applications (e.g., summarization) (Recasens et al., 2013; Yano et al., 2010; Morstatter et al., 2018; Liu et al., 2019; Fan et al., 2019; Lee et al., 2022; Hamborg et al., 2019). Our proposed task and pipeline focus on mitigating framing bias in event graph induction. Event graph induction methods induce structured knowledge from news articles. However, these news articles can be politically skewed towards certain ideologies or stances, resulting in biased knowledge if not properly mitigated. Our work is the first study that argues for the need to alleviate framing bias in event graph induction and proposes a solution. 
### Event Schema Induction Previous efforts in event schema induction focus on set-based, chain-based, and graph-based representations. The set-based methods represent schema as a set where each component is an event trigger (Chambers, 2013; Nguyen et al., 2015; Huang and Ji, 2020; Yuan et al., 2018; Cheung et al., 2013; Shen et al., 2021). These methods do not model the relations among events. Chain-based representations encode a structured knowledge of prototypical real-life event sequences (Schank and Abelson, 1977; Chambers and Jurafsky, 2008, 2009; Pichotta and Mooney, 2016; Rudinger et al., 2015). These representations order the events within a sequence by their event-event relations. Another line of work represents event schemas as graphs (Modi et al., 2016; Wanzare et al., 2016; Li et al., 2021, 2020; Jin et al., 2022; Weber et al., 2020). These methods encode global dependencies among atomic events in an entire graph. Our proposed representation also follows a graph structure. However, we distinguish our graph from the previous graph-based representations in the following aspects: (a). We focus on an open-domain setting, where an ontology for _event types_ is undefined. (b). We represent each node as an atomic event itself and connect them with event-event relations, in contrast to _event types_ as nodes and entity-entity relations as edges. Our motivation and objective are also different from previous work. We aim at constructing a neutral and global knowledge of a topic by learning from different-sided sources, whereas previous graph-based schemas encode stereotypical structures of events and their connections Li et al. (2020, 2021). ## 3 Our Approach ### Problem Formulation Suppose that we have a set of news articles \(\mathcal{A}=\{A_{1},A_{2},\cdots\}\) discussing a specific topic or event. These articles are collected from different media sources and may contain incomplete or biased information. In this work, we focus on the special case of politics, where a news article can be either left-leaning or right-leaning. Therefore, we use \(\mathcal{A}=\{A_{l_{1}},A_{l_{2}},\cdots,A_{r_{1}},A_{r_{2}},\cdots\}\) to represent the set of news articles, where \(A_{l_{i}}\)/\(A_{r_{i}}\) is the \(i\)-th article on the left/right side. Our goal is to construct a _neutral event graph_, \(G_{neutral}\), which covers all the information conveyed in \(\mathcal{A}\), while eliminating any framing bias, including lexical bias and informational bias. In \(G_{neutral}\), nodes represent atom events and edges represent temporal relations between them. As we are working in the open-domain scenario, we do not predefine an ontology for the nodes, but instead represent them using sentences or phrases, which provides richer linguistic features for describing complex events. ### Single Document Event Graph Induction The first step of our proposed method is to construct an event graph for a news article. To do this, we first extract salient events from the input Figure 2: An example of our neutral event graph induction framework. Given two (or more) articles on a topic (Gun control after Uvalde Shooting), we induce an event graph for each article, merge them into a single merged event graph by identifying coreferential event nodes, and use a Graph Convolutional Network to remove biased event nodes and produce a neutral event graph. In this example, the left and right event graphs are induced from a left-leaning and right-leaning news article respectively. 
The orange, blue, and green nodes represent coreferential nodes and are re-written using our sentence neutralizer to remove lexical bias. The grey nodes represent events that authors deliberately included to sway readers’ opinions (informational bias) and are removed by our GCN. The directed edges represent temporal orders. article. Traditional event extraction methods rely on human-labeled event types as the supervision for their models. However, in the open-domain setting, we don't have any predefined ontology for event types. To overcome this challenge, we use a salience prediction model to determine the salience of events. Specifically, given an input news article, we first use SeasonWang et al. (2022), a transformer-based abstractive summarization model for identifying the most salient sentences in the new article. Then select the top-\(k\) sentences with the highest salience scores are served as the atom events. Inspired by Shen et al. (2021), a dependency parser is used to extract all subject-verb-object (SVO) triplets for the selected sentences. In addition, we use an off-the-shelf temporal relation prediction tool Wang et al. (2020) to predict the temporal relation between atom events. It takes the extracted SVO triplets as input and predicts the temporal relations between events as directed edges in the event graph. Finally, we convert the graph into a directed acyclic graph (DAG) by repeatedly removing the edge with the lowest confidence score until there is no cycle left in the graph. ### Event Graph Merging We have constructed the set of event graphs \(\mathcal{G}=\{G_{l_{1}},G_{l_{2}},\cdots,G_{r_{1}},G_{r_{2}},\cdots\}\) from input articles \(\mathcal{A}=\{A_{l_{1}},A_{l_{2}},\cdots,A_{r_{1}},A_{r_{2}},\cdots\}\). The next step is to merge these event graphs together. We first merge the graphs on the same side into two intermediate graphs: \(G_{left}\) and \(G_{right}\). To accomplish this, we use an event coreference detection tool Yu et al. (2020) to match the event nodes in two graphs. Specifically, to merge two graphs \(G_{l_{i}}\) and \(G_{l_{j}}\), we calculate the matching score of each node \(v\in G_{l_{i}}\) with all nodes in \(G_{l_{j}}\), and select the node with the highest score as the coreferential node for \(v\) if the score exceeds a predefined threshold. We then randomly choose one of the coreferential nodes to represent the merged node. In this way, \(G_{l_{i}}\) and \(G_{l_{j}}\) can be merged into a single graph, which is then converted into a DAG using the same post-processing method described in Section 3.2. The above step is repeated until we obtain a single graph \(G_{left}\). Similarly, \(G_{right}\) can also be obtained using the same process. The final step is to merge \(G_{left}\) and \(G_{right}\) into \(G\). This process is similar to the merging procedure described earlier, with the exception of using a sentence-level neutralizer when merging two coreferential nodes. Specifically, the neutralizer takes the coreferential nodes as input and generate a less biased sentence as the merged node. This is necessary because nodes from different sides do not typically share similar linguistic attributes, even if they are coreferential. Therefore, a pretrained neutralizer is used to rewrite their content into a neural sentence. ### Framing Bias Pruning The graph \(G\) is obtained by merging \(G_{left}\) and \(G_{right}\) together. 
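As a brief aside before describing the pruning step: the cycle-removal post-processing used in Sections 3.2 and 3.3 is simple enough to sketch concretely. The networkx-based illustration below is one reasonable reading of the procedure (remove the lowest-confidence edge on a detected cycle until the graph is acyclic); the edge attribute name is our choice, not the paper's:

```python
import networkx as nx

def to_dag(g: nx.DiGraph) -> nx.DiGraph:
    """Drop edges until the event graph is acyclic (Sections 3.2-3.3).

    Each edge is assumed to carry a 'confidence' attribute from the
    temporal relation model.
    """
    g = g.copy()
    while True:
        try:
            cycle = nx.find_cycle(g)  # raises NetworkXNoCycle when acyclic
        except nx.NetworkXNoCycle:
            return g
        # remove the weakest edge on the detected cycle
        u, v = min(cycle, key=lambda e: g.edges[e[0], e[1]]["confidence"])[:2]
        g.remove_edge(u, v)
```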
However, the resulting merged graph is often larger than a typical event graph and contains a significant proportion of nodes that are not relevant to the main aspect of the complex event. Thus, it is necessary to prune the merged graph and remove these unimportant nodes in order to effectively analyze the key aspect of the complex event. One potential solution to this issue is to train a binary node classification model on the merged graph \(G\), which can distinguish between important and unimportant nodes. However, there is a lack of ground truth node labels to train this model. As a solution, we realize that in addition to collecting left-leaning and right-leaning articles on the same topic, we can also gather central-leaning articles (more information can be found in Section 4.1). These central-leaning articles can serve as pseudo ground truth for the pruning procedure. Specifically, we process central-leaning news articles and construct a graph, \(G_{central}\), using the same method described in Sections 3.2 and 3.3. We then calculate the exact node matching between \(G\) and \(G_{central}\) by maximizing the following objective: \[\max\sum_{v_{i}\in G,v_{j}\in G_{central}}sim(v_{i},v_{j})\cdot A_{ij} \tag{1}\] \[s.t.\ A_{ij}\in\{0,1\},\ \text{for all }i\text{ and }j,\] \[\sum_{i}A_{ij}\leq 1,\ \ \text{for all }j,\] \[\sum_{j}A_{ij}\leq 1,\ \ \text{for all }i,\] where \(sim(v_{i},v_{j})\) is the cosine similarity between the embeddings of \(v_{i}\) and \(v_{j}\)'s content. We solve this problem by greedily selecting the pair of nodes \((v_{i},v_{j})\) with the highest similarity score from the two graphs, as long as they have not been matched to any node yet. The nodes \(v_{i}\) and \(v_{j}\) are then matched if their similarity score exceeds a predefined threshold. This procedure is repeated until either one of the graphs is fully matched or the remaining similarity scores fall below the threshold. Through this method, the matched nodes in \(G\) can be considered as important, while the unmatched nodes can be viewed as unimportant. These labels can then be used to train the binary node classification model. We use a Graph Convolutional Network (GCN) as the implementation of the binary node classification model. A GCN learns the representation of each node by aggregating the representations of its neighbors Kipf and Welling (2016). Here we employ a conventional 2-layer GCN structure: \[Y=\operatorname{softmax}(\hat{A}\operatorname{ReLU}(\hat{A}XW^{(0)})W^{(1)}), \tag{2}\] where \(X\) is the initial node embeddings of \(G\), \(\hat{A}\) is the normalized adjacency matrix of \(G\), \(W^{(0)}\) and \(W^{(1)}\) are the weight matrices of the GCN, and \(Y\) contains the predicted labels indicating whether a node should be kept in the graph or not. ## 4 Experiments ### Dataset Currently, there are no existing news datasets that provide supervision for sources from different sides of a topic. To address this, we have created our own dataset by extending a multi-document summarization corpus. The dataset, called NeuS, contains 3,564 triplets of news articles from Allsides.com Lee et al. (2022). Each triplet includes three news articles from left, right, and center-leaning American publishers on the same event. The dataset is in English and primarily focuses on U.S. political topics. An example triplet is illustrated in Figure 3. We extract the text contents of each news article of every triplet from NeuS using the article links provided by Lee et al. 
(2022).2 Due to stale and broken links, this results in 1,766 valid triplets of news articles. For each triplet of news, we induce an event graph from the center news article following the same protocol as in Section 3.2. We can then consider this center event graph as our target graph and train a system to induce it from a pair of left- and right-leaning news articles on the same issue. It's worth noting that the term "center" in this context does not imply being completely free of framing bias. "Center" news outlets are usually less tied to a particular political ideology, which means they are less likely to frame the article in a particular way to promote certain political interpretations. However, their reports may still contain framing bias, because editorial judgement naturally leads to human-induced biases Lee et al. (2022). Footnote 2: We use the NewsPaper3k library ([https://newspaper.readthedocs.io/en/latest/](https://newspaper.readthedocs.io/en/latest/)) to extract the text contents. ### Baselines Left and Right Event Graphs. The Left and Right baselines refer to the event graphs, \(G_{left}\) and \(G_{right}\), which are induced from the left-leaning and right-leaning articles, respectively, following the induction and merging process outlined in Sections 3.2 and 3.3. Salience Ranking Model. Our first baseline is a salience-based event induction model. Specifically, we concatenate the input articles into one, and extract events based on a salience metric. We then induce an event graph using the same temporal prediction tool. We adopt the method used in Shen et al. (2021) and compute the salience score of a word (a predicate lemma or an object head) as follows: \[\textit{Salience}(w)=(1+\log freq(w))^{2}\log\frac{N}{bsf(w)},\] where \(freq(w)\) is the frequency of the word \(w\), \(N\) is the number of sentences in a background corpus, and \(bsf(w)\) is the background sentence frequency of the word \(w\). The concept is similar to TF-IDF, and we use the English Wikipedia 20171201 dump as our background corpus, as done in Shen et al. (2021). Figure 3: An example triplet from Allsides.com. Event Instance Graph Model. Li et al. (2021) propose an auto-regressive graph generation model that learns to generate the next event type node with its argument. However, in our setting, we don't have any information on the event type and entity level, so we only adapt their procedure of constructing _event instance graphs_ to our setting. Specifically, we extract the events and their temporal relations for each input article on a topic, and construct one _event instance graph_ for the topic by merging all coreferential event nodes. Li et al. (2021) consider the isolated nodes as irrelevant and exclude them in the instance graph, whereas we experiment with both including and excluding these isolated nodes in our study. ### Experimental Details We divide the dataset into train/val/test splits, with 70% of the data being used for training, 10% for validation, and 20% for testing. This results in 1,236 instances for training, 176 instances for validation, and 354 instances for testing. Dependency Parser. We follow the method used in Shen et al. (2021) and use the Spacy _en_core_web_lg_ tool as our dependency parser for extracting subject-verb-object (SVO) triplets. Salience Allocation. We use the model Season to predict the salience score for each sentence in an article Wang et al. (2022). 
Season is a transformer-based abstractive summarization model that incorporates salience prediction and text summarization into a single network. During training, the model jointly learns to predict the degree of salience for each sentence and is guided by ROUGE-based ground-truth salience allocation to generate the abstractive summary. Season was trained on the CNN/DM dataset See et al. (2017) and achieved 43.08 RougeL performance within the domain. Event Temporal Relation.Wang et al. (2020) proposed a joint-constrained learning framework for predicting event-event relations. We adopt the same framework and use a model trained on the MATRES dataset Ning et al. (2018) as our event temporal relation prediction model. The model achieved 78.8 F1 in the MATRES dataset. Event Coreference.PairwiseRL is a pair-wise representation learning scheme for event mention pairs Yu et al. (2020). We use a model trained on ECB+ Cybulska and Vossen (2014), a cross-document event coreference dataset, to identify coreferential events nodes across documents. Sentence Neutralizer.We fine-tune a pretrained BART model Lewis et al. (2019) using the titles in the NeuS dataset as our sentence neutralizer. The goal of the model is to generate a neutral sentence (the center article's title) given the left and right articles' titles. We chose to use the titles of the news articles as our training data because (1) the title is in the same domain, (2) the title is roughly the same length as an event sentence, and (3) using the titles will not contaminate our evaluation as the titles are excluded from the articles' contents. We trained a bart-large model with 12 encoder and decoder layers for 6 epochs with a learning rate of 1e-7. It achieved 32.96 RougeL score on validation (10%). Graph Convolutional Network.We train a 2-layer Graph Convolutional Network (GCN) following the method of Kipf and Welling (2016) as our node classification model to decide whether to remove an event node. The node feature representations for each event are initialized using SimCSE sentence embeddings with a dimension of 768 Gao et al. (2021). We train it for 10 epochs with a learning rate of 1e-4. ### Evaluation Metrics We propose three metrics to evaluate our framework. The first two metrics evaluate the generated neutral event graph by computing the distance between it and the target center event graph. The last metric evaluates the degree of bias of the neutral event graph in relation to the target center graph. #### 4.4.1 Graph Distance Metrics We evaluate the quality of the generated event graphs by measuring their distance to the center event graphs, which are considered as the target graphs with very little framing bias given their center-leaning ideologies. We propose two distance metrics to compare the generated event graph with the target center graph. One key challenge here is to define a distance metric between two event nodes. Most previous studies determine if two events match by simply comparing whether they have the same event types Li et al. (2020, 2021); Jin et al., 2022). However, in our open-domain setting, we don't have a predefined ontology for event types, so we make use of pre-trained sentence embeddings as our method to compare events. Specifically, for a pair of event sentences, we embed them using SimCSE Gao et al. (2021) and decide that these two events match if the cosine similarity between their sentence embeddings is above 0.5. 
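In other words, event matching reduces to a thresholded cosine similarity between sentence embeddings. A minimal version of this check, assuming the SimCSE embeddings have already been computed by some encoder, looks like:

```python
import numpy as np

def events_match(emb_a, emb_b, thresh=0.5):
    """Thresholded cosine similarity between two event sentence embeddings
    (e.g., produced by SimCSE), as used for matching in Section 4.4."""
    emb_a, emb_b = np.asarray(emb_a), np.asarray(emb_b)
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return cos > thresh
```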
Node-Level Metric. We compute the pair-wise cosine similarities between all the event nodes in our generated neutral graph and the target center graph. We then greedily take the pair of event nodes with the highest similarity score, count it as a true positive, and remove them from both graphs. We iterate this process until there is no pair of events that has a similarity score higher than 0.5 or we run out of nodes in either graph. We then consider the rest of the nodes left in the predicted neutral graph and the target center graph as false positives and false negatives, respectively. Finally, we compute precision, recall, and F1 scores based on these numbers as our final node-level metrics. Edge-Level Metric. We propose a stricter metric that evaluates the similarity of edges between two graphs. Similar to the node-level metric, we match edges instead of nodes. For each edge \((u_{p},v_{p})\) in the predicted graph and each edge \((u_{t},v_{t})\) in the target graph, we calculate the similarity as \(Similarity=(Sim(u_{p},u_{t})+Sim(v_{p},v_{t}))/2\). We consider an edge to be a true positive if its similarity is above 0.5, and iterate until no valid matches can be found. The edge-level precision, recall, and F1 are then calculated in the same way as for the node-level metric. #### 4.4.2 Bias Metric We suggest a lexicon-based polarity metric to evaluate the degree of framing bias (more specifically, lexical bias) of our generated neutral event graph, following a procedure similar to Lee et al. (2022). The Valence-Arousal-Dominance (VAD) dataset Mohammad (2018) has a large list of lexicons annotated with their valence (v), arousal (a) and dominance (d) scores. Valence, arousal, and dominance represent the direction of polarity (positive, negative), the strength of the polarity (active, passive), and the level of control (powerful, weak), respectively. We use the valence score to determine whether a token is biased, and the arousal score to determine how biased the token is. Specifically, for each event \(u_{i}=[t_{1},t_{2},...,t_{n}]\), \(u_{i}\in V_{neutral}\) from our generated neutral event graph \(G_{neutral}\), we extract every token \(t_{k}\) and add it to a set \(S_{neutral}\). We do the same for each event \(v_{j}\in V_{center}\) of the center graph \(G_{center}\) and get a set of tokens \(S_{center}\). We then filter out all tokens in \(S_{neutral}\) that are present in \(S_{center}\), such that \[S_{neutral}^{*}\subseteq S_{neutral}\] and \[S_{neutral}^{*}\cap S_{center}=\emptyset.\] This ensures that we are measuring the relative polarity of \(V_{neutral}\) in reference to the target \(V_{center}\). Then, for each token in \(S_{neutral}^{*}\), we select tokens with either positive valence (v > 0.65) or negative valence (v < 0.35) to eliminate neutral words (e.g., stopwords and non-emotion-provoking words). We then sum the arousal scores for the identified positive and negative tokens and get \(\text{Arousal}_{pos}\) and \(\text{Arousal}_{neg}\) for the set \(S_{neutral}^{*}\). We average these arousal scores across the test set as our final metric. This metric approximates the existence of lexical bias by quantifying how arousing and sensational the events are in our generated neutral event graph. 
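The greedy matching behind the node-level metric (and, with Eq. (1), behind the pruning-label construction in the Framing Bias Pruning step) can be sketched as follows, assuming unit-normalised embeddings so that dot products are cosine similarities:

```python
import numpy as np

def node_level_prf(pred_emb, target_emb, thresh=0.5):
    """Greedy node-level metric of Section 4.4.1.

    pred_emb / target_emb: lists of unit-normalised node embeddings.
    Pairs are matched greedily by highest similarity until no remaining
    pair exceeds `thresh`.
    """
    sim = np.array([[p @ t for t in target_emb] for p in pred_emb])
    tp = 0
    while sim.size and sim.max() > thresh:
        r, c = np.unravel_index(np.argmax(sim), sim.shape)
        tp += 1
        sim = np.delete(np.delete(sim, r, axis=0), c, axis=1)
    fp, fn = len(pred_emb) - tp, len(target_emb) - tp  # leftovers
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```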
\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Node-level} & \multicolumn{3}{c}{Edge-level} \\ & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline Left & 49.31 & 53.64 & 48.57 & 35.87 & 40.10 & 31.72 \\ Right & 54.38 & 51.53 & 50.35 & 39.54 & 35.23 & 32.53 \\ \hline Salience Ranking & 31.26 & 37.98 & 32.09 & 20.11 & 31.13 & 19.37 \\ Event Instance Graph & 62.78 & 28.40 & 33.22 & 34.63 & 30.22 & 27.56 \\ w/ isolated nodes & 38.98 & 54.88 & 42.13 & 30.01 & 49.19 & 31.69 \\ \hline Ours & **64.30** & **57.97** & **59.08** & **51.31** & **52.79** & **48.47** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results with graph distance metrics. ### Results and Discussion Table 1 demonstrates our results evaluated using graph distance metrics. We see that the neutral event graph generated using our framework improves from the baselines by large margins. We also notice that Event Instance Graph model generates a graph with high node-level precision but low recall. This is because it only keeps the coreferential nodes and excludes other isolated nodes. The coreferential nodes are more likely to be present in \(G_{center}\). But excluding all isolated nodes will fail to preserve some isolated nodes that are also salient to \(G_{center}\), for instance the white node in Figure 2, and therefore leads to a low recall. Similar intuitions can be found with the Event Instance Graph that includes all isolated nodes. It leads to a high recall but low precision. Our framework, on the other hand, alleviates this problem by learning a GCN to predict whether a node should be removed or not. This flexibility provides us gain in both precision and recall. Besides, our framework uses a GCN to learn the structure of the graphs, which offers us substantial gains in edge-level metrics. We present our results with the bias-level metric in table 2. We observe that using our framework, we can mitigate the lexical bias of our generated event graphs from the original left-leaning and right-leaning event graphs. It shows that the event graph induced by our framework contains less events with sensational attributes that are used to promote certain perceptions. Besides, we show that our sentence neutralizer can help remove the arousing linguistic features from their input events by conducting an ablation experiment that excludes the neutralizer in our merging step. We also experiment with different neural network structures in our pruning stage to examine the effect of GCN in our framework. We discover that training a multi-layer perceptron (MLP) on the node features (without any graph-level information) can do fairly well at the task evaluated by both node-level and edge-level metrics. Graph Attention Networks (GAT), on the other hand, performs well under the node F1 with an attention mechanism on the node features, but fails to maintain a good graph structure as shown by the lower edge F1 score. The GCN model exceeds both the other two models by learning a convolution over neighbourhoods. All of the models here are trained under the same settings as described in 4.3 with the same random seed. ## 5 Conclusion In this work, we propose a novel task called "neutral event graph induction" which aims to create a "neutral event graph" that has minimal framing bias from multiple input articles. We present a three-step process to achieve this. 
The process starts by inducing event graphs from each input document, then merging them based on their stances and finally pruning the merged graph by eliminating biased event nodes. Our experiments demonstrate the effectiveness of our framework using both graph distance metrics and framing bias metrics. In the future, we plan to expand our framework and experiments to a more diverse setting, covering different topics, stances, and domains. ## 6 Limitations One limitation of our task's setup is that there may be events that are significant to center-leaning articles, but are not covered or discussed in both left and right-leaning articles. This creates an information bottleneck for the task, which could be potentially addressed by including more left and right-leaning input articles. Another limitation is that the central-leaning articles do not necessarily imply a complete lack of framing bias. "Central" news outlets are usually less closely associated with a particular political ideology, but their reports may still contain framing bias due to the inherent biases that result from editorial judgment. \begin{table} \begin{tabular}{l c c} \hline \hline Graph & \(\text{Arousal}_{pos}\) & \(\text{Arousal}_{neg}\) \\ \hline Left & 10.97 & 6.31 \\ Right & 8.96 & 5.20 \\ \hline Ours & **6.12** & **3.60** \\ w/o Neutralizer & 7.75 & 4.49 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results with bias metric. The lower the arousal score, the less biased a graph is. w/o Neutralizer means to replace the sentence neutralizer in our framework with a random choice from the input events. \begin{table} \begin{tabular}{l c c} \hline \hline Pruning Model & Node F1 & Edge F1 \\ \hline MLP & 56.72 & 44.67 \\ GAT & 58.44 & 42.90 \\ \hline GCN & **59.08** & **48.47** \\ \hline \hline \end{tabular} \end{table} Table 3: Graph distance metric results of using different neural networks as our pruning model. Ethical Considerations The articles in our dataset are written by professional journalists for the purpose of promoting political interpretations. However, in rare cases, the authors may use extremely sensational language or inappropriate and aggressive language for the purpose of political propaganda. ## 8 Impact Framing bias is inherent in news articles, particularly in editorials on controversial topics. Our aim is to bring attention to the recognition and mitigation of framing bias not only in news articles but also in downstream applications such as event graph induction. We hope that our research can serve as a valuable resource for other researchers in this field.
2310.08403
Vault: Decentralized Storage Made Durable
The lack of centralized control, combined with highly dynamic adversarial behaviors, makes data durability a challenge in decentralized storage systems. In this work, we introduce a new storage system, Vault, that offers strong data durability guarantees in a fully decentralized, permissionless setting. Vault leverages the rateless property of erasure codes to encode each data object into an infinite stream of encoding fragments. To ensure durability in the presence of dynamic Byzantine behaviors and targeted attacks, an infinite sequence of storage nodes is randomly selected to store encoding fragments. Encoding generation and candidate selection are fully decentralized: when necessary, Vault nodes use a gossip protocol and a publicly verifiable selection proof to determine new fragments. Simulations and large-scale EC2 experiments demonstrate that Vault provides close-to-ideal mean-time-to-data-loss (MTTDL) with low storage redundancy, scales to more than 10,000 nodes, and attains performance comparable to IPFS.
Guangda Sun, Michael Hu Yiqing, Arun Fu, Akasha Zhu, Jialin Li
2023-10-12T15:12:37Z
http://arxiv.org/abs/2310.08403v1
# VAULT: Decentralized Storage Made Durable ###### Abstract. The lack of centralized control, combined with highly dynamic adversarial behaviors, makes data durability a challenge in decentralized storage systems. In this work, we introduce a new storage system, Vault, that offers strong data durability guarantees in a fully decentralized, permissionless setting. Vault leverages the rateless property of erasure code to encode each data object into an infinite stream of encoding fragments. To ensure durability in the presence of dynamic Byzantine behaviors and targeted attacks, an infinite sequence of storage nodes are _randomly_ selected to store encoding fragments. Encoding generation and candidate selection are fully decentralized: When necessary, Vault nodes use a gossip protocol and a publically verifiable selection proof to determine new fragments. Simulations and large-scale EC2 experiments demonstrate that Vault provides close-to-ideal mean-time-to-data-loss (MTTDL) with low storage redundancy, scales to more than 10,000 nodes, and attains performance comparable to IPFS. ## 1. Introduction We are witnessing a technological paradigm shift to move away from service centralization, a hallmark of the cloud computing and web services industries in the past two decades. This _decentralization_ movement is catalyzed by the growing concern over content censorship (Wammer et al., 2017), data misuse (Krishnan et al., 2017), monopolistic practices (Krishnan et al., 2017), and single point of organization failure (Krishnan et al., 2017). Following the success of the Bitcoin (Bilman et al., 2017) and Ethereum (Ethereum, 2017) blockchains, a burgeoning of decentralized services are being deployed, including cryptocurrency exchanges (Krishnan et al., 2017; Krishnan et al., 2017), content delivery network (Krishnan et al., 2017), domain name service (Krishnan et al., 2017), messaging (Krishnan et al., 2017), and computation (Krishnan et al., 2017). An attractive avenue for decentralization is data storage (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Compared to their centralized counterparts (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), decentralized storage is better positioned to address the issues of data breaches, censorship, content availability, and high storage cost. We have already seen successful large deployment of decentralized storage systems: The InterPlanetary File System (IPFS) (Krishnan et al., 2017) has over 300 thousand active participating peers globally, collectively storing more than 1000 PB of data. A critical requirement for decentralized storage systems is _data durability_, particularly for deployments that store data which represents high-value assets. Ethereum, for instance, holds account records that are worth more than $233 billion in total (Krishnan et al., 2017). Non-fungible tokens (NFT) listed on blockchains such as Solana (Solana, 2017), Polygon (Polygon, 2017), and Arbitrum (Arbitram, 2017) are already worth tens of billions of dollars (Wammer et al., 2017) and quickly growing. These high-value data cannot be stored on centralized systems due to anti-censorship requirements; neither can they be stored on best-effort P2P solutions like BitTorrent (Birgit, 2017) and IPFS, as any data loss will have high financial implications. Decentralization presents fundamental challenges to the durability guarantees of a storage system. 
To handle data loss caused by failures, redundancy-based approaches such as replication (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) and erasure coding (Krishnan et al., 2017) are commonly used. These solutions work well in a centralized deployment, where operators have full control over the infrastructure, failure domains, and placement groups with minimum correlated failures. A decentralized system, on the contrary, has no central authority to manage the behavior and trustworthiness of the participants; nodes join and leave the system at high churn rate; there exists high variance in node performance, reliability, and network conditions; adversarial behaviors are prevalent. In such open and asynchronous environment, traditional redundancy schemes lose their effectiveness in preventing data losses. Recently, more than 1600 TiB of data on the Filecoin (Krishnan et al., 2017) network lost all replica copies simultaneously (Krishnan et al., 2017). In this work, we propose a new decentralized object storage system, Vault, to address this durability challenge. Vault uses _rateless erasure code_ to generate a virtual stream of encoding chunks for each object. When storing an object, the client uses its private key to materialize the infinite stream into a finite set of randomly selected chunks. Critically, the mapping between the materialized chunks and the original object is _opaque_ to all participants except the client. With sufficient objects stored in the system, adversaries have negligible probability to attack enough materialized chunks of the same object to compromise durability. To handle the high churn and failure rate, Vault applies rateless code a second time for storing each encoding chunk. Compared to conventional maximum distance separable code (MDS), rateless code offers better fault tolerance properties in a highly asynchronous network and faulty environment. Rateless code support efficient _large code_ encoding. The large number of code symbols improves statistical reliability of the chunk without increasing redundancy. More importantly, it eschews the need for time-sensitive encoding repair -- an impractical requirement in a decentralized network -- and permits repair at a steady average-rate. The bigger symbol space also facilitates nodes to perform repair _independently_, avoiding costly coordination. To tolerate strong adversaries, Vault applies _verifiable randomness_ to select randomly distributed nodes to store code symbols. The procedure avoids centralization by using verifiable random function (Kumar et al., 2017) to determine participant eligibility. Selection results are unforgeable and publicly verifiable. Randomness and verifiability of the selection outcome ensures that sufficient code symbols are stored on honest nodes, even when significant portion of the participants are adversarial. Vault is of a practical design. We implement Vault prototype and deploy Vault on 10,000 nodes across 5 continents on AWS. Vault shows good performance in both large-scale simulation and physical deployments. It achieves comparable repair traffic overhead when comparing to a baseline distributed replicated storage system. Critically, Vault significantly improves fault tolerance to Byzantine participants and targeted adversaries compared to the baseline. Even when using a coding scheme with small redundancy level, Vault guarantees data durability with more than 33% Byzantine participants, and more than 10% of the nodes under targeted attacks. 
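To make the rateless-coding idea concrete before turning to the evaluation summary: a toy LT-style fountain encoder can be sketched in a few lines of Python. This is illustrative only; Vault's actual code construction, degree distribution, and fragment metadata are not specified in this excerpt.

```python
import random
from functools import reduce

def lt_symbols(blocks, seed=0):
    """Toy rateless (LT-style) encoder: yields an endless stream of encoding
    symbols, each the XOR of a randomly chosen subset of source blocks.
    Real fountain codes draw the degree from a soliton-like distribution."""
    rng = random.Random(seed)
    k = len(blocks)
    while True:
        degree = rng.randint(1, k)        # simplified uniform degree choice
        idx = rng.sample(range(k), degree)
        payload = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                         (blocks[i] for i in idx))
        yield idx, payload                # (neighbour set, encoded symbol)

# Usage: materialize a finite set of fragments from the infinite stream.
blocks = [bytes([b] * 16) for b in range(8)]  # toy equal-sized source blocks
stream = lt_symbols(blocks, seed=42)
fragments = [next(stream) for _ in range(12)]
```

Because the stream is unbounded, a client can always draw fresh, independent fragments for repair, which is the property the design above exploits.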
Vault performs object stores with 1.4x-2.1x the latency and queries with 0.92x the latency of a replicated baseline system, while achieving similar scalability over thousands of peers.

## 2. Background

We consider the problem of designing a _durable_ object storage system in a fully _decentralized_ environment. The system provides a simple object store and retrieval interface to end users. Each object contains a blob of binary data of arbitrary size. The system provides _durability_: once an object is successfully stored, users are guaranteed to retrieve the object with its original content until it expires. Objects are _immutable_, i.e., users cannot modify an object after it is stored. The system presents a _flat view_ of all objects; it does not offer explicit hierarchies such as those in a file system. It serves as a base persistent storage layer atop which higher-level functionalities, such as files and relational databases, can be built. Performance is not a key design metric of the system. We target the _cold storage_ layer of the storage stack, which emphasizes _data durability_, _storage capacity_, and _cost_, as opposed to the warm (Kumar et al., 2017) or the hot storage (Becker et al., 2017) layers. Amazon Glacier (Becker et al., 2017; Dwork et al., 2018) and Google Coldline (Becker et al., 2018)/Archive (Krishnam et al., 2018) storage are popular production-level systems in this category. Our target deployment environment is a fully _decentralized_ and _open_ network. The deployment model mirrors existing permissionless blockchain systems such as Bitcoin (Becker et al., 2017) and Ethereum (Ethereum, 2018). There is no central entity to manage or administer the system. The network is _permissionless_, i.e., anyone can join the network without any authorization. Each participating node has some local storage for storing data, as well as network access to the wide area network. There is, however, no requirement on the type, speed, or quality of their physical storage or network devices. Participants can be either _honest_ or _adversarial_. Honest nodes follow the protocol exactly, while adversarial nodes can deviate arbitrarily from the protocol. Adversaries may also collude and attack the system. Our target environment demands stronger fault tolerance. Besides adversarial behaviors, a decentralized network can have a higher churn rate than a centralized setting. For instance, a recent study of IPFS (Zhu et al., 2018) reveals that 87.6% of sessions last under 8 hours and only 2.5% of them exceed 24 hours; another study (Zhu et al., 2018) shows a similar churn rate in the Bitcoin network. The storage system needs to ensure data durability even when nodes fail and depart at such a high rate. _Prior distributed storage systems._ There is a long line of work on centrally managed distributed storage systems (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017). These systems commonly divide objects into equal-sized chunks and deploy a large pool of nodes to store those chunks. A dedicated set of nodes is responsible for maintaining and processing storage metadata. It is also common to use a centralized consensus service such as Chubby (Chubby, 2018) to manage configurations and other critical system states. At the deployment scale of these systems, failures of storage devices, servers, racks, clusters, and even entire data centers are frequent events.
To guarantee data durability and availability, replication techniques such as primary-backup (Kumar et al., 2017) and state machine replication (Kumar et al., 2017) are commonly deployed. However, replication introduces large redundancy, negatively impacting the overall storage efficiency. To reduce storage redundancy without compromising data reliability, modern storage systems apply erasure coding (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017). In particular, maximum distance separable (MDS) codes, e.g., Reed-Solomon codes (Kumar et al., 2017), are a common choice. Being centrally managed, the system administrator defines _placement groups_. Each erasure code stripe (or the replicas of an object) is mapped to one placement group. To reduce the risk of data losses, placement groups are chosen to minimize correlated failures, where factors such as failure domains and the annualized failure rate (AFR) of the storage devices (Kumar et al., 2017; Kumar et al., 2017) are usually considered. These systems, however, only consider crash failures of individual components. All participants of the system are under a single, protected administrative domain, so there is no requirement to tolerate Byzantine failures (Kapf and Barabasi, 2015). _State-of-the-art decentralized storage._ With the popularization of peer-to-peer file sharing networks in the late 90s, designing storage and file systems from a network of decentralized, untrusted computers was an active research topic at the time. Representative work from this period includes xFS (Birshman et al., 2015), Farsite (Farsite, 2015), and OceanStore (Meyer et al., 2016). Departing from centralized deployments, these systems treat the handling of unreliable and malicious participants as a key design goal. To provide data availability and consistency in the presence of Byzantine failures, they apply BFT protocols (Birshman et al., 2015) to replicate file data and metadata to groups of servers. BFT protocols, however, require less than 1/3 of the replication group to be Byzantine faulty. These solutions often sidestep the issue and sometimes rely on centralized, trusted certification authorities (Farsite, 2015), equivalent to a permissioned setting. Recently, there has been renewed interest in decentralized storage, following the success of blockchain systems such as Bitcoin (Birshman et al., 2015) and Ethereum (Ethereum, 2015). One recent successful example is IPFS (Birshman et al., 2015), a widely deployed decentralized storage network that consists of more than 400 thousand peers globally. IPFS divides user files into uniquely identified chunks. File metadata is represented as a Merkle Directed Acyclic Graph (DAG) (Meyer et al., 2016) constructed from the constituent file chunks. IPFS uses content addressing through Kademlia (Kademlia, 2015), a distributed hash table (DHT) (Kademlia, 2015; Kajf and Barabasi, 2015). By writing to the DHT, peers storing a file chunk announce a publisher record which maps the chunk to the peer identifier. The DHT replicates the publisher record to peers in proximity in the hash space and handles peer failures through gossip. Retrieving a file involves querying the DHT for the publisher records and the addresses of the publishers for each file chunk. IPFS, however, does not offer any file durability guarantee. File content in IPFS is only stored on the original file publishers; the protocol has no explicit redundancy mechanism to enforce file content durability.
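To make the content-addressing scheme concrete, the sketch below shows how a Kademlia-style DHT maps a chunk to responsible peers. It is a minimal illustration, not IPFS's implementation: `DefaultHasher` stands in for a cryptographic hash such as SHA-256, and the peer IDs are hypothetical.

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Content identifier: in IPFS-style systems this is a cryptographic
/// hash of the chunk bytes; DefaultHasher is only a stand-in here.
fn content_id(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

/// Kademlia-style XOR distance between two identifiers: the publisher
/// record for a chunk is replicated on the peers whose node IDs are
/// closest to the chunk ID under this metric.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

fn main() {
    let chunk = b"example file chunk";
    let cid = content_id(chunk);
    let peers: Vec<u64> = vec![0x1a2b, 0x9f00, 0x0042, 0x7777];

    // Pick the peer closest to the chunk ID; a real DHT keeps the
    // k closest peers per record and refreshes them through gossip.
    let closest = peers
        .iter()
        .min_by_key(|&&p| xor_distance(cid, p))
        .unwrap();
    println!("chunk {cid:#x} -> peer {closest:#x}");
}
```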
## 3. Motivation and Approach

Why is it hard to guarantee data durability in a decentralized storage system? In this section, we analyze the key challenges faced by existing systems in preventing data losses in an open, permissionless environment. We then present our approach of combining rateless erasure coding and verifiable randomness to tackle the durability challenge.

### The Decentralized Durability Challenge

How does one maintain data durability in the presence of server and device failures? The standard approach is data replication and erasure coding. It is well-studied how _redundancy_ can mask failures - including Byzantine ones - of individual components. The approach is particularly effective in centralized settings, where system participants and failure distributions are managed by a single entity. For instance, the central administrator has full control over placement groups such that, out of the \(n\) nodes storing a data object, at most \(r\) of them, where \(r\) is the level of redundancy, may fail concurrently. The formula changes completely in an open, decentralized system. Nodes are free to join and leave the system at any time. There is no centralized control over who the participants are or how trustworthy they are. In these systems, adversarial behaviors are the _common case_, not the exception. Adversaries might also possess strong attacking power. They can partition regions of the network using BGP poisoning or distributed denial-of-service (DDoS); they can collude with each other and compromise honest nodes; they can create and change identities in a highly dynamic fashion. A direct implication of a decentralized environment is that the failure assumptions made by prior systems no longer hold. Adversaries can launch Sybil attacks such that they control more than \(r\) copies or erasure code symbols of an object; once they locate more than \(r\) of the nodes responsible for an object, they can also launch targeted attacks to partition their network or compromise their systems. Data durability is immediately violated under these attack scenarios. Existing blockchain systems face similar attack vectors. Decentralized storage, however, is particularly vulnerable, given its scale and redundancy constraints. In large-scale deployments, the size of the system, which can reach beyond 100K nodes (Birshman et al., 2015), easily overwhelms the replication factor of each storage object, which is typically below 20 (Kademlia, 2015). This is in contrast to a blockchain, in which on-chain data is replicated on every participant. Similar to distributed storage, minimizing redundancy is a key metric to improve the overall storage efficiency. With smaller redundancy, the effectiveness of adversarial attacks grows exponentially, even when the adversaries account for a small percentage of the entire system. Another challenge faced by open, decentralized systems is that consensus and coordination are _costly_. To safely confirm a transaction, Bitcoin requires a confirmation latency of around one hour; even the faster Ethereum network incurs a finality latency of over 12 seconds. Besides latency, both networks also suffer from low consensus throughput - 7 transactions per second (TPS) for Bitcoin and 20 TPS for Ethereum - and high transaction fees. Decentralized storage designs that rely on frequent consensus are impractical in such settings.

### The Case for Using Rateless Code

Rateless codes are a family of erasure codes that have two special properties.
First, a rateless code generates an _infinite sequence_ of encoding symbols from \(k\) source blocks. Any \(k+\epsilon\) encoding symbols can be used to reconstruct the original source data. This differs from traditional maximum distance separable (MDS) codes, such as Reed-Solomon codes, which produce \(n\) fixed encoding symbols from the \(k\) source blocks. Second, rateless codes work well with large \((n,k,r)\) parameters, where \(n\), \(k\), and \(r\) represent the number of total, original, and redundant symbols, respectively. Real deployment of a \((1200,1000,200)\) rateless code has been shown to be efficient (Srivastava et al., 2017). Such parameters are two orders of magnitude larger than those of practical MDS codes used in production systems. _Lazy repair._ Why do these properties matter in a decentralized storage system? One benefit is being _lazy-repair friendly_. Using traditional erasure codes with a small value of \(r\), the storage system needs to quickly regenerate encoding symbols when just a few stored symbols are lost due to failures. Delayed repairs risk losing the source data permanently. However, decentralized systems exhibit a high degree of asynchrony: network conditions vary both spatially and temporally; there exist large deviations in processing speeds across nodes; transient unreachabilities are also prevalent. Strict timing requirements for the detection and recovery of encoding symbols are therefore impractical in such an environment. On the other hand, large values of \(r\) in rateless codes tolerate bursty encoding failures without losing the source data. This permits the system to repair at a steady average rate. Such a property is far more favorable in a highly asynchronous environment. For instance, a \((1200,1000,200)\) code can tolerate \(200\) simultaneous symbol failures, while losing \(3\) symbols will result in permanent data loss when deploying a \((12,10,2)\) code. Note that the increase in \(r\) in rateless codes does not lead to higher redundancy: the redundancy rates of the above two codes are identical. _Consensus-free repair._ MDS codes with small values of \((n,k,r)\) require participants to coordinate and agree on the assignment of symbols. Uncoordinated symbol generation may lead to insufficient _unique_ symbols to recover the original data, an issue analogous to the coupon collector's problem (Garay et al., 2016). Rateless codes eschew this issue completely. The _infinite_ encoding space enables participants to generate unique symbols with high probability without explicit coordination. _Strong security against adversaries._ Strong adversaries may not only discard stored data, but can also launch attacks targeting honest nodes to compromise data durability. We apply a novel approach that leverages the _infinite_ symbol sequence of a rateless code to defend against such attacks. Specifically, the object owner applies a rateless code, and uses _private information_ (e.g., its private key) to pick \(n\) random symbols in the encoding sequence. The owner then stores these selected symbols as opaque data chunks in the storage system. A key property is that the opaque chunks of any object appear _indistinguishable_ to the adversaries. Consequently, targeted attacks can do no better than compromising randomly selected chunks in the system. With enough objects in the system, the chance of simultaneously attacking more than \(r\) out of the \(n\) chunks of a particular object becomes negligible.
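To make the keyed selection just described concrete, the following sketch derives chunk indices from the owner's private key and the object hash. It is a minimal illustration rather than Vault's actual API: `DefaultHasher` stands in for a keyed cryptographic PRF (e.g., an HMAC), and all names are hypothetical.

```
use std::collections::BTreeSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive the indices of the n materialized chunks from the owner's
/// private key and the object hash. The counter makes each draw
/// distinct, and the BTreeSet deduplicates collisions.
fn select_chunk_indices(private_key: &[u8], object_hash: &[u8], n: usize) -> BTreeSet<u64> {
    let mut picked = BTreeSet::new();
    let mut counter: u64 = 0;
    while picked.len() < n {
        let mut h = DefaultHasher::new();
        private_key.hash(&mut h);
        object_hash.hash(&mut h);
        counter.hash(&mut h);
        // Without the key, this index is unpredictable, so the stored
        // chunks look like random positions in the infinite stream.
        picked.insert(h.finish());
        counter += 1;
    }
    picked
}

fn main() {
    let idx = select_chunk_indices(b"owner-secret", b"object-digest", 10);
    println!("materialized chunk indices: {idx:?}");
}
```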
As such, only the owner (who holds the private information) can recover the original object with overwhelming probability.

### Verifiable Randomness for Byzantine Tolerance

Rateless erasure coding alone does not solve the durability challenge. Byzantine participants can claim storage of a sufficient number of encoding symbols, resulting in fewer than \(k+\epsilon\) symbols surviving on honest nodes. Once they collude and delete the stored symbols, the original object is permanently lost. Simply increasing the redundancy to tolerate Byzantine behaviors limits the effective storage capacity of the system. For instance, at the fault tolerance level we are targeting (\(1/3\) Byzantine participants), storing symbols of each object on \(\frac{N}{3}+k+\epsilon\) distinct nodes can ensure durability. Unfortunately, under such a naive approach, the overall storage capacity will not scale beyond a few times that of an individual node. In Vault, we leverage _randomness_ to tolerate Byzantine participants without sacrificing capacity. By uniformly randomly selecting nodes to store the encoding symbols for an object, each chosen node has a \(\frac{1}{3}\) independent probability of being Byzantine. Consequently, the redundancy required to ensure that at least \(k+\epsilon\) selected nodes are honest is _independent_ of \(N\)1, implying that the effective storage capacity scales _linearly_ with increasing system size, a desirable property for storage systems. Note that our model does not rule out strong adversaries generating new identities -- proof of stake (PoS) is used to defend against arbitrary Sybil attacks.

Footnote 1: The redundancy calculation can be found in Appendix A.

We avoid relying on a centralized, trusted entity to provide this random selection service. Instead, Vault performs node selection in a fully decentralized fashion using _verifiable random functions_ (VRF) (Vakul et al., 2016). A verifiable random function takes an input string and the prover's private key, and produces a deterministic hash value plus a short proof. The output hash is indistinguishable from a random value. Using the proof and the prover's public key, anyone can verify that the hash is correctly generated by the prover, and the prover only, from the input string. This public verifiability enables Vault nodes to independently generate checkable _selection proofs_. Specifically, for each encoding symbol, a candidate node generates a VRF hash, and Vault defines publicly-known rules to calculate a selection probability _inversely proportional_ to the node's distance to the symbol on the hash ring (Vakul et al., 2016; Vault et al., 2016). The "winning" hash values serve as _unforgeable tokens_ to store encoding symbols, which can be publicly verified using the VRF proof. The infinite sequence of rateless erasure code encoding symbols is used as a publicly-known random seed to the VRF function, ensuring that the selected nodes are randomly distributed in the system. Randomization has also been applied by prior systems, such as RAMCloud (Kalalal et al., 2017) and HDFS.

### Design Overview

As shown in Figure 1, Vault uses two layers of erasure coding to ensure object durability. When storing an object, a client first applies an outer-layer rateless code to generate a sequence of _encoded chunks_. It then uses its private key and the object hash to deterministically select a set of chunks from the sequence.
The process is _irreversible_: even when external parties know the object content, they can only infer the mapping between the encoded chunks and the object with negligible probability. For each selected chunk, Vault uses a publicly-known rateless code to generate a stream of _encoding fragments_. The client then stores \(R\) encoding fragments, where \(R\) is large enough to ensure _recoverability_ of the chunk, at randomly selected nodes in the system. _Randomized peer selection._ Randomly selecting nodes to store chunk fragments is critical to the safety of Vault. To that end, Vault designs a _decentralized_ selection protocol with verifiable randomness, as shown in Figure 2. To choose the responsible nodes for a chunk, Vault first uses a cryptographic hash of the chunk to locate candidates on a distributed hash table (DHT) [(58; 34; 41)]. The _candidate set_ includes the nodes that are in proximity to the chunk hash on the hash ring. Vault ensures randomness of the candidate set by generating node IDs using a cryptographic hash of their public keys [(41)]. Note that Vault does not require consensus on the candidate set; each node can use the chunk hash to locate an approximation of the set. Each node in the candidate set independently feeds the chunk hash to a publicly-known VRF to generate a random hash and a proof. The selection outcome is derived from the generated hash, the node ID, and the chunk hash, with the selection probability inversely proportional to the hash distance from the chunk. The selected nodes present their VRF outputs as a selection proof, which can be publicly verified by any participant of the system.

Figure 2. Randomized peer selection.

_Chunk repair._ Storage device failures and node churn can lead to permanent loss of encoding fragments, resulting in a violation of the durability invariant. Similar to prior storage systems, Vault performs _repair_ by regenerating new encoding fragments when the number of alive fragments drops below a threshold. Chunk repair in Vault is fully decentralized, as shown in Figure 3. Nodes storing encoding fragments for the same chunk form a _chunk group_. Chunk group members periodically broadcast persistence claims of their stored fragments to the other peers in the group (step 1). When the repair condition is met, group members perform randomized peer selection to locate new members to replenish the group (step 2). New members pull fragments from existing peers (step 3), decode the original chunk, and generate new encoding fragments using the inner-layer rateless code.

Figure 3. Chunk repair process.

_Object retrieval._ To retrieve an object, a client needs to recover \(K_{outer}\) distinct encoded chunks, each of which requires reading \(K_{inner}\) encoding fragments. Vault leverages the determinism of a rateless code and verifiable membership to perform object reads. For each stored chunk hash (which is part of the returned object ID), the client applies the same peer selection protocol to locate the nodes that are responsible for storing the encoding fragments of the chunk. After retrieving enough fragments, the client can reconstruct the encoded chunks, and subsequently the original object.

### Protocol Details

Each Vault node generates a (_sk_, _pk_) key pair, and keeps the secret key _sk_ locally. Public keys are assumed to be known by all nodes in the system. A SHA256 hash of the public key is used as the node ID [(41)]. This ensures that node IDs are randomly distributed on the hash ring. Each node stores a set of encoding fragments.
For each stored fragment, it maintains a local view of the alive peers in the constituent chunk group. We define a constant \(R\) to denote the threshold chunk group size. Repair is triggered when the group size drops below \(R\).

#### 4.3.1. Vault Client Protocol

Algorithm 1 lists the client-side protocol. Being a decentralized peer-to-peer system, client operations are issued on participating nodes, though they can serve as proxies for lightweight clients. Here, we use _clients_ to denote nodes that issue client operations.

```
procedure STORE(obj)
    chunks <- OuterEncode(obj); chashes <- {}
    for all chunk in chunks do
        chash <- Hash(chunk)
        members <- {}; i <- 0
        while |stored fragments| < R do
            frag <- InnerEncode(chunk, i)
            nodes <- Locate(chash)
            request n to store frag, where n in nodes and n not in members
            insert n into members
            increment i
        insert chash into chashes
    return chashes

procedure QUERY(chashes)
    for all chash in chashes do
        RetrieveChunk(chash)            // issued in parallel
    chunks <- wait for K_outer chunks to be retrieved
    return OuterDecode(chunks)

procedure RETRIEVECHUNK(chash)
    frags <- {}; i <- 0
    while |frags| < K_inner do
        nodes <- Locate(chash)
        frag <- retrieve from any n in nodes
        insert frag into frags
        increment i
    return InnerDecode(frags)
```
**Algorithm 1.** Vault client protocol

To store an object, a client uses the outer-layer encoding function OuterEncode to generate a set of encoded chunks. The function applies a rateless code to the object, and uses the client's private key and the object hash to randomly select \(n\) chunks from the encoding sequence. For each chunk, the client then applies the inner-layer erasure code to output a stream of fragments and stores \(R\) of them at responsible peers. Peer selection for a fragment is performed by the Locate() procedure, which we further elaborate in §4.3.2. Once enough fragments are stored, the client also forwards the membership to each group peer for bootstrapping. The hashes of all encoded chunks are returned to the client for object retrieval. As there is no dependency among the different chunks, the client can perform all peer selections and fragment stores in parallel. To query an object, a client reconstructs \(K_{outer}\) encoded chunks from its list of chunk hashes. For each chunk hash, the client applies Locate() to find the peers responsible for storing its encoding fragments. With verifiable selection proofs (detailed in §4.3.2), only peers from the chunk group will be returned. The client then attempts to request chunk fragments from these peers. Once \(K_{inner}\) fragments are retrieved, it uses the inner-layer decoding function to reconstruct the chunk. Finally, the original object is recovered by applying outer-layer decoding to the \(K_{outer}\) reconstructed chunks. Similar to store, all fragment retrievals can be done in parallel.
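The parallel retrieval loop above can be sketched as follows. This is an illustrative stand-in, not Vault's implementation: the network call is stubbed, types are simplified, and the paper's default \(K_{inner}=32\) is assumed.

```
use std::sync::mpsc;
use std::thread;

/// Fetch one fragment from a peer; stubbed here. In Vault this would
/// be a request to a node returned by Locate().
fn fetch_fragment(peer: u64, chash: u64, index: u64) -> Option<Vec<u8>> {
    Some(format!("frag-{peer}-{chash}-{index}").into_bytes())
}

/// Default from the paper's evaluation section.
const K_INNER: usize = 32;

/// Request fragments from all candidate peers in parallel and return
/// as soon as K_INNER of them arrive, mirroring RetrieveChunk above.
fn retrieve_fragments(peers: Vec<u64>, chash: u64) -> Vec<Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    for (i, peer) in peers.into_iter().enumerate() {
        let tx = tx.clone();
        thread::spawn(move || {
            if let Some(frag) = fetch_fragment(peer, chash, i as u64) {
                // A send may fail once the receiver has enough
                // fragments and hung up; that is fine, so ignore it.
                let _ = tx.send(frag);
            }
        });
    }
    drop(tx); // close our handle so rx ends if peers are exhausted
    rx.into_iter().take(K_INNER).collect()
}

fn main() {
    let peers: Vec<u64> = (0..40).collect();
    let frags = retrieve_fragments(peers, 0xabcd);
    println!("retrieved {} fragments", frags.len());
}
```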
#### 4.3.2. Peer Selection

As shown in Algorithm 2, Vault uses verifiable randomness to select the nodes that store encoding fragments. The procedure first invokes DHT-Lookup to get a list of candidate nodes for an encoded chunk. The function returns the \(N\) neighbor nodes closest to the chunk hash value on the ring. Note that each lookup may return a different candidate set due to the dynamic and decentralized nature of the DHT. However, every final selection will include an unforgeable selection proof, and the protocol tolerates duplicated responsible nodes for a fragment. For each candidate in the set, the caller requests a fragment _selection proof_ from the node. The selection probability of a candidate is inversely proportional to its distance to the chunk hash, and the expected number of selected nodes is approximately \(R\). To do so, the algorithm (Distance) calculates a distance metric as the expected number of nodes in the chunk hash neighborhood up to the candidate. The candidate then feeds its secret key \(sk\) and the fragment hash into a publicly-known VRF. The VRF outputs a random hash \(r\) and a proof \(\pi\). Properties of the VRF ensure that \(r\) is uniformly distributed between \(0\) and \(2^{hashlen}-1\). By comparing \(r\) to a fraction of the total hash space that shrinks exponentially with the distance metric, the selection probability drops exponentially with each additional node closer to the chunk hash. Once the caller receives a selection proof, it inputs the random hash \(r\), the proof \(\pi\), the chunk hash, and the public key of the prover to the VerifyVRF function. The function verifies that the random hash was properly generated by the prover from the same chunk hash. If VerifyVRF() passes, the caller follows the same procedure as in SelectionProof() to verify the candidate selection. To derive the ID of the candidate, NodeID() calculates a SHA256 hash of its public key.

#### 4.3.3. Chunk Group Maintenance

For each stored fragment, a node maintains a local view of the current chunk group membership. The inaugural group peers receive the membership from the client issuing the store (Algorithm 1). However, a node may fail to receive the client-issued membership. Temporary network asynchrony can also lead to divergence in membership views across peers. To handle such cases, each peer periodically calls MembershipTimer() to _eventually_ synchronize on membership views. The procedure invokes Locate() to find the peers currently responsible for storing encoding fragments for the chunk. To detect failures in a chunk group, group members periodically broadcast fragment persistence claims to the other peers in the group. The persistence claim includes the fragment index within the encoding stream and the selection proof. To reduce the cost of generating selection proofs every timer interval, nodes can store them alongside the fragment. When a node receives a heartbeat message, it ignores the message if verification of the selection proof fails. Otherwise, the node adds the sender to its membership view and refreshes the sender's liveness. When the number of alive group members drops below \(R\), the node initiates chunk repair. The repair process is elaborated in §4.3.4.

#### 4.3.4. Repairing Encoded Chunks

The core of Vault centers on a _decentralized_ repair protocol that ensures the durability of encoded chunks. Repairing an encoded chunk involves locating new peers to store additional fragments, until the healthy group size is restored.
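Locating those peers reuses the selection rule of §4.3.2, which the sketch below recaps. It is purely illustrative: the mock VRF is not actually verifiable, and halving the threshold per distance rank is a stand-in for a rule calibrated so that about \(R\) candidates win in expectation.

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Abstract VRF interface. A real deployment would use a standard VRF
/// construction (the prototype's proofs are Ed25519-based); the mock
/// below is for illustration only and offers no security.
trait Vrf {
    fn prove(&self, sk: &[u8], input: &[u8]) -> (u64, Vec<u8>);
    fn verify(&self, pk: &[u8], input: &[u8], out: u64, proof: &[u8]) -> bool;
}

struct MockVrf;
impl Vrf for MockVrf {
    fn prove(&self, sk: &[u8], input: &[u8]) -> (u64, Vec<u8>) {
        let mut h = DefaultHasher::new();
        sk.hash(&mut h);
        input.hash(&mut h);
        (h.finish(), Vec::new()) // empty "proof": mock only
    }
    fn verify(&self, _pk: &[u8], _input: &[u8], _out: u64, _proof: &[u8]) -> bool {
        true // a real VRF checks the proof against the public key
    }
}

/// Selection rule: a candidate wins when its VRF output lands in a
/// slice of the hash space that shrinks with its distance rank from
/// the chunk hash, so closer candidates are more likely to win.
fn selected(vrf_out: u64, distance_rank: u32) -> bool {
    let threshold = u64::MAX >> distance_rank.min(63);
    vrf_out < threshold
}

fn main() {
    let (out, proof) = MockVrf.prove(b"node-sk", b"chunk-hash");
    assert!(MockVrf.verify(b"node-pk", b"chunk-hash", out, &proof));
    println!("rank 2 selected: {}", selected(out, 2));
}
```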
Each node performs repair independently according to its local view, without explicit consensus. Temporary network asynchrony may result in over-repair, i.e., the chunk group exceeding \(R\). This, however, does not impact protocol safety, and the membership protocol eventually synchronizes the membership view across peers. To install a new fragment for the group, the repairing node invokes Locate() to find the responsible peers for the fragment, and sends them a RepairRequest message. The message includes the sender's local membership view to assist bootstrapping. Upon receiving a RepairRequest, the node immediately responds if it already stores the fragment. Otherwise, it merges the incoming view with its local membership, joins the group, and starts the repair process. Specifically, it requests fragments from each group member in its view. Once the node receives \(K_{inner}\) or more encoding fragments, it uses the inner decoding function to recover the original chunk, then applies the encoding function to construct a randomly selected fragment within the encoding stream. Note that the node may not find enough fragments initially due to an incomplete membership view or network asynchrony. MembershipTimer() ensures that the node eventually finds sufficient alive members to complete the repair process. The above procedure transmits at least \(K_{inner}\) fragments to repair a single fragment, resulting in a minimum repair amplification of \(K_{inner}\). To reduce repair traffic, each node optionally caches the original chunk for a limited time. Upon a request from a new peer, the node sends its chunk copy if it is still available, and the requesting peer can immediately construct the fragment without any further RepairRequest.

### Correctness

We now sketch the proof of correctness for Vault. Refer to Appendix A for a full, detailed proof.

#### 4.4.1. Durability of the inner code

The durability of our protocol hinges on the invariant that at least \(k\) fragments in each group persist after some bounded time \(T\). The resultant number of persistent fragments varies over time due to the innate randomness of the system. The durability of each group can be analyzed with a continuous-time Markov chain (CTMC) model. It is important to note that if a group dips below \(k\) fragments, it might never recover. These group states are therefore impossible to transition out of and are typically referred to as _absorbing states_; non-absorbing states are referred to as _transient states_. As \(T\rightarrow\infty\), the probability that a group reaches an _absorbing state_ converges to \(1\). However, we can assert that, with suitable system parameters, the groups will reach an absorbing state within \(T=t\) steps with only negligible probability.

Lemma 4.1. _For a single data object, the probability that all groups, injectively responsible for \(K+R\) chunks, are durable at \(T=t\) can be bounded by:_

\[1-\left(1-\sum_{T=1}^{t}(I\Theta^{T})_{n-k+1}\right)^{K+R} \tag{1}\]

_where \(I\) is the initial state matrix and \(\Theta\) is a stochastic matrix._

In the CTMC model, we first express the set of initial states of our system and the probabilities associated with those states as an \((n-k+1)\times 1\) initial probability vector \(I\). Since our group identities must be unique, \(I\) can be constructed using the hypergeometric distribution PMF. We then construct an \((n-k+1)\times(n-k+1)\) stochastic matrix \(\Theta=(\theta_{i,j})\) representing the transition probabilities from one state to another.
The rows represent the starting states and the columns the destination states. Transitions from _transient to transient_ states and from _transient to absorbing_ states are functions of the churn rate as well as an eviction parameter, while, intuitively, the _absorbing to absorbing_ transitions are expressed as a \(1\times(n-k+1)\) vector of zeros with the \((n-k+1)^{th}\) element being \(1\). With the stochastic matrix, we perform the sum and exponentiation to derive the probability of any group of a data object reaching an absorbing state at \(T=t\), as in (1). With our given system parameters, if this probability is negligible (\(\leq 2^{-128}\)), durability for the whole data object is guaranteed within \(T=t\).

#### 4.4.2. Durability of the outer code

Suppose we have an adversary that has the capability to receive accurate reports on every possible group's membership and possesses the ability to forcefully disconnect \(\phi\) nodes from the system. The adversary could thus force a group to prematurely enter an _absorbing state_. The purpose of the outer code is to deter such targeted attacks. More precisely, we ensure that even if such an attack does succeed, with high probability it will not compromise the durability of any data object.

**Lemma 4.2**.: _Suppose each node can hold at most \(\mu\) fragments. The probability of an attacker successfully attacking a data object is bounded by:_

\[1-\left(1-\prod_{i=1}^{R}\frac{K+R-i}{\Omega(K+R)-i}\right)^{\binom{\phi\mu}{R+1}} \tag{2}\]

To draw an upper bound on the attacker's success, we assume that it can successfully compromise a maximum of \(\Phi=\phi\mu\) groups or chunks out of a total of \(\Omega\cdot(K+R)\) chunks, where \(\Omega\) is the total number of data objects. This upper bound is simply an extension of the birthday attack problem. Similarly, with the correct system parameters, we can ensure that the adversary's probability of success is negligible, ensuring durability with high probability.

## 5. Implementation

We implement the Vault peer node as an HTTP server and client, atop the open-source actix-web HTTP server framework (Boges et al., 2017) and its client library awc. The verifiable random selection proofs and the persistence claims' signatures use the ed25519 curve. All messages are serialized using bincode (Boges et al., 2017). Nodes send HTTP requests using asynchronous messaging. Requests are acknowledged with an immediate dummy 200 OK response, and any further reply is sent as a reversed HTTP request. This ensures the system can tolerate arbitrary network delays and node slowdowns. The node is implemented using a single-threaded server and a worker thread pool. All long-running tasks, either blocking or non-blocking (e.g., coding, making requests, and file system operations), are offloaded from the server thread and submitted to the worker thread pool. This keeps the node responsive; the design is also scalable, fully exploiting the available parallelism. We use wirehair (Wainwright, 2017), an \(O(1)\) time rateless erasure code implementation, as our rateless erasure code. Wirehair not only attains high performance, but can also recover the original data with an expected \(k+0.02\) fragments. The overall implementation is written in 2.1K lines of Rust code.

## 6. Evaluation

We use two types of experiments to evaluate Vault: discrete event simulation and physical deployment on geo-distributed EC2 virtual machines.
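As a concrete illustration of the failure model used in such simulations, the sketch below generates Poisson node failures from exponential inter-arrival times. It is a minimal stand-in, not the paper's simulator: the PRNG is a toy xorshift, only a single node is modeled, and the churn rate is a hypothetical value.

```
/// Minimal discrete-event churn generator: exponentially distributed
/// inter-failure times yield a Poisson failure process at the
/// configured churn rate.
struct XorShift(u64);
impl XorShift {
    fn next_f64(&mut self) -> f64 {
        // xorshift64: a tiny PRNG so the sketch has no dependencies.
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Sample an exponential inter-arrival time via the inverse CDF.
fn exp_sample(rng: &mut XorShift, rate_per_hour: f64) -> f64 {
    -(1.0 - rng.next_f64()).ln() / rate_per_hour
}

fn main() {
    let mut rng = XorShift(0x9e3779b97f4a7c15);
    let churn_rate = 0.01; // expected failures per node-hour (hypothetical)
    let mut clock_hours = 0.0;
    let mut failures = 0u32;
    while clock_hours < 24.0 * 365.0 {
        clock_hours += exp_sample(&mut rng, churn_rate);
        failures += 1; // here: mark a node failed and schedule repair
    }
    println!("simulated {failures} failures in one year for one node");
}
```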
Unless otherwise specified, we use \(K_{inner}=32\), \(R=80\) for the inner code, the outer code uses \(K_{outer}=8\), and \(10\) chunks are generated for each object. All objects are \(1\) GB in size. When evaluating physical deployments, we compare Vault to an IPFS-like decentralized storage system that uses the Kademlia DHT to store publisher records. The system directly uses the DHT PUT_RECORD to store object data. We set the replication factor for this IPFS-like system to \(3\), so that the redundancy level is similar to Vault's 3.125.

### Simulations

We first use discrete event simulation to evaluate the repair overhead and fault tolerance guarantees of Vault. The simulated network contains a total of 100K nodes. For simplicity, we parametrize one churn rate for all nodes. Node failures (including permanently leaving the system) follow a Poisson distribution according to the churn. We use the data object size as the basic unit for repair traffic. The overall storage capacity that Vault occupies is also recorded. We run each simulation \(10\) times with different seed values and take the average. Besides Vault, we also simulate a Ceph-like replicated storage system. This system replicates each object on \(3\) randomly selected peers, and performs object repair immediately after one of the replicas fails. Such a replication scheme is commonly employed in data center settings, and we use it as a baseline to compare against our decentralized protocol.

_Repair traffic._ We first simulate how much fragment repair traffic Vault generates, and compare it to the baseline replication approach. Figure 4 shows the total repair traffic (in units of data object sizes) incurred in the first year of system deployment. As discussed in §4.3.4, Vault nodes can optionally cache chunk data to reduce repair traffic. We therefore also simulate Vault with varying chunk cache expiration times in hours. As expected, the repair traffic for Vault and the baseline system both increase linearly with an increasing number of data objects. Vault pays a higher repair cost compared to the baseline, as constructing a new fragment requires transmitting \(K_{inner}\) existing fragments. By hitting the chunk cache, the repair traffic for each fragment is reduced by \(K_{inner}\) times, making it comparable to the replication system. Concretely, repair traffic is decreased by 6X when the cache duration increases to 48 hours. This demonstrates that our chunk cache optimization is effective. As such, incentives should be provided to encourage nodes to cache for a longer time to reduce repair traffic. Fortunately, as shown later, most repairs finish quickly, so a short caching period can lead to a good overall cache hit rate. Figure 4 also shows the total repair traffic in the first year with increasing node churn rate. In both Vault and the replicated system, repair traffic grows at the same rate with increasing churn. This is expected, as higher churn results in more frequent repair. The result demonstrates that Vault responds well to changes in the system's average churn rate. Another implication is that Vault scales well with an increase in churn, i.e., the overhead per node failure remains constant. As with the previous experiment, a longer chunk cache duration effectively reduces the repair traffic. The drop in repair traffic at high churn is caused by more frequent cache refreshes.

_Fault tolerance._ We next evaluate object losses with an increasing fraction of faulty nodes; the achievable degree of tolerance depends on the inner code parameters.
Using the default parameters, Vault can tolerate around the targeted 33% faulty nodes, while a more conservative configuration can improve tolerance further at the expense of increased redundancy. We also evaluate Vault's tolerance against targeted adversaries, and show the object loss rate in the lower subplot of Figure 6. Once again, the baseline replication system shows weak fault tolerance, losing all objects when less than 2% of the nodes are attacked. In contrast, our outer-layer coding approach effectively defends against targeted adversaries. With a (14, 8) coding configuration, no object is lost until more than 20% of the nodes are attacked.

### Physical Deployment

Next, we evaluate Vault using our Rust implementation. We deploy 10,000 Vault nodes on Amazon EC2 in 5 AWS zones (us-west, ap-southeast, eu-central, sa-east, af-south) across 5 continents. We launch 20 c5.9xlarge instances with 36 vCPUs, 72 GiB of memory, and 12 Gbps of network bandwidth in each zone, and run 100 peers on each instance. As mentioned earlier, we implemented an IPFS-like decentralized storage system using the Kademlia DHT as a comparison system. We directly use Kademlia's PUT_RECORD interface to store the data object, and set the replication factor to 3 to match the redundancy of Vault. Each data object is split into \(K_{inner}\cdot K_{outer}\) records. This splitting scheme ensures good storage load balancing across the nodes in the system. In the evaluation of store and query operations, we select a random node as the client to perform one pair of operations. The client node first issues a store with a randomly generated object. After the store returns an object ID, it immediately issues a query with the returned ID to retrieve the object. The node performs a sanity check to make sure the object is properly retrieved. In the evaluation of repair, we trigger a special command to force nodes to evict the oldest member that stores the chunk. This is equivalent to the member being disconnected from the network. The remaining group members locate a new member to replace the evicted one. We report the latency between the eviction and the instant when the new member is known to have successfully stored the chunk. We evaluate the system with a simulated DHT routing system that provides node discovery in constant time. This simplification mitigates the effect of DHT routing performance on the results, and focuses on the performance differences between the protocols.

_Latency results._ We measure the end-to-end latency of store and query operations for each system, and show the results in Figure 7. The store and repair latencies are higher than those of the baseline replication system, due to the coding overhead and the network overhead introduced by verifiable random selection. However, the query latency is smaller than that of the baseline replication system despite the coding overhead. This is because the inner code deployed in the DHT allows Vault to recover the object from fragments that are geographically closer to the querying node.

_Concurrent operations._ To measure our system capacity, we perform multiple store and query loops concurrently from different random nodes, while triggering multiple repairs concurrently. The latency results are shown in Figure 8. Vault maintains its performance even up to 100 concurrent store and query operations, and is able to support more than 600 concurrent repairs. We can derive that Vault can support more than 400K stores and 720K queries per day, and can sustain over 13M daily object repairs.
This suggests that Vault is capable of handling massive real-world workloads.

Figure 8. Latency of concurrent store and query operations and repair.

Figure 7. Latency of store and query operations and repair in a world-wide deployment. The top plot varies the outer code parameters of Vault, while the bottom plot varies the inner code.

_Scalability._ We also evaluate Vault with increasing system size, i.e., the number of participants in the system. The scalability results are shown in Figure 9. Similar to the baseline replicated system, Vault is able to maintain near-constant performance regardless of the system's scale.

_Micro-benchmarks._ Lastly, we run micro-benchmarks to evaluate the performance of object encoding and decoding. In the first experiment, we run a single client which applies both the outer-layer and inner-layer erasure codes to encode a 1 GB object into fragments. Then, the client takes the generated fragments and uses the decoding functions of both erasure codes to recover the original object. Lastly, we set up one peer node to generate a fragment from \(K_{inner}\) existing fragments. As shown in Figure 10, the time to encode and decode a large object is relatively stable across various coding parameters. A direct implication of this result is that the latency increase in Figure 7 is caused by DHT operations, not object encoding/decoding. Chunk repair incurs significantly less CPU overhead, as it only involves the inner code.

### Discussion

As our evaluation results have shown, Vault incurs higher performance overhead and management complexity compared to centralized deployments and prior best-effort systems. However, we believe there exist inherent trade-offs between performance and security guarantees (e.g., anti-censorship and strong adversary tolerance). As we demonstrated earlier (§1 and §2), these properties are more critical than performance for our target deployment scenarios.

## 7. Related Work

_Distributed Storage Systems._ As mentioned in §2, our work is related to a long line of research and production systems in distributed storage [5, 6, 14, 16, 22, 24, 27, 39, 52, 62]. Unlike Vault, all those systems are centrally managed by a single administrative entity, and all participants are assumed to be non-Byzantine. Our object store interface is similar to Amazon S3 [52] and Google Cloud Storage [27]. Many prior systems deploy a centralized service to manage storage metadata [14, 24], while metadata management is fully decentralized in Vault. Dynamo [16] uses consistent hashing [34] to assign keys to nodes, similar to our DHT-based approach. Object-to-server mapping in Ceph [62] is done through a distribution function, CRUSH [63], which maps each object to a placement group based on the object hash.

_Decentralized Storage._ Building reliable storage systems in a fully decentralized environment has been explored extensively [2, 3, 35, 37, 60, 64]. Farsite [2] uses BFT replication for directory metadata and CFT replication for file data. Membership management is done through trusted certificate authorities. Vault reduces storage redundancy by using erasure coding, and avoids centralized trust through PoS and verifiable random peer selection. Most closely related to our work is the deep archival storage in OceanStore, which stores erasure-coded fragments over multiple failure domains. However, OceanStore makes centralized placement decisions and relies on trusted, centralized nodes to perform repairs.
As detailed in §2, IPFS [60] uses a DHT to store publisher records without centralization, but offers no mechanism to reliably store objects. Filecoin [37] and Arweave [64] both use tokens to incentivize nodes to store objects longer, but have weak durability guarantees in the presence of failures.

## 8. Conclusion

We propose Vault, a novel decentralized storage system with strong durability guarantees even in the presence of strong adversaries. Vault combines rateless erasure coding, verifiable randomness, dual-layer encoding, and decentralized repair to ensure object persistence without relying on centralization. Simulation and global-scale experiments show that, compared to prior systems, Vault provides stronger tolerance to Byzantine failures and targeted attacks, while attaining comparable performance and data repair overhead.

Figure 10. Micro-benchmarks showing the utilized CPU time for clients to encode and decode a data object (top plot), and for repairing a fragment in a chunk group.

Figure 9. Latency of store and query operations and repair with varying number of nodes.
2301.01400
Task Weighting in Meta-learning with Trajectory Optimisation
Developing meta-learning algorithms that are un-biased toward a subset of training tasks often requires hand-designed criteria to weight tasks, potentially resulting in sub-optimal solutions. In this paper, we introduce a new principled and fully-automated task-weighting algorithm for meta-learning methods. By considering the weights of tasks within the same mini-batch as an action, and the meta-parameter of interest as the system state, we cast the task-weighting meta-learning problem to a trajectory optimisation and employ the iterative linear quadratic regulator to determine the optimal action or weights of tasks. We theoretically show that the proposed algorithm converges to an $\epsilon_{0}$-stationary point, and empirically demonstrate that the proposed approach out-performs common hand-engineering weighting methods in two few-shot learning benchmarks.
Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
2023-01-04T01:36:09Z
http://arxiv.org/abs/2301.01400v1
# Task Weighting in Meta-learning with Trajectory Optimisation ###### Abstract Developing meta-learning algorithms that are un-biased toward a subset of training tasks often requires hand-designed criteria to weight tasks, potentially resulting in sub-optimal solutions. In this paper, we introduce a new principled and fully-automated task-weighting algorithm for meta-learning methods. By considering the weights of tasks within the same mini-batch as an action, and the meta-parameter of interest as the system state, we cast the task-weighting meta-learning problem to a trajectory optimisation and employ the iterative linear quadratic regulator to determine the optimal action or weights of tasks. We theoretically show that the proposed algorithm converges to an \(\epsilon_{0}\)-stationary point, and empirically demonstrate that the proposed approach out-performs common hand-engineering weighting methods in two few-shot learning benchmarks. ## 1 Introduction Meta-learning has been studied from the early 1990s (Schmidhuber, 1987; Naik and Mammone, 1992; Thrun and Pratt, 1998) and recently gained a renewed interest with the use of deep learning methods that achieves remarkable state-of-art results in several few-shot learning benchmarks (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017; Nichol et al., 2018; Ravi and Beatson, 2019; Allen et al., 2019; Khodak et al., 2019; Baik et al., 2020; Flennerhag et al., 2020). However, the majority of existing meta-learning algorithms simply minimise the average loss evaluated on validation subsets of training tasks, implicitly assuming task-balance (analogous to class balance in single-task learning). This assumption is hardly true in practice, and potentially biases the trained meta-learning models toward tasks observed more frequently during training, and consequently, resulting in a large variation of performance when evaluating on different subsets of testing tasks as shown in (Dhillon et al., 2019, Figure 1) and (C. C. Nguyen et al., 2021, Figure 1). One way to address such issue is to exploit the diversity of training tasks, so that the trained meta-learning models can generalise to a wider range of testing tasks. In fact, various studies in task relatedness or task similarity have shown that learning from certain tasks may facilitate the generalisation of meta-learning models (Thrun and O'Sullivan, 1996; Zamir et al., 2018; Achille et al., 2019; C. C. Nguyen et al., 2021). This suggests the design of a re-weighting mechanism to diversify the contribution of each training task when training a meta-learning model of interest. Existing re-weighting methods mostly rely on either hand-crafted criteria to determine those weights (Collins et al., 2020), or additional validation tasks to learn the re-weighting factors of interest (Z. Xu et al., 2021). Such ad-hoc development of re-weighting mechanisms motivates us to design a more principled approach to re-weight tasks for meta-learning. We note that there are also studies learning to balance the contribution of each training task, e.g., learning to balance (H. B. Lee et al., 2020). However, such method focuses on the task-adaptation step (also known as inner loop), while our interest is to explicitly weight the contribution of each task at the meta-learning step (also known as outer-loop). In this paper, we present a new principled and fully-automated task-weighting algorithm, called trajectory optimisation based task weighting for meta-learning (TOW). 
We note that TOW is not a meta-learning method, but a task weighting framework that can be integrated into existing meta-learning algorithms to circumvent the problematic assumption about the even distribution of training tasks. Our contributions can be summarised as follows: * We propose to cast the task-weighting problem in meta-learning to a finite-horizon discrete-time trajectory optimisation with the state denoted by the meta-parameter and the action by the re-weighting factors of tasks, and solve such a problem using the iterative linear quadratic regulator. * We prove that under the conditions of boundedness and smoothness of the loss function used, TOW converges to a particular \(\epsilon_{0}\)-stationary point. * We demonstrate TOW's functionality by incorporating it into two common meta-learning algorithms, namely MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017), and showing that TOW enables meta-learning methods to converge in a smaller number of iterations and achieve higher prediction accuracy than some common task re-weighting mechanisms in the literature. ## 2 Background ### Trajectory optimisation Given continuous state \(\mathbf{x}\in\mathbb{R}^{D}\) and action \(\mathbf{u}\in\mathbb{R}^{M}\), the objective of a trajectory optimisation is to find an optimal sequence of actions \(\{\mathbf{u}_{t}^{*}\}_{t=1}^{T}\) that minimises the total cost: \[\min_{\{\mathbf{u}_{t}\}_{t=1}^{T}}\sum_{t=1}^{T}c(\mathbf{x}_{t},\mathbf{u}_{ t})\quad\text{s.t.}\ \mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t}), \tag{1}\] where \(c(\mathbf{x}_{t},\mathbf{u}_{t})\) and \(f(\mathbf{x}_{t},\mathbf{u}_{t})\) are the cost function and the state-transition dynamics at time step \(t\), respectively. These functions are assumed to be twice differentiable. In addition, the initial state \(\mathbf{x}_{1}\) is given, and _trajectory optimisation_ means finding the optimal sequence of actions \(\{\mathbf{u}_{t}^{*}\}_{t=1}^{T}\) for a particular \(\mathbf{x}_{1}\), not for all possible initial states. In trajectory optimisation, the finite-horizon discrete-time problem shown in (1) can be solved approximately by iterative methods, such as differential dynamic programming (DDP) (Jacobson and Mayne, 1970) or the iterative linear quadratic regulator (iLQR) (Todorov and Li, 2005; Tassa et al., 2012). These methods rely on a local approximation of the state-transition dynamics and cost function using Taylor series about a nominal trajectory \(\{\bar{\mathbf{x}}_{t},\bar{\mathbf{u}}_{t}\}_{t=1}^{T}\). In DDP, both the state-transition dynamics and the cost function are approximated to the second order of their Taylor series, while in iLQR - a "simplified" version of DDP - the state-transition dynamics is approximated only up to the first order. In a loose sense, DDP is analogous to Newton's method, while iLQR is analogous to a quasi-Newton method. The main idea of iLQR is to cast the general non-linear trajectory optimisation problem shown in (1) into a linear quadratic problem (LQP) in which the state-transition dynamics is linear and the cost function is quadratic. The approximate LQP can then be solved exactly by the linear quadratic regulator (LQR) (Anderson and Moore, 2007). Subsequently, the newly obtained trajectory is used as the nominal trajectory for the next iteration. This process is repeated until the cost function converges. The detailed derivation of iLQR can be found in Appendix E. Further details of iLQR can be found in (Todorov and Li, 2005; Tassa et al., 2012).
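For reference, the iLQR backward pass consists of the standard recursions from the above references (a sketch; \(V\) denotes the value function of the approximate LQP, primed quantities come from step \(t+1\), and subscripts denote derivatives evaluated on the nominal trajectory):

\[\begin{aligned}Q_{\mathbf{x}}&=c_{\mathbf{x}}+f_{\mathbf{x}}^{\top}V_{\mathbf{x}}^{\prime}, &Q_{\mathbf{u}}&=c_{\mathbf{u}}+f_{\mathbf{u}}^{\top}V_{\mathbf{x}}^{\prime},\\ Q_{\mathbf{xx}}&=c_{\mathbf{xx}}+f_{\mathbf{x}}^{\top}V_{\mathbf{xx}}^{\prime}f_{\mathbf{x}}, &Q_{\mathbf{uu}}&=c_{\mathbf{uu}}+f_{\mathbf{u}}^{\top}V_{\mathbf{xx}}^{\prime}f_{\mathbf{u}},\\ Q_{\mathbf{ux}}&=c_{\mathbf{ux}}+f_{\mathbf{u}}^{\top}V_{\mathbf{xx}}^{\prime}f_{\mathbf{x}},&&\end{aligned}\]

\[\mathbf{k}=-Q_{\mathbf{uu}}^{-1}Q_{\mathbf{u}},\qquad\mathbf{K}=-Q_{\mathbf{uu}}^{-1}Q_{\mathbf{ux}},\qquad\delta\mathbf{u}^{*}=\mathbf{k}+\mathbf{K}\,\delta\mathbf{x},\]

\[V_{\mathbf{x}}=Q_{\mathbf{x}}-\mathbf{K}^{\top}Q_{\mathbf{uu}}\mathbf{k},\qquad V_{\mathbf{xx}}=Q_{\mathbf{xx}}-\mathbf{K}^{\top}Q_{\mathbf{uu}}\mathbf{K}.\]

DDP would additionally keep the second-order terms of \(f\) in the \(Q\)-expansion, which is what makes it a full Newton-like method; iLQR drops them, hence the quasi-Newton analogy.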
To the best of our knowledge, there are no previous works that provide a proof of the convergence of iLQR. Therefore, for a complete analysis, we provide the proof of convergence for iLQR, adapted from DDP (Sidney Yakowitz and Rutherford, 1984), in Appendix F. ### Meta-learning The setting of the meta-learning considered in this paper follows the _task environment_ (Baxter, 2000), where tasks are i.i.d. sampled from an unknown distribution \(p(\mathcal{T})\) over a family of tasks. Each task \(\mathcal{T}_{i}\), \(i\in\{1,\dots,M\}\), is associated with two data subsets: a training (or support) subset \(\mathcal{S}_{i}^{(s)}=\{(\mathbf{s}_{ij}^{(s)},y_{ij}^{(s)})\}_{j=1}^{m_{i}^{(s)}}\), where \(\mathbf{s}_{ij}^{(s)}\) denotes a training input and \(y_{ij}^{(s)}\) denotes the corresponding training label, and a validation (or query) subset \(\mathcal{S}_{i}^{(q)}\) which is similarly defined. For training tasks \(\{\mathcal{T}_{i}\}_{i=1}^{M}\), both data subsets have labels, while for a testing task \(\mathcal{T}_{M+1}\), only the data in \(\mathcal{S}_{M+1}^{(s)}\) is labelled. The aim is to learn a meta-parameter \(\mathbf{x}\in\mathbb{R}^{D}\) shared across all tasks, so that \(\mathbf{x}\) can be efficiently fine-tuned on \(\mathcal{S}_{i}^{(s)}\) to produce a task-specific model that can accurately predict the unlabelled data in \(\mathcal{S}_{i}^{(q)}\). One of the simplest forms of meta-learning is analogous to an extension of hyper-parameter optimisation in single-task learning, where the shared meta-parameter \(\mathbf{x}\) is learnt from many tasks. The objective of meta-learning can be expressed as: \[\min_{\mathbf{x}}\frac{1}{M}\mathbf{1}_{M}^{\top}\boldsymbol{\ell}(\mathbf{x}), \tag{2}\] where \(\mathbf{1}_{M}\) is an \(M\)-dimensional vector with all elements equal to 1, and \(\boldsymbol{\ell}(\mathbf{x})\in\mathbb{R}^{M}\) is a vector containing the \(M\) validation losses induced by evaluating the meta-parameter \(\mathbf{x}\) on the data subset \(\mathcal{S}_{i}^{(q)}\) of each of the \(M\) training tasks. Each element of \(\boldsymbol{\ell}(\mathbf{x})\) can be expressed as: \[\boldsymbol{\ell}_{i}(\mathbf{x})=\frac{1}{m_{i}^{(q)}}\sum_{j=1}^{m_{i}^{(q)}}\ell\left(\mathbf{s}_{ij}^{(q)},y_{ij}^{(q)};\phi_{i}(\mathbf{x})\right),\forall i\in\{1,\ldots,M\}, \tag{3}\] where \(\ell(.)\) is the loss function, assumed to be non-negative and twice differentiable, and \(\phi_{i}(\mathbf{x})\) is the parameter fine-tuned on task \(\mathcal{T}_{i}\): \[\phi_{i}(\mathbf{x})=\mathbf{x}-\frac{\gamma}{m_{i}^{(s)}}\sum_{k=1}^{m_{i}^{(s)}}\boldsymbol{\nabla}_{\mathbf{x}}\left[\ell\left(\mathbf{s}_{ik}^{(s)},y_{ik}^{(s)};\mathbf{x}\right)\right], \tag{4}\] and \(\gamma\) is the step size or learning rate for the task adaptation step (also known as the inner-loop). Note that the gradient-based task adaptation step in (4) is a special case of meta-learning in which \(\mathbf{x}\) is considered as the initialisation of the neural network of interest (Finn et al., 2017). In metric-based meta-learning (Snell et al., 2017), the task adaptation step in (4) is slightly different: the class prototypes of the training data are embedded into a latent space by the meta-model, and the validation loss is based on the distance between the class prototypes and each data point in \(\mathcal{S}_{i}^{(q)}\). There are also other extensions of (2) using probabilistic approaches (Yoon et al., 2018; Ravi and Beatson, 2019; C. Nguyen et al., 2020).
Nevertheless, our approach proposed in Section 3 can be integrated into any of these meta-learning algorithms with a slight modification. ### Task-weighting meta-learning The minimisation of the average validation loss over \(M\) tasks in (2) implicitly implies that those tasks are balanced (similar to class balance in single-task learning). This assumption is, however, hardly true in practice, and consequently makes the trained meta-model perform poorly for testing tasks that are rarely observed. To address this issue, a task-weighting factor is introduced to diversify the contribution of each training task, allowing the trained meta-model to generalise better to unseen tasks even if those tasks are rare. The objective of such a meta-learning problem can be written as: \[\mathbf{x}^{*}=\arg\min_{\mathbf{x}}\mathbf{u}^{\top}\boldsymbol{\ell}( \mathbf{x})\quad\text{s.t.}\ \mathbf{u}\in\mathcal{U}\subseteq\mathbb{R}^{M}, \tag{5}\] where \(\mathbf{u}\) is an \(M\)-dimensional vector that re-weights the influence of the \(M\) training tasks, and \(\mathcal{U}\) is the set of feasible weights, e.g., as defined by some weighting criterion. Note that the task-weighting problem in (5) is carried out at the meta level (often known as the "outer-loop"). It is, therefore, different from some recent meta-learning methods (Khodak et al., 2019; Baik et al., 2020; Flennerhag et al., 2020; H. B. Lee et al., 2020) that design different learning strategies for \(\phi_{i}(\mathbf{x})\) at the task-adaptation step (often known as the "inner-loop") to estimate the meta-parameters with the same outer-loop objective shown in (2). The objective in (5) is more flexible than (2), since it allows one to select different weighting criteria to train the meta-learning model of interest. The most widely-used weighting criterion is **uniform**: \(\mathbf{u}_{i}=\nicefrac{{1}}{{M}},\forall i\in\{1,\ldots,M\}\), making the objective in (5) resemble the one in (2). Another popular criterion is to select **difficult** tasks - tasks that have the largest validation losses - for training to optimise the performance on worst-case scenarios (Collins et al., 2020). However, such difficult tasks may not always be preferred when outliers and noise are present. That leads to another weighting approach which favours the **most familiar** data-points in single-task learning (Kumar et al., 2010; Bengio et al., 2009; Wang et al., 2017) - often referred to as _curriculum learning_. The latter two task-weighting approaches can be considered as the "exploration" and "exploitation" strategies used in reinforcement learning (RL), respectively. Similar to the exploration and exploitation dilemma in RL, we hypothesise that optimal task weighting is achieved by a balance between these two approaches. In the following section, we propose a principled approach to automate the re-weighting of tasks through an optimisation on a sequence of many mini-batches rather than relying on manually-designed criteria as in previous papers. ## 3 Methodology ### Task-weighting as a trajectory optimisation In practice, the optimisation in (5) is often carried out using a gradient-based optimiser where the next meta-parameter \(\mathbf{x}^{*}\) is obtained from the current meta-parameter \(\mathbf{x}\) and its corresponding \(\mathbf{u}\) via the function \(f\). Such an update can be considered as a state-transition dynamics where the meta-parameter \(\mathbf{x}\) is the state and the weighting vector \(\mathbf{u}\) is the action.
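To make this state-transition view concrete, here is a minimal sketch of \(f\) when the outer-loop optimiser is plain SGD; the paper's experiments use Adam (see Appendix G for the exact forms), and `task_losses` below is an assumed helper returning the vector \(\boldsymbol{\ell}(\mathbf{x})\).

```python
import torch

def sgd_dynamics(x, u, task_losses, alpha=1e-4):
    """f(x_t, u_t): one SGD step on the u-weighted validation loss u^T l(x).
    x must be a tensor with requires_grad=True."""
    weighted_loss = u @ task_losses(x)          # u^T l(x)
    grad, = torch.autograd.grad(weighted_loss, x)
    return x - alpha * grad                     # next state x_{t+1}
```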
Given this observation, we explicitly replace the weighting criterion in (5) by a trajectory optimisation to formulate the task-weighting meta-learning problem as follows: \[\mathbf{x}^{*}_{t+1} =f(\mathbf{x}^{*}_{t},\mathbf{u}^{*}_{t})\ \forall t\in\{1,\ldots,T\}\] \[\text{s.t.}\ \{\mathbf{u}^{*}_{t}\}_{t=1}^{T}=\arg\min_{\{ \mathbf{u}_{t}\}_{t=1}^{T}}\sum_{t=1}^{T}c(\mathbf{x}_{t},\mathbf{u}_{t})\] \[\text{s.t.}\ \mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t})\ \forall t\in\{1,\ldots,T\},\text{ and }\mathbf{x}_{1}\ \text{is given}\] \[\mathbf{x}^{*}_{1} =\mathbf{x}_{1}, \tag{6}\] where \(f(.,.)\) corresponds to the formulation of an optimiser such as stochastic gradient descent (SGD) (Robbins and Monro, 1951) or Adam (Kingma and Ba, 2014), \(c(.,.)\) is a cost function representing the weighting criteria, \(\mathbf{x}_{1}\) is the initialisation of the meta-learning parameter, and the subscript denotes the time step. To solve for an optimal re-weighting vector \(\mathbf{u}\) in the constraint of (6), the cost function needs to be defined. Since our interest is the convergence speed and the generalisation of the learnt meta-model, we define the cost function as an un-discounted sum of uniformly-weighted validation losses of tasks belonging to a sequence of \(T\) mini-batches plus a penalisation on the action \(\mathbf{u}\). For simplicity, the penalty on the action \(\mathbf{u}\) is assumed to follow a Gaussian prior with mean \(\mu_{u}\) and precision \(\beta_{u}\). In particular, the cost function can be expressed as: \[c(\mathbf{x}_{t},\mathbf{u}_{t})=\mathbf{1}_{M}^{\top}\boldsymbol{\ell}( \mathbf{x}_{t})+\frac{\beta_{u}}{2}\|\mathbf{u}_{t}-\mu_{u}\mathbf{1}_{M}\|^{ 2}, \tag{7}\] where \(\|.\|\) denotes the \(\ell_{2}\)-norm. Note that the action \(\mathbf{u}_{t}\) is not necessarily normalised to \(1\). We argue that imposing such a constraint might not work well in some cases, for example, a mini-batch containing all familiar tasks, and another one containing all unfamiliar tasks. Our hypothesis is to have small weights for the familiar tasks in the former mini-batch, while setting large weights for the unfamiliar tasks in the latter mini-batch to diversify the learning. Normalising \(\mathbf{u}_{t}\) to \(1\) would, however, be undesirable since the contribution of the tasks in both mini-batches would be the same, biasing the meta-learning model even further toward the familiar tasks in the first mini-batch. Hence, we allow the weights to be determined automatically by the optimisation in (6) with a Gaussian prior. Nevertheless, one can also enforce \(\mathbf{u}_{t}\) being normalised to \(1\) by simply replacing it with \(\operatorname{softmax}(\mathbf{u}_{t})\) in the state-transition dynamics \(f\). In general, the constraint in (6) cannot be solved exactly, but approximately using iterative methods such as DDP or iLQR. Given that the state-transition dynamics \(f\) follows the formulation of a first-order gradient-based optimiser (refer to Eq. (90) in Appendix G.1 and Eq. (97) in Appendix G.2 for the explicit form of \(f\) using SGD and Adam, respectively), \(f\) consists of the first derivatives of the weighted loss \(\mathbf{u}_{t}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\) w.r.t. \(\mathbf{x}\). Hence, applying DDP will result in an intractable solution since DDP requires the second derivatives of \(f\), corresponding to the third derivatives of the weighted loss \(\mathbf{u}_{t}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\).
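Before turning to the solver, a minimal sketch of the stage cost (7) may help fix notation; `losses` is assumed to be the vector \(\boldsymbol{\ell}(\mathbf{x}_{t})\) of per-task validation losses (an illustration rather than the paper's code).

```python
import torch

def stage_cost(losses, u, beta_u=10.0, mu_u=None):
    """Eq. (7): unweighted sum of validation losses plus a Gaussian
    penalty pulling the action u toward the mean mu_u (default 1/M)."""
    M = u.numel()
    mu_u = 1.0 / M if mu_u is None else mu_u
    return losses.sum() + 0.5 * beta_u * torch.sum((u - mu_u) ** 2)
```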
In contrast, iLQR needs only the first derivatives of \(f\), which correspond to the second derivatives of the weighted loss \(\mathbf{u}_{t}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\). Although this means that iLQR no longer exhibits the quadratic convergence rate of DDP, in the context of meta-learning, the significant reduction in computation outweighs the slower convergence for the task-weighting vector \(\mathbf{u}\). In this paper, we use iLQR to solve the constraint in (6). The locally-optimal actions obtained are then used to re-weight the tasks in each mini-batch to train the meta-learning model of interest. The approximation using Taylor series on the state-transition dynamics and cost function is shown in Appendix G and Appendix H, respectively. This approximation leads to the calculation of two Hessian matrices: one for the sum of the weighted loss, \(\mathbf{u}_{t}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\), in the dynamics, denoted as \(\mathbf{F}_{\mathbf{x}_{t}}\), and the other for the sum of the non-weighted loss, \(\mathbf{1}_{M}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\), in the cost function, denoted as \(\mathbf{C}_{\mathbf{x}_{t},\mathbf{x}_{t}}\). In addition, while performing the recursive backward pass of iLQR, we need to calculate another intermediate matrix of the _cost-to-go_ in (61) (please refer to Appendix E), denoted as \(\mathbf{V}_{t}\), which has the same size as the Hessian matrix. Naively calculating these Hessian matrices comes at quadratic complexity \(\mathcal{O}(D^{2})\) in terms of running time and storage, resulting in an intractable solution for large-scale models. To address this issue, the two Hessian matrices \(\mathbf{F}_{\mathbf{x}_{t}}\) and \(\mathbf{C}_{\mathbf{x}_{t},\mathbf{x}_{t}}\) may be approximated by their diagonals, which can be efficiently computed using Hutchinson's method (Bekas et al., 2007). However, as the size of the model increases, using a few samples from the uniform Rademacher distribution produces noisy estimates of the Hessian diagonals, resulting in a poor approximation (Yao et al., 2021). Instead of calculating the Hessian diagonals, we use Gauss-Newton diagonals as replacements. As the Gauss-Newton matrix is known to be a good approximation of the Hessian matrix (Martens, 2010; Botev et al., 2017), this results in a good approximation of the Hessian operator. In addition, Gauss-Newton diagonals can be efficiently calculated using a single backward pass (Dangel et al., 2020). For the matrix \(\mathbf{V}_{t}\), we approximate it by its diagonal. In fact, the matrix \(\mathbf{V}_{t}\) is analogous to the inverse Hessian matrix in Newton's method. Thus, approximating \(\mathbf{V}_{t}\) by its diagonal means performing Newton's method separately for each coordinate, which holds when the diagonal of \(\mathbf{V}_{t}\) is dominant. We also provide some additional results using the full matrix \(\mathbf{V}_{t}\) in Appendix B. In general, we do not observe any significant difference in terms of accuracy evaluated on the validation set between the diagonal approximation and the one with the full Gauss-Newton matrix. Nevertheless, these approximations increase the tractability of our proposed method, allowing it to be implemented for very large models, such as deep neural networks.
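For reference, below is a minimal sketch of Hutchinson's diagonal estimator mentioned above, \(\operatorname{diag}(\mathbf{H})\approx\mathbb{E}[\mathbf{z}\odot\mathbf{H}\mathbf{z}]\) with Rademacher \(\mathbf{z}\), using autograd Hessian-vector products; note the paper ultimately uses Gauss-Newton diagonals instead, and `loss_fn` is an assumed scalar-valued function of the parameter tensor.

```python
import torch

def hutchinson_diag(loss_fn, x, n_samples=10):
    """Estimate diag(H) of loss_fn at x (Bekas et al., 2007)."""
    loss = loss_fn(x)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    diag = torch.zeros_like(x)
    for _ in range(n_samples):
        # Rademacher probe vector with entries in {-1, +1}
        z = torch.randint(0, 2, x.shape).to(x.dtype) * 2 - 1
        # Hessian-vector product H z via a second backward pass
        hz, = torch.autograd.grad(grad, x, grad_outputs=z, retain_graph=True)
        diag += z * hz / n_samples
    return diag
```

As the text notes, a handful of probe vectors gives a noisy estimate for large models, which motivates the Gauss-Newton replacement computed in a single backward pass.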
The whole procedure of the proposed approach can be described as follows: first, a meta-parameter \(\mathbf{x}_{1}\) is initialised as the initial state, and then iLQR is employed to solve the constraint in (6) to determine a locally-optimal sequence of actions \(\{\mathbf{u}_{t}^{*}\}_{t=1}^{T}\) about an arbitrary-but-feasible trajectory \(\{(\hat{\mathbf{x}}_{t},\hat{\mathbf{u}}_{t})\}_{t=1}^{T}\) with \(\hat{\mathbf{x}}_{1}=\mathbf{x}_{1}\). The obtained weighting vectors \(\{\mathbf{u}_{t}^{*}\}_{t=1}^{T}\) are then used to weight tasks in each mini-batch to train the meta-parameter \(\mathbf{x}_{t+1}^{*}\) in (6). The newly calculated state at the end of the \(T\) time steps, \(\mathbf{x}_{T+1}^{*}\), is then used as the initial state for the next iteration. This process is repeated until the weighted validation loss \(\mathbf{u}^{\top}\boldsymbol{\ell}(\mathbf{x})\) converges to a local minimum. In the implementation, we observe that this optimisation converges in fewer than 10 iterations. The complete algorithm of the proposed task-weighting meta-learning approach is outlined in Algorithm 1. ``` 1:procedure train 2: define total loss \(J\) in Eq. (73) 3: define iLQRbackward( ) in Algorithm 2 (Appendix I) 4: initialise \(\mathbf{x}_{1}\) 5:while\(\mathbf{x}\) is not converged do 6: get \(T\) mini-batches, each consisting of \(M\) tasks 7: generate a random sequence of actions \(\{\hat{\mathbf{u}}_{t}\}_{t=1}^{T}\) 8: obtain the corresponding states \(\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\) 9:while iLQR cost is not converged do 10:\(\{\mathbf{K}_{t},\mathbf{k}_{t}\}_{t=1}^{T},\theta_{1}\leftarrow\textsc{iLQR backward}(\{\hat{\mathbf{x}}_{t},\hat{\mathbf{u}}_{t}\}_{t=1}^{T})\) 11:\(\varepsilon=2\) 12:repeat\(\triangleright\)Backtracking line search 13:\(\varepsilon\leftarrow\frac{1}{2}\varepsilon\) 14:for\(t=1:T\)do\(\triangleright\)iLQR forward pass 15:\(\mathbf{u}_{t}=\mathbf{K}_{t}\left(\mathbf{x}_{t}-\hat{\mathbf{x}}_{t}\right) +\varepsilon\mathbf{k}_{t}+\hat{\mathbf{u}}_{t}\) 16:\(\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t})\) 17:until\(J(\mathbf{u}_{1:T})-J(\hat{\mathbf{u}}_{1:T})\leq\frac{1}{2}\varepsilon\theta_{1}\) and \(\mathbf{u}_{t}\geq 0\) 18:\(\{\hat{\mathbf{x}}_{t}\}_{t=1}^{T}\leftarrow\{\mathbf{x}_{t}\}_{t=1}^{T}\)\(\triangleright\)Update nominal state 19:\(\{\hat{\mathbf{u}}_{t}\}_{t=1}^{T}\leftarrow\{\mathbf{u}_{t}\}_{t=1}^{T}\) 20:\(\mathbf{x}_{1}\leftarrow\mathbf{x}_{T+1}\) 21:return\(\mathbf{x}_{1}\) ``` **Algorithm 1** Task-weighting for meta-learning To simplify the implementation and convergence analysis, we select the nominal actions that coincide with the uniform weighting, meaning that: \(\hat{\mathbf{u}}_{ti}=\nicefrac{{1}}{{M}},\forall t\in\{1,\dots,T\},i\in\{1, \dots,M\}\). In addition, we constrain all elements of the weighting vector or action \(\mathbf{u}\) to be non-negative, since each task may contribute more, less, or not at all to the learning of \(\mathbf{x}\). This constraint is incorporated into the stopping condition for iLQR shown in step 17 of Algorithm 1. If at least one element \(\mathbf{u}_{ti},t\in\{1,\ldots,T\},i\in\{1,\ldots,M\}\) is negative, the backtracking line search will iterate one more time with \(\varepsilon\) decaying toward 0, forcing \(\mathbf{u}_{t}\) to stay close to the nominal \(\hat{\mathbf{u}}_{t}\). Thus, in the worst case, \(\varepsilon\) is reduced to 0, making \(\mathbf{u}_{t}\) coincide with \(\hat{\mathbf{u}}_{t}\), which is the uniform weighting.
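To complement Algorithm 1, the following is a schematic Python rendering (our paraphrase, with illustrative signatures) of the forward pass with backtracking line search in steps 11-17, including the non-negativity check on the weights.

```python
import numpy as np

def ilqr_forward(f, x1, nominal_x, nominal_u, K, k, J, theta1, max_halvings=10):
    """Roll out the iLQR controller, halving the step eps until the cost
    decreases sufficiently and all task weights stay non-negative."""
    eps = 2.0
    for _ in range(max_halvings):
        eps *= 0.5                              # backtracking line search
        xs, us, x = [x1], [], x1
        for t in range(len(nominal_u)):
            # u_t = K_t (x_t - x̂_t) + eps k_t + û_t
            u = K[t] @ (x - nominal_x[t]) + eps * k[t] + nominal_u[t]
            us.append(u)
            x = f(x, u)                         # state-transition dynamics
            xs.append(x)
        if (J(us) - J(nominal_u) <= 0.5 * eps * theta1
                and all((u >= 0).all() for u in us)):
            return xs, us
    # worst case: eps ~ 0, fall back to the (uniform) nominal weighting
    return nominal_x, nominal_u
```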
### Complexity analysis The downside of TOW is the overhead due to the linearisation and quadraticisation of the state-transition dynamics and cost function, and the calculation to obtain the controllers \(\mathbf{K}_{t}\) and \(\mathbf{k}_{t}\) shown in Algorithm 1. If \(\mathcal{O}(T_{0})\) is the time complexity to train a meta-learning method following a uniform weighting strategy, then the time complexity required by TOW consists of the following: * nominal trajectory: \(\mathcal{O}(T_{0})\) * linearisation and quadraticisation using Gauss-Newton matrices: \(\mathcal{O}(n_{\mathrm{iLQR}}m_{0}\eta D)\) * iLQR backward: \(\mathcal{O}(n_{\mathrm{iLQR}}MD)\) * iLQR forward with back-tracking line search: \(\mathcal{O}(n_{\mathrm{iLQR}}n_{\mathrm{ls}}T_{0})\), where \(n_{\mathrm{iLQR}}\) is the number of iterations in iLQR, \(m_{0}=m_{i}^{(q)},i\in\{1,\ldots,M\},\) is the total number of validation samples within a task, \(\eta\) is the number of arithmetic operations in the model of interest, and \(n_{\mathrm{ls}}\) is the number of back-tracking line-search steps. Thus, the final complexity of TOW is \(\mathcal{O}((n_{\mathrm{iLQR}}n_{\mathrm{ls}}+1)T_{0}+n_{\mathrm{iLQR}}(m_{0} \eta+M)D)\), compared to \(\mathcal{O}(T_{0})\) in conventional meta-learning. ### Convergence analysis This subsection proves that the training process for MAML using TOW to weight tasks converges to an \(\epsilon_{0}\)-stationary point, where \(\epsilon_{0}\) is some positive constant. Before analysing the convergence of TOW, we state a lemma bounding the norm of the weighting vector (or action) \(\mathbf{u}_{t}\) obtained from iLQR: **Lemma 1**.: _If \(\mathbf{u}_{t}\) is a stationary action of a nominal action \(\hat{\mathbf{u}}_{t}\) obtained from iLQR, then:_ \[\exists\delta>0:\|\mathbf{u}_{t}-\hat{\mathbf{u}}_{t}\|\leq\delta.\] Proof.: Please refer to Appendix A.2.1 for the detailed proof. To analyse the convergence of a general non-convex function, one typically assumes that the loss function, its first derivative and its second derivative are bounded and Lipschitz-continuous, as shown in Assumptions 1 to 3, respectively (Collins et al., 2020; Fallah et al., 2020). **Assumption 1**.: _The loss function of interest \(\ell\) mentioned in (3) is \(B\)-bounded and \(L\)-Lipschitz._ Formally, Assumption 1 means that the loss function \(\ell\) has the following properties: * Boundedness: \(\exists B>0:\forall\mathbf{x}\in\mathbb{R}^{D},|\ell(\mathbf{s}_{ij},y_{ij}; \mathbf{x})|\leq B\), * Lipschitz continuity: \(\exists L>0:\forall\widetilde{\mathbf{x}},\overline{\mathbf{x}}\in\mathbb{R}^{D},|\ell( \mathbf{s}_{ij},y_{ij};\widetilde{\mathbf{x}})-\ell(\mathbf{s}_{ij},y_{ij};\overline{\mathbf{x}})|\leq L\|\widetilde{\mathbf{x}}-\overline{\mathbf{x}}\|\). The boundedness assumption is to bound the second moment of the loss function, while the Lipschitz-continuity assumption on the loss function \(\ell\) implies that the gradient norm of \(\ell\) w.r.t. \(\mathbf{x}\) is bounded above (see Lemma 8 in Appendix A.4): \[\|\boldsymbol{\nabla}_{\mathbf{x}}\ell(\mathbf{s},y;\mathbf{x})\|\leq L. \tag{8}\] The bounded gradient norm in (8) also implies that the variance of the gradient of the loss function w.r.t. training samples is bounded, as shown in Lemma 2.
**Lemma 2**.: _If Assumption 1 holds, then the variance of the gradient of the loss function is \(\sigma^{2}\)-bounded:_ \[\exists\sigma>0:\forall\mathbf{x}\in\mathbb{R}^{D},\mathbb{E}_{(\mathbf{s}_{ ij},y_{ij})\sim\mathcal{D}_{i}}\left[\left\|\boldsymbol{\nabla}_{\mathbf{x}}\ell( \mathbf{s}_{ij},y_{ij};\mathbf{x})-\mathbb{E}_{(\mathbf{s}_{ij},y_{ij})\sim\mathcal{ D}_{i}}\left[\boldsymbol{\nabla}_{\mathbf{x}}\ell(\mathbf{s}_{ij},y_{ij}; \mathbf{x})\right]\right\|^{2}\right]\leq\sigma^{2}.\] Proof.: Please refer to Appendix A.2.2 for the detailed proof. The result in Lemma 2 also leads to the boundedness of the variance of the weighted validation loss, as shown in Lemma 3. **Lemma 3**.: _Given the result in Lemma 2, the variance of \(\boldsymbol{\nabla}_{\mathbf{x}}\mathbf{u}_{t}^{\top}\boldsymbol{\ell}(\mathbf{x}_{t})\) is bounded above by \(\widetilde{\sigma}^{2}=\sigma^{2}\left(\delta+M^{-0.5}\right)^{2}\)._ Proof.: Please refer to Appendix A.2.3 for the detailed proof. **Assumption 2**.: _The gradient of the loss function \(\ell(\mathbf{s},y;\mathbf{x})\) w.r.t. \(\mathbf{x}\) is \(S\)-Lipschitz._ Assumption 2 means that: \[\exists S>0:\forall\widetilde{\mathbf{x}},\overline{\mathbf{x}}\in\mathbb{R}^{ D},\|\boldsymbol{\nabla}_{\mathbf{x}}\ell(\mathbf{s}_{ij},y_{ij};\widetilde{\mathbf{x}})- \boldsymbol{\nabla}_{\mathbf{x}}\ell(\mathbf{s}_{ij},y_{ij};\overline{\mathbf{x}})\| \leq S\|\widetilde{\mathbf{x}}-\overline{\mathbf{x}}\|.\] **Assumption 3**.: _The Hessian matrix \(\boldsymbol{\nabla}_{\mathbf{x}}^{2}\ell(\mathbf{s},y;\mathbf{x})\) is \(\rho\)-Lipschitz._ Assumption 3 implies that: \[\exists\rho>0:\forall\widetilde{\mathbf{x}},\overline{\mathbf{x}}\in\mathbb{R} ^{D},\left\|\boldsymbol{\nabla}_{\mathbf{x}}^{2}\ell(\mathbf{s}_{ij},y_{ij}; \widetilde{\mathbf{x}})-\boldsymbol{\nabla}_{\mathbf{x}}^{2}\ell(\mathbf{s}_{ij},y_{ ij};\overline{\mathbf{x}})\right\|\leq\rho\|\widetilde{\mathbf{x}}-\overline{ \mathbf{x}}\|.\] These assumptions are used to bound the gradient of the "true" validation loss of task \(\mathcal{T}_{i}\), which is defined as follows: \[\boldsymbol{\tilde{\ell}}_{i}(\mathbf{x})=\mathbb{E}_{\mathcal{D}_{i}^{(q)}}\left[ \ell\left(\mathbf{s}_{ij}^{(q)},y_{ij}^{(q)};\phi(\mathbf{x})\right)\right], \tag{9}\] where \(\mathbb{E}_{\mathcal{D}_{i}^{(q)}}\) indicates the expectation over all data pairs \(\{(\mathbf{s}_{ij}^{(q)},y_{ij}^{(q)})\}_{j=1}^{+\infty}\) sampled from the true (validation) probability distribution \(\mathcal{D}_{i}^{(q)}\). **Lemma 4**.: _If the conditions in Assumptions 1 to 3 hold, then the gradient of the true validation loss \(\boldsymbol{\tilde{\ell}}_{i}(\mathbf{x})\) defined in Eq. (9) is \(\widetilde{S}\)-Lipschitz, where: \(\widetilde{S}=S(1+\gamma S)^{2}+\gamma\rho L\)._ Proof.: Please refer to Appendix A.2.4 for the detailed proof. Given the above assumptions and lemmas, the convergence of TOW can be shown in Theorem 5. We also provide some examples of loss functions and analyse whether they satisfy Assumptions 1 to 3 in Appendix J.
**Theorem 5**.: _If Assumptions 1 to 3 hold, the learning rate \(\alpha<\nicefrac{{2}}{{\widetilde{S}(\delta\sqrt{M}+1)}}\), and \(\mathbf{z}\) is randomly sampled from \(\{\mathbf{x}_{t}\}_{t=1}^{T_{\mathrm{iter}}}\) returned by Algorithm 1, then:_ \[\mathbb{E}_{\mathbf{z}\sim\{\mathbf{x}_{t}\}_{t=1}^{T_{\mathrm{iter}}}}\left[ \mathbb{E}_{\mathcal{D}_{1:M}^{(q)}}\left[\left\|\boldsymbol{\nabla}_{\mathbf{z}} \mathbf{u}_{t}^{\top}\boldsymbol{\ell}_{1:M}\left(\mathbf{z}\right)\right\|^{2}\right] \right]\leq\epsilon_{0}+\frac{\kappa}{T_{\mathrm{iter}}},\] _where:_ \[\epsilon_{0} =\frac{4\delta B\sqrt{M}+\alpha^{2}\widetilde{\sigma}^{2} \widetilde{S}\left(\delta\sqrt{M}+1\right)}{\alpha\left[2-\alpha\widetilde{S} \left(\delta\sqrt{M}+1\right)\right]}>0 \tag{10}\] \[\kappa =\frac{2\mathbf{u}_{1}^{\top}\boldsymbol{\tilde{\ell}}_{1:M}\left( \mathbf{x}_{1}\right)}{\alpha\left[2-\alpha\widetilde{S}\left(\delta\sqrt{M}+ 1\right)\right]}, \tag{11}\] _with \(T_{\mathrm{iter}}\) as the number of gradient updates for the meta-parameter (or the number of mini-batches of tasks used), and \(\mathbb{E}_{\mathcal{D}_{1:M}^{(q)}}\) as the expectation taken over all data sampled from \(t\) mini-batches \(\{\mathcal{D}_{i}^{(q)}\}_{i=1}^{t}\), each of which has \(M\) tasks._ Proof.: Please refer to Appendix A.3 for the detailed proof. Theorem 5 shows that the expected squared gradient norm of the weighted validation loss is upper-bounded by a monotonically decreasing function of the number of iterations \(T_{\mathrm{iter}}\). This implies that Algorithm 1 converges in expectation to an \(\epsilon_{0}\)-stationary point. **Remark 1**.: _The result in Theorem 5 agrees with some previous works on task-weighting for meta-learning, e.g. Collins et al., 2020, Ineq. (75), where the gradient norm is bounded above by some positive constant. The tightness of the bound in Theorem 5 mostly depends on how small the value of \(\epsilon_{0}\) is. In fact, we can observe that \(\lim_{\delta\to 0}\epsilon_{0}=0\). Thus, to ensure that \(\epsilon_{0}\) is small, \(\delta\) needs to be small. We can make \(\delta\) small by imposing a strong prior on \(\mathbf{u}\) by setting a large value of \(\beta_{u}\) in Eq. (7), e.g. \(\beta_{u}=10\) in our experiments presented in Section 5. In addition, we integrate the backtracking line search in Algorithm 1 to force \(\mathbf{u}\) to stay close to the uniform weighting \(\hat{\mathbf{u}}\), making \(\delta\) very small. Another factor contributing to the small value of \(\epsilon_{0}\) is the inverse of the number of tasks in a mini-batch, \(\nicefrac{{1}}{{M}}\), as seen in (10). In practice, \(M\) cannot be too large and is often in the range of 5 to 10. Since we can impose constraints to make \(\delta\) tiny, we can guarantee a small \(\epsilon_{0}\), making the bound in Theorem 5 tight._ ## 4 Related work Our work directly relates to re-weighting tasks in meta-learning. One notable recent work is TR-MAML (Collins et al., 2020), which places higher weights on tasks with larger validation losses to optimise performance for worst-case scenarios. However, when the number of training tasks is very large, e.g. there will be \(\binom{1000}{5}\approx 8.25\times 10^{12}\) 5-way classification tasks formed from 1000 characters in the Omniglot dataset (Lake et al., 2015), learning a weight for each training task is intractable. TR-MAML circumvents this issue by clustering tasks into a small number of clusters based on some ad-hoc intuition and learning the weight for each cluster.
This, however, reduces the practicability of TR-MAML. Another work, \(\alpha\)-MAML (Cai et al., 2020), provides an upper-bound on the distance between the weighted risk evaluated on training tasks and the expected risk on testing tasks. The re-weighting factors can then be obtained by minimising that upper-bound, reducing the variance between training and testing tasks. In reinforcement learning (RL), MWL-MAML (Z. Xu et al., 2021) was recently proposed to employ meta-learning to learn the locally-optimal re-weighting factor of each trajectory using a few gradient descent steps. The downside of MWL-MAML is the need for validation trajectories (or validation tasks in meta-learning) that are representative enough to learn those weights. Furthermore, TR-MAML, \(\alpha\)-MAML and MWL-MAML rely on a single mini-batch of tasks to determine the weights without considering the effect of a sequence of mini-batches when training a meta-model, potentially yielding sub-optimal solutions. In contrast, our proposed method neither needs to cluster tasks nor requires an additional set of validation tasks. In addition, our proposed method automates the calculation of task weights through an optimisation over a sequence of mini-batches, allowing it to obtain better locally-optimal solutions beyond a single mini-batch of tasks. There are also other studies about task balancing, such as Learn to Balance (L2B) (H. B. Lee et al., 2020). However, L2B introduces additional parameters in the task-adaptation step (inner-loop), while our method explicitly introduces a weighting vector at the meta-parameter update step (outer-loop). Our work is also similar to task-weighting in multi-task learning (Chen et al., 2018; Sener and Koltun, 2018; Guo et al., 2018; L. Liu et al., 2021), where the goal is to obtain an optimal re-weighting vector \(\mathbf{u}\) for all tasks. Such modelling can, therefore, work well with a small number of tasks, but potentially falls short when the number of tasks is very large, e.g. on the order of \(10^{12}\) training tasks for 5-way Omniglot classification, due to the poor scalability of the computational and storage complexities of that modelling. In comparison, our proposed approach does not explicitly learn the weighting vector for all training tasks, but determines the weighting vector for tasks in the current and some following mini-batches via a trajectory optimisation technique. In a loose sense, the multi-task learning approaches can be considered as an analogy to a "batch" learning setting w.r.t. the weighting vector \(\mathbf{u}\), while ours is analogous to an "online" learning setting, which can scale well with the number of training tasks. While writing this paper, we became aware of a concurrent work - Auto-Lambda (S. Liu et al., 2022) - that is designed to use meta-learning to learn how to weight tasks in the multi-task setting. Auto-Lambda is similar to TR-MAML in that it is designed for a fixed number of tasks in multi-task learning, and becomes intractable when the number of tasks is large. Furthermore, Auto-Lambda is also similar to MWL-MAML since Auto-Lambda employs validation subsets of training tasks to meta-learn the weighting of tasks. This paper is motivated by the observation of large variation in terms of prediction performance made by meta-learning algorithms on various testing tasks (Dhillon et al., 2019, Figure 1), implying that the trained meta-model may be biased toward certain training tasks.
Such observation may be rooted in task relatedness or task similarity, which is a growing research topic in the field of transfer learning. Existing works include task-clustering using k-nearest neighbours (Thrun and O'Sullivan, 1996) or using convex optimisation (Jacob et al., 2009), learning task relationships through task covariance matrices (Y. Zhang and Yeung, 2012), or theoretical guarantees to learn similarity between tasks (Shui et al., 2019). Recently, a large-scale empirical study, known as Taskonomy (Zamir et al., 2018), investigated the relationship between 26 computer vision tasks. Another promising direction to quantify task similarity is to employ task representation, notably Task2Vec (Achille et al., 2019), which is based on the Fisher Information matrix to embed tasks into a latent space. One commonality among those studies is that learning from certain training tasks may be beneficial to generalise to unseen tasks. This suggests the design of a mechanism to re-weight the contribution of each training task to improve the performance of the meta-model of interest. Furthermore, our work is related to finite-horizon discrete-time trajectory optimisation or open-loop optimal control, which has been well studied in the field of control and robotics. The objective is to minimise a cost function that depends on the states and actions in many consecutive time steps given the state-transition dynamics. An exact solution can be obtained for the simplest problem, where the cost is quadratic and the dynamics is linear, using the linear quadratic regulator (Anderson and Moore, 2007). For a general non-linear problem, approximate solutions can be found via iterative approaches, such as differential dynamic programming (Jacobson and Mayne, 1970; Murray and SJ Yakowitz, 1984; Sidney Yakowitz and Rutherford, 1984) and the iterative LQR (iLQR) (Todorov and W. Li, 2005; Tassa et al., 2012). ## 5 Experiments ### N-way k-shot classification In this section, we empirically compare the performance of the proposed trajectory optimisation task weighting (TOW) approach with three baselines: one with uniform weighting, denoted as _uniform_, one with higher weights on difficult tasks (or tasks with higher losses), denoted as _exploration_, and the other one with higher weights on easier tasks (or tasks with lower losses), denoted as _exploitation_. The experiments are based on the \(n\)-way \(k\)-shot classification setting used in few-shot learning, with tasks formed from Omniglot (Lake et al., 2015) and mini-ImageNet (Vinyals et al., 2016) - the two most widely used datasets to evaluate the performance of meta-learning algorithms.
Figure 1: Validation accuracy exponential moving average (with smoothing factor 0.1) of different task-weighting strategies evaluated on: (a) and (b) Omniglot, and (c), (d) and (e) mini-ImageNet.
Naively implementing the two baselines, _exploration_ and _exploitation_, will easily lead to trivial solutions where only the task with the largest or smallest loss within a mini-batch is selected. Thus, only one task in each mini-batch is used for learning, consequently making the learning noisy and unstable. We, therefore, introduce a prior, denoted as \(p(\mathbf{u})\), as a regularisation to prevent many tasks within the same mini-batch from being discarded.
The objective to determine the weights for these two baselines can be written as follows: \[\mathbf{u}^{*}=\begin{cases}\operatorname*{arg\,min}_{\mathbf{u}}-\mathbf{u}^ {\top}\boldsymbol{\ell}(\mathbf{x})-\ln p(\mathbf{u})&\text{for \emph{exploration}}\\ \operatorname*{arg\,min}_{\mathbf{u}}\mathbf{u}^{\top}\boldsymbol{\ell}( \mathbf{x})-\ln p(\mathbf{u})&\text{for \emph{exploitation}}.\end{cases} \tag{12}\] In general, the prior \(p(\mathbf{u})\) can be any distribution that has support in \((0,+\infty)\), such as the Beta, Gamma or Cauchy distribution. For simplicity, \(p(\mathbf{u})\) is selected as a Dirichlet distribution with a concentration \(\kappa>1\) to constrain the weight vector within a probability simplex. One can then use a non-linear optimisation solver to solve (12) to obtain an optimal \(\mathbf{u}^{*}\) for one of the two baselines. In the implementation, we use Sequential Least SQuares Programming (SLSQP) to obtain \(\mathbf{u}^{*}\). Note that the definition of the _exploration_ baseline above resembles TR-MAML (Collins et al., 2020), but is applicable to common few-shot learning benchmarks where the number of tasks is large. Similarly, the _exploitation_ baseline is analogous to robust Bayesian data re-weighting (Wang et al., 2017) or _curriculum learning_ in single-task learning. For the Omniglot dataset, there are a total of 1,623 different handwritten characters from 50 different alphabets. Each character was drawn online via Amazon's Mechanical Turk by 20 different people (Lake et al., 2015). Conventionally, 1,000 randomly-sampled characters are used for training and the rest are used for testing. Such a train-test split might, however, be inadequate since the characters in one alphabet can be present in both training and testing sets, and learning a character in an alphabet might make it easier to classify other characters from that same alphabet. To make the classification more challenging, we follow the original train-test split (Lake et al., 2015) by using 30 alphabets for training and 20 alphabets for testing. We also utilise the hierarchical structure of alphabet-character to form finer-grained classification tasks, making the classification more difficult than the random train-test split. Furthermore, we follow the convention from previous work by resizing all the grayscale images to 28-by-28 pixels2 to have a fair evaluation. Footnote 2: reported in (Snell et al., 2017) For mini-ImageNet, the dataset consists of 100 classes, each class having 600 colour images sampled from the 1,000 classes of the ImageNet dataset (Deng et al., 2009). We follow the standard train-test split that uses 64 classes for training, 16 classes for validation and 20 for testing (Ravi and Larochelle, 2017) in our evaluation. To be consistent with previous work, we pre-process all images by resizing them to 84-by-84 pixels2 before carrying out any training or testing. The base model used across the experiments is the 4-module CNN that is widely used in few-shot image classification (Vinyals et al., 2016; Finn et al., 2017). Each module of the base network consists of 32 filters with a 3-by-3 kernel, followed by batch normalisation, a Rectified Linear Unit (ReLU) activation and a 2-by-2 max-pooling layer. The output of the last CNN module is flattened before performing classification. Two common meta-learning algorithms considered in this section include MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017).
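For completeness, here is a PyTorch sketch of this standard 4-module CNN backbone with the MAML-style linear head; `feat_dim` assumes 84-by-84 mini-ImageNet inputs and is an illustrative choice (28-by-28 Omniglot inputs would yield a different flattened size).

```python
import torch.nn as nn

def conv_module(in_ch, out_ch=32):
    # one module: 3x3 conv (32 filters), batch norm, ReLU, 2x2 max-pool
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class FourModuleCNN(nn.Module):
    """The widely-used 4-module CNN base network for few-shot learning."""
    def __init__(self, in_channels=3, n_way=5, feat_dim=32 * 5 * 5):
        super().__init__()
        self.features = nn.Sequential(
            conv_module(in_channels), conv_module(32),
            conv_module(32), conv_module(32),
        )
        # linear classification head (MAML variant; Protonet would instead
        # compare the flattened embedding to class prototypes)
        self.classifier = nn.Linear(feat_dim, n_way)

    def forward(self, x):
        z = self.features(x).flatten(1)   # flatten the last module's output
        return self.classifier(z)
```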
In MAML, the flattened features are passed to a fully-connected layer to classify, while in Prototypical Networks, the classification is based on Euclidean distances to the prototypes of each class. For all experiments, the learning rate \(\gamma\) of task adaptation (also known as the inner-loop) shown in Eq. (4) is 0.1 for Omniglot and 0.01 for mini-ImageNet, with 5 gradient updates. The learning rate for the meta-parameters, \(\alpha\), is set at \(10^{-4}\) for all settings. The mini-batch size is \(M=10\) tasks for Omniglot and \(M=5\) tasks for mini-ImageNet. For the Dirichlet concentration of the prior in the _exploration_ and _exploitation_ baselines, we tried three values \(\kappa\in\{0.2,1.2,5\}\) and found that too small a value of \(\kappa\) leads to noisy learning since only the easiest or hardest task is selected, while too large a value of \(\kappa\) makes both baselines identical to uniform weighting. Hence, we select \(\kappa=1.2\), which balances between these two extremes. Note that \(\kappa=1\) results in a flat (uniform) prior, leading to a trivial solution. For the trajectory optimiser iLQR, the state-transition dynamics \(f\) follows the formula of the Adam optimiser since Adam provides less noisy training than SGD. The nominal trajectory is, as mentioned in Section 3.1, selected with uniform actions: \(\hat{\mathbf{u}}_{tj}=\nicefrac{{1}}{{M}},\forall j\in\{1,\dots,M\},t\in\{1, \dots,T\}\). The number of iterations used in iLQR is 2 to speed up the training, although a higher number of iterations can be used to achieve better performance by trading off running time. We also provide an ablation study with two different numbers of iterations in iLQR in Appendix D, where the larger number of iterations in iLQR slightly improves the prediction accuracy on the validation set of mini-ImageNet. The number of time steps (or number of mini-batches) is \(T=10\) for Omniglot and 5 for mini-ImageNet. The parameters of the prior on the action \(\mathbf{u}_{t}\) are \(\mu_{u}=\nicefrac{{1}}{{M}}\) and \(\beta_{u}=10\). As we do not observe any major difference between the different configurations of \(M\) and \(T\) used in this experiment, we report the result for the case \(M=10\) and \(T=5\).
\begin{table}
\begin{tabular}{l l c c c}
\hline \hline
 & **Weighting method** & **Omniglot** & \multicolumn{2}{c}{**Mini-ImageNet**} \\
\cline{4-5} & & & **4-layer CNN** & **Resnet-10** \\
\hline
MAML & Uniform & 94.86 \(\pm\)0.43 & 48.70 \(\pm\)1.84 & 49.12 \(\pm\)0.76 \\
 & Exploration & 92.64 \(\pm\)0.52 & 48.80 \(\pm\)0.72 & 48.72 \(\pm\)0.74 \\
 & Exploitation & 95.34 \(\pm\)0.42 & 49.22 \(\pm\)0.74 & 48.44 \(\pm\)0.76 \\
 & TOW & 95.94 \(\pm\)0.40 & 51.55 \(\pm\)0.75 & **52.32 \(\pm\)0.80** \\
\hline
Protonet & Uniform & 95.21 \(\pm\)0.37 & 49.42 \(\pm\)0.78 & – \\
 & Exploration & 94.57 \(\pm\)0.38 & 48.56 \(\pm\)0.77 & – \\
 & Exploitation & 95.78 \(\pm\)0.40 & 48.39 \(\pm\)0.79 & – \\
 & TOW & **96.84 \(\pm\)0.37** & **51.05 \(\pm\)0.80** & – \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The classification accuracy averaged over 1,000 random testing tasks generated from Omniglot and 600 tasks from mini-ImageNet with 95 percent confidence intervals; the bold numbers denote statistically significant differences from the others in the same column.
Each experiment is carried out on a single NVIDIA Tesla V100 GPU with 32 GB memory following the configuration of an NVIDIA DGX-1. Figures 1(a) to 1(d) plot the accuracy evaluated on 100 validation tasks drawn from Omniglot and mini-ImageNet following the 5-way 1-shot setting. The validation accuracy curves along the training process show that TOW can achieve higher performance compared to the three baselines on various datasets and meta-learning methods. We also carry out an experiment using Resnet-10 (He et al., 2016) on mini-ImageNet to demonstrate the scalability of TOW. The results on Resnet-10 in Figure 1(e) show a similar observation that TOW out-performs the other task-weighting methods. We note that the validation accuracy curves of Resnet-10 fluctuate due to the dropout we inject to regularise the network against overfitting, since it is known that larger networks, such as Resnet-10 or Resnet-18, severely overfit in the few-shot setting (C. Nguyen et al., 2020). For the evaluation on testing sets, we follow the standard setting in few-shot learning by measuring the prediction accuracy on 1,000 and 600 testing tasks formed from Omniglot and mini-ImageNet, respectively (Vinyals et al., 2016; Finn et al., 2017). The results in Table 1 show that TOW can be at least 2 percent more accurate than the best baseline among Uniform, Exploration, and Exploitation. Note that there is a difference between the results shown in Figure 1 and Table 1 due to their differences in terms of (i) the source of tasks: one from the validation set, the other from the testing set, and (ii) the number of tasks evaluated. Despite the promising results, the downside of TOW is the overhead caused by approximating the cost and state-transition dynamics over \(T\) mini-batches of tasks to determine the locally-optimal \(\{\mathbf{u}_{t}^{*}\}_{t=1}^{T}\). As shown in Table 2, TOW is about 7 to 9 times slower than the three baselines. We also provide a visualisation of the weights \(\mathbf{u}_{t}\) in Appendix C.
\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & **Omniglot** & \multicolumn{2}{c}{**mini-ImageNet**} \\
\cline{3-4} & & **CNN** & **Resnet-10** \\
\hline
Exploration & 1.55 & 5.24 & 7.18 \\
Exploitation & 1.55 & 5.24 & 7.18 \\
Uniform & 1.35 & 5.03 & 7.16 \\
TOW & 7.50 & 38.12 & 67.78 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Running time (in GPU-hours) of different task-weighting methods based on MAML.
### Any-shot classification We also follow the _realistic task distribution_ (H. B. Lee et al., 2020) to evaluate further the performance of TOW. The new setting is mostly similar to \(N\)-way \(k\)-shot, except that \(k\) is not fixed and might be different for each class within a task. Specifically, with a probability of 0.5, the number of shots for each class is sampled from a uniform distribution: \(k\sim\mathrm{Uniform}(1,50)\) to simulate class imbalance. With the other 0.5 probability, the same number of shots \(k\sim\mathrm{Uniform}(1,50)\) is used for all classes within that task. The number of validation (or query) samples is kept at 15 samples per class. Similar to the experiments carried out in Section 5.1, TOW demonstrates higher performance compared to the three baselines: exploitation, exploration and uniform, along the training process, as shown in Figure 2. In general, TOW achieves state-of-the-art results when evaluated on 3,000 testing tasks formed from mini-ImageNet, as shown in Table 3, compared to common meta-learning methods.
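A small sketch of the any-shot sampling procedure described above (an illustrative helper, not the benchmark's official code):

```python
import random

def sample_shots(n_way=5, k_max=50):
    """Realistic any-shot task sampling (Section 5.2): with prob. 0.5,
    draw a separate k ~ Uniform(1, 50) per class to simulate class
    imbalance; otherwise share a single k across all classes."""
    if random.random() < 0.5:
        return [random.randint(1, k_max) for _ in range(n_way)]
    k = random.randint(1, k_max)
    return [k] * n_way
```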
To further evaluate TOW, we follow the same setting and use the models trained on mini-ImageNet to test on 50 classes of bird images split from the CUB dataset. The results in the last column of Table 3 show that TOW can also work well on out-of-distribution tasks formed from CUB compared to most of the methods in the literature. The main reason for the worse performance of TOW compared with Bayesian TAML is that the meta-learning methods underlying TOW, MAML and Protonet, have a smaller number of meta-parameters to model tasks than Bayesian TAML.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
**Training set** & \multicolumn{2}{c}{**mini-ImageNet**} \\
\cline{2-3} **Testing set** & **mini-ImageNet** & **CUB** \\
\hline
MAML (Finn et al., 2017) & 66.64 \(\pm\)0.22 & 65.77 \(\pm\)0.24 \\
Meta-SGD (Z. Li et al., 2017) & 69.95 \(\pm\)0.20 & 65.94 \(\pm\)0.22 \\
MT-net (Y. Lee and Choi, 2018) & 67.63 \(\pm\)0.23 & 66.09 \(\pm\)0.23 \\
ABML (Ravi and Beatson, 2019) & 56.91 \(\pm\)0.19 & 57.88 \(\pm\)0.20 \\
Protonet (Snell et al., 2017) & 69.11 \(\pm\)0.19 & 60.80 \(\pm\)0.19 \\
Proto-MAML (Triantafillou et al., 2020) & 68.96 \(\pm\)0.18 & 61.77 \(\pm\)0.19 \\
Bayesian TAML (H. B. Lee et al., 2020) & 71.46 \(\pm\)0.19 & **71.71 \(\pm\)0.21** \\
\hline
TOW-MAML & 70.02 \(\pm\)0.24 & 68.34 \(\pm\)0.25 \\
TOW-Protonet & **72.12 \(\pm\)0.21** & 64.79 \(\pm\)0.25 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Prediction results on any-shot classification evaluated on 3,000 testing tasks with 95 percent confidence intervals; the bold numbers denote results that are statistically significant. The results of previous methods are reported in (H. B. Lee et al., 2020).
Figure 2: Exponential moving average with smoothing factor 0.1 of the prediction accuracy evaluated on validation tasks formed from the any-shot setting of the mini-ImageNet dataset mentioned in Section 5.2, where the base model is a 4-module CNN.
## 6 Discussion and conclusion We propose a principled approach based on trajectory optimisation to mitigate the issue of the non-uniform distribution of training tasks in meta-learning. The idea is to model the training process in meta-learning by trajectory optimisation, with the meta-parameter as the state and the weights of training tasks as the action. The locally-optimal weights obtained from iLQR - a trajectory optimiser - are then used to re-weight tasks to train the meta-parameter of interest. We demonstrate that the proposed approach converges with fewer training tasks and has a final prediction accuracy that out-performs some common hand-crafted task-weighting baselines. Our proposed method also has some limitations that could be addressed in future work. TOW relies on iLQR, which is not ideal for large-scale systems with a high-dimensional state space such as deep neural networks. Despite the approximation of Hessian matrices by their diagonals as mentioned in Section 3.1, the linearisation of the state-transition dynamics and the quadraticisation of the cost function are still time-consuming, and consequently reduce TOW's efficiency. Future work might find a faster approximation to optimise the running time of TOW, as well as evaluate on large-scale datasets such as meta-dataset (Triantafillou et al., 2020). Furthermore, our method is local in nature due to the Taylor series approximation about a nominal trajectory used in iLQR.
One way to improve further is to define a "global" or "stationary" policy \(\pi_{\theta}(\mathbf{x}_{t},\mathbf{u}_{t})\), which is similar to Guided Policy Search (Levine and Koltun, 2013; Levine and Koltun, 2014). This policy can then be trained on multiple locally-optimal trajectories obtained from iLQR. While this approach may offer superior generalisation for the policy, scalability is an issue since the policy needs to process the high-dimensional state \(\mathbf{x}_{t}\). As a result, a very large model may be required to implement such a policy.
2310.17523
Adaptive Resource Management for Edge Network Slicing using Incremental Multi-Agent Deep Reinforcement Learning
Multi-access edge computing provides local resources in mobile networks as the essential means for meeting the demands of emerging ultra-reliable low-latency communications. At the edge, dynamic computing requests require advanced resource management for adaptive network slicing, including resource allocations, function scaling and load balancing to utilize only the necessary resources in resource-constraint networks. Recent solutions are designed for a static number of slices. Therefore, the painful process of optimization is required again with any update on the number of slices. In addition, these solutions intend to maximize instant rewards, neglecting long-term resource scheduling. Unlike these efforts, we propose an algorithmic approach based on multi-agent deep deterministic policy gradient (MADDPG) for optimizing resource management for edge network slicing. Our objective is two-fold: (i) maximizing long-term network slicing benefits in terms of delay and energy consumption, and (ii) adapting to slice number changes. Through simulations, we demonstrate that MADDPG outperforms benchmark solutions including a static slicing-based one from the literature, achieving stable and high long-term performance. Additionally, we leverage incremental learning to facilitate a dynamic number of edge slices, with enhanced performance compared to pre-trained base models. Remarkably, this approach yields superior reward performance while saving approximately 90% of training time costs.
Haiyuan Li, Yuelin Liu, Xueqing Zhou, Xenofon Vasilakos, Reza Nejabati, Shuangyi Yan, Dimitra Simeonidou
2023-10-26T16:16:08Z
http://arxiv.org/abs/2310.17523v2
# Adaptive Resource Management for Edge Network Slicing using Incremental Multi-Agent Deep Reinforcement Learning ###### Abstract Multi-access edge computing provides local resources in mobile networks as the essential means for meeting the demands of emerging ultra-reliable low-latency communications. At the edge, dynamic computing requests require advanced resource management for adaptive network slicing, including resource allocations, function scaling and load balancing to utilize only the necessary resources in resource-constraint networks. Recent solutions are designed for a _static_ number of slices. Therefore, the painful process of optimization is required again with any update on the number of slices. In addition, these solutions intend to maximize instant rewards, neglecting long-term resource scheduling. Unlike these efforts, we propose an algorithmic approach based on multi-agent deep deterministic policy gradient (MADDPG) for optimizing resource management for edge network slicing. Our objective is two-fold: (i) maximizing long-term network slicing benefits in terms of delay and energy consumption, and (ii) adapting to slice number changes. Through simulations, we demonstrate that MADDPG outperforms benchmark solutions including a static slicing-based one from the literature, achieving stable and high long-term performance. Additionally, we leverage incremental learning to facilitate a _dynamic_ number of edge slices, with enhanced performance compared to pre-trained base models. Remarkably, this approach yields _superior_ reward performance while _saving_ approximately 90% of training time costs. Multi-access edge computing, network slicing, incremental learning, MADDPG ## I Introduction and Background With the rapid development of the Internet of Things (IoT) and mobile networks, there has been an increasing demand for latency-sensitive and computing-intensive services and applications [1, 2]. In response to this, the concept of multi-access edge computing (MEC) has emerged as a prominent solution in fifth-generation (5G) networks. MEC brings network resources closer to users, decentralizing computing demands from data centers and enabling better network experiences in terms of latency and processing speeds [3, 4]. In the context of edge computing networks, network slicing has gained significant attention as an on-demand approach to provide customized services by dividing the physical edge network into multiple logical ones [5, 6]. Benefiting from proximity and localized data processing capabilities, the combination of MEC and network slicing enables the provisioning of a wide range of ultra-reliable low latency communications (URLLC) services, including real-time video processing, edge analysis, edge storage, etc. [7, 8, 9, 10]. However, due to the dynamic nature of loads and limited resources on MECs, an effective network slicing resource management solution is required to guarantee service quality and maximize resource utilization. Much research has focused on the development of management strategies for network slicing, aiming to tackle the allocation of shared resources among slices on edge computing networks to satisfy diverse 5G applications. Based on the employed techniques, these works can be categorized into optimization-based [11, 12, 13], game theory-based [14, 15, 16, 17], or deep reinforcement learning (DRL)-based [18, 19, 20, 21, 22, 23] strategies [11, 24].
In particular, Suh _et al._ [25] applied a deep Q-learning-based algorithm that decides the resource allocation of MECs to multiple slices. However, their strategy, which employs a single agent to handle policies for multiple network slices, is severely limited by the exponentially growing action space. This approach may encounter difficulties in converging and adapting to complex networks. In comparison, Sun _et al._ [20] proposed an autonomous virtual resource-slicing framework, which dynamically reserves resources based on the traffic ratio and then refines the allocation with a single-agent DRL-based algorithm. However, in this method, the single-agent model only determines the resource allocation for one network slice. The competition between slices in the same time slot is expanded into time spans of the Markov Decision Process (MDP) and is attenuated by the discount factor. In order to accommodate scenarios with many slices and model the relationships between slices, Vila _et al._ [19] designed a collaborative multi-agent DRL algorithm that allocates a DRL agent to each slice to define the capacity shares between slices. In addition, Caballero _et al._ [14] modelled resource sharing between slices as a Fisher market and showed that this game converges to a Nash equilibrium where each slice reaps the performance benefits of sharing while retaining the ability to customize its own allocation. However, the authors in [19] and
2303.00167
Sketch2Cloth: Sketch-based 3D Garment Generation with Unsigned Distance Fields
3D model reconstruction from a single image has achieved great progress with the recent deep generative models. However, the conventional reconstruction approaches with template mesh deformation and implicit fields have difficulty in reconstructing non-watertight 3D mesh models, such as garments. In contrast to image-based modeling, the sketch-based approach can help users generate 3D models to meet the design intentions from hand-drawn sketches. In this study, we propose Sketch2Cloth, a sketch-based 3D garment generation system using the unsigned distance fields from the user's sketch input. Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, and extracts the mesh from the estimated field with Marching Cubes. We also provide the model editing function to modify the generated mesh. We verified the proposed Sketch2Cloth with quantitative evaluations on garment generation and editing with a state-of-the-art approach.
Yi He, Haoran Xie, Kazunori Miyata
2023-03-01T01:45:28Z
http://arxiv.org/abs/2303.00167v1
# Sketch2Cloth: Sketch-based 3D Garment Generation with Unsigned Distance Fields ###### Abstract 3D model reconstruction from a single image has achieved great progress with the recent deep generative models. However, the conventional reconstruction approaches with template mesh deformation and implicit fields have difficulty in reconstructing non-watertight 3D mesh models, such as garments. In contrast to image-based modeling, the sketch-based approach can help users generate 3D models to meet the design intentions from hand-drawn sketches. In this study, we propose Sketch2Cloth, a sketch-based 3D garment generation system using the unsigned distance fields from the user's sketch input. Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, and extracts the mesh from the estimated field with Marching Cubes. We also provide the model editing function to modify the generated mesh. We verified the proposed Sketch2Cloth with quantitative evaluations on garment generation and editing with a state-of-the-art approach. 3D model reconstruction, sketch-based generation, unsigned distance function, garment generation and editing ## I Introduction With the rapid development of graphics and virtual reality technologies (e.g., the metaverse), the modeling of garments and clothes is widely adopted in visual applications such as virtual try-on, online shopping, and 3D character modeling. However, the design processes of garments by professional designers are complex and usually incur heavy time costs to design and validate the design outcomes. It is challenging for common users without modeling skills and design experience to design the desired garments. On the other hand, users can depict the desired 3D objects using hand-drawn sketches, which consist of strokes and subjective abstractions of the target objects. Therefore, sketches are commonly used in the early stage of the design process. For common users, sketch-based approaches can facilitate efficient design activities at a low time cost. In this work, we aim to provide sketch-based model generation and editing for 3D garments. 3D model generation from users' sketches has been extensively studied recently. Representative approaches can map the sketch strokes into 3D space and create the 3D model from the sketch [1]. Previous work used sketches to retrieve the target shape from a 3D shape dataset by comparing the input sketch with the contour data of 3D models [2]. Although these previous approaches are considered to be effective in 3D modeling, the acquired shapes are limited to the existing dataset, and 3D modeling skills are required, which makes these approaches infeasible for amateur users. Recently, deep learning-based approaches can achieve easier and more accurate model generation from a single simple sketch [3, 4]. Sketch2Model tried to generate 3D models from sketches by deforming a template mesh to a target shape [3], which is a common approach for 3D model reconstruction from single-view images. Sketch2Mesh [4] utilized implicit function learning and generated the target shapes by learning a Signed Distance Function (SDF). However, Sketch2Model [3] may be limited by the topological structure of the template model, and Sketch2Mesh [4] works only for watertight 3D meshes. In this work, we aim to generate non-watertight 3D models from user sketches such as clothing.
To solve this issue, we adopt the implicit field of the Unsigned Distance Function (UDF) [5], which is an optimal representation of 3D objects with non-watertight meshes, complex surfaces and scenes.
Fig. 1: The proposed Sketch2Cloth for sketch-based 3D garment model generation and editing. The proposed system adopts a sketch as input and generates the 3D garment model as output. Sketch2Cloth allows users to edit the generated garment models.
In this work, we propose Sketch2Cloth to reconstruct 3D garment models from hand-drawn sketches using UDF implicit fields, as illustrated in Figure 1. The proposed Sketch2Cloth framework utilizes the autoencoder structure to learn UDF implicit representations. The output of the autoencoder is UDF data, and the mesh can be extracted from the UDF using the Marching Cubes method. We also provide a user interface for garment modeling and editing that allows common users to draw sketches and edit the generated 3D garment models. We achieve model editing through optimisation of the latent vectors when users edit the contours of the generated model. Finally, we verified the effectiveness of the proposed Sketch2Cloth system in 3D garment model generation from freehand-drawn sketches through evaluation experiments. ## II Related Work ### _Image-based Garment Reconstruction_ With the explosive development of deep learning approaches, numerous approaches for reconstructing 3D models from a single-view image have been explored in recent years [6, 7, 8, 9]. The majority of them are template-based methods, where the core of the method is to deform a pre-prepared template mesh into the target shape. In these methods, the results are limited by the topological structure of the template mesh, and only rough shapes can be generated. Therefore, to represent complex shapes such as clothing, other additional information must be provided. For example, Pavlakos et al. [6] use a parametric 3D model to generate a human body model by optimizing the parameters of the model to fit 2D features extracted from the image. DeepWrinkles [7] combines global shape deformation with surface detail by adding fine garment wrinkles to the normal map of a prepared template mesh of clothing. Tex2shape [8] considered the shape representation problem as an image style transfer problem and predicted model surface information such as normal maps, adding it to prepared human body models to generate models of clothed people. Although these methods aim to provide more complex surface information and learn complex surfaces, they cannot break through the limitations of the topological structure of the prepared template mesh. Apart from the template-based methods, PIFu [10] proposed a deep learning framework for clothed-body 3D reconstruction based on implicit function learning. In order to retain more surface shape features, all point (x, y, z) information in 3D space and 2D image pixels were locally aligned to successfully reproduce more complex surface structures. Zhao et al. [9] proposed a deep learning framework based on implicit learning that predicts, from color images, a set of key points called Anchor Points, which represent fine surface features located around the surface, to obtain a more accurate 3D surface representation. These methods are based on learning implicit functions and are not limited like template-based methods. However, these methods use garment photographs as input, which makes sketch-based generation difficult.
### _3D Model Reconstruction with Implicit Functions_

Various 3D reconstruction methods based on implicit learning have been published. Such methods represent 3D objects by an implicit field, which shares the advantages of point clouds. Typical examples are the Occupancy Field [11], SDF [12, 13], and UDF [5, 9]. These methods can learn continuous shape representations, and the reconstructed 3D models have no resolution limitations. Implicit learning with the Occupancy Field and SDF can only generate watertight models, while UDF methods can generate non-watertight models and complex surfaces, including watertight models. Implicit model representation has advantages over template-based methods in that it is not limited by the topological structure of a template, can represent models of any size, and can generate non-watertight models (using UDF). Since garment data is basically non-watertight, it is not possible to determine whether a point is inside or outside a garment, making implicit function learning with UDFs the best-suited method for this purpose. Guillard et al. [14] proposed a method for fast mesh generation from UDFs, which enables efficient mesh extraction. In this work, we propose 3D model generation for non-watertight clothing meshes using an implicit function learning method with UDF fields.

### _Sketch-based Content Design_

For both expert and amateur users, sketching is a common and easy way to create content that meets the users' design intentions. Recently, sketch-based approaches have been applied in various fields of content creation support, such as image generation [15, 16], 3D model reconstruction [4], and animation controls [17, 18, 19]. For example, Sketch2VF [17] introduced an interactive user interface for flow design and generated fluid simulations by training a conditional GAN to estimate the velocity field. Similarly, He et al. [20] used user sketches to directly generate normal maps of the target 3D object. Furthermore, DualSmoke [18] adopted a two-stage structure with a conditional GAN for Lagrangian coherent structure generation from hand-drawn sketches to help users design smoke effects without domain knowledge. Peng et al. [19] proposed a motion retrieval system driven by hand-drawn sketches, while DualFace [21] proposed a portrait drawing interface that helps general users draw better portrait paintings. All these applications show the potential of freehand sketch-based content creation.

Shape generation using sketches has also been widely studied in the computer graphics community. Igarashi et al. [1] mapped hand-drawn line segments to a specific shape, making it possible to create a 3D model from sketches alone. The explosive development of deep learning has also led to various learning-based methods [22, 23], which usually require information from multiple viewpoints. Zhang et al. [3] generated a mesh model directly from a sketch, but since the work focused on solving the ill-posed nature of the generated geometry (e.g., the direction from which the input sketch was viewed is ambiguous), it did not address complex geometry generation or real-time model generation and editing. Guillard et al. [4] proposed high-quality mesh model generation and editing using SDFs. However, these methods can only generate watertight models and cannot handle the generation of non-watertight models such as clothing. Wang et al.
[24] can generate a clothed human body model by learning a latent space shared by sketches, mesh data, and parameters, and mapping the garment shape onto the human body model. Chen et al. [25] proposed a method that takes 2D garment sketches as input and combines them with human shape parameters to generate a 3D garment mesh that fits a specific human shape. These approaches generate clothed human bodies and cannot handle clothing alone. They also require users to have expertise in 3D modeling and design. To solve these issues, this research aims to develop an easy-to-use user interface that allows users to create sketches and interact with the generated models.

## III Sketch2Cloth

The proposed sketch-based garment generation framework, Sketch2Cloth, utilizes an autoencoder-structured [13] neural network to reconstruct the 3D garment model from users' freehand-drawn sketches. An overview of the framework to generate and edit a 3D model from the input sketch is shown in Figure 2. The objective of this study is to reconstruct a 3D model from an input sketch by means of isosurface extraction and triangulated mesh generation for an unsigned distance field (UDF). The implicit UDF field is sampled from the garment model to construct the training dataset. We also developed a user interface for drawing sketches and for viewing and editing the generated models.

Fig. 2: The framework of the proposed Sketch2Cloth system. The proposed system consists of two networks: an encoder that encodes the sketch image into a latent code \(z_{c}\) and a decoder that generates a UDF from the latent vector \(z\). The decoder predicts the unsigned distance \(d_{i}^{u}\) for all \(p_{i}\in G\) based on \(z_{c}\) and outputs the UDF. Then, we use the Marching Cubes method to obtain a 3D model mesh. The user can also edit the generated model with the proposed user interface.

### _Unsigned Distance Fields_

We incorporate unsigned distance fields (UDF) [5] for shape surface representation. Let \(\Omega\) be the 3D space in which the 3D model exists; for any coordinate \(p\), the UDF \(\Phi(p)\) is computed by equation (1).

\[\Phi(p)=d(p,\partial\Omega),\quad p\in\Omega \tag{1}\]

Here \(\partial\Omega\) denotes the boundary of \(\Omega\). The distance \(d\) from coordinate \(p\) to \(\partial\Omega\) can be computed by equation (2).

\[d(p,\partial\Omega)=\min_{r\in\partial\Omega}\|p-r\| \tag{2}\]

Since the shape surface is represented by the zero level set \(UDF=0\), we can obtain dense point clouds by simply moving \(p\) along the negative gradient direction of the UDF by its distance \(d\) to a new coordinate \(q\) on the implicit surface, as described in equation (3). The dense point clouds can then be used by mesh reconstruction algorithms such as Marching Cubes [14].

\[q=p-d(p,\partial\Omega)\cdot\nabla_{p}d(p,\partial\Omega) \tag{3}\]
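The construction in equations (1)-(3) is easy to prototype. Below is a minimal NumPy/SciPy sketch (ours, not the authors' implementation) that approximates \(\Phi(p)\) for a surface given as a dense point sample and projects a query point onto the zero level set along the negative numerical gradient; the sphere test case and all names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def make_udf(surface_points):
    # Phi(p) = min over the boundary of ||p - r||, Eqs. (1)-(2),
    # approximated on a dense point sample of the surface
    tree = cKDTree(surface_points)
    return lambda p: tree.query(p)[0]

def project_to_surface(udf, p, h=1e-4):
    # Eq. (3): move p by its distance d along the negative UDF gradient;
    # the gradient is estimated by central differences and normalized for stability
    d = udf(p)
    g = np.array([(udf(p + h*e) - udf(p - h*e)) / (2*h) for e in np.eye(3)])
    return p - d * g / (np.linalg.norm(g) + 1e-12)

# toy test: the unit sphere sampled with 20,000 points
pts = np.random.randn(20_000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
udf = make_udf(pts)
q = project_to_surface(udf, np.array([0.2, 0.5, 1.4]))
print(np.linalg.norm(q))   # ~1.0: q lands near the sphere surface
```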
### _Dataset Construction_

This work used the Multi-Garment [26] and DeepFashion3D [27] datasets to experiment with generating non-watertight 3D garment models. The Multi-Garment dataset contains 328 garment meshes, divided into a training set of 300 and a test set of 28. The DeepFashion3D dataset contains 2,075 garment meshes, divided into a training set of 1,867 and a test set of 208. To generate UDFs from sketches, we need paired data of sketches and UDFs.

#### III-B1 Sampling UDF Data

First, the mesh data is normalized to a maximum coordinate value of \(1\) to make it suitable for training. Then, since sampling is performed from both outside and inside the surface, the mesh is rescaled to fit within a sphere of radius slightly smaller than \(1\), so that points can be sampled outside the surface even at the farthest distance from the model center; sampling is then performed on the adjusted mesh. Empirically, the scale is set to \(0.8\) in this study.

Then, for each mesh \(M_{i}\), the number of sampled vertices is set to \(N\), and the distance \(d_{i}^{u}\) from each of the \(N\) 3D vertex coordinates \(p_{i}\) to the surface of \(M_{i}\) is calculated. There is no limit to the number of samples, but considering the data processing speed, \(N\) is set to \(120{,}000\) to obtain as many samples as possible in as little time as possible. The sampled data consist of random points on the surface, near the surface, and in the bounding box surrounding the mesh, with sampling concentrated near the surface of the mesh. Specifically, in our implementation, \(48{,}000\) of the samples are taken from coordinates within \(0.05\) of the surface, and \(32{,}000\) from coordinates within \(0.3\) of the surface. In addition, \(24{,}000\) are sampled randomly from the surface of \(M_{i}\), and the last \(16{,}000\) are sampled within a bounding box with edge length \(2\) (range \([-1,1]\)). Note that \(d_{i}^{u}\) is calculated using a KDTree [28]. No special processing (e.g., water-tightening) is required for the mesh \(M_{i}\).

#### III-B2 Sketch-Mesh Paired Data

The datasets used in this study do not provide paired sketch images. To obtain sketches, rendered images must first be generated for each model, and these rendered images are then used to generate sketch images. Considering the sparse features of hand-drawn sketches, we extract the binarized contours of the target meshes from depth maps rendered at different angles as sketch images. The depth map removes unnecessary details and produces a higher-quality contour. For each mesh \(M_{i}\), we render depth maps from 36 angles. The mesh data paired with the sketch data obtained by this method is shown in Figure 3.

Fig. 3: Generated sketch-mesh pair data.
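The sampling recipe of Section III-B1 can be sketched as follows. This is our own reading of the procedure, assuming the surface is available as a dense point sample; the paper does not specify how the "within 0.05/0.3 of the surface" points are drawn, so Gaussian offsets with the band width as standard deviation are used as a simple stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_training_points(surface_pts, rng=None):
    """Mixed sampling of Section III-B1 (counts from the paper, N = 120,000)."""
    rng = rng or np.random.default_rng(0)
    def near_surface(n, band):
        # offset random surface points; Gaussian with sigma = band is our stand-in
        base = surface_pts[rng.integers(len(surface_pts), size=n)]
        return base + rng.normal(scale=band, size=(n, 3))
    p = np.vstack([
        near_surface(48_000, 0.05),                                 # close to the surface
        near_surface(32_000, 0.30),                                 # farther band
        surface_pts[rng.integers(len(surface_pts), size=24_000)],   # on the surface
        rng.uniform(-1.0, 1.0, size=(16_000, 3)),                   # bounding box [-1, 1]^3
    ])
    d = cKDTree(surface_pts).query(p)[0]   # unsigned distances d_i^u via a KDTree
    return p, d
```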
### _User Interface_

The proposed system consists of 3D garment model generation from sketch input and a user interface for creating and editing sketches. The proposed user interface (UI) is shown in Figure 4 and consists of two areas. The left area is for creating and editing sketches, and the right area is for viewing generated objects. All buttons are located at the top of the UI. In the "Sketch Pad" area, the buttons from left to right are "Brush," "Eraser," "Clear," and "Save." The user can use these functions to create a sketch and, when finished, press "Save" to start model generation from the created sketch. The "Model Viewer" operation buttons are located at the top of the right side of the screen; from left to right, they are "Capture," "Reset," and "Save Model." The generated object is displayed on the right side and can be zoomed in and out by scrolling the mouse and rotated by moving the mouse with the right mouse button pressed. The user can obtain a snapshot of the current viewpoint with the "Capture" button, and the system generates a sketch image based on it. The generated image is placed in the Sketch Pad and can be edited using the sketch creation functions.

Fig. 4: The proposed user interface of Sketch2Cloth.

After editing is complete, pressing "Save" one more time lets the system optimize the current model based on the modified sketch and thus edit the model. The "Reset" function dismisses the user's edits and returns to the initial generation result. Finally, the user can download and save the completed 3D model by pressing "Save Model". We implement the UI as a web application for wide applicability. The server side handles all the computational processing, so the system can be used without any burden on the user's terminal. We use Angular [29] for the front end. For the server side, we choose Flask [30] to connect the UI and the 3D model generation system. The average time cost from the user's creation of a sketch to the acquisition of the model was \(1.13\) seconds.

## IV Implicit Surface Representation Learning

In this study, we use the autoencoder structure [12] for UDF training. Let \(E\) be an encoder that encodes the input sketch into a latent vector, \(D\) be a decoder that generates UDF data, and \(S\) be an input sketch. The input sketch \(S\) is encoded into the latent code \(z_{c}\) by the encoder \(E\). \(G\) denotes a discrete regular grid sampled in the 3D space \([-1,1]^{3}\) where the 3D shape exists. The latent code \(z_{c}\) is concatenated with \(G\) to form the latent vector \(z\), from which the decoder \(D\) generates a UDF. This process is shown in equation (4). The decoder predicts the unsigned distance \(d_{i}^{u}\) for all the input coordinates \(p_{i}\in G\).

\[UDF=D(E(S),G) \tag{4}\]

### _Objective Function_

The learning objective of the network is to generate the best-fitting UDF for a given \(S\). The optimization objective, using the \(L1\) distance, is shown in equation (5), where \(df_{gt}^{u}\) is the ground truth. To reduce the influence of scattered sample coordinates, \(df_{gt}^{u}\) and \(D(z)\) are thresholded at the value \(\delta\) by the \(clamp\) function to remove outlying values. Normalization is also applied to the input latent vector \(z\) and is included in the optimization; the normalization loss is shown in equation (7). In addition, a geometric regularization loss [31], shown in equation (6), is adopted to improve network learning. Here \(\gamma\) and \(\lambda\) refer to the geometric regularization factor and the normalization strength, respectively. Based on the above, the loss function of the network is defined by equation (8).

\[L(D(z),df^{u}_{gt})=\mid clamp(D(z),\delta)-clamp(df^{u}_{gt},\delta)\mid \tag{5}\]

\[L_{reg\_geo}=\frac{1}{N}\sum\exp(-\gamma\cdot D(z)) \tag{6}\]

\[L(z_{i})=\lambda\parallel z_{i}\parallel_{2} \tag{7}\]

\[L=L(D(z),df^{u}_{gt})+L(z_{i})+L_{reg\_geo} \tag{8}\]

### _Optimization for Generated Model_

For the optimization and editing of the generated results, this study follows the method of Remelli et al. [13]. The chamfer distance between the generated result and the sketch is minimized to optimize the latent vector fed to the decoder, with the chamfer distance acting as the constraint of the optimization. In this study, the chamfer distance is implemented in 2D; the 2D Chamfer Loss \(L_{CD}\) is defined by equation (9), where \(S_{p}\) refers to the projected contour of the 3D mesh and \(F_{s}\) represents the sketch region:
\[L_{CD}=\sum_{p_{p}\in S_{p}}\min_{p_{s}\mid F_{s}(p_{s})=0}\parallel p_{p}-p_{s}\parallel^{2}+\sum_{p_{s}\mid F_{s}(p_{s})=0}\min_{p_{p}\in S_{p}}\parallel p_{p}-p_{s}\parallel^{2} \tag{9}\]

The optimization of the generated results essentially backpropagates the gradient of the loss between the generated result and the input; but since neither the UDF data nor the 3D model is differentiable, the optimization target is the latent vector fed to the decoder. Since the latent vector is differentiable as part of the network learning process, optimization is possible, and it allows the output of the network to be tuned. Specifically, if the generated 3D mesh is \(M\) and the sampling of its surface vertices is \(V\), the gradient \(\frac{\partial L_{CD}}{\partial z}\) with respect to the latent vector \(z\) is calculated using equation (10). The calculated gradients are used to optimize the latent vector through backpropagation.

\[\frac{\partial L_{CD}}{\partial z}=\sum_{v\in V}-\frac{\partial L_{CD}}{\partial v}\nabla D(v,z)\frac{\partial D}{\partial z}(v,z) \tag{10}\]

The optimized latent vector is then used by the decoder to generate UDF data and reconstruct the 3D model.

### _Network Training_

The deep learning part of the system was implemented in PyTorch [32] and PyTorch3D [33]. The autoencoder used for UDF generation follows the implementation of Guillard et al. [13]. The decoder consists of 9 perceptron layers, the number of hidden dimensions in the MLP layers is 512, and ReLU is used as the activation function. Fourier positional encoding [34] is applied to the input 3D coordinates to reduce the loss of detailed information caused by the activation function. The encoder, which encodes the sketch image into latent vectors, uses ResNet [35]. During training, the loss function is the sum of the \(L1\) loss on the UDF (equation (5)) and the input normalization loss (equation (7)). The scatter removal factor \(\delta\) is set to \(0.1\) and the normalization strength \(\lambda\) to \(10^{-4}\). The training flow of the network is shown in Figure 5. The batch size was 16 and the number of epochs was \(2000\). The optimizer was Adam [36], and the learning rate was set according to the number of epochs by equation (11), where \(lr_{init}\) is the initial learning rate, set to \(0.5\times 10^{-3}\); \(\omega\) is \(0.5\) and \(\gamma\) is 500. For the encoder, we set \(\alpha=1\), and for the decoder, \(\alpha=0.1\). Training was performed using two NVIDIA GeForce RTX 3090s; the training time was around \(10\) seconds per epoch and about \(5.5\) hours for all epochs.

\[learning\_rate=\alpha\times lr_{init}\times\left(\omega^{\lfloor\frac{epoch}{\gamma}\rfloor}\right) \tag{11}\]

Fig. 5: The learning flow of the network.
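As a compact summary of the training recipe, the objective of equations (5)-(8) and the schedule of equation (11) can be written in PyTorch roughly as follows; this is our own sketch, and the value of the geometric regularization factor in equation (6) is not stated in the paper, so the one below is a placeholder.

```python
import torch

def udf_loss(pred, gt, z, delta=0.1, lam=1e-4, gamma_geo=60.0):
    # Eq. (5): clamped L1 between predicted and ground-truth unsigned distances,
    # averaged over the sampled points (UDF >= 0, so clamping acts from above)
    l_udf = (pred.clamp(max=delta) - gt.clamp(max=delta)).abs().mean()
    # Eq. (7): regularization of the latent code, lambda = 1e-4 in the paper
    l_z = lam * z.norm(dim=-1).mean()
    # Eq. (6): geometric regularization; gamma_geo is a placeholder value,
    # the paper does not state the factor actually used
    l_geo = torch.exp(-gamma_geo * pred).mean()
    return l_udf + l_z + l_geo             # Eq. (8)

def learning_rate(epoch, alpha, lr_init=0.5e-3, omega=0.5, gamma=500):
    # Eq. (11): stepwise decay; alpha = 1 for the encoder and 0.1 for the decoder
    return alpha * lr_init * omega ** (epoch // gamma)
```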
## V Evaluation

Both sketches generated by the rendering method and sketches drawn through the user interface were used to evaluate the system. Note that the user-drawn sketches were created by one of the authors. The effectiveness of the proposed method is verified by numerical evaluation of the generated 3D garment models, as well as by comparison with the state-of-the-art Sketch2Mesh approach [4].

### _Model Generation from Sketches_

For the generation of garment models by sketching, we use the garment data provided by Multi-Garment [26]. To verify the effectiveness of the system's generation, we use both sketch images generated from rendered images and hand-drawn sketches. The results of generating 3D garment models from input sketches using this method are shown in Figure 6(a). The system generates 3D non-watertight garment models according to the input sketches. The output quality of the system did not change much from the results for hand-drawn sketches shown in Figure 8. Figure 6(b) shows some results on the DeepFashion3D dataset. We note that, compared to Multi-Garment, the garment models provided by DeepFashion3D are of lower quality and contain a lot of holes, which affects our sketch generation and the learned UDF field.

Fig. 6: Results of the garment model reconstructed from the rendering sketches on Multi-Garment (a) and DeepFashion3D datasets (b).

Fig. 8: Results of the garment model reconstructed from the hand-drawn sketch.

### _Comparison Evaluation_

Sketch2Mesh [4], a state-of-the-art method for model generation by sketching, is used for comparison. Because it is an SDF-based method, the non-watertight models used in this study are water-tightened before it is trained. The comparison results, with the respective front and top views, are shown in Figure 7. The existing SDF-based method can reconstruct the model, but only as one continuous surface, which makes it of limited use for realistic reconstruction tasks. Our method, on the other hand, utilizes UDFs and can represent complex surfaces such as non-watertight models. For the numerical evaluation, the Chamfer Distance and Earth Mover's Distance of the models generated by the existing method and the proposed method are shown in Table I. The proposed method achieves lower Chamfer and EMD errors.

Fig. 7: Comparison with the state-of-the-art sketch-based model reconstruction method [4].

\begin{table} \begin{tabular}{c|c|c} \hline Method & CD(\(\times 10^{-3}\)) & EMD(\(\times 10^{-2}\)) \\ \hline \hline sketch2mesh & 4.24 & 7.95 \\ \hline ours & **3.51** & **7.14** \\ \hline \end{tabular} \end{table} TABLE I: Chamfer Distance and Earth Mover's Distance of the generated models.

### _Model Editing_

As described in Section IV, our system also allows users to edit the output models. As shown in Figure 9, users can edit the output 3D model via 2D sketches, which are easy to manipulate, rather than directly editing the 3D model in areas where they are dissatisfied with the generated results.

Fig. 9: Results of editing the reconstructed model.
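For reference, the two metrics reported in Table I can be computed for sampled point clouds as in the sketch below; the paper does not state its exact conventions (squared vs. plain distances, sample counts), so a common choice is shown.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import linear_sum_assignment

def chamfer(a, b):
    # symmetric Chamfer distance between point clouds (mean of squared distances)
    return (cKDTree(b).query(a)[0]**2).mean() + (cKDTree(a).query(b)[0]**2).mean()

def emd(a, b):
    # Earth Mover's Distance for equal-size clouds via exact optimal assignment
    # (O(n^3); practical only for modest sample counts)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    i, j = linear_sum_assignment(cost)
    return cost[i, j].mean()

a = np.random.randn(512, 3)
b = a + 0.01 * np.random.randn(512, 3)
print(chamfer(a, b), emd(a, b))   # small values for nearly identical clouds
```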
## VI Conclusion

This work proposed Sketch2Cloth for generating 3D clothing models from hand-drawn sketches, using implicit function learning (UDF) to generate non-watertight clothing models. Sketch2Cloth enables users who do not have expertise in 3D modeling to easily create 3D models of desired garments. In this work, UDF learning was applied to sketch-based generation to generate a 3D model of a garment. We found that it may be difficult to represent detailed surface features, such as pockets and buttons. In this work, we used an autoencoder to learn the unsigned distance field. However, we found that the values and gradients of the UDF field can be difficult to learn due to its non-differentiability at the zero level set, which may result in holes or wrong normals in the reconstructed meshes. Recent works adopted RGB images for learning, which carry more image features than binary sketches [9, 31]. To solve this issue, we believe that a normal map estimated from the input sketch [20] may help learn better UDF fields. For the model generation, all garment parts (sleeves, buttons, pockets, etc.) are integrated, and actual design applications are expected to require the separation of each part.

Although this study used UDFs to successfully generate garment models from sketches, we believe that Sketch2Cloth can be extended to the generation of a car's exterior and interior design from sketch input, and to interior scenes [5]. It is also expected that the proposed system can be used to create complex scenes.
2306.14914
Analytical Gomboc
Investigation of the mathematical requirements for a three dimensional geometrical object to qualify as a Gomboc (mono-monostatic) has resulted in the discovery of two specific, analytical Gomboc shapes. Analytical in that the function describing the Gomboc surface is infinitely differentiable. In this brief note, the analysis undertaken is summarized and the formulae for the two specific shapes provided.
Millard Lee Sloan
2023-06-19T19:49:19Z
http://arxiv.org/abs/2306.14914v1
## An Analytical Gomboc

M. L. Sloan

**Abstract:** Investigation of the mathematical requirements for a three dimensional geometrical object to qualify as a Gomboc (mono-monostatic) has resulted in the discovery of two specific, analytical Gomboc shapes. Analytical in that the function describing the Gomboc surface is infinitely differentiable. In this brief note, the analysis undertaken is summarized and the formulae for the two specific shapes provided.

## Introduction

A Gomboc is a three dimensional convex solid of constant density which possesses only two equilibrium points, one stable and the other unstable, when resting on a horizontal surface in the presence of a constant vertical gravitational field. The question of the existence of such an object was raised by Russian mathematician Vladimir Arnold in 1995, and the problem was solved in 2006 by Gabor Domokos and Peter Varkonyi of Hungary (P. L. Varkonyi and G. Domokos: Mono-monostatic bodies: the answer to Arnold's question. The Mathematical Intelligencer, Volume 28, Number 4, pp. 34-38 (2006)) [1].

The solution found by Domokos and Varkonyi required extreme precision in any physical embodiment, with tolerances of \(10^{-5}\) in their earliest embodiment and somewhat more reasonable, but still extreme, tolerances of \(10^{-3}\) in later embodiments. [As discussed in various internet articles on the Gomboc, _e.g. en.wikipedia.org_ and _plus.maths.org_.]

In this brief note, two specific analytic Gomboc shapes are presented which are simple in form and, to the extent examined, not as sensitive to variations. This presentation will be expository, with none of the details of the multiple, often fruitless, though instructive, research paths undertaken.

## 1 Overall Gomboc Requirements

In general, the bounding surface of an analytic Gomboc may be specified by a function

\[\psi(z,x,y)=\text{constant},\]

where \(x,y,z\) are standard Cartesian coordinates describing the 3 dimensional space in which the Gomboc is embedded. We orient the coordinate system so that the origin (\(x=y=z=0\)) is the center of mass of the homogeneous density Gomboc. In examining the requirements for a Gomboc, polar coordinates \(r,\theta,\phi\) are appropriate, with

\[x=r\sin(\theta)\cos(\phi),\quad y=r\sin(\theta)\sin(\phi),\quad z=r\cos(\theta).\]

The center of mass of the Gomboc is then the point \(r=0\). In polar coordinates, the bounding surface \(\psi(z,x,y)=\text{constant}\) may then be inverted to yield a solution in \(r\) for the boundary of the Gomboc:

\[r=F(\theta,\phi).\]

The requirement of convexity ensures that \(r\) is single valued in \(\theta,\phi\) and moreover positive definite.
**Center of Mass Requirements**

Since \(r=0\) is the center of mass of the Gomboc, we require

\[\int d^{3}V\,x(r,\theta,\phi)=0,\quad\int d^{3}V\,y(r,\theta,\phi)=0,\quad\int d^{3}V\,z(r,\theta,\phi)=0, \tag{1}\]

where \(d^{3}V=r^{2}\sin(\theta)\,dr\,d\theta\,d\phi\), with the integration carried out over the volume of the Gomboc. The boundary \(r=F(\theta,\phi)\) being single valued in \(\theta,\phi\), the \(r\) integration is trivial and one arrives at the following center of mass requirements:

\[\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin(\theta)\cos(\theta)\,F^{4}(\theta,\phi)\,d\theta=0,\]
\[\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin^{2}(\theta)\sin(\phi)\,F^{4}(\theta,\phi)\,d\theta=0,\]
\[\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin^{2}(\theta)\cos(\phi)\,F^{4}(\theta,\phi)\,d\theta=0. \tag{2}\]

Using complex variable notation, \(\exp(i\phi)=\cos(\phi)+i\sin(\phi)\), the last two equations may be combined to allow a more succinct statement of the center of mass requirements:

\[\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin(\theta)\cos(\theta)\,F^{4}(\theta,\phi)\,d\theta=0, \tag{3a}\]
\[\int\limits_{0}^{2\pi}d\phi\int\limits_{0}^{\pi}\sin^{2}(\theta)\exp(i\phi)\,F^{4}(\theta,\phi)\,d\theta=0. \tag{3b}\]

**Equilibria Requirements**

Given a Gomboc shape defined by

\[r-F(\theta,\phi)=0, \tag{4}\]

equilibrium points are those points on the surface where the normal to the surface

\[\nabla\left(r-F(\theta,\phi)\right) \tag{5}\]

lies wholly in the \(r\) unit normal direction. Specifically, then, equilibria are those sets of points \(\theta,\phi\) where the components of the gradient \(\nabla\) in the \(\theta\) and \(\phi\) directions vanish:

\[\partial_{\theta}F(\theta,\phi)=0, \tag{6}\]
\[\partial_{\phi}F(\theta,\phi)=0. \tag{7}\]

For a shape to be a Gomboc, there can only be two such points.

**Convexity Requirement**

As with all convex Gomboc shapes, the radius of curvature must be positive at each point on the Gomboc surface. For the specific analytic Gombocs exhibited below, that is easily achieved.

**2. Specific Analytic Gombocs**

Minimally "bumpy" shapes probably stand the best chance of meeting the Gomboc requirements, since Gombocs are required to exhibit two and only two equilibrium points. That observation, along with many hours of research and false starts, has led the author to restrict investigations of analytic Gomboc geometries to the simplest \(\phi\) dependency possible.1

Footnote 1: It is well known that a purely \(\phi\)-symmetric Gomboc is not possible.
\[F^{4}(\theta,\phi)=R(\theta)+\sin(\theta)\,A(\theta)\cos(\phi-P(\theta)). \tag{8}\]

Among such possible solutions, the following particular embodiment has proven particularly useful:

\[r^{4}=F^{4}(\theta,\phi)=1+4\beta\sin(\theta)\cos(\phi-P(\theta)), \tag{9}\]

with \(\beta\) a small positive constant.2 This formulation greatly simplifies the equilibria and center of mass requirements. In particular, there are two and only two equilibrium points, a stable one at \(\theta=\pi/2\), \(\phi=P(\pi/2)+\pi\) and an unstable one at \(\theta=\pi/2\), \(\phi=P(\pi/2)\), and the center of mass requirement reduces to

\[\int\limits_{-\pi}^{\pi}\exp\left(iP(\eta)\right)d\eta=0. \tag{14}\]

The following particularly simple solution,

\[P(\eta)=\eta=\frac{3\pi}{2}\left(\cos(\theta)-\cos^{3}(\theta)/3\right), \tag{15}\]

provides a second analytic Gomboc solution:

\[r^{4}=1+4\beta\sin(\theta)\cos\left(\phi-\frac{3\pi}{2}\left(\cos(\theta)-\cos^{3}(\theta)/3\right)\right), \tag{16}\]

with a single stable equilibrium point at \(\theta=\pi/2\), \(\phi=\pi\) and a single unstable equilibrium point at \(\theta=\pi/2\), \(\phi=0\). Examination of the convexity requirement for this Gomboc shape indicates that a value of \(\beta=0.17\) or smaller should suffice.

Note that the Eq. (12) Gomboc wraps smoothly two and one-half revolutions around the \(z\) axis (\(\phi=0\) to \(5\pi\) as \(\theta\) traverses \(0\) to \(\pi\)), while the second, Eq. (16), Gomboc exhibits only one full revolution around the \(z\) axis, but with a somewhat nonlinear \(\phi\) vs. \(\theta\) path.

Finally, as indicated in Footnote 2, the solutions presented amount to surface perturbations on a unit sphere. Accordingly, the solutions may be scaled up by any constant \(r_{o}^{4}\) to achieve any specific size Gomboc desired.

**ACKNOWLEDGEMENT:** Posting and dissemination of this technical note would not have been possible without the guidance of Professor Gabor Domokos, co-inventor of the Gomboc, whose support and encouragement are greatly appreciated.
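The closed-form solution above invites a quick numerical sanity check. The following sketch (ours, not part of the original note) evaluates the Eq. (16) surface, verifies the center of mass conditions (3a)-(3b) by quadrature, and confirms that the extrema of \(F\) sit at the two claimed equilibrium points; the grid sizes are arbitrary.

```python
import numpy as np

beta = 0.17   # convexity bound quoted above

def P(theta):
    # phase path of the second solution, Eq. (15)
    return 1.5 * np.pi * (np.cos(theta) - np.cos(theta)**3 / 3.0)

def F(theta, phi):
    # boundary radius of Eq. (16): r^4 = 1 + 4*beta*sin(theta)*cos(phi - P(theta))
    return (1.0 + 4.0 * beta * np.sin(theta) * np.cos(phi - P(theta)))**0.25

theta = np.linspace(0.0, np.pi, 601)
phi = np.linspace(0.0, 2.0*np.pi, 1200, endpoint=False)
T, Ph = np.meshgrid(theta, phi, indexing='ij')
F4 = F(T, Ph)**4
dt, dp = theta[1] - theta[0], phi[1] - phi[0]

# center-of-mass conditions, Eqs. (3a)-(3b): both moments should vanish
m_z = np.sum(np.sin(T) * np.cos(T) * F4) * dt * dp
m_xy = np.sum(np.sin(T)**2 * np.exp(1j * Ph) * F4) * dt * dp
print(abs(m_z), abs(m_xy))        # ~0 up to quadrature error

# equilibria, Eqs. (6)-(7): the minimum of F is the stable resting point
i, j = np.unravel_index(np.argmin(F(T, Ph)), T.shape)
print(T[i, j] / np.pi, Ph[i, j] / np.pi)   # -> 0.5 1.0, i.e. theta = pi/2, phi = pi
```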
2308.02402
Effect of thermal fluctuations on the nontrivial topology of the d+id superconducting phase
The behavior of the topological index, characterizing the properties of superconducting phases of quasi-two-dimensional systems with nontrivial topology, is investigated depending on the temperature and parameters of the effective non-Hermitian Hamiltonian. For this purpose, a method of calculating the topological index, based on a self-consistent functional-integral theory, is proposed. The method makes it possible to take into account thermal fluctuations and study the behavior of the topological index as a function of temperature and Hamiltonian parameters. The chiral d+id superconducting phase of a quasi-two-dimensional model with effective attraction between the electrons located at the nearest sites of a triangular lattice is considered. It is shown that the characteristic features in the energy dependence of the self-energy part, which arise when thermal fluctuations are taken into account, have a structure that does not lead to a change in the topological properties of the system. It is found that thermal fluctuations, as well as an increase in effective attraction in this system, contribute to the expansion of the temperature region, in which the value of the topological index is close to the integer C1=-2.
A. G. Groshev, A. K. Arzhnikov
2023-08-02T07:09:23Z
http://arxiv.org/abs/2308.02402v1
# Effect of thermal fluctuations on the nontrivial topology of the \(d+id\) superconducting phase ###### Abstract The behavior of the topological index, characterizing the properties of superconducting phases of quasi-two-dimensional systems with nontrivial topology, is investigated depending on the temperature and parameters of the effective non-Hermitian Hamiltonian. For this purpose, a method of calculating the topological index, based on a self-consistent functional-integral theory, is proposed. The method makes it possible to take into account thermal fluctuations and study the behavior of the topological index as a function of temperature and Hamiltonian parameters. The chiral \(d+id\) superconducting phase of a quasi-two-dimensional model with effective attraction between the electrons located at the nearest sites of a triangular lattice is considered. It is shown that the characteristic features in the energy dependence of the self-energy part, which arise when thermal fluctuations are taken into account, have a structure that does not lead to a change in the topological properties of the system. It is found that thermal fluctuations, as well as an increase in effective attraction in this system, contribute to the expansion of the temperature region in which the value of the topological index is close to the integer \(C_{1}\simeq-2\).

## I Introduction

In recent decades, a field of research related to the nontrivial topology of electronic states has been actively developing in condensed matter physics. It is obvious that, in addition to the unusual fundamental properties of quasiparticles in these phases, the interest of researchers is attracted by the prospects for their technical use in fault-tolerant quantum computers, in the implementation of high-speed information transfer, and in spintronics (see, for example, [1; 2; 3]). In real experiments, systems are exposed to external and internal flows of energy and particles, which violates the conservation laws of closed systems and makes it impossible to describe such systems by ground-state wave functions in Hilbert space. This forces one to consider effective non-Hermitian Hamiltonians, which assume the damping of quantum states. Naturally, the question arises as to whether new topological properties can emerge in non-Hermitian systems. Considerable attention has been paid to these issues in recent years (see, for example, [4; 5; 6; 7]).

The main purpose of this work is to study the effect of thermal fluctuations on the topological index (TI). The paper considers a quasi-two-dimensional model system of a superconductor with effective attraction between electrons located at neighboring sites of a triangular lattice. It should be noted that although the work is not aimed at describing real superconductors, the considered Hamiltonian and the chosen parameters were used by us earlier to describe layered compounds with a triangular lattice, such as \(Na_{x}CoO_{2}\cdot yH_{2}O\) sodium cobaltites intercalated with water [8]. It is important that, in such a model, superconductivity with a nontrivial topology arises in a natural way [9].
The topology of the superconducting phase in quasi-two-dimensional materials is characterized by an integer value of the topological index (TI), which describes the non-local characteristics of the many-particle wave function of the electronic ensemble and is expressed in terms of one-electron Green functions [10]:

\[C_{1}=\frac{\varepsilon_{\mu\nu\gamma}}{24\pi^{2}}\int\limits_{-\infty}^{+\infty}d\omega\int\limits_{BZ}d^{2}k\times \tag{1}\]
\[\times tr\left[G\partial_{\mu}G^{-1}G\partial_{\nu}G^{-1}G\partial_{\gamma}G^{-1}\right],\]

where \(\varepsilon_{\mu\nu\gamma}\) is the antisymmetric Levi-Civita tensor (summation over the repeated indices \(\mu,\nu,\gamma\) is assumed in (1)), \(G(i\omega)\) is the Matsubara Green function, \(\mu,\nu,\gamma\) are the frequency-momentum indices \((\omega,k_{1},k_{2})\), and \(BZ\) is the Brillouin zone. The considered TI is also used to characterize quasi-two-dimensional topological insulators and the integer and fractional quantum Hall effects [11]. When the Green functions of a superconductor are considered in the mean-field approximation [12; 13], it is possible to integrate over frequency in (1). In this case, the expression for TI reduces to the well-known definition of the Chern number associated with the Berry phase in momentum space [14; 10].

Accounting for many-particle effects, disorder, and fluctuations leads to an energy dependence of the self-energy part of the one-electron Green function. In some cases, this dependence can affect the topological phase or induce transitions to new topological phases [15]. In Ref. [16], the role of quantum fluctuations in topological phase transitions with spin and anomalous Hall effects was studied. It was shown that dynamic fluctuations bring a topologically trivial insulator into the phase with an integer Chern number. In Ref. [17], a topological phase transition was studied in a two-dimensional disordered system with effective repulsion between electrons located at the same site. It was found that the repulsive interaction of electrons at a site contributes to the preservation of the topological phase in the disordered system. In [16], a general mechanism ensuring the appearance of a topological phase transition was proposed; this mechanism is due to divergences in the self-energy. According to Ref. [18], singularities in the energy dependence of the self-energy part can also arise as a result of resonant scattering of charge carriers by thermal fluctuations of electron-hole pairs, which significantly renormalizes the energy dependence of the self-energy.

In this paper, to calculate TI with thermal fluctuations in the superconducting phases taken into account, the functional-integral method is used within a general calculation scheme similar to the calculation of TI in the quantum Hall effect based on the Kubo formula [19].

## II Calculation of the topological index taking into account thermal fluctuations

As is known, the topological index in the integer quantum Hall effect is proportional to the Hall conductivity, which can be calculated by the Kubo formula [20]. In the case of superconducting phases with nontrivial topology, we also use this approach.
In the functional-integral theory, the Kubo formula for conductivity has the following form:

\[\sigma_{\mu\nu}=\lim_{\omega\to 0}\frac{1}{\Omega_{n}}\langle[\Pi_{\mu\nu}(0)-\Pi_{\mu\nu}(i\Omega_{n})]\rangle\mid_{i\Omega_{n}=\omega+i0}, \tag{2}\]

where the angle brackets \(\langle...\rangle\) denote averaging over fluctuating fields, \(\Omega_{n}=2\pi nT\) is the boson Matsubara frequency, and \(\Pi_{\mu\nu}(i\Omega_{n})\) is the Matsubara current-current correlation function

\[\Pi_{\mu\nu}(i\Omega_{n})=\int\limits_{0}^{\beta}d\tau\langle T_{\tau}J_{\mu}(\tau)J_{\nu}(0)\rangle\exp(i\Omega_{n}\tau), \tag{3}\]

where \(J_{\mu(\nu)}\) are components of the current operators and \(T_{\tau}\) is the ordering operator in "imaginary time"; here the angle brackets denote quantum statistical averaging. As is known, the fluctuating fields in the functional-integral theory for superconducting phases arise as a result of the Hubbard-Stratonovich transformation, which allows one to replace the many-particle problem of interacting electron-hole pairs by a one-particle problem with an electron-hole pair interacting with auxiliary random fields [8; 22] (see Appendix A).

To calculate TI, we restrict ourselves to the one-loop approximation, which allows one to obtain the standard expression for TI at \(T=0\) and, at the same time, to take into account thermal fluctuations at finite temperatures. In this approximation, the correlation function \(\Pi_{\mu\nu}(i\Omega_{n})\) is written as follows:

\[\Pi_{\mu\nu}(i\Omega_{n})=-\frac{1}{\beta}\sum_{m}Sp\left[J_{\nu}F(i\omega_{m}-i\Omega_{n})J_{\mu}F(i\omega_{m})\right], \tag{4}\]

where \(F(i\omega_{m})\) and \(J_{\nu}\) are the Matsubara Green function and the components of the current operator averaged over the fluctuating fields, \(\omega_{m}=\pi T(2m+1)\) are the Matsubara frequencies for Fermi particles, \(SpA=\sum_{k}trA(k)\), and \(tr\) implies summation over the spin variables. In deriving this expression, it was taken into account that \(\omega_{m}+\Omega_{n}=\pi T(2m+1+2n)=\omega_{n^{\prime}}\). Note that the potential of the fluctuating fields \(\Delta\mathcal{U}_{j\delta}\) in the functional-integral theory is chosen so that its average value \(\langle\Delta\mathcal{U}_{j\delta}\rangle=0\)[22]. Therefore, the components of the current operators \(J_{\nu}\) averaged over the fluctuating fields are expressed in terms of the Matsubara Green function with the averaged order parameter:

\[J_{\nu}=e\partial_{\nu}\langle\mathcal{H}\rangle=e\partial_{\nu}\mathcal{H}_{AV}=-e\partial_{\nu}\left(G^{AV}\right)^{-1}, \tag{5}\]

where \(\partial_{\nu}=\partial/\partial_{k_{\nu}}\).

At \(T\neq 0\), thermal fluctuations arise. Therefore, when obtaining an expression for TI, it is necessary to take into account the discreteness of the Matsubara frequencies \(\omega_{m}\). To this end, the residue theorem is used, and the summation over the Matsubara frequencies is replaced by integration along the cut lines \(z=E+i0+i\Omega_{n}\), \(z=E-i0+i\Omega_{n}\), and \(z=E+i0\), bypassing the point \(z=i\Omega_{n}\) in the upper half-plane, and along the line \(z=E-i0\) in the lower half-plane of the complex energy. By this means we can go over to the retarded (advanced) Green functions \(F^{R(A)}(E)=\langle(E-\mathcal{H}\pm i0)^{-1}\rangle\).
As a result, for the correlation function (4) we obtain

\[\begin{array}{c}\Pi_{\mu\nu}(i\Omega_{n})=\frac{1}{4\pi i}\int\limits_{-\infty}^{+\infty}dE\,th\left(\frac{\beta E}{2}\right)\times\\ \times Sp\left\{J_{\nu}\left[F^{A}(E)-F^{R}(E)\right]J_{\mu}F^{R}(E+i\Omega_{n})+\right.\\ \left.J_{\nu}F^{A}(E-i\Omega_{n})J_{\mu}\left[F^{A}(E)-F^{R}(E)\right]\right\},\end{array} \tag{6}\]

where \(th\left(z\right)\) is the hyperbolic tangent. Restricting ourselves to the quasi-two-dimensional case and passing in (2) to the limit \(\Omega_{n}\to 0\), we single out the antisymmetric part, which determines the Hall conductivity \(\sigma_{H}=\frac{e^{2}}{2\pi\hbar}C_{1}\) in the integer quantum Hall effect. As a result, we obtain a generalized expression for TI that takes thermal fluctuations into account:

\[\begin{array}{c}C_{1}=\int\limits_{-\infty}^{+\infty}dE\int\limits_{-\pi}^{\pi}\int\limits_{-\pi}^{\pi}\frac{dk_{1}dk_{2}}{16\pi^{2}}\,th\left(\frac{\beta E}{2}\right)\times\\ \times tr\left[\partial_{k_{1}}G^{-1}_{AV}(E)K^{-}(E)\partial_{k_{2}}G^{-1}_{AV}(E)\partial_{E}K^{+}(E)-\right.\\ \left.-K^{-}(E)\partial_{k_{1}}G^{-1}_{AV}(E)\partial_{E}K^{+}(E)\partial_{k_{2}}G^{-1}_{AV}(E)\right].\end{array} \tag{7}\]

Here we have introduced the notation \(K^{+}(E)=[F^{A}(E)+F^{R}(E)]/2\), \(K^{-}(E)=[F^{A}(E)-F^{R}(E)]/2\), and \(\partial_{E}=\partial/\partial_{E}\); \(k_{1}\) and \(k_{2}\) are the quasi-momentum components along the main reciprocal lattice vectors. To obtain an explicit expression for the TI of a quasi-two-dimensional system, it is necessary to multiply the matrices under the \(tr\) sign in (7); for this it is convenient to use their expansion in Pauli matrices (see Appendix B). It should be noted that, by passing from summation over the Matsubara frequencies \(\omega_{m}\) to integration along the imaginary axis, \(\frac{1}{\beta}\sum_{m}\rightarrow\int\frac{d\omega}{2\pi}\), which is valid for \(T\to 0\), in the limit \(\Omega_{n}\to 0\) we obtain from (2) the standard expression for TI (1). In contrast to (1), integration over energy in (7) is carried out along the real axis.

## III Model and results

In this paper we consider a quasi-two-dimensional model system with effective attraction of electrons at neighboring sites of a triangular lattice with the Hamiltonian

\[\hat{\mathcal{H}}=\sum_{i,j,s}t_{ij}\hat{c}_{is}^{+}\hat{c}_{js}-\sum_{j}\mu\hat{n}_{j}-V\sum_{j,\delta}\hat{n}_{j\uparrow}\hat{n}_{j+\delta\downarrow}, \tag{8}\]

where \(t_{ij}=-t\) are the matrix elements of electron hopping between nearest sites; \(\hat{c}_{js}^{+}(\hat{c}_{js})\) are the operators of creation (annihilation) of an electron at site \(j\) with spin projection \(s\); \(\hat{n}_{js}=\hat{c}_{js}^{+}\hat{c}_{js}\) is the operator of the number of electrons at site \(j\) with spin projection \(s\); \(\hat{n}_{j}\) is the operator of the total number of electrons at site \(j\); \(\mu\) is the chemical potential; and \(V\) is the parameter of the inter-electron attraction.
The Hamiltonian of the system considered with the averaged order parameter, \(\mathcal{H}_{AV}\), has the following form in the quasi-momentum representation (see [8]):

\[\begin{array}{c}\mathcal{H}_{AV}(k)=\left[\begin{array}{cc}\mathcal{H}_{AV}^{\uparrow\uparrow}(k)&\mathcal{H}_{AV}^{\uparrow\downarrow}(k)\\ \mathcal{H}_{AV}^{\downarrow\uparrow}(k)&\mathcal{H}_{AV}^{\downarrow\downarrow}(k)\end{array}\right],\\ \mathcal{H}_{AV}^{\uparrow\uparrow}(k)=\varepsilon_{k},\quad\mathcal{H}_{AV}^{\downarrow\downarrow}(k)=-\varepsilon_{k},\\ \mathcal{H}_{AV}^{\uparrow\downarrow}(k)=-2V\overline{\Delta}V_{k},\\ \mathcal{H}_{AV}^{\downarrow\uparrow}(k)=\left(\mathcal{H}_{AV}^{\uparrow\downarrow}(k)\right)^{*},\\ \varepsilon_{k}=-2t\left[\cos k_{1}+\cos k_{2}+\cos\left(k_{2}-k_{1}\right)\right]-\mu,\\ V_{k}(\alpha)=\cos k_{1}+\exp(i\alpha)\cos k_{2}+\exp(-i\alpha)\cos\left(k_{2}-k_{1}\right),\end{array} \tag{9}\]

where \(\varepsilon_{k}\) is the dispersion law of the electron energy on a triangular lattice with hoppings within the first coordination sphere, and \(V_{k}(\alpha)\) is the dispersion law of the superconducting order parameter with symmetry determined by the value of the phase \(\alpha\). The retarded (advanced) effective Green functions \(F^{R(A)}(E)=(E-\mathcal{H}_{AV}-\Sigma^{R(A)}(E))^{-1}\) entering (7) have the form

\[\begin{array}{c}F(E)=\left[\begin{array}{cc}F^{\uparrow}(E)&F^{\uparrow\downarrow}(E)\\ F^{\downarrow\uparrow}(E)&F^{\downarrow}(E)\end{array}\right],\\ F^{\uparrow(\downarrow)}(E)=\frac{E\pm\varepsilon_{k}-\Sigma^{\downarrow(\uparrow)}(k,E)}{[E-E_{+}(k)]\left[E-E_{-}(k)\right]},\\ F^{\uparrow\downarrow(\downarrow\uparrow)}(E)=\frac{\Sigma^{\uparrow\downarrow(\downarrow\uparrow)}(k,E)}{[E-E_{+}(k)]\left[E-E_{-}(k)\right]},\\ E_{\pm}(k)=\left[\Sigma^{\uparrow}(k,E)+\Sigma^{\downarrow}(k,E)\right]/2\pm\\ \pm\left[\left(\varepsilon_{k}+\left[\Sigma^{\uparrow}(k,E)-\Sigma^{\downarrow}(k,E)\right]/2\right)^{2}+\right.\\ \left.\Sigma^{\uparrow\downarrow}(k,E)\Sigma^{\downarrow\uparrow}(k,E)\right]^{1/2}.\end{array} \tag{10}\]

For simplicity, we do not indicate here the indices in the notation of the retarded (advanced) Green functions. Just as in [18], we restrict ourselves to an approximation quadratic in the fluctuating potential for the self-energy part. The components of the self-energy part have poles at the boundaries of the energy gap. These anomalies arise as a result of resonant scattering of charge carriers by thermal fluctuations of electron-hole pairs [18; 24]. However, the resulting pole structure \(\Sigma(k,E)\propto(E-E_{k})^{-1}+(E+E_{k})^{-1}\) cannot give rise to new topological phases and transitions [15; 23]. In addition, the approximation quadratic in the fluctuating potential that we use allows one to obtain analytical expressions for the energy derivatives of the components of the self-energy part, which simplifies numerical calculations of the topological index \(C_{1}\).

Calculations of the topological index \(C_{1}\) (7) were carried out in the range of concentrations where the superconducting phase of the system under consideration has a \(d+id\) type of symmetry (\(\alpha=2\pi/3\)) (see the phase diagram in Fig. 2), at a value of the inter-electron attraction parameter \(V=t\) typical of \(Na_{x}CoO_{2}\cdot yH_{2}O\) compounds [8].
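In the limit of an energy-independent self-energy, \(C_{1}\) of (7) reduces to the Chern number of the mean-field Hamiltonian (9), which can be evaluated on a discrete Brillouin-zone grid with the standard plaquette (lattice gauge) construction. The sketch below is illustrative only: the parameter values are our own assumptions, not the self-consistent values used in the paper.

```python
import numpy as np

# assumed illustrative parameters in units of t
t, mu, V, Delta, alpha = 1.0, 0.5, 1.0, 0.1, 2*np.pi/3

def H(k1, k2):
    # mean-field Hamiltonian of Eq. (9)
    eps = -2*t*(np.cos(k1) + np.cos(k2) + np.cos(k2 - k1)) - mu
    d = -2*V*Delta*(np.cos(k1) + np.exp(1j*alpha)*np.cos(k2)
                    + np.exp(-1j*alpha)*np.cos(k2 - k1))
    return np.array([[eps, d], [np.conj(d), -eps]])

N = 60
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
u = np.empty((N, N, 2), dtype=complex)
for i, k1 in enumerate(ks):
    for j, k2 in enumerate(ks):
        _, v = np.linalg.eigh(H(k1, k2))
        u[i, j] = v[:, 0]                  # lower (occupied) band

def link(a, b):
    # U(1) link variable between neighbouring k-points
    z = np.vdot(a, b)
    return z / abs(z)

C = 0.0
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        C += np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                      * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
print(round(C / (2*np.pi)))   # an integer; magnitude 2 in the gapped d+id phase
```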
In the concentration range where the superconducting phase has a generalized \(s\) symmetry (\(\alpha=0\)), it follows from the explicit expression for \(C_{1}\) (7) that \(C_{1}=0\) for any parameters (see Appendix C). In Ref. [18] it was shown that, as a result of resonant scattering of charge carriers on thermal fluctuations of electron-hole pairs, singularities (poles) arise in the normal and pairing components of the self-energy part at energy values corresponding to the boundaries of the energy gap, \(E\simeq\pm E_{k}\). The results of calculating the temperature dependence of the topological index \(C_{1}\) for \(d+id\) pairing, with the above features of the self-energy part taken into account, are shown in Fig. 1 for three values of the effective attraction constant: \(V=0.5\), \(V=1\) and \(V=2\). For comparison, the TI temperature dependence calculated in the Hartree-Fock (HF) approximation, without allowance for thermal fluctuations, is also presented. It can be seen that when fluctuations are taken into account, the temperature range in which the TI value is close to the integer \(C_{1}\simeq-2\) becomes much wider than in the HF approximation. Moreover, in contrast to the results of the HF approximation, when thermal fluctuations are taken into account, the region with \(C_{1}\simeq-2\) expands with increasing \(V\) and approaches the superconducting transition temperature \(T_{c}\) at \(V=2\). Assuming that thermal fluctuations play the role of disorder, the latter result is consistent with that of Ref. [17], in which the repulsive interaction between electrons located at the same site leads to the expansion of the topological phase region in a disordered system.

In Fig. 3, the temperature dependence of the topological index \(C_{1}\) for \(d+id\) pairing is presented at three charge carrier concentrations: \(n\simeq 1.5\), \(n\simeq 1.6\) and \(n\simeq 1.7\). From the phase diagram in Fig. 2 it is seen that, when deviating from the optimal doping \(n\simeq 1.5\), the temperature of the superconducting transition \(T_{c}\) decreases sharply. Despite this, the temperature range, in units of \(T_{c}\), with a TI close to the integer value \(C_{1}\simeq-2\) changes insignificantly in Fig. 3.

Figure 4 presents the dependence of TI on the electron concentration at temperatures \(T/T_{c}^{*}=0.02\), \(T/T_{c}^{*}=0.2\) and \(T/T_{c}^{*}=0.8\). Since the temperature of the transition to the superconducting state, \(T_{c}\), depends on the concentration of charge carriers \(n\) (see Fig. 2), its largest value, attained at the optimal doping \(n\simeq 1.5\), is chosen as \(T_{c}^{*}\) in Fig. 4. The figure demonstrates that, just as in the integer quantum Hall effect, with increasing temperature the concentration region where the TI value is close to the integer value \(C_{1}\simeq-2\) (the plateau region in the integer quantum Hall effect) shrinks.

## Conclusion

An expression for the topological index (TI) of a quasi-two-dimensional model of a superconductor, which takes thermal fluctuations into account, is obtained in the framework of the self-consistent functional-integral theory, similarly to the calculation of TI in the integer quantum Hall effect. Using the expression obtained, the TI behavior in the chiral \(d+id\) superconducting phase of the quasi-two-dimensional one-band model is analyzed. At a temperature \(T>0\), a nonzero density of states appears in the superconducting gap.
In this case, it seems incorrect to identify the state of the system as a superconducting phase with nontrivial topology. However, as shown by experiments, the system properties associated with the nontrivial topology are preserved in a certain temperature range. In particular, the quantum Hall effect takes place at nonzero temperatures when the inequality \(\sigma_{xx}/\sigma_{xy}\ll 1\) is satisfied. In this connection, we believe that in our calculations, when the TI value is close to the integer value \(C_{1}\simeq-2\), the system retains the properties that are determined by the nontrivial topology of the chiral \(d+id\) superconducting phase in the ground state.

The singularities in the energy dependences of the normal and anomalous components of the self-energy part of the single-particle Green function, which arise as a result of resonant scattering of charge carriers on thermal fluctuations of electron-hole pairs, are taken into account. It is shown that these singularities do not lead to abrupt changes in TI, which may be interpreted as the absence of topological transitions. This is explained by the type of pole structure of the self-energy part of the model considered. It has been found that taking thermal fluctuations into account, as well as increasing the effective attraction between electrons located at neighboring sites, significantly expands the temperature region in which the TI value is close to the integer \(C_{1}\simeq-2\). An expansion of this region, with thermal fluctuations taken into account, is also observed when the system deviates from the optimal doping level. The proposed method can be extended to impurity disorder and used to self-consistently allow for the effect of thermal fluctuations on the TI of topological insulators.

This study was supported by the financing program BB-2021-121030100005-1.

Figure 1: Temperature dependence of the topological index \(C_{1}\) for \(d+id\) pairing at three values of the constant of effective attraction \(V\) and electron concentration \(n\simeq 1.5\). The dotted lines represent the temperature dependences of \(C_{1}\) in the Hartree-Fock approximation for the same parameter values.

Figure 2: The dependence of the amplitude of the averaged order parameter on the charge carrier concentration at a temperature \(T=0.0004t\) (phase diagram).

Figure 3: Temperature dependence of the topological index \(C_{1}\) for \(d+id\) pairing at three values of the electron concentration \(n\) and the constant of effective attraction \(V=1\). The dotted lines represent the temperature dependences of \(C_{1}\) in the Hartree-Fock approximation for the same parameter values.

## Appendix A Functional-integral method

The functional-integral method is based on the Hubbard-Stratonovich transformation. We use this transformation in a two-field representation (see [21; 22]), in which the amplitude \(\Delta_{j,\delta}\) and the phase \(\phi_{j,\delta}\) of the superconducting order parameter act as fluctuating fields. In this approach, the problem of calculating the partition function of interacting electron pairs is reduced to the problem of calculating the partition function of independent electron pairs located in the extended space of the auxiliary fluctuating fields \(\Delta_{j,\delta}\) and \(\phi_{j,\delta}\).
In the static approximation, fluctuating fields do not depend on time, and the partition function has the form

\[\begin{split} Z=\prod_{j,\delta}\int_{0}^{\infty}\Delta_{j,\delta}d\Delta_{j,\delta}\int_{-\pi}^{\pi}d\phi_{j,\delta}\exp{[-\beta\Omega(\Delta,\phi)]},\\ \Omega(\Delta,\phi)=\Omega^{0}(\Delta,\phi)+\Omega^{*}(\Delta,\phi),\\ \Omega^{0}(\Delta,\phi)=V\sum_{j,\delta}\Delta_{j,\delta}^{2},\\ \Omega^{*}(\Delta,\phi)=-\frac{1}{\beta}\ln{Z^{*}(\Delta,\phi)},\\ Z^{*}(\Delta,\phi)=SpT_{\tau}\exp{\left[-\int_{0}^{\beta}d\tau\hat{\mathcal{H}}(\Delta,\phi,\tau)\right]},\\ \hat{\mathcal{H}}(\Delta,\phi,\tau)=\hat{\mathcal{H}}_{HF}+\Delta\hat{\mathcal{U}}(\Delta,\phi,\tau),\end{split} \tag{12}\]

where \(Sp\) is the total quantum mechanical trace; \(T_{\tau}\) is the operator of ordering in imaginary time \(\tau\in[0,\beta]\), \(\beta=1/k_{B}T\); \(\hat{\mathcal{H}}_{HF}\) is the Hamiltonian of the model considered in the HF approximation; and \(\Delta\hat{\mathcal{U}}\) is the potential of the fluctuating fields. The condition for the minimum of the thermodynamic potential, together with the equations for the chemical potential, the distribution function of phase fluctuations, and the self-energy part of the one-electron Green function, forms a self-consistent system of equations [8; 22]. The numerical solution of this system of equations makes it possible to determine the superconducting properties of the system under consideration, taking thermal fluctuations into account.

## Appendix B Decomposition in Pauli matrices

We introduce the following notation for the matrices in (7): \(A=\partial_{k_{1}}G_{AV}^{-1}(E)\), \(B=K^{-}(E)\), \(C=\partial_{k_{2}}G_{AV}^{-1}(E)\) and \(D=\partial_{E}K^{+}(E)\). In the two-dimensional case, to multiply the matrices it is convenient to use their decomposition in Pauli matrices \(\sigma_{\nu}\): \(A=A^{\nu}\sigma_{\nu}\), \(B=B^{\mu}\sigma_{\mu}\) and \(AB=(AB)^{\mu}\sigma_{\mu}\), where

\[\begin{split}(AB)^{0}=A^{0}B^{0}+A^{x}B^{x}+A^{y}B^{y}+A^{z}B^{z},\\ (AB)^{x}=A^{x}B^{0}+A^{0}B^{x}+i\left(A^{y}B^{z}-A^{z}B^{y}\right),\\ (AB)^{y}=A^{y}B^{0}+A^{0}B^{y}+i\left(A^{z}B^{x}-A^{x}B^{z}\right),\\ (AB)^{z}=A^{z}B^{0}+A^{0}B^{z}+i\left(A^{x}B^{y}-A^{y}B^{x}\right).\end{split} \tag{13}\]

Figure 4: The dependence of the topological index \(C_{1}\) on the electron concentration for \(d+id\) pairing at three temperature values, \(T/T_{c}^{*}=0.02\), \(T/T_{c}^{*}=0.2\) and \(T/T_{c}^{*}=0.8\), and the constant of effective attraction \(V=1\).

The product of matrices entering (7) has the structure \(tr[ABCD-BADC]\).
Since \(tr\,\sigma_{\nu}=0\) at \(\nu\neq 0\) and \(tr\,\sigma_{\mu}\sigma_{\nu}=2\delta_{\mu\nu}\), from the product of matrices in (7) there remains only the zero component, \(tr[ABCD-BADC]=(ABCD)^{0}-(BADC)^{0}\), which can be easily calculated using (13): \[tr[ABCD-BADC]=2\left[(AB)^{\nu}(CD)^{\nu}-(BA)^{\nu}(DC)^{\nu}\right]. \tag{14}\] As \((AB)^{0}=(BA)^{0}\), the term \(\nu=0\) is absent in expression (14), therefore \[\begin{split}tr[ABCD-BADC]=\\ =2i\left[(A^{y}B^{z}-A^{z}B^{y})(C^{x}D^{0}+C^{0}D^{x})+\right.\\ \left.+(A^{x}B^{0}+A^{0}B^{x})(C^{y}D^{z}-C^{z}D^{y})+\right.\\ \left.+(A^{z}B^{x}-A^{x}B^{z})(C^{y}D^{0}+C^{0}D^{y})+\right.\\ \left.+(A^{y}B^{0}+A^{0}B^{y})(C^{z}D^{x}-C^{x}D^{z})+\right.\\ \left.+(A^{x}B^{y}-A^{y}B^{x})(C^{z}D^{0}+C^{0}D^{z})+\right.\\ \left.+(A^{z}B^{0}+A^{0}B^{z})(C^{x}D^{y}-C^{y}D^{x})\right].\end{split} \tag{15}\]

## Appendix C Analytical expression for the TI integrand

Using the definition of the matrices \(A\), \(B\), \(C\) and \(D\) (Appendix B) and the explicit forms (9) and (10), we obtain \(A^{0}=0\), \(C^{0}=0\), \[\begin{split}A^{x}=-2V\overline{\Delta}\left[\sin k_{1}-\cos\alpha\sin(k_{2}-k_{1})\right],\\ A^{y}=2V\overline{\Delta}\sin\alpha\sin(k_{2}-k_{1}),\\ A^{z}=-2t\left[\sin k_{1}-\sin(k_{2}-k_{1})\right],\\ C^{x}=-2V\overline{\Delta}\cos\alpha\left[\sin k_{2}+\sin(k_{2}-k_{1})\right],\\ C^{y}=2V\overline{\Delta}\sin\alpha\left[\sin k_{2}-\sin(k_{2}-k_{1})\right],\\ C^{z}=-2t\left[\sin k_{2}+\sin(k_{2}-k_{1})\right],\end{split} \tag{16}\] \[\begin{split}B^{0}=iIm\left[\frac{E-\Sigma_{1}(E)}{det(E)}\right],\quad B^{x}=i2V_{1}Im\left[\frac{\Sigma^{\uparrow\downarrow}(E)}{det(E)}\right],\\ B^{y}=i2V_{2}Im\left[\frac{\Sigma^{\uparrow\downarrow}(E)}{det(E)}\right],\quad B^{z}=iIm\left[\frac{\varepsilon_{k}-\Sigma_{2}(E)}{det(E)}\right],\end{split} \tag{17}\] \[\begin{split}D^{0}=Re\left[\frac{1-\partial_{E}\Sigma_{1}(E)}{det(E)}-\frac{\left[E-\Sigma_{1}(E)\right]\partial_{E}det(E)}{det^{2}(E)}\right],\\ D^{x}=2V_{1}Re\left[\frac{\partial_{E}\Sigma^{\uparrow\downarrow}(E)}{det(E)}-\frac{\Sigma^{\uparrow\downarrow}(E)\partial_{E}det(E)}{det^{2}(E)}\right],\\ D^{y}=2V_{2}Re\left[\frac{\partial_{E}\Sigma^{\uparrow\downarrow}(E)}{det(E)}-\frac{\Sigma^{\uparrow\downarrow}(E)\partial_{E}det(E)}{det^{2}(E)}\right],\\ D^{z}=-Re\left[\frac{\partial_{E}\Sigma_{2}(E)}{det(E)}+\frac{\left[\varepsilon_{k}-\Sigma_{2}(E)\right]\partial_{E}det(E)}{det^{2}(E)}\right],\end{split} \tag{18}\] where \[\begin{split}det(E)=det(F^{-1}(E))=\left[E-E_{+}(k)\right]\left[E-E_{-}(k)\right],\\ \Sigma_{1}(E)=\left[\Sigma^{\uparrow}(k,E)+\Sigma^{\downarrow}(k,E)\right]/2,\\ \Sigma_{2}(E)=\left[\Sigma^{\downarrow}(k,E)-\Sigma^{\uparrow}(k,E)\right]/2,\\ V_{1}=\cos k_{1}+\cos\alpha\left[\cos k_{2}+\cos\left(k_{2}-k_{1}\right)\right],\\ V_{2}=-\sin\alpha\left[\cos k_{2}-\cos\left(k_{2}-k_{1}\right)\right],\\ \partial_{E}det(E)=\partial_{E}det(F^{-1}(E))=\\ =\left[1-\partial_{E}\Sigma_{1}(E)+\partial_{E}\Sigma_{2}(E)\right]\times\\ \times\left[E+\varepsilon_{k}-\Sigma_{1}(E)-\Sigma_{2}(E)\right]+\\ +\left[1-\partial_{E}\Sigma_{1}(E)-\partial_{E}\Sigma_{2}(E)\right]\times\\ \times\left[E-\varepsilon_{k}-\Sigma_{1}(E)+\Sigma_{2}(E)\right]-\\ -8\mid V_{k}\mid^{2}\Sigma^{\uparrow\downarrow}(E)\partial_{E}\Sigma^{\uparrow\downarrow}(E).\end{split} \tag{19}\] In this paper, we restrict ourselves to an approximation quadratic in the fluctuating potential for the self-energy part.
In this approximation, explicit expressions for the derivatives of its components with respect to energy have the form \[\begin{split}\partial_{E}\Sigma_{1}(E)=-\frac{V_{1}^{2}}{N}\sum_{\mathbf{k}}\frac{E^{2}+E_{k}^{2}}{\left[E^{2}-E_{k}^{2}\right]^{2}},\\ \partial_{E}\Sigma_{2}(E)=\frac{V_{1}^{2}}{N}\sum_{\mathbf{k}}\frac{E^{2}-E_{k}^{2}+2E\varepsilon_{k}}{\left[E^{2}-E_{k}^{2}\right]^{2}},\\ \partial_{E}\Sigma^{\uparrow\downarrow}(E)=\frac{V_{2}^{2}-V_{1}^{2}}{N}\sum_{\mathbf{k}}\frac{4V\overline{\Delta}E\cos k_{1}}{\left[E^{2}-E_{k}^{2}\right]^{2}},\end{split} \tag{20}\] where \[E_{k}=\sqrt{\varepsilon_{k}^{2}+4V^{2}\overline{\Delta}^{2}\mid V_{k}\mid^{2}}. \tag{21}\] Substituting expressions (16)-(20) into (15) and further into (7) gives an explicit form of the integrand for \(C_{1}\), which is not presented here because of its cumbersome form. Recall that the numerical values of the amplitude of the fluctuation-averaged superconducting order parameter \(\overline{\Delta}\) in (16)-(21), for different values of the Hamiltonian parameters and temperature, are determined through a self-consistent procedure in the functional-integral method. Note that \(A^{y}\), \(B^{y}\), \(C^{y}\) and \(D^{y}\) are proportional to \(\sin\alpha\); hence the topological index \(C_{1}\) is also proportional to \(\sin\alpha\). As a result, the superconducting phase with generalized \(s\) symmetry (\(\alpha=0\)) is always topologically trivial, \(C_{1}=0\).
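For orientation, the integer value \(C_{1}\simeq-2\) quoted above corresponds, in the mean-field (HF) limit where the self-energy corrections vanish, to the Chern number of the filled Bogoliubov band. The following Python sketch computes such a Chern number with the standard Fukui-Hatsugai-Suzuki link-variable method for an illustrative \(d+id\) BdG Hamiltonian on a square lattice; the lattice geometry, dispersion and parameter values are our assumptions for illustration and differ from the triangular-lattice model studied in this paper.

```python
import numpy as np

t, mu, delta0 = 1.0, -1.0, 0.5  # illustrative parameters, not those of the paper
N = 120                         # k-grid size

def h_bdg(kx, ky):
    """Mean-field BdG Hamiltonian with d_{x^2-y^2} + i d_{xy} pairing."""
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    gap = delta0 * ((np.cos(kx) - np.cos(ky)) + 1j * np.sin(kx) * np.sin(ky))
    return np.array([[eps, gap], [np.conj(gap), -eps]])

ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
# Lower (filled) Bogoliubov-band eigenvectors on the grid
u = np.empty((N, N, 2), dtype=complex)
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        w, v = np.linalg.eigh(h_bdg(kx, ky))
        u[i, j] = v[:, 0]              # eigh sorts eigenvalues in ascending order

def link(a, b):
    """Normalized overlap between neighbouring eigenvectors."""
    z = np.vdot(a, b)
    return z / abs(z)

C = 0.0
for i in range(N):
    for j in range(N):
        # Plaquette field strength (Fukui-Hatsugai-Suzuki lattice gauge theory)
        U1 = link(u[i, j], u[(i + 1) % N, j])
        U2 = link(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
        U3 = link(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N])
        U4 = link(u[i, (j + 1) % N], u[i, j])
        C += np.angle(U1 * U2 * U3 * U4)
C /= 2.0 * np.pi
print(f"Chern number of the filled band: {C:.3f}")  # |C| = 2 expected for chiral d-wave
```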
2305.13295
Optimal Design of Dallenbach Absorbers Under Broadband Broad-Angle Illumination
The classical scenario where a \emph{single plane-wave} field impinges a Dallenbach absorber is well studied both theoretically and experimentally. However, occasionally a \emph{spectrum of plane-waves} impinges the absorber. Such a scenario occurs, for example, if an antenna is located adjacent to the absorbing layer. In this paper, for this scenario, we obtain the absorbing performance bound and design an \emph{optimized layered absorber} that approaches the bound. In a numerical demonstration, we explore a realistic case where a dipole antenna is placed in the vicinity of a finite, electrically thin, Dallenbach absorber backed by a PEC plane in the 6G frequency range. In the absence of the absorbing layer covering the PEC plane, severe scattering from the plane distorts the radiated fields. These distortions are robustly mitigated by the specifically tailored optimal absorber to yield a more desirable radiation pattern. Additionally, we propose a metamaterial realization that emulates the required properties of the absorbing layer for all field polarizations.
Chen Firestein, Amir Shlivinski, Yakir Hadad
2023-05-22T17:52:36Z
http://arxiv.org/abs/2305.13295v2
# Bound and Optimal Design of Dallenbach Absorber Under Finite-Bandwidth Multiple-Angle Illumination

###### Abstract

Dallenbach absorbers are lossy substances attached to a perfect electric conductor sheet. For such a configuration, Rozanov derived a sum rule that relates the absorber's efficacy to its thickness and frequency band of operation. Rozanov's derivation is valid only for a layer impinged by a normally incident plane wave. Later, this relation was extended to oblique incidence, considering both transverse electric and transverse magnetic polarizations. Here, we follow the same approach and present a sum rule that is valid for multiple, and possibly a spectrum of, oblique incident waves that are arbitrarily weighted. We recast the design of the Dallenbach absorber as an optimization problem, where the optimization is performed over its electromagnetic properties. The optimization problem is applicable to practical implementations where a finite spectral bandwidth is considered, as well as to theoretical aspects such as determining the tightness of the sum rule over an infinite bandwidth. We provide a numerical example for a practical case where we perform an optimization procedure for a given weight function and finite bandwidth. Additionally, we demonstrate the effect of the weight function on the optimization results.

Electromagnetic absorbers.

## I Introduction

Wave absorbers have fascinated researchers throughout history [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], as they are used in a variety of electromagnetic applications, such as reducing reflections [12, 13, 14, 15], enhancing the accuracy of antenna measurements in anechoic chambers [16, 17, 18], decreasing undesired emissions of discrete electric components and printed circuit boards (PCBs) [19, 20, 21], and capturing light in solar cells [22, 23], to name a few. Among the different types of wave absorbers, the Dallenbach layer, a lossy substance backed by a perfect electric conductor (PEC) sheet, is widely used due to its compatibility with planar geometries [1]. Rozanov [24] derived a sum rule relation, a bound, between the absorption efficacy, the thickness, and the operating frequency band that is valid when the absorber is illuminated by a normally incident plane wave. Rozanov's derivation relies on the typical assumptions of passivity, linearity and time-invariance (LTI), and causality, and in addition it is based on the long-wavelength (i.e., static) behaviour of the reflection coefficient. Over the past few years, several research groups have investigated an approach to bypass Rozanov's bound that involves revoking the time-invariance assumption and permitting certain time-variations in the absorbing structure [25, 26, 27]. However, practical implementation of the proposed designs remains an open challenge. Recently, it was shown by our group [28] that by replacing the PEC boundary with a transparent impedance sheet (inductive, capacitive, or resistive), it is possible to bypass the limits imposed by Rozanov's bound. Importantly, this improvement was achieved by revoking the last assumption, on the long-wavelength behaviour of the reflection coefficient, without sacrificing the key assumptions of passivity and LTI. For these configurations, when possible, we have also derived the relevant sum rules. An extension of Rozanov's bound to oblique incidence of a single plane wave, for both transverse electric (TE) and transverse magnetic (TM) polarizations, was proposed by Volakis's group [29, 30].
The duality principle dictates that these sum rules apply not only to absorbing systems but also to infinite radiating antenna arrays that are placed above a PEC surface. This insight led several researchers to use these bounds as a figure of merit in the design of antenna arrays [31, 32]. Here, we extend the sum-rule trade-off to the case of multiple, and possibly a continuous spectrum of, oblique incidence plane waves. Moreover, we introduce in the sum rule the possibility to include an arbitrary angular weight function \(W(\theta)\) (for both TE and TM polarizations) that reflects the designer's absorption requirements for each angle of incidence. We verify numerically that the new sum rule is tight when integrating along the entire bandwidth spectrum. To account for practical systems that operate in a specific (i.e., finite) bandwidth, we set a non-convex optimization problem to obtain the optimal design parameters, namely the permittivity \(\epsilon\) and electric conductivity \(\sigma\) of the Dallenbach absorber. While being restricted to the new relation, the optimal solution maximizes the contribution within the desired bandwidth while simultaneously minimizing it outside. We introduce a numerical example whose aim is to optimize a Dallenbach layer with a thickness of \(0.4\,\mathrm{[m]}\), operating over the free-space wavelength band of \(0.1\) to \(3\) meters subject to a uniform windowed weight function. Moreover, we consider different weight functions over a finite bandwidth and show that distinct optimization parameters are obtained.

## II Mathematical Formulation

Consider a Dallenbach planar layer that is backed by a PEC sheet, with electrical permittivity (\(\epsilon\)), magnetic permeability (\(\mu\)) and electrical conductivity (\(\sigma\)), surrounded by vacuum (\(\epsilon_{0},\mu_{0}\)). The layer extends infinitely in the \(x\) and \(y\) directions and has a finite thickness \(d\) in the \(z\) direction. The layer is impinged by a wave field that is composed of an angular spectrum of propagating oblique incidence plane waves with either TE or TM polarizations (see Fig. 1 for illustration). Due to the interaction with the Dallenbach layer, some of the wave field is reflected backwards, while the rest is absorbed within the layer. Note that the PEC boundary enforces that there is no transmitted wave beyond the absorbing layer. By following a similar approach to Rozanov's derivation [24], Volakis's group proposed a generalized sum rule that is valid for a single oblique incidence plane wave at an angle \(\theta\) with TE or TM polarization [29], \[\left|\int_{0}^{\infty}\!\!\ln\left|\rho(\lambda,\theta)\right|d\lambda\right|\leq 2\pi^{2}\mu_{s}d\begin{cases}\cos\left(\theta\right),&\text{TE}\\ 1/\cos\left(\theta\right),&\text{TM}\end{cases}\triangleq\text{RB}\left(\theta\right), \tag{1}\] where \(\rho\) is the reflection coefficient at the interface between the Dallenbach absorber and its surroundings, \(\lambda\) is the free-space wavelength, and \(\mu_{s}\) is the static relative permeability. The relationship described in Eq. (1) illustrates a polarization-dependent trade-off between the absorption performance of the layer, its thickness, its static permeability, and the angle of incidence. Here, we present an extension to the relationship in Eq. (1) for a wave field that comprises a spectrum (multiple) of oblique incidence plane waves with arbitrary weights.
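The right-hand side of Eq. (1) is elementary to tabulate. As a small illustration (ours, not code from the paper), a helper of the following kind evaluates the bound \(\text{RB}(\theta)\) for both polarizations; the thickness value is taken from the numerical example discussed later:

```python
import numpy as np

def rozanov_bound(theta, d, mu_s=1.0, pol="TE"):
    """Right-hand side of the oblique-incidence sum rule, Eq. (1).

    theta : angle of incidence [rad]
    d     : layer thickness [m]
    mu_s  : static relative permeability
    """
    angular = np.cos(theta) if pol == "TE" else 1.0 / np.cos(theta)
    return 2.0 * np.pi**2 * mu_s * d * angular

d = 0.4  # layer thickness [m], as in the numerical example below
for theta in np.deg2rad([0.0, 30.0, 60.0]):
    te = rozanov_bound(theta, d, pol="TE")
    tm = rozanov_bound(theta, d, pol="TM")
    print(f"theta = {np.rad2deg(theta):5.1f} deg:  RB_TE = {te:.3f}  RB_TM = {tm:.3f} [m]")
```

Note that RB carries units of meters, since the left-hand side of Eq. (1) integrates the (dimensionless) log-reflectance over wavelength; the TM bound grows without limit as \(\theta\to\pi/2\).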
To enrich the discussion for real-world applications, where systems are typically designed to operate at a specific bandwidth, we truncate the wavelength domain into \([\lambda_{1},\lambda_{2}]\), where \(\lambda_{1}\) [\(\lambda_{2}\)] is the minimal [maximal] free-space wavelength of interest. Since both sides of Eq. (1) are non-negative for any selection of the layer's characteristic parameters \(\{\epsilon,\sigma,\mu\}\), multiplying it by a normalized non-negative weight function \(W(\theta)\) and integrating with respect to the angle of incidence (\(\theta\)) yields \[\int_{\theta_{1}}^{\theta_{2}}W(\theta)\left|\int_{\lambda_{1}}^{\lambda_{2}}\!\ln\left|\rho(\lambda,\theta)\right|d\lambda\right|d\theta\leq\int_{\theta_{1}}^{\theta_{2}}W(\theta)\left|\int_{0}^{\infty}\!\ln\left|\rho(\lambda,\theta)\right|d\lambda\right|d\theta\leq\int_{\theta_{1}}^{\theta_{2}}W(\theta)\text{RB}\left(\theta\right)\text{d}\theta, \tag{2}\] where \([\theta_{1},\theta_{2}]\) are the minimal and maximal desired angles of incidence, respectively, and \(\int_{\theta_{1}}^{\theta_{2}}W(\theta)d\theta=1\), \(W(\theta)\geq 0\;\forall\;\theta\in[\theta_{1},\theta_{2}]\). The relationship in Eq. (2) describes the appropriate sum rule (bound) for wave fields composed of a spectrum (multiple) of oblique incidence plane waves with arbitrary weights. The right-hand side of Eq. (2) bounds from above the left and middle sides for any selection of the properties of the layers (namely \(\{\epsilon,\mu,\sigma\}\)). This gives rise to an optimization problem for designing a Dallenbach absorber. By employing Eq. (2) as a \(W\)-weighted \(L^{1}\) norm on \(\theta\) space, we formulate the following optimization problem to obtain the absorber's characteristic parameters, \[\min_{\epsilon,\sigma}\int_{\theta_{1}}^{\theta_{2}}\left|W(\theta)\,\text{RB}\left(\theta\right)-W(\theta)\left|\int_{\lambda_{1}}^{\lambda_{2}}\!\ln\left|\rho(\lambda,\theta)\right|\text{d}\lambda\right|\right|d\theta \tag{3}\] \[\text{subject to}\quad\rho(\lambda,\theta)=\frac{Z_{\text{in}}(\lambda,\theta)-\eta_{0}(\theta)}{Z_{\text{in}}(\lambda,\theta)+\eta_{0}(\theta)}.\] The impedance \(Z_{\text{in}}\) refers to the input impedance at the interface between the absorbing layer and its surroundings, while \(\eta_{0}\) is the free-space (TE or TM) characteristic impedance (see [33, 34, 35]). An alternative (equivalent) way to express (3) as a maximization problem is to normalize the cost function by dividing Eq. (2) by its right-hand side. The resulting value of the expression is between \(0\) and \(1\), where a score of \(1\) indicates the highest achievable value of the cost function, i.e., the tightest bound, for any given set of design parameters (\(\epsilon,\sigma\)). The alternative optimization problem reads \[\max_{\epsilon,\sigma}\frac{\int_{\theta_{1}}^{\theta_{2}}W(\theta)\left|\int_{\lambda_{1}}^{\lambda_{2}}\ln\left|\rho(\lambda,\theta)\right|d\lambda\right|d\theta}{\int_{\theta_{1}}^{\theta_{2}}W(\theta)\text{RB}\left(\theta\right)\text{d}\theta} \tag{4}\] \[\text{subject to}\quad\rho(\lambda,\theta)=\frac{Z_{\text{in}}(\lambda,\theta)-\eta_{0}(\theta)}{Z_{\text{in}}(\lambda,\theta)+\eta_{0}(\theta)}.\] In the following sections we use Eqs. (2)-(4) to explore the tightness of the new sum rule and to design a practical Dallenbach absorber that operates within a finite frequency spectrum and an angular spectrum of incident waves.
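The reflection coefficient entering Eqs. (2)-(4) is obtained in this paper from the equivalent transmission-line model of the layer [33, 34]. As a rough numerical illustration of the ordering in Eq. (2), the following Python sketch implements a short-circuited (PEC-terminated) transmission-line model for a single homogeneous layer and compares the finite-band integral with RB(\(\theta\)); the function names and sample parameter values are our assumptions, not code from the paper:

```python
import numpy as np

ETA0, C0, EPS0 = 376.730, 299792458.0, 8.8541878128e-12

def reflection(lam, theta, d, eps_r, sigma, pol="TE"):
    """Reflection coefficient of a PEC-backed homogeneous layer
    (short-circuited transmission-line model, e^{jwt} convention)."""
    w = 2.0 * np.pi * C0 / lam
    eps_eff = eps_r - 1j * sigma / (w * EPS0)   # conduction folded into permittivity
    k0 = 2.0 * np.pi / lam
    kz1 = k0 * np.sqrt(eps_eff - np.sin(theta) ** 2)
    if pol == "TE":
        eta0, eta1 = ETA0 / np.cos(theta), ETA0 * k0 / kz1
    else:  # TM
        eta0, eta1 = ETA0 * np.cos(theta), ETA0 * kz1 / (k0 * eps_eff)
    z_in = 1j * eta1 * np.tan(kz1 * d)          # PEC (short-circuit) termination
    return (z_in - eta0) / (z_in + eta0)

# Check the ordering in Eq. (2) on a finite band for one sample design
d, eps_r, sigma = 0.4, 1.51, 0.0165             # Table II, N = 1, Model 1 (TM)
lams = np.linspace(0.1, 3.0, 4000)
for th_deg in (0.0, 30.0, 60.0):
    th = np.deg2rad(th_deg)
    log_rho = np.log(np.abs(reflection(lams, th, d, eps_r, sigma, pol="TM")))
    band = abs(np.sum((log_rho[1:] + log_rho[:-1]) / 2 * np.diff(lams)))  # trapezoid
    rb = 2.0 * np.pi ** 2 * d / np.cos(th)      # TM bound of Eq. (1), mu_s = 1
    print(f"theta = {th_deg:4.1f} deg: finite-band integral = {band:.3f} <= RB = {rb:.3f}")
```

Each printed finite-band integral must stay below the corresponding RB(\(\theta\)), since the finite-band integral in Eq. (2) is bounded by the infinite-band one.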
In addition, we explore the influence of the weight function on the optimization results.

Fig. 1: Illustration of the problem. A Dallenbach layer with thickness \(d\) is impinged by a wave field that is composed of multiple propagating plane waves within the angular domain \(\theta\in[\theta_{1},\theta_{2}]\). As a result of the interaction with the Dallenbach layer, a portion of the wave field is reflected backwards, while the remaining portion is absorbed inside the layer. The wave field may consist of either one or both of the TE and TM polarizations.

## III The Tightness of the Sum Rule

Here, we aim to verify the tightness of the sum rule in Eq. (2) for the case of a wave field with an infinite frequency spectrum within the angular domain \(\left[\theta_{1},\theta_{2}\right]=\left[0,\pi/3\right]\). For that purpose we consider a non-magnetic layer (\(\mu=\mu_{0},\mu_{\mathrm{s}}=1\)) with thickness \(d=0.4\left[\mathrm{m}\right]\). The layer is composed of a wavelength-dependent permittivity \(\epsilon\left(\lambda\right)\) and electric conductivity \(\sigma(\lambda)\) having a Lorentzian form (note that these parameters satisfy the Kramers-Kronig relations - a fundamental requirement for causal materials [36]), \[\begin{split}\epsilon\left(\lambda\right)&=\epsilon_{0}\left(1+\frac{A}{1+j\frac{\lambda_{\mathrm{rel}}}{\lambda}-\left(\frac{\lambda_{\mathrm{res}}}{\lambda}\right)^{2}}\right)\\ \sigma\left(\lambda\right)&=\frac{\sigma_{\mathrm{s}}}{1+j\frac{2\pi c_{0}\tau_{\sigma}}{\lambda}},\end{split} \tag{5}\] where \(A\) denotes the strength of the dielectric response, \(\lambda_{\mathrm{res}}\) [\(\lambda_{\mathrm{rel}}\)] is the resonant [relaxation] wavelength, \(\sigma_{\mathrm{s}}\) is the static (long-wavelength) electric conductivity, \(c_{0}=1/\sqrt{\epsilon_{0}\mu_{0}}\) is the speed of light in vacuum and \(\tau_{\sigma}\) is a time constant of the conductor. By combining Ampere's law with Ohm's law and Eq. (5), an effective electric permittivity, \(\epsilon_{\mathrm{eff}}(\lambda)\), can be expressed. This effective permittivity takes into account both polarization and conductance effects and is given by \(\epsilon_{\mathrm{eff}}(\lambda)=\epsilon\left(\lambda\right)+\sigma\left(\lambda\right)/(j2\pi c_{0}\lambda^{-1})\). For the sake of simplicity and concreteness, we set the following parameters: \(A=1,\lambda_{\mathrm{res}}=\lambda_{\mathrm{rel}}=1\left[\mathrm{m}\right],\tau_{\sigma}=10^{-12}\left[\mathrm{sec}\right]\), while the only remaining free parameter is \(\sigma_{\mathrm{s}}\), which will be scanned over a wide range of values to demonstrate its effect on the tightness of the bound. In addition, we define three weight functions \(\{W_{1}(\theta),W_{2}(\theta),W_{3}(\theta)\}\) within the angular domain \(\theta\in\left[\theta_{1},\theta_{2}\right]\) (see Fig. 2 (a)), \[\begin{split}W_{1}(\theta)&=\frac{3}{\pi},\\ W_{2}(\theta)&=4.5496\;\theta^{5},\\ W_{3}(\theta)&=5.015\;e^{-10\left|\theta-\pi/6\right|},\end{split} \tag{6}\] and zero elsewhere. The three weight functions were selected to have specific characteristics. Firstly, \(W_{1}(\theta)\) is uniformly distributed, indicating that all angles hold equal importance. Secondly, \(W_{2}(\theta)\) prioritizes larger angles, where extreme values are observed in the sum rule (see Eqs. (1) and (2)). Lastly, \(W_{3}(\theta)\) exhibits symmetry around \(\theta=\pi/6\) and follows an exponential distribution. We define the _tightness_ of the bound as the ratio in the argument of the optimization problem in Eq.
(4), \[T=\frac{\int_{\theta_{1}}^{\theta_{2}}W_{i}(\theta)\left|\int_{0}^{\infty}\ln\left|\rho(\lambda,\theta)\right|d\lambda\right|d\theta}{\int_{\theta_{1}}^{\theta_{2}}W_{i}(\theta)\mathrm{RB}\left(\theta\right)\mathrm{d}\theta},\qquad 0\leq T\leq 1. \tag{7}\] Figures 2 (b) and (c) depict the evaluation of \(T\) for multiple values of \(\sigma_{\mathrm{s}}\) (a sweeping process), for each \(W_{i}(\theta)\) with \(i\in\{1,2,3\}\), and for both TE and TM polarizations. The numerical evaluation of the infinite integral in the numerator was performed over a finite but wide wavelength range \(\lambda\in\left[0.001,10^{4}\right]\left[\mathrm{m}\right]\) with \(10^{7}\) uniform sample points, which ensures convergence. The evaluation of the integral in the numerator uses the exact value of the reflection coefficient, \(\rho(\lambda,\theta)\), which was calculated using the equivalent transmission-line model of the layer [33, 34]; see Eqs. (3)-(4). A tight bound with values approaching \(1\), as shown in Fig. 2, can be observed for any of the three weight functions considered, provided that the free parameter \(\sigma_{\mathrm{s}}\) is properly selected. It is important to note that the scanning process can be further refined by including additional free parameters, such as \(A\), \(\lambda_{\mathrm{res}}\), \(\lambda_{\mathrm{rel}}\), and \(\tau_{\sigma}\), or even by incorporating multiple resonance terms in the Lorentzian model. By exploring different combinations of these parameters, it is possible to identify alternative sets that may also maximize the ratio.

## IV Practical Numerical Example - Finite Bandwidth

The discussion in Sec. III aims to verify the tightness of the sum rule in Eq. (2), where an infinite bandwidth is considered. However, in practical systems, absorption performance is typically optimized within a specific bandwidth \(\lambda\in\left[\lambda_{1},\lambda_{2}\right]\), \(\lambda_{1}>0\) and \(\lambda_{2}<\infty\). In this section, we present a practical numerical example where we optimize the absorption performance of a Dallenbach absorber with thickness \(d=0.4[\mathrm{m}]\) via the formulations in Eqs. (3) and (4) over a finite bandwidth of \(\left[\lambda_{1},\lambda_{2}\right]=\left[0.1,3\right][\mathrm{m}]\), considering first a uniform weight function \(W(\theta)=W_{1}(\theta)\) within the angular range \(\left[\theta_{1},\theta_{2}\right]=\left[0,\pi/3\right]\). This implies that we maximize the contribution within the desired bandwidth while simultaneously minimizing it outside. Due to the polarization sensitivity of the design, the optimization process is performed separately for the TE and TM polarizations.

Fig. 2: Tightness of the bound for several weight functions. In accordance with Eq. (2), the ratio is bounded from above by \(1\) and is non-negative. The ratio is calculated for a wide range of \(\sigma_{\mathrm{s}}\), while holding all other parameters in Eq. (5) constant. (a) The three weight functions. (b) TE polarization. (c) TM polarization.

The optimization process that we consider also allows for the design of stratified, inhomogeneous Dallenbach layers. In that case, we consider a piecewise homogeneous design, where the layer of thickness \(d\) is truncated into \(N\) parallel sub-layers with identical thickness, each of which is homogeneous (see Fig. 3 for illustration). For any value of \(N\) we aim to find the optimal design parameters (\(\epsilon,\sigma\)) that maximize the tightness, \(T\), in Eq. (7).
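A stripped-down, single-refinement version of such a search, for a single homogeneous layer with constant parameters (Model 1 in the terminology introduced next), may be sketched as follows. This is our simplified illustration of the idea, not the authors' four-level optimizer, and the grid limits and resolutions are example values only:

```python
import numpy as np

ETA0, C0, EPS0 = 376.730, 299792458.0, 8.8541878128e-12

def trapz(y, x):
    """Simple trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def tightness(eps_r, sigma, d=0.4, pol="TE"):
    """Ratio of Eq. (4) for W1 = 3/pi on [0, pi/3] and lambda in [0.1, 3] m."""
    lams = np.linspace(0.1, 3.0, 800)
    thetas = np.linspace(0.0, np.pi / 3.0, 45)
    inner, rb = [], []
    for th in thetas:
        w = 2.0 * np.pi * C0 / lams
        eps_eff = eps_r - 1j * sigma / (w * EPS0)
        k0 = 2.0 * np.pi / lams
        kz1 = k0 * np.sqrt(eps_eff - np.sin(th) ** 2)
        if pol == "TE":
            eta0, eta1 = ETA0 / np.cos(th), ETA0 * k0 / kz1
        else:
            eta0, eta1 = ETA0 * np.cos(th), ETA0 * kz1 / (k0 * eps_eff)
        z_in = 1j * eta1 * np.tan(kz1 * d)                 # PEC termination
        rho = np.abs((z_in - eta0) / (z_in + eta0))
        inner.append(abs(trapz(np.log(rho), lams)))
        rb.append(2.0 * np.pi ** 2 * d * (np.cos(th) if pol == "TE" else 1 / np.cos(th)))
    w1 = 3.0 / np.pi
    return trapz(w1 * np.array(inner), thetas) / trapz(w1 * np.array(rb), thetas)

# Level 1: coarse grid over the design parameters
eps_grid = np.linspace(0.5, 5.0, 24)
sig_grid = np.linspace(0.0, 0.1, 24)
scores = [(tightness(e, s), e, s) for e in eps_grid for s in sig_grid]
best, e0, s0 = max(scores)

# Level 2: local refinement around the best coarse point (10x smaller steps)
de, ds = (eps_grid[1] - eps_grid[0]) / 10, (sig_grid[1] - sig_grid[0]) / 10
for e in e0 + de * np.arange(-9, 10):
    for s in np.clip(s0 + ds * np.arange(-9, 10), 0.0, None):
        if (t := tightness(e, s)) > best:
            best, e0, s0 = t, e, s
print(f"best (eps_r, sigma) ~ ({e0:.3f}, {s0:.4f}), cost ~ {best:.3f}")
```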
For each sub-layer, we consider the following two models for the design parameters:

* Model 1 - constant parameters. We use Eq. (5) with \(\lambda_{\rm rel}=\lambda_{\rm res}\approx 0\) and \(\frac{2\pi c_{0}\tau_{\sigma}}{\lambda}\ll 1\), resulting in an approximately constant \(\epsilon=\epsilon_{0}\epsilon_{\rm r}\) and \(\sigma=\sigma_{\rm s}\), where \(\epsilon_{\rm r}=1+A\) is the relative permittivity. In this model, there are two design parameters, \(\epsilon_{\rm r}\) and \(\sigma\), which provide only a conductive loss mechanism.
* Model 2 - frequency-dependent parameters. We use Eq. (5) with \(\epsilon(\lambda)\), providing three design parameters, \(A\), \(\lambda_{\rm rel}\), and \(\lambda_{\rm res}\). To reduce the number of design parameters, we set \(\sigma_{s}=0\), providing only a polarization loss mechanism.

Model 1 is widely used in microwave engineering, for example in transmission lines, where the parameters usually vary slowly within the desired frequency band to maintain wideband impedance matching. In this model the resonances are typically positioned at frequencies much higher than those of interest, as opposed to Model 2, where they are within (or near the edge of) the operational region. Both models satisfy the Kramers-Kronig relations over the entire infinite frequency range. Note that other models, such as incorporating electric conductivity \(\sigma(\lambda)\) in Model 2 or incorporating a multi-resonance polarization response, may also be applicable. However, as the complexity of the model increases, with additional design parameters per sub-layer, the optimization process becomes more involved while the relative improvement decreases.

The optimization problem to obtain the absorber's parameters giving the tightest design is _non-convex_. To solve this optimization problem, we adopt a four-level search approach. First, we form a relatively coarse calculation grid. In the Model 1 realization, the parameters \(\epsilon_{\rm r}\) and \(\sigma\) of each layer are uniformly distributed with \(M\) points in the ranges \(0.1\leq\epsilon_{r}\leq 10\) and \(0\leq\sigma\leq 0.1\left[{\rm S/m}\right]\). In the Model 2 realization, the parameters of each layer, \(A\), \(\lambda_{\rm rel}\) and \(\lambda_{\rm res}\), are uniformly distributed with \(M\) points in the ranges \(0.1\leq A\leq 10\) and \(0.1\leq\lambda_{\rm rel},\lambda_{\rm res}\leq 10\left[{\rm m}\right]\). Overall, with the parameters of \(N\) sub-layers, the total number of calculation grid points in Model 1 is \(M^{2N}\), and for Model 2 it is \(M^{3N}\). To ensure memory limitations are not exceeded, for a given \(N\) we choose \(M\) so as to limit the number of grid points to no more than \(500\) million. For each calculation grid point, in the first calculation level, we evaluate the cost function in Eq. (3) (or its equivalent in Eq. (4)). In the second step, we focus on the top-performing results from the first-level coarse-grid calculation, limiting our search to no more than \(0.5\%\) of the total number of grid points. To refine our search further, we perform a local search around each candidate solution of the first level, evaluating the cost function at steps that are smaller by a factor of \(10\) than in the first level. Once we have explored possible refinements in the vicinity of the top-performing results of the previous step, the best solution obtained so far is set as an input to the third step.
In the third step, we employ a subsequent round of local search, which involves exploring the immediate surroundings of the optimal solution using smaller step sizes, specifically reduced by a factor of \(25\) compared to the initial step. This process allows us to fine-tune the optimal solution and achieve even greater refinement. As a final step, we perform the optimization procedure described in steps \(1-3\) multiple times, adjusting the upper limit of \(\epsilon_{r},\lambda_{\rm rel},\lambda_{\rm res}\) in the first step from \(10\) to \(3\), \(4.5\), and \(7.5\). This step aims to improve the optimized results, as the initial grid points are modified. The entire process is carried out for both TE and TM polarizations and for several values of \(N\).

We define a 'parameter of optimality', \(\tau\left(\theta\right)\), as the optimal ratio between the numerator and denominator integrands in Eq. (4), i.e., the tightness at a specific incident angle, as opposed to \(T\) in Eq. (7), which is a global tightness: \[\tau\left(\theta\right)\triangleq\frac{\left|\int_{\lambda_{1}}^{\lambda_{2}}\ln\left|\rho^{\rm opt}(\lambda,\theta)\right|d\lambda\right|}{{\rm RB}\left(\theta\right)}, \tag{8}\] where \(\rho^{\rm opt}(\lambda,\theta)\) is the optimal reflection coefficient that was calculated using the optimized parameters. Note that \(\rho^{\rm opt}(\lambda,\theta)\), and hence also \(\tau\left(\theta\right)\), depends on the weight function \(W(\theta)\) via the optimization. By its definition, \(\tau\left(\theta\right)\) is bounded between \(0\) and \(1\) for any \(\theta\), where the value \(1\) indicates a maximal (supremum) result that cannot be improved any further.

Figure 4 depicts the optimization results for the practical example with \([\lambda_{1},\lambda_{2}]=[0.1,3][{\rm m}]\) and \(W(\theta)=W_{1}(\theta)\). Figures 4 (a) and (b) show the optimization results for TE and TM polarizations, respectively, using Model \(1\) (with constant parameters) for \(N\in[1,7]\). Similarly, Figures 4 (c) and (d) depict the corresponding results for Model \(2\) (with frequency-dependent parameters) for \(N\in[1,4]\) for TE and TM polarizations, respectively. It can be observed in Fig. 4 that in both models the results at large angles for the TE polarization are better (on average) in comparison to the TM case. This result can be explained by Eq. (1), which shows that for TM polarization the right-hand side of the equation increases without bound as \(\theta\) increases, implying that in order to maintain high values of \(\tau\left(\theta\right)\) the left-hand side should also grow significantly (without enlarging the finite frequency range of operation). This suggests that at large angles, the primary contribution originates at larger (or smaller) wavelengths that are beyond the range considered in our example.

Fig. 3: A piecewise homogeneous design where the non-magnetic absorbing layer of thickness \(d\) is truncated into \(N\) parallel sub-layers with identical thickness \(d_{i}=d/N\) (\(i\in[1,N]\)), each of which is homogeneous with \(\epsilon_{i},\sigma_{i}\).

Table I presents the optimal values of the design parameters and the corresponding value of the normalized cost function in Eq. (4) for TE polarization. Results have been rounded and truncated. Similarly, the corresponding results for the TM case are presented in Table II.

### _The importance of \(W(\theta)\)_

In Section III, we numerically demonstrated that the sum rule in Eq.
(2) is tight for several different choices of the weight function \(W\left(\theta\right)\) when performing the integration over the entire frequency bandwidth. However, close inspection of Fig. 2 shows weak variations with respect to \(W\left(\theta\right)\), indicating that the results are relatively insensitive to the choice of the weight function. This observation may raise doubts about the significance of selecting the weight function \(W(\theta)\), as similar parameters lead to a tight sum rule irrespective of the choice of \(W(\theta)\). However, in this section, we demonstrate that the situation is different for practical systems that operate within a finite bandwidth. Namely, we investigate how the design parameters are influenced when the integration is carried out over a finite bandwidth, as encountered in practical systems. To this end, we use the same parameters as in the previous practical example (as in Sec. IV) and design two types of layers: a homogenized layer with \(N=1\) and a truncated layer with \(N=2\). Both of these layers consist of constant parameters (Model 1). We then repeat the optimization process for the weight functions \(W_{1}(\theta)\), \(W_{2}(\theta)\), and \(W_{3}(\theta)\) with \([\lambda_{1},\lambda_{2}]=[0.1,3][\mathrm{m}]\) for both TE and TM polarizations. Figure 5 (a) shows the optimized results for TE polarization with \(N=1\), Figure 5 (b) for TM polarization with \(N=1\), Figure 5 (c) for TE polarization with \(N=2\), and Figure 5 (d) for TM polarization with \(N=2\). Fig. 5 shows that \(\tau\left(\theta\right)\) is affected by the choice of \(W\left(\theta\right)\). Note that the blue lines in Figs. 5 (a)-(d) represent the \(W_{1}(\theta)\) cases for \(N=1,N=2\). Therefore, the blue lines in Figs. 4 (a) and (b) and Figs. 5 (a) and (b) are identical, respectively (\(N=1\)). Similarly, the blue lines in Figs. 5 (c) and (d) are identical to the red dashed line in Figs. 4 (a) and (b) (\(N=2\)). It can be observed by comparing Figs. 5 (a) and (c) with Figs. 5 (b) and (d) that \(W_{2}(\theta)\) achieves the highest score for TE polarization, whereas it yields the lowest score for TM polarization. To provide a comprehensive view, the optimized design parameters, (\(\epsilon_{r},\sigma\)), are listed in Table III for each weight function in both polarizations. Table III indicates that the optimized design parameters vary with the choice of the weight function. This implies that, as opposed to the theoretical simulations in Sec. III, when designing a practical Dallenbach absorber that operates within a specific bandwidth, the optimization process should be performed with respect to a specific \(W\left(\theta\right)\).

Fig. 4: Optimized results for the practical example with \([\lambda_{1},\lambda_{2}]=[0.1,3][\mathrm{m}]\) and \(W(\theta)=W_{1}(\theta)\). (a) and (b) show the optimized results of the 'parameter of optimality', \(\tau\left(\theta\right)\), for TE and TM polarizations, respectively, using Model \(1\) (constant parameters) for \(N\in[1,7]\). Similarly, (c) and (d) depict the corresponding results for Model \(2\) (with frequency-dependent parameters) for \(N\in[1,4]\) for TE and TM polarizations, respectively.

Fig. 5: \(\tau\left(\theta\right)\) for a homogenized Dallenbach layer (\(N=1\)) and a truncated layer with \(N=2\) for different weight functions \(W_{1}(\theta)\), \(W_{2}(\theta)\) and \(W_{3}(\theta)\) with \([\lambda_{1},\lambda_{2}]=[0.1,3][\mathrm{m}]\). (a) \(N=1\), TE polarization. (b) \(N=1\), TM polarization. (c) \(N=2\), TE polarization.
(d) \(N=2\), TM polarization.

## V Conclusion

In this work, we presented a sum-rule bound for Dallenbach layers subjected to a continuous weighted angular spectrum of oblique incidence plane waves with TE or TM polarizations. We showed that the sum rule is tight when an infinite bandwidth is considered. Moreover, we presented a practical example of designing an optimal absorber operating within a finite bandwidth and a specific angular range for both TE and TM cases. By numerically solving an optimization problem, we obtained the design parameters for a piecewise inhomogeneous stratified absorber as a function of the number of sub-layers. Finally, we demonstrated that the optimized design parameters are sensitive to the choice of the angular weight function when considering a finite bandwidth.

TABLE II: Optimization Results for TM Polarization

| \(N\) | Model 1: optimal \(\{\overline{\epsilon}_{r},\overline{\sigma}\}\) | Model 1: cost function in Eq. (4) | Model 2: optimal \(\{\overline{A},\overline{\lambda}_{\rm rel},\overline{\lambda}_{\rm res}\}\) | Model 2: cost function in Eq. (4) |
| --- | --- | --- | --- | --- |
| \(N=1\) | \(\overline{\epsilon}_{r}=\{1.51\}\), \(\overline{\sigma}=\{0.0165\}\) | 0.5217 | \(\overline{A}=\{3.54\}\), \(\overline{\lambda}_{\rm rel}=\{2.85\}\), \(\overline{\lambda}_{\rm res}=\{0.01\}\) | 0.6498 |
| \(N=2\) | \(\overline{\epsilon}_{r}=\{1.96,2.69\}\), \(\overline{\sigma}=\{0.0063,0.0464\}\) | 0.5881 | \(\overline{A}=\{1.35,9.896\}\), \(\overline{\lambda}_{\rm rel}=\{0.777,2.653\}\), \(\overline{\lambda}_{\rm res}=\{0.01,0.4126\}\) | 0.7015 |
| \(N=3\) | \(\overline{\epsilon}_{r}=\{2.31,3.44,4.38\}\), \(\overline{\sigma}=\{0,0.0029,0.0918\}\) | 0.6192 | \(\overline{A}=\{1.09,4.876,10.471\}\), \(\overline{\lambda}_{\rm rel}=\{0.024,1.585,1.982\}\), \(\overline{\lambda}_{\rm res}=\{0.01,0.1225,1.514\}\) | 0.7236 |
| \(N=4\) | \(\overline{\epsilon}_{r}=\{2,2.67,3.3,3.72\}\), \(\overline{\sigma}=\{0,0.0011,0.037,0.114\}\) | 0.6255 | \(\overline{A}=\{1.832,1.288,9.107,8.762\}\), \(\overline{\lambda}_{\rm rel}=\{0.44,2.515,1.405,1.382\}\) | 0.7328 |
| \(N=5\) | \(\overline{\epsilon}_{r}=\{2,2.72,3,4.375,4.48\}\), \(\overline{\sigma}=\{0,0.0033,0.024,0.053,0.138\}\) | 0.629 | – | – |
| \(N=6\) | \(\overline{\epsilon}_{r}=\{2.39,0.96,3.64,5.15,4.78,3.91\}\), \(\overline{\sigma}=\{0,0.0005,0.03,0.107,0.14\}\) | 0.6368 | – | – |
| \(N=7\) | \(\overline{\epsilon}_{r}=\{2.46,0.9,2.85,4.96,6.24,4.4,4.4,4.56\}\), \(\overline{\sigma}=\{0,0.0005,0.047\}\) | 0.644 | – | – |
TABLE III: The Importance of \(W(\theta)\) in Uniform Layers

| | Optimized design parameters, TE | Optimized design parameters, TM |
| --- | --- | --- |
| \(N=1\), \(W_{1}(\theta)\) | \(\overline{\epsilon}_{r}=\{1.4568\}\), \(\overline{\sigma}=\{0.0123\}\) | \(\overline{\epsilon}_{r}=\{1.5149\}\), \(\overline{\sigma}=\{0.0165\}\) |
| \(N=1\), \(W_{2}(\theta)\) | \(\overline{\epsilon}_{r}=\{1.4024\}\), \(\overline{\sigma}=\{0.0096\}\) | \(\overline{\epsilon}_{r}=\{2.31\}\), \(\overline{\sigma}=\{0.0204\}\) |
| \(N=1\), \(W_{3}(\theta)\) | \(\overline{\epsilon}_{r}=\{1.5512\}\), \(\overline{\sigma}=\{0.0118\}\) | \(\overline{\epsilon}_{r}=\{1.4224\}\), \(\overline{\sigma}=\{0.0144\}\) |
| \(N=2\), \(W_{1}(\theta)\) | \(\overline{\epsilon}_{r}=\{2.18,4.03\}\), \(\overline{\sigma}=\{0.0547\}\) | \(\overline{\epsilon}_{r}=\{1.96,2.69\}\), \(\overline{\sigma}=\{0.0063,0.0464\}\) |
| \(N=2\), \(W_{2}(\theta)\) | \(\overline{\epsilon}_{r}=\{1.73,2.88\}\), \(\overline{\sigma}=\{0.0406\}\) | \(\overline{\epsilon}_{r}=\{2.86,3.03\}\), \(\overline{\sigma}=\{0.0121,0.0553\}\) |
| \(N=2\), \(W_{3}(\theta)\) | \(\overline{\epsilon}_{r}=\{1.88,3.25\}\), \(\overline{\sigma}=\{0.0005,0.0478\}\) | \(\overline{\epsilon}_{r}=\{1.95,3.01\}\), \(\overline{\sigma}=\{0.0037,0.05\}\) |

TABLE I: Optimization Results for TE Polarization

## Acknowledgments

C. F. would like to thank the Darom Scholarships and High-tech, Bio-tech and Chemo-tech Scholarships and the Yaakov ben-Yitzhak Hacohen Excellence Scholarship. This research was supported by the Israel Science Foundation (grant No. 1353/19).